diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL .rar Whats New and Whats Improved in This Update.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL .rar Whats New and Whats Improved in This Update.md
deleted file mode 100644
index 35941c19f6268c2d57019dd1d19a9c2aff80dbb1..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL .rar Whats New and Whats Improved in This Update.md
+++ /dev/null
@@ -1,144 +0,0 @@
-
Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL .rar: What is it and why you need it
-
If you are a civil engineer or a civil engineering student, you have probably heard of AutoCAD Civil 3D, one of the most popular and powerful programs for civil engineering design and documentation. But do you know what Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL .rar is and why you need it? In this article, we will explain what this software is, what its features and benefits are, what improvements and fixes the latest version includes, and how to download and install it.
-
What is AutoCAD Civil 3D and what are its features and benefits
-
AutoCAD Civil 3D is developed by Autodesk, a leading company in design and engineering software solutions. It allows you to create civil engineering designs and documentation using dynamic models, an object-oriented environment, and powerful tools for building information modeling (BIM).
-
Some of the main features of AutoCAD Civil 3D are:
-
-
Dynamic models: You can create dynamic models that update automatically as you make changes to your design parameters.
-
Object-oriented environment: You can work with objects that have properties and behaviors that reflect real-world elements such as surfaces, alignments, profiles, corridors, pipes, etc.
-
BIM tools: You can use BIM tools to analyze your design data, generate reports and presentations, collaborate with other stakeholders, and integrate with other software applications.
-
-
Some of the benefits of using AutoCAD Civil 3D are:
-
-
Efficiency: You can save time and resources by using dynamic models that reduce errors and rework.
-
Accuracy: You can improve your design quality and accuracy by using an object-oriented environment that reflects real-world conditions.
-
Collaboration: You can enhance your collaboration and communication with other stakeholders by using BIM tools that facilitate data sharing and coordination.
-
-
What is the 2018.0.2 version and what are its improvements and fixes
-
The 2018.0.2 version of AutoCAD Civil 3D is the latest update released by Autodesk on November 6th. It includes several improvements and fixes that enhance performance, stability, and compatibility.
-
The following table summarizes some of the main improvements made in this version:
-
-
| Area | Description |
| --- | --- |
| Civil View | The performance has been improved when importing large quantities of objects into Autodesk InfraWorks. |
| Data Shortcuts | The performance has been improved when creating data shortcuts for corridors with large quantities of baselines. |
| Drawing Management | The stability has been improved when opening drawings containing data shortcuts. |
| Pipes | The stability has been improved when editing pipe networks in section views. |
| Railings | The stability has been improved when editing railings in profile views. |
| Roadway Design | The stability has been improved when editing corridors with large quantities of regions. |
| User Interface | The compatibility has been improved with high resolution monitors. |
| Xref | The performance has been improved when opening drawings containing xrefs. |
-
-
The following table summarizes some of the main fixes made in this version:
-
-
| Bug ID | Description |
| --- | --- |
| CIVIL-12900 | An issue where corridor solids were not created correctly for some corridors has been resolved. |
| CIVIL-13076 | An issue where corridor feature lines were not created correctly for some corridors has been resolved. |
| CIVIL-13107 | An issue where corridor solids were not displayed correctly in section views has been resolved. |
| CIVIL-13108 | An issue where corridor feature lines were not displayed correctly in section views has been resolved. |
| CIVIL-13109 | An issue where corridor solids were not displayed correctly in plan views has been resolved. |
| CIVIL-13111 | An issue where corridor solids were not displayed correctly in 3D views has been resolved. |
| CIVIL-13112 | An issue where corridor feature lines were not displayed correctly in 3D views has been resolved. |
| CIVIL-13113 | An issue where corridor solids were not exported correctly to Autodesk InfraWorks has been resolved. |
| CIVIL-13114 | An issue where corridor feature lines were not exported correctly to Autodesk InfraWorks has been resolved. |
| CIVIL-13115 | An issue where corridor solids were not exported correctly to Autodesk Navisworks has been resolved. |
| CIVIL-13116 | An issue where corridor feature lines were not exported correctly to Autodesk Navisworks has been resolved. |
| CIVIL-13117 | An issue where corridor solids were not exported correctly to Autodesk Revit has been resolved. |
| CIVIL-13118 | An issue where corridor feature lines were not exported correctly to Autodesk Revit has been resolved. |
-
-
How to download and install the full .rar file
-
If you want to download and install the full .rar file of AutoCAD Civil 3D 2018.0.2 (x64), you need to make sure that your system meets the following requirements:
-
-
Operating System: Microsoft Windows 10 (64-bit only), 8.1 (64-bit only), or 7 SP1 (64-bit only)
-
Processor: Minimum: 2.5–2.9 GHz or faster processor / Recommended: 3+ GHz or faster processor
-
Memory: Minimum: 4 GB / Recommended: 16 GB
-
Display Resolution: Minimum: 1360 x 768 (1920 x 1080 recommended) with True Color / Maximum: 4K (3840 x 2160)
-
Display Card: Minimum: 1 GB GPU with 29 GB/s Bandwidth and DirectX 11 compliant / Recommended: 4 GB GPU with 106 GB/s Bandwidth and DirectX 11 compliant
-
Disk Space: Installation: 10 GB
-
Browser: Google Chrome (for AutoCAD web app)
-
.NET Framework: .NET Framework Version 4.6
-
-
Once you have checked your system requirements, you can find the full .rar file for download from various sources on the internet, such as 4shared, SolidTorrents, or Archive.org. However, be careful of the potential risks of downloading files from unverified or untrusted sources, such as viruses, malware, or corrupted files.
-
-
To download and install the full .rar file, follow these steps:
-
-
Download the full .rar file from your preferred source and save it to your computer.
-
Extract the .rar file using a software such as WinRAR or 7-Zip.
-
Run the setup.exe file as administrator and follow the instructions on the screen.
-
Enter your serial number and product key when prompted. You can find them on your Autodesk Account or on the packaging of your product.
-
Select your installation options, such as language, components, and location.
-
Click Install and wait for the installation to complete.
-
Restart your computer if required.
-
Launch AutoCAD Civil 3D 2018.0.2 (x64) and enjoy!
-
-
Conclusion
-
In this article, we have explained what Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL .rar is, what its features and benefits are, what improvements and fixes the latest version includes, and how to download and install it. We hope you have found this article useful and informative.
-
If you are looking for software that can help you create civil engineering designs and documentation using dynamic models, an object-oriented environment, and powerful BIM tools, we recommend downloading and using AutoCAD Civil 3D 2018.0.2 (x64). It is a comprehensive solution that can improve your efficiency, accuracy, and collaboration in your civil engineering projects.
-
If you want to learn more about AutoCAD Civil 3D 2018.0.2 (x64), you can visit the official website, contact the customer service, access the knowledge base, or join the community. You can also find more resources such as tutorials, manuals, forums, or trainers online.
-
FAQs
-
Q1: What is the difference between AutoCAD and AutoCAD Civil 3D?
-
A1: AutoCAD is a general-purpose CAD software that can be used for various design and drafting applications, while AutoCAD Civil 3D is a specialized software that focuses on civil engineering design and documentation.
-
Q2: What are some of the applications of AutoCAD Civil 3D?
-
A2: Some of the applications of AutoCAD Civil 3D are surveying, land development, transportation engineering, water resources engineering, environmental engineering, etc.
-
Q3: How much does AutoCAD Civil 3D cost?
-
A3: AutoCAD Civil 3D is available as a subscription-based service that costs $2,155 per year or $270 per month.
-
Q4: How can I learn AutoCAD Civil 3D?
-
A4: You can learn AutoCAD Civil 3D by taking online courses, watching tutorials, reading manuals, joining forums, or hiring a trainer.
-
Q5: How can I get support for AutoCAD Civil 3D?
-
A5: You can get support for AutoCAD Civil 3D by visiting the official website, contacting the customer service, accessing the knowledge base, or joining the community.
-
-
-
\ No newline at end of file
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cutmaster2dprov1331fullcrackserialkeygenfree The Benefits and Features of This Powerful Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cutmaster2dprov1331fullcrackserialkeygenfree The Benefits and Features of This Powerful Software.md
deleted file mode 100644
index d80a0a16d44927bd313dad9713b7844472fcd85c..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cutmaster2dprov1331fullcrackserialkeygenfree The Benefits and Features of This Powerful Software.md
+++ /dev/null
@@ -1,21 +0,0 @@
-
-
What is CutMaster 2D Pro v1.3.3.1?
-
If you are looking for a professional and powerful software program for cutting and slicing up images and videos, you might want to check out CutMaster 2D Pro v1.3.3.1. This software is a highly responsive application that allows users to slice images like a professional.
-
CutMaster 2D Pro v1.3.3.1 is a very versatile program that can handle any type of image or video format, such as JPG, PNG, BMP, GIF, MP4, AVI, MOV, etc.
With CutMaster 2D Pro v1.3.3.1, you can quickly and easily create professional style cuts for your projects, such as banners, logos, posters, flyers, brochures, etc.
-
You can also use it to edit your personal photos and videos, such as cropping, rotating, resizing, adding effects, etc.
-
CutMaster 2D Pro v1.3.3.1 has a user-friendly interface that makes it easy to navigate and operate.
-
It also has a lot of features and tools that make it stand out from other similar programs.
-
Why do you need CutMaster 2D Pro v1.3.3.1?
-
There are many reasons why you might need CutMaster 2D Pro v1.3.3.1 for your image and video cutting needs.
-
Some of them are:
-
-
It saves you time and money. You don't have to spend hours or days trying to cut your images and videos manually or using other complicated programs that require a lot of skills and resources.
-
It improves your quality and creativity. You can achieve high-quality results with minimal effort and maximum accuracy using CutMaster 2D Pro v1.3.3.1.
-
It enhances your productivity and efficiency. You can work faster and smarter with CutMaster 2D Pro v1.3.3.1 by using its advanced features and tools that automate and simplify your cutting process.
-
It gives you more flexibility and control. You can customize your cuts according to your preferences and needs using CutMaster 2D Pro v1.3.3.1.
-
-
How to download CutMaster 2D Pro v1.3.3.1?
-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Altiumfiletypenotrecognized.md b/spaces/1gistliPinn/ChatGPT4/Examples/Altiumfiletypenotrecognized.md
deleted file mode 100644
index 0cc60207e7730876c3466c8bcb09be7e11ac7297..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Altiumfiletypenotrecognized.md
+++ /dev/null
@@ -1,54 +0,0 @@
-
-
- . . . but when I want to save a change in the second project it says, "are not allowed to save".
-
-I think the problem is not in the second project, because the second project has a lot of .J3 files to simulate and verify the .sch file. So I ask you if I can read the .sch file in the first project and save it in the second project, or if I have to read the .sch file in the second project . . .
-
-Thank you very much in advance.
-
-A:
-
-Sorry, I'm a little late with this answer, but in the mean time, I have just had a similar issue.
-
-In order to "share" a project between multiple Altium instances, you need to make sure that:
-
-The .sch project file is saved in the "Save As..." dialog (not the "Copy" dialog).
-
-The new .sch project file is saved in the same folder as the .sch file of the original project.
-
-I'm sure this has been covered elsewhere on the internet, but here are the links I found through a quick google search:
-
-Q:
-
-Map not applied to ArrayList inside a class
-
-I have an application that reads about 1000 lines from a file and uses the information to make a list of customers. I am trying to print the last name of the customer to console but when I try to use my map I get an error.
-
-My Customer class:
-
-public class Customer {
-
-    private String lastName;
-
-    private String firstName;
-
-    private String address;
-
-    public Customer(String firstName, String lastName, String address) {
-        this.firstName = firstName;
-        this.lastName = lastName;
-        this.address = address;
-    }
-
-    public String getLastName() {
-        return this.lastName;
-    }
-}
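For context, here is a minimal sketch of how the completed class above might be used to print each customer's last name to the console. The sample customers are made up for illustration; in the original question the list is built from the lines of a file.

```java
import java.util.ArrayList;
import java.util.List;

public class CustomerDemo {
    public static void main(String[] args) {
        // In the original question the entries come from ~1000 lines of a file;
        // here a couple of hard-coded customers stand in for that data.
        List<Customer> customers = new ArrayList<>();
        customers.add(new Customer("Ada", "Lovelace", "12 Example Street"));
        customers.add(new Customer("Alan", "Turing", "34 Sample Road"));

        // Print the last name of every customer to the console.
        for (Customer customer : customers) {
            System.out.println(customer.getLastName());
        }
    }
}
```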
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Autoclosets 80 Con Serial.md b/spaces/1gistliPinn/ChatGPT4/Examples/Autoclosets 80 Con Serial.md
deleted file mode 100644
index b5da8335327c3a5bca9514f1163bd9ef86b4df2f..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Autoclosets 80 Con Serial.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-18 Jul 2020 - Race Man 3 Full Movie In Hindi Hd 720p Download Free - brawlhalla como conseguir monedasMammoth Glory Coins. Download Race Man 3 In Hindi Hd 720p Pc...
-Apr 19, 2019 - Download Race Man 3 Full Movie In Hindi Hd 720p...
-Download Race Man 3 In Hindi Hd 720p Pc free race man 3 in english download race man 3 in english watch race man 3 in english full movie download race man 3 in english download movie race man 3 in english watch race man 3 in english full movie download race man 3 in english
-in english download 8a78ff9644
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Durgasaptashatibeejmantrasadhanapdf35 [UPD].md b/spaces/1gistliPinn/ChatGPT4/Examples/Durgasaptashatibeejmantrasadhanapdf35 [UPD].md
deleted file mode 100644
index 257e8b0184b05d95e70b2580db02d6e969bbce5b..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Durgasaptashatibeejmantrasadhanapdf35 [UPD].md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-An Introduction To Bunraku [HOT] · Free E Book Download __HOT__ In Pdf Lang Lang Piano Book · Durgasaptashatibeejmantrasadhanapdf35. 4d29de3e1b
-
-
-
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/El Kulubud Daria Mecmuatul-Ahzabn Dzeltme ve Snflandrmasyla Oluturulan Du Kitab PDF ndir.md b/spaces/1gistliPinn/ChatGPT4/Examples/El Kulubud Daria Mecmuatul-Ahzabn Dzeltme ve Snflandrmasyla Oluturulan Du Kitab PDF ndir.md
deleted file mode 100644
index 42c2cf82fad7d9d07b38f4650e2b07e08c625292..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/El Kulubud Daria Mecmuatul-Ahzabn Dzeltme ve Snflandrmasyla Oluturulan Du Kitab PDF ndir.md
+++ /dev/null
@@ -1,6 +0,0 @@
-
-
-
-
-
-
diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator APK for Android 6.0 The Best Way to Enjoy Retro Games.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator APK for Android 6.0 The Best Way to Enjoy Retro Games.md
deleted file mode 100644
index c4aba6688cbf2378f7aab85c15cdb32c15ec469c..0000000000000000000000000000000000000000
--- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator APK for Android 6.0 The Best Way to Enjoy Retro Games.md
+++ /dev/null
@@ -1,228 +0,0 @@
-
-
Introduction
-
If you are a fan of Nintendo GameCube and Wii games, you might have wished to play them on your Android device. Well, thanks to Dolphin Emulator, you can do just that! Dolphin Emulator is a free and open-source software that allows you to run GameCube and Wii games on your Android device in full HD (1080p) with several enhancements, such as compatibility with all PC controllers, turbo speed, networked multiplayer, and even more.
Dolphin Emulator has been around since 2003 as a desktop application for Windows, Linux, and macOS. It was the first GameCube emulator that could successfully run commercial games. Later on, it also gained support for Wii emulation. In 2013, Dolphin Emulator was ported to Android as a beta version, and since then it has been updated regularly with new features and bug fixes.
-
However, Dolphin Emulator is not a perfect emulator. It has some requirements and challenges that you need to be aware of before using it on your Android device. For example, you need a powerful device that can handle the emulation workload, you need to obtain the GameCube and Wii games legally from your own discs or backups, you need to install the app manually from an external source, you need to configure the settings and preferences according to your device and game compatibility, and you need to troubleshoot some errors and issues that may arise during the emulation process.
-
In this article, I will provide you with a comprehensive guide on how to download, install, and use Dolphin Emulator Android 6.0 APK on your Android device. I will also answer some frequently asked questions and share some user reviews about this emulator.
-
Downloading Dolphin Emulator Android 6.0 APK
-
The first step to use Dolphin Emulator on your Android device is to download the APK file from a reliable source. The APK file is an executable file that contains the app's code and resources. You can download Dolphin Emulator Android 6.0 APK from either the official website or other sources.
-
How to download Dolphin Emulator Android 6.0 APK from the official website?
-
The official website of Dolphin Emulator is https://dolphin-emu.org. Here you can find the latest news, updates, downloads, and documentation about the emulator. You can also join the community forums, chat rooms, and social media pages to interact with other users and developers.
-
To download Dolphin Emulator Android 6.0 APK from the official website, follow these steps:
Click on the Download button on the top right corner of the homepage.
-
Select Android from the drop-down menu.
-
You will be redirected to a page with a list of available versions of Dolphin Emulator for Android. The latest version is usually at the top of the list.
-
Click on the Download APK button next to the version you want to download. You can also check the release notes, changelog, and compatibility list for each version by clicking on the respective links.
-
A pop-up window will appear asking you to confirm your download. Click on OK to proceed.
-
The APK file will be downloaded to your device's default download folder. You can check the progress and status of your download on your notification bar or download manager app.
-
-
How to download Dolphin Emulator Android 6.0 APK from other sources?
-
If you cannot access the official website of Dolphin Emulator for some reason, or if you want to download an older or modified version of Dolphin Emulator Android 6.0 APK, you can also find it on other sources, such as third-party websites, app stores, file hosting services, or torrent sites. However, you need to be careful when downloading from these sources, as they may not be trustworthy or safe. Some of them may contain malware, viruses, spyware, adware, or other unwanted programs that can harm your device or compromise your privacy. Some of them may also provide fake or corrupted files that may not work properly or cause errors and issues with your emulator.
-
-
To download Dolphin Emulator Android 6.0 APK from other sources, follow these steps:
-
-
Search for "Dolphin Emulator Android 6.0 APK" on your preferred search engine or app store. You can also use keywords such as "download", "free", "latest", "modded", "cracked", "unlocked", etc. to narrow down your search results.
-
Browse through the results and select a source that looks reliable and reputable. You can check the ratings, reviews, comments, feedback, and reputation of the source before downloading from it. You can also use tools such as VirusTotal, Malwarebytes, or Norton to scan the URL or file for any potential threats.
-
Click on the Download button or link on the source's page. You may have to go through some ads, pop-ups, surveys, or captcha verification before you can access the download link. Be careful not to click on any suspicious or misleading links or buttons that may redirect you to unwanted sites or install unwanted programs on your device.
-
The APK file will be downloaded to your device's default download folder. You can check the progress and status of your download on your notification bar or download manager app.
-
-
How to verify the integrity and safety of the downloaded file?
-
After downloading Dolphin Emulator Android 6.0 APK from any source, you should always verify the integrity and safety of the downloaded file before installing it on your device. This is to ensure that the file is authentic, complete, and free from any malicious code or modification that may affect its performance or functionality.
-
To verify the integrity and safety of the downloaded file, follow these steps:
-
-
Check the file size and name of the downloaded file. Compare it with the original file size and name from the official website or source. If there is a significant difference in size or name, it may indicate that the file is fake or corrupted.
-
Check the file extension of the downloaded file. It should be ".apk" which stands for Android Package Kit. If it is anything else, such as ".zip", ".rar", ".exe", ".bin", etc., it may indicate that the file is not an APK file or that it contains other files that may be harmful or unnecessary.
-
Check the file signature or checksum of the downloaded file. This is a unique code that identifies and verifies the authenticity and integrity of a file. You can use tools such as MD5 & SHA Checksum Utility, HashTab, or Checksum Calculator to generate and compare the file signature or checksum of the downloaded file with the original one from the official website or source (a short code sketch after this list shows the same idea). If they match, it means that the file is authentic and intact. If they don't match, it means that the file is fake or corrupted.
-
Scan the file with a reputable antivirus or anti-malware program, such as Avast, Malwarebytes, or Norton. These programs can detect and remove any malicious code or modification that may be hidden in the file. They can also protect your device from any potential threats that may arise from installing or running the file.
-
-
If the downloaded file passes all these checks, you can proceed to install it on your device. If not, you should delete it immediately and download it again from a different source.
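To make the checksum step above concrete, here is a minimal sketch, assuming a hypothetical download path, that computes the SHA-256 digest of the APK with Java's standard MessageDigest class. The printed hex string is what you would compare against the value published by the official source.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class ApkChecksum {
    public static void main(String[] args) throws IOException, NoSuchAlgorithmException {
        // Hypothetical location of the downloaded file; adjust to your setup.
        Path apk = Paths.get("Download/dolphin-emu.apk");

        // Hash the whole file with SHA-256.
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha256.digest(Files.readAllBytes(apk));

        // Render the digest as lowercase hex so it can be compared with the
        // checksum listed by the site you downloaded from.
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b & 0xff));
        }

        System.out.println("Size (bytes): " + Files.size(apk));
        System.out.println("SHA-256:      " + hex);
    }
}
```

For very large files, a streaming approach (for example, wrapping the input in a DigestInputStream) avoids loading the whole file into memory at once.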
-
Installing Dolphin Emulator Android 6.0 APK
-
The next step to use Dolphin Emulator on your Android device is to install the APK file on your device. However, since Dolphin Emulator is not available on the Google Play Store, you need to install it manually from an external source. This means that you need to grant permissions and overcome security restrictions that may prevent you from installing apps from unknown sources.
-
How to install Dolphin Emulator Android 6.0 APK on your Android device?
-
To install Dolphin Emulator Android 6.0 APK on your Android device, follow these steps:
-
-
Locate the downloaded APK file on your device's file manager app or download manager app. You can also use a third-party file manager app, such as ES File Explorer, File Manager, or Solid Explorer to locate the file.
-
Tap on the APK file to open it. A pop-up window will appear asking you to confirm your installation. Tap on Install to proceed.
-
If you see a message saying "For your security, your phone is not allowed to install unknown apps from this source", tap on Settings. This will take you to a screen where you can enable the option to allow installing apps from unknown sources. Depending on your device model and Android version, this option may be called "Unknown sources", "Install unknown apps", "Allow app installs", or something similar. Toggle the switch or check the box next to this option to enable it.
-
Go back to the installation screen and tap on Install again. The installation process will begin and may take a few seconds or minutes depending on your device's speed and performance.
-
Once the installation is complete, you will see a message saying "App installed". Tap on Open to launch Dolphin Emulator on your device. You can also tap on Done to close the installation screen and find Dolphin Emulator on your app drawer or home screen.
-
-
How to grant permissions and overcome security restrictions?
-
Dolphin Emulator requires some permissions and access to certain features and functions of your device in order to work properly. For example, it needs access to your storage, camera, microphone, location, network, etc. You need to grant these permissions and overcome any security restrictions that may prevent Dolphin Emulator from accessing these features and functions.
-
To grant permissions and overcome security restrictions, follow these steps:
-
-
The first time you launch Dolphin Emulator on your device, you will see a series of pop-up windows asking you to grant various permissions to the app. Tap on Allow or Accept for each permission request. You can also tap on Deny or Reject if you don't want to grant a certain permission, but this may affect the performance or functionality of the app.
-
If you want to change or manage the permissions for Dolphin Emulator later, go to your device's settings app and look for the option called "Apps", "Applications", "App Manager", or something similar. Tap on this option and find Dolphin Emulator from the list of installed apps. Tap on Dolphin Emulator and then tap on Permissions. Here you can see all the permissions that Dolphin Emulator has requested and whether they are granted or denied. You can toggle the switch or check the box next to each permission to grant or revoke it.
-
Some features and functions of Dolphin Emulator may be blocked or restricted by your device's security settings, such as battery optimization, data usage, background activity, overlay, etc. These settings may prevent Dolphin Emulator from running smoothly or at all. To overcome these security restrictions, go to your device's settings app and look for the option called "Security", "Privacy", "Battery", "Data", or something similar. Tap on this option and find Dolphin Emulator from the list of apps or features. Tap on Dolphin Emulator and then tap on the option that allows you to disable or bypass the security restriction. For example, you may need to disable battery optimization, allow unrestricted data usage, enable background activity, allow overlay, etc.
-
-
By granting permissions and overcoming security restrictions, you can ensure that Dolphin Emulator can access all the features and functions it needs to run GameCube and Wii games on your Android device.
-
Using Dolphin Emulator Android 6.0 APK
-
After installing Dolphin Emulator Android 6.0 APK on your Android device, you can start using it to play GameCube and Wii games. Before you can do that, however, you need to obtain and load the games on the emulator and customize its graphics and audio settings to suit your device and each game's compatibility. You can also connect physical controllers if you prefer buttons and joysticks, and join online multiplayer matches if you want the social side of gaming.
-
How to obtain and load GameCube and Wii games on Dolphin Emulator Android 6.0 APK?
-
Dolphin Emulator does not come with any GameCube or Wii games pre-installed or included in the app. You need to obtain the games legally from your own discs or backups and load them on the emulator. The games are usually in the form of ISO or WBFS files that contain the game data and can be read by the emulator.
-
To obtain and load GameCube and Wii games on Dolphin Emulator Android 6.0 APK, follow these steps:
-
-
If you have the original GameCube or Wii discs, you can use a disc drive and a software tool, such as CleanRip, RawDump, or FriiDump to rip the discs and create ISO or WBFS files on your computer. You can also use a modded Wii console and a software tool, such as USB Loader GX, WiiFlow, or CFG USB Loader to rip the discs and create ISO or WBFS files on a USB drive.
-
If you have backup copies of GameCube or Wii games, you can use a software tool, such as Wii Backup Manager, Witgui, or Wii Backup Fusion to convert them into ISO or WBFS files on your computer.
-
Once you have the ISO or WBFS files of the games you want to play, you need to transfer them to your Android device's storage. You can use a USB cable, a microSD card, a cloud service, or a wireless method to do so.
-
On your Android device, launch Dolphin Emulator and tap on the Add Folder button on the top right corner of the screen. This will allow you to browse your device's storage and select the folder where you stored your ISO or WBFS files.
-
Dolphin Emulator will scan the folder and display all the games that it can recognize in a grid view. You can tap on any game to see more details about it, such as title, region, size, rating, etc.
-
To load a game, simply tap on its icon and wait for Dolphin Emulator to launch it. You will see a loading screen with some information about the game and the emulator's status.
-
Once the game is loaded, you can start playing it on your Android device using either touch controls or physical controllers.
-
-
How to customize the graphics and audio settings of Dolphin Emulator Android 6.0 APK?
-
Dolphin Emulator allows you to customize the graphics and audio settings of each game according to your device's capabilities and preferences. You can adjust the resolution, aspect ratio, anti-aliasing, anisotropic filtering, texture scaling, frame rate, sound volume, and other options to enhance or optimize your gaming experience. However, you should also be aware that some of these settings may affect the performance or compatibility of the emulator or the game. You may need to experiment with different settings to find the best balance between quality and speed.
-
To customize the graphics and audio settings of Dolphin Emulator Android 6.0 APK, follow these steps:
-
-
On your Android device, launch Dolphin Emulator and tap on the Menu button on the top left corner of the screen. This will open a sidebar with various options.
-
Tap on Settings to access the emulator's settings menu.
-
Tap on Graphics to access the graphics settings menu. Here you can see four tabs: General, Enhancements, Hacks, and Advanced. Each tab contains different options that you can tweak according to your needs and preferences.
-
The General tab allows you to change the basic graphics settings, such as video backend, aspect ratio, resolution, vsync, etc.
-
The Enhancements tab allows you to change the advanced graphics settings, such as anti-aliasing, anisotropic filtering, texture scaling, post-processing effects, etc.
-
The Hacks tab allows you to change the performance-related graphics settings, such as skip EFB access, ignore format changes, store EFB copies to texture only, etc.
-
The Advanced tab allows you to change the experimental graphics settings, such as shader compilation mode, asynchronous shader compilation, etc.
-
To change any of these settings, simply tap on the option and select the value or toggle the switch that suits your needs and preferences. You can also tap on the i icon next to each option to see a brief explanation of what it does and how it affects the emulation.
-
If you want to reset all the graphics settings to their default values, tap on the Reset All Settings button at the bottom of the screen.
-
To save your changes and exit the graphics settings menu, tap on the Back button on your device or emulator.
-
To access the audio settings menu, tap on Audio from the settings menu. Here you can see two options: Enable Sound Output and Volume.
-
To enable or disable sound output from the emulator, toggle the switch next to Enable Sound Output. If you disable sound output, you will not hear any sound from the emulator or the game.
-
To adjust the volume of the sound output from the emulator, drag the slider next to Volume. You can also use your device's volume buttons to adjust the volume.
-
To save your changes and exit the audio settings menu, tap on the Back button on your device or emulator.
-
-
Troubleshooting Dolphin Emulator Android 6.0 APK
-
Dolphin Emulator is a complex software that may encounter some errors and issues during its operation. Some of these errors and issues may be caused by factors such as device specifications, game compatibility, app configuration, network connection, etc. Some of them may be easy to fix or resolve by following some simple steps or tips. Some of them may require more advanced or technical solutions or assistance from the developers or support team.
-
How to fix common errors and issues with Dolphin Emulator Android 6.0 APK?
-
To fix common errors and issues with Dolphin Emulator Android 6.0 APK, follow these steps:
-
-
If you experience crashes, freezes, slowdowns, glitches, or other performance problems with Dolphin Emulator or a game, try these tips:
-
-
Close any other apps or processes that may be running in the background and consuming your device's resources.
-
Clean your device's cache and memory using a cleaning app or tool.
-
Restart your device and launch Dolphin Emulator again.
-
Lower or disable some of the graphics and audio settings that may be taxing your device's capabilities.
-
Check if your device meets the minimum system requirements for Dolphin Emulator and the game you are trying to play.
-
Update Dolphin Emulator to the latest version available.
-
Update your device's software and drivers to the latest version available.
-
Check if the game you are trying to play is compatible with Dolphin Emulator and your device. You can use the compatibility list on the official website or the game wiki to see the compatibility rating, issues, and solutions for each game.
-
Try a different version or build of Dolphin Emulator or the game you are trying to play. You can find older or newer versions or builds of Dolphin Emulator on the download page or the development versions page. You can find different versions or regions of GameCube and Wii games on various websites or sources.
-
Try a different game file format or compression method. Dolphin Emulator supports ISO and WBFS file formats, as well as compressed formats such as GCZ, CISO, RVZ, etc. Some formats or compression methods may work better or worse than others depending on the game and your device.
-
Report the error or issue to the developers or support team of Dolphin Emulator. You can use the issue tracker on GitHub, the forums, the Discord server, or the contact form on the official website to report the error or issue. Provide as much information as possible, such as your device model and specifications, Dolphin Emulator version and settings, game name and version, error message and screenshot, steps to reproduce the error or issue, etc.
-
-
If you experience problems with downloading, installing, updating, or uninstalling Dolphin Emulator Android 6.0 APK, try these tips:
-
-
Check your device's storage space and make sure you have enough free space to download, install, update, or uninstall Dolphin Emulator Android 6.0 APK.
-
Check your device's network connection and make sure you have a stable and fast internet connection to download, install, update, or uninstall Dolphin Emulator Android 6.0 APK.
-
Check your device's security settings and make sure you have enabled the option to allow installing apps from unknown sources.
-
Check the integrity and safety of the APK file you downloaded and make sure it is authentic, complete, and free from any malicious code or modification.
-
Use a reliable and reputable source to download Dolphin Emulator Android 6.0 APK. Avoid sources that may provide fake or corrupted files that may not work properly or cause errors and issues with your emulator.
-
Use a file manager app or tool to locate and manage the APK file on your device's storage. Avoid renaming, moving, deleting, or modifying the APK file in any way that may affect its installation or operation.
-
If you want to update Dolphin Emulator Android 6.0 APK, you can either download and install the latest version from the official website or source, or use the built-in updater feature in the app's settings menu. Do not use both methods at the same time as this may cause conflicts or errors.
-
If you want to uninstall Dolphin Emulator Android 6.0 APK, you can either use your device's settings app or a third-party uninstaller app or tool to remove it from your device. Make sure you also delete any leftover files or folders related to Dolphin Emulator Android 6.0 APK from your device's storage.
-
-
-
Conclusion
-
Dolphin Emulator Android 6.0 APK is a great way to play Nintendo GameCube and Wii games on your Android device. It has many features and benefits that make it one of the best emulators available for Android. However, it also has some requirements and challenges that you need to be aware of before using it on your device. You need to download, install, and use it properly according to your device's specifications and preferences. You also need to troubleshoot some errors and issues that may arise during its operation.
-
In this article, I have provided you with a comprehensive guide on how to download, install, and use Dolphin Emulator Android 6.0 APK on your Android device. I have also answered some frequently asked questions and shared some user reviews about this emulator. I hope this article has been helpful and informative for you.
-
If you have any questions, comments, feedback, or suggestions about this article or Dolphin Emulator Android 6.0 APK, please feel free to leave them below. I would love to hear from you and help you out. Thank you for reading and happy gaming!
-
Frequently Asked Questions
-
Here are some of the most frequently asked questions about Dolphin Emulator Android 6.0 APK:
-
Is Dolphin Emulator Android 6.0 APK legal?
-
Dolphin Emulator Android 6.0 APK is legal as long as you use it for personal and non-commercial purposes. Dolphin Emulator is a free and open-source software that does not violate any intellectual property rights or laws. However, the games that you play on Dolphin Emulator may be subject to copyright and licensing restrictions. You should only play games that you own legally from your own discs or backups. You should not download, share, or distribute games that you do not own or have permission to use.
-
Is Dolphin Emulator Android 6.0 APK safe?
-
Dolphin Emulator Android 6.0 APK is safe as long as you download it from a reliable and reputable source, such as the official website or source. You should also verify the integrity and safety of the downloaded file before installing it on your device. You should also scan the file with a reputable antivirus or anti-malware program to detect and remove any malicious code or modification that may be hidden in the file. You should also grant permissions and overcome security restrictions that may prevent Dolphin Emulator from accessing certain features and functions of your device.
-
Is Dolphin Emulator Android 6.0 APK compatible with my device?
-
Dolphin Emulator Android 6.0 APK is compatible with most Android devices that run on Android 5.0 (Lollipop) or higher and have a 64-bit processor (ARMv8 or x86_64). However, some devices may not be able to run Dolphin Emulator or some games smoothly or at all due to their hardware limitations or software issues. You should check your device's specifications and compare them with the minimum system requirements for Dolphin Emulator and the game you want to play. You should also check the compatibility list on the official website or the game wiki to see if your device and game are compatible with Dolphin Emulator.
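As a rough sketch of how those two requirements map to real Android APIs (the class and method names here are illustrative, not part of Dolphin), an app could check the OS version and the presence of a 64-bit ABI like this:

```java
import android.os.Build;

public final class DeviceRequirements {

    private DeviceRequirements() {}

    // Returns true if the device meets the minimums quoted above:
    // Android 5.0 (API level 21) or newer and at least one 64-bit ABI.
    public static boolean meetsMinimums() {
        if (Build.VERSION.SDK_INT < Build.VERSION_CODES.LOLLIPOP) {
            // Older than Android 5.0, so the 64-bit ABI field below
            // does not exist on this device.
            return false;
        }
        // SUPPORTED_64_BIT_ABIS was added in API 21 and lists ABIs such as
        // arm64-v8a or x86_64; an empty array means a 32-bit-only device.
        return Build.SUPPORTED_64_BIT_ABIS.length > 0;
    }
}
```

Calling a check like this before pointing users at the APK would let an app warn up front when a device cannot run the emulator.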
-
How can I improve the performance of Dolphin Emulator Android 6.0 APK?
-
You can improve the performance of Dolphin Emulator Android 6.0 APK by following these tips:
-
-
Use a powerful device that can handle the emulation workload.
-
Close any other apps or processes that may be running in the background and consuming your device's resources.
-
Clean your device's cache and memory using a cleaning app or tool.
-
Restart your device and launch Dolphin Emulator again.
-
Lower or disable some of the graphics and audio settings that may be taxing your device's capabilities.
-
Update Dolphin Emulator to the latest version available.
-
Update your device's software and drivers to the latest version available.
-
Try a different version or build of Dolphin Emulator or the game you are trying to play.
-
Try a different game file format or compression method.
-
Report any errors or issues to the developers or support team of Dolphin Emulator.
-
-
How can I get more games for Dolphin Emulator Android 6.0 APK?
-
You can get more games for Dolphin Emulator Android 6.0 APK by following these steps:
-
-
If you have the original GameCube or Wii discs, you can use a disc drive and a software tool, such as CleanRip, RawDump, or FriiDump to rip the discs and create ISO or WBFS files on your computer. You can also use a modded Wii console and a software tool, such as USB Loader GX, WiiFlow, or CFG USB Loader to rip the discs and create ISO or WBFS files on a USB drive.
-
If you have backup copies of GameCube or Wii games, you can use a software tool, such as Wii Backup Manager, Witgui, or Wii Backup Fusion to convert them into ISO or WBFS files on your computer.
-
If you want to download GameCube or Wii games from the internet, you can use various websites or sources that offer them legally and safely. However, you should be careful when downloading from these sources, as they may not be trustworthy or safe. Some of them may contain malware, viruses, spyware, adware, or other unwanted programs that can harm your device or compromise your privacy. Some of them may also provide fake or corrupted files that may not work properly or cause errors and issues with your emulator.
-
Once you have the ISO or WBFS files of the games you want to play, you need to transfer them to your Android device's storage. You can use a USB cable, a microSD card, a cloud service, or a wireless method to do so.
-
On your Android device, launch Dolphin Emulator and tap on the Add Folder button on the top right corner of the screen. This will allow you to browse your device's storage and select the folder where you stored your ISO or WBFS files.
-
Dolphin Emulator will scan the folder and display all the games that it can recognize in a grid view. You can tap on any game to see more details about it, such as title, region, size, rating, etc.
-
To load a game, simply tap on its icon and wait for Dolphin Emulator to launch it. You will see a loading screen with some information about the game and the emulator's status.
-
Once the game is loaded, you can start playing it on your Android device using either touch controls or physical controllers.
-
-
User Reviews
-
Here are some of the user reviews about Dolphin Emulator Android 6.0 APK from various sources:
-
Positive Reviews
-
-
"This is the best emulator for GameCube and Wii games on Android. It runs smoothly and has many options to customize. I can play my favorite games in HD with no lag or glitches. The touch controls are responsive and easy to use. The controller support is also great. I can connect my PS4 controller via Bluetooth and play wirelessly. The online multiplayer feature is also amazing. I can play with my friends online using Netplay or Wiimmfi. This emulator is a must-have for any Nintendo fan."
-
"I love this emulator. It allows me to play GameCube and Wii games on my phone that I never got to play before. The graphics are stunning and the sound is clear. The emulation is fast and stable. The settings are easy to understand and adjust. The compatibility list is impressive and growing. The developers are active and responsive. They update the app regularly with new features and bug fixes. They also listen to feedback and suggestions from the users. This emulator is worth every penny."
-
"This emulator is awesome. It works perfectly on my device and has no issues at all. I can play all the games I want with no problems. The graphics are beautiful and the sound is crisp. The emulation is accurate and faithful. The settings are comprehensive and flexible. The compatibility list is extensive and reliable. The developers are amazing and supportive. They update the app frequently with new features and bug fixes. They also communicate with the users and provide help and guidance. This emulator is a masterpiece."
-
-
Negative Reviews
-
-
"This emulator is terrible. It does not work on my device and has many issues. I cannot play any games with it because it crashes, freezes, slows down, glitches, or shows errors. The graphics are ugly and the sound is distorted. The emulation is poor and inaccurate. The settings are confusing and limited. The compatibility list is outdated and inaccurate. The developers are lazy and unresponsive. They do not update the app regularly or fix any bugs or issues. They also ignore feedback and complaints from the users. This emulator is a waste of time."
-
"I hate this emulator. It works poorly on my device and has many problems. I can only play a few games with it because it lags, skips, flickers, or shows errors. The graphics are low-quality and the sound is noisy. The emulation is slow and unstable. The settings are hard to understand and change. The compatibility list is incomplete and unreliable. The developers are rude and unhelpful. They do not update the app often or fix any bugs or issues. They also argue with feedback and suggestions from the users. This emulator is a joke."
-
"This emulator is disappointing . It works fine on my device but has many limitations. I can play some games with it but not all of them. The graphics are decent but not amazing. The sound is okay but not great. The emulation is fast but not smooth. The settings are simple but not enough. The compatibility list is accurate but not comprehensive. The developers are nice but not supportive. They update the app sometimes but not regularly. They also accept feedback and suggestions from the users but not implement them. This emulator is a letdown."
-
-
-
This is the end of the article. I hope you enjoyed reading it and learned something new about Dolphin Emulator Android 6.0 APK. If you did, please share it with your friends and family who may also be interested in this topic. If you have any questions, comments, feedback, or suggestions about this article or Dolphin Emulator Android 6.0 APK, please leave them below. I would love to hear from you and help you out. Thank you for reading and happy gaming!
-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Age of Conquest IV and Create Your Own Custom Maps and Scenarios.md b/spaces/1phancelerku/anime-remove-background/Download Age of Conquest IV and Create Your Own Custom Maps and Scenarios.md
deleted file mode 100644
index d85ea6f06cb3f606f588d19a67988028108f3b15..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Age of Conquest IV and Create Your Own Custom Maps and Scenarios.md
+++ /dev/null
@@ -1,190 +0,0 @@
-
-
Age of Conquest IV: A Turn-Based Grand Strategy Wargame
-
Do you love strategy games that let you command your armies in historical and fictional scenarios? Do you enjoy playing solo or with your friends in cross-platform multiplayer matches? Do you want to create your own custom maps and scenarios with a map editor? If you answered yes to any of these questions, then you might want to check out Age of Conquest IV, a turn-based grand strategy wargame that offers all these features and more.
Age of Conquest IV is a game developed and published by Noble Master LLC, a small indie studio based in Hawaii. It was released in 2016 for Windows, Mac, Linux, Android, iOS, and web browsers. It is the fourth installment in the Age of Conquest series, which started in 2002 as a Java applet game.
-
Age of Conquest IV is a game that lets you create your own warring experience by choosing from hundreds of factions and maps that span from ancient to modern times. You can play as the Roman Empire, the Inca, France, Russia, Japan, or the Chinese Dynasties, among many others. You can also play on maps that depict Europe, Colonization, Asian Empires, American Wars, World Conquest, and more.
-
The game is turn-based, meaning that you and your opponents take turns to move your units, build your economy, conduct diplomacy, and wage war. The game has a streamlined user interface that makes it easy to learn and play. You can play against the computer AI, which has different difficulty levels and personalities. You can also play online or locally with other players in cross-platform multiplayer matches. You can form alliances and fight co-op style with the AI and other players for ultimate victory.
-
Features of Age of Conquest IV
-
Age of Conquest IV has many features that make it a fun and challenging game for strategy lovers. Here are some of them:
-
Ancient to Modern
-
The game offers a variety of map scenarios that cover different time periods and regions of the world. You can play on historical maps that depict real events and conflicts, such as the Rise of Rome, the Hundred Years' War, the Napoleonic Wars, or the Cold War. You can also play on fictional maps that imagine alternative scenarios or fantasy worlds, such as Middle Earth, Westeros, or Atlantis.
-
Diplomacy & Economy
-
The game also features a diplomacy and economy system that adds depth and realism to the gameplay. You can negotiate with other factions for peace, trade, alliances, or war. You can also manage your population, happiness, and taxes in each province. You have to balance your income and expenses, as well as deal with rebellions and revolts if your people are unhappy.
-
Single & Multiplayer
-
The game allows you to play solo or with others in various modes. You can play skirmish matches against the AI or hotseat with friends and family on the same device. You can also play online with other players from around the world in cross-platform multiplayer matches. The game has a ranking and rating system that tracks your performance and skill level. You can also chat with other players and join clans for more social interaction.
-
Modding
-
The game also supports modding, which means that you can create your own custom maps and scenarios with a map editor. You can use the built-in tools to design your own terrain, provinces, factions, and units. You can also import and export your maps and share them with other players. You can also download and play maps created by other players from the online map store. You can rate and comment on the maps you play and give feedback to the creators.
-
How to Download Age of Conquest IV?
-
If you are interested in playing Age of Conquest IV, you have several options to download the game. Here are some of them:
-
-
Direct Downloads
-
You can download the game directly from the official website of Noble Master LLC. The website offers downloads for Windows, Mac, Linux, Android, and iOS devices. You can also play the game online on your web browser without downloading anything. The direct downloads are free, but they have some limitations, such as fewer maps and factions, and no multiplayer mode. You can unlock the full version of the game by purchasing a license key for $4.99 USD.
-
3rd Party Downloads
-
You can also download the game from 3rd party platforms, such as Steam, Google Play, App Store, or Amazon. These platforms offer the full version of the game for a similar price as the direct downloads. You can also enjoy some additional features, such as achievements, leaderboards, cloud saves, and more. However, you may need to create an account and install additional software to use these platforms.
-
What are the System Requirements for Age of Conquest IV?
-
Age of Conquest IV is a relatively low-spec game that can run on most devices. However, you may still want to check the system requirements before downloading the game to ensure a smooth gameplay experience. Here are the minimum and recommended requirements for the game:
-
Minimum Requirements
-
| OS | CPU | RAM | Graphics | Storage |
| --- | --- | --- | --- | --- |
| Windows XP or later | 1 GHz single-core processor | 512 MB | OpenGL 2.0 compatible with 128 MB VRAM | 150 MB |
| Mac OS X 10.7 or later | 1 GHz single-core processor | 512 MB | OpenGL 2.0 compatible with 128 MB VRAM | 150 MB |
| Linux (Ubuntu 12.04 or later) | 1 GHz single-core processor | 512 MB | OpenGL 2.0 compatible with 128 MB VRAM | 150 MB |
| Android 4.0 or later | 1 GHz single-core processor | 512 MB | N/A | N/A |
| iOS 8.0 or later | N/A | N/A | N/A | N/A |
| Web Browser (Chrome, Firefox, Safari, Edge) | N/A | N/A | N/A | N/A |
-
-
Recommended Requirements
-
| OS | CPU | RAM | Graphics | Storage |
| --- | --- | --- | --- | --- |
| Windows 7 or later | 2 GHz dual-core processor | 1 GB | OpenGL 2.0 compatible with 256 MB VRAM | 300 MB |
| Mac OS X 10.10 or later | 2 GHz dual-core processor | 1 GB | OpenGL 2.0 compatible with 256 MB VRAM | 300 MB |
| Linux (Ubuntu 14.04 or later) | 2 GHz dual-core processor | 1 GB | OpenGL 2.0 compatible with 256 MB VRAM | 300 MB |
| Android 5.0 or later | 2 GHz dual-core processor | 1 GB | N/A | N/A |
| iOS 10.0 or later | N/A | N/A | N/A | N/A |
-
-
How to Play Age of Conquest IV?
-
If you have downloaded and installed Age of Conquest IV, you may be wondering how to play the game. Here are some steps to help you get started:
-
Tutorial
-
The game has a tutorial mode that teaches you the basics of the game, such as how to move your units, build your economy, conduct diplomacy, and wage war. The tutorial mode consists of several missions that guide you through different aspects of the game. You can access the tutorial mode from the main menu by clicking on the "Tutorial" button. You can also watch video tutorials on the official website or YouTube channel of Noble Master LLC.
-
Tips and Tricks
-
The game also has a tips and tricks section that gives you some useful advice and information on how to play the game better. You can access the tips and tricks section from the main menu by clicking on the "Tips & Tricks" button. You can also find more tips and tricks on the official forum or wiki of Noble Master LLC.
-
Conclusion
-
Age of Conquest IV is a turn-based grand strategy wargame that lets you create your own warring experience by choosing from hundreds of factions and maps that span from ancient to modern times. You can play solo or with others in cross-platform multiplayer matches. You can also create your own custom maps and scenarios with a map editor. The game is easy to learn and play, but challenging and rewarding to master. If you are a fan of strategy games, you should definitely give Age of Conquest IV a try.
-
If you have any questions or feedback about the game, you can contact Noble Master LLC through their official website, email, or social media accounts. You can also join their community of players and modders on their forum, wiki, discord, or reddit.
-
Frequently Asked Questions (FAQs)
-
Here are some common questions and answers about Age of Conquest IV:
-
-
Is Age of Conquest IV free?
-
The game is free to download and play, but it has some limitations, such as fewer maps and factions, and no multiplayer mode. You can unlock the full version of the game by purchasing a license key for $4.99 USD.
-
Is Age of Conquest IV online?
-
The game has an online mode that allows you to play with other players from around the world in cross-platform multiplayer matches. You need an internet connection and an account to play online.
-
Is Age of Conquest IV offline?
-
The game has an offline mode that allows you to play solo or hotseat with friends and family on the same device. You do not need an internet connection or an account to play offline.
-
Is Age of Conquest IV historical?
-
The game has historical maps that depict real events and conflicts, such as the Rise of Rome, the Hundred Years' War, the Napoleonic Wars, or the Cold War. The game also has fictional maps that imagine alternative scenarios or fantasy worlds, such as Middle Earth, Westeros, or Atlantis.
-
Is Age of Conquest IV realistic?
-
The game is not meant to be a realistic simulation of history or warfare, but rather a fun and challenging strategy game that offers a variety of map scenarios and gameplay options. The game does not aim to be historically accurate or politically correct, but rather to provide an enjoyable, creative, and diverse warring experience.
197e85843d
-
-
\ No newline at end of file
diff --git a/spaces/1toTree/lora_test/env.py b/spaces/1toTree/lora_test/env.py
deleted file mode 100644
index 29997bf1a7590c3d3e44aa85fa0948565a123e60..0000000000000000000000000000000000000000
--- a/spaces/1toTree/lora_test/env.py
+++ /dev/null
@@ -1,13 +0,0 @@
-############################################################################################################################
-# Modify the parameters below
-# (1) BASE_MODEL_NAME is the base model you trained on
-BASE_MODEL_NAME = "runwayml/stable-diffusion-v1-5"
-
-# Whether to enable LoRA
-# (2) LORA_WEIGHTS_PATH is the LoRA weights you uploaded to Hugging Face.
-# LORA_WEIGHTS_PATH = None means LoRA is not used
-LORA_WEIGHTS_PATH = "1toTree/demo_test"
-
-# (3) PROMPTS is the prompt text to showcase
-PROMPTS = "cartoon face"
-############################################################################################################################
\ No newline at end of file
diff --git a/spaces/2023Liu2023/bingo/src/components/chat-panel.tsx b/spaces/2023Liu2023/bingo/src/components/chat-panel.tsx
deleted file mode 100644
index 1fbc3c2bf05b914e0c229661832fbb560745f488..0000000000000000000000000000000000000000
--- a/spaces/2023Liu2023/bingo/src/components/chat-panel.tsx
+++ /dev/null
@@ -1,153 +0,0 @@
-'use client'
-
-import * as React from 'react'
-import Image from 'next/image'
-import Textarea from 'react-textarea-autosize'
-import { useAtomValue } from 'jotai'
-import { useEnterSubmit } from '@/lib/hooks/use-enter-submit'
-import { cn } from '@/lib/utils'
-
-import BrushIcon from '@/assets/images/brush.svg'
-import ChatIcon from '@/assets/images/chat.svg'
-import VisualSearchIcon from '@/assets/images/visual-search.svg'
-import SendIcon from '@/assets/images/send.svg'
-import PinIcon from '@/assets/images/pin.svg'
-import PinFillIcon from '@/assets/images/pin-fill.svg'
-
-import { useBing } from '@/lib/hooks/use-bing'
-import { voiceListenAtom } from '@/state'
-import Voice from './voice'
-import { ChatImage } from './chat-image'
-import { ChatAttachments } from './chat-attachments'
-
-export interface ChatPanelProps
- extends Pick<
-    ReturnType<typeof useBing>,
- | 'generating'
- | 'input'
- | 'setInput'
- | 'sendMessage'
- | 'resetConversation'
- | 'isSpeaking'
- | 'attachmentList'
- | 'uploadImage'
- | 'setAttachmentList'
- > {
- id?: string
- className?: string
-}
-
-export function ChatPanel({
- isSpeaking,
- generating,
- input,
- setInput,
- className,
- sendMessage,
- resetConversation,
- attachmentList,
- uploadImage,
- setAttachmentList
-}: ChatPanelProps) {
- const inputRef = React.useRef(null)
- const {formRef, onKeyDown} = useEnterSubmit()
- const [focused, setFocused] = React.useState(false)
- const [active, setActive] = React.useState(false)
- const [pin, setPin] = React.useState(false)
- const [tid, setTid] = React.useState()
- const voiceListening = useAtomValue(voiceListenAtom)
-
- const setBlur = React.useCallback(() => {
- clearTimeout(tid)
- setActive(false)
- const _tid = setTimeout(() => setFocused(false), 2000);
- setTid(_tid)
- }, [tid])
-
- const setFocus = React.useCallback(() => {
- setFocused(true)
- setActive(true)
- clearTimeout(tid)
- inputRef.current?.focus()
- }, [tid])
-
- React.useEffect(() => {
- if (input) {
- setFocus()
- }
- }, [input])
-
- return (
-
- )
-}
diff --git a/spaces/232labs/VToonify/vtoonify/model/stylegan/op/readme.md b/spaces/232labs/VToonify/vtoonify/model/stylegan/op/readme.md
deleted file mode 100644
index 7cffcfc72069ff9a098d292f9e37035031e19081..0000000000000000000000000000000000000000
--- a/spaces/232labs/VToonify/vtoonify/model/stylegan/op/readme.md
+++ /dev/null
@@ -1,12 +0,0 @@
-Code from [rosinality-stylegan2-pytorch-cp](https://github.com/senior-sigan/rosinality-stylegan2-pytorch-cpu)
-
-Scripts to convert rosinality/stylegan2-pytorch to the CPU compatible format
-
-If you would like to use the CPU for testing, or have a problem with the cpp extension (fused and upfirdn2d), please make the following changes:
-
-Change `model.stylegan.op` to `model.stylegan.op_cpu`
-https://github.com/williamyang1991/VToonify/blob/01b383efc00007f9b069585db41a7d31a77a8806/util.py#L14
-
-https://github.com/williamyang1991/VToonify/blob/01b383efc00007f9b069585db41a7d31a77a8806/model/simple_augment.py#L12
-
-https://github.com/williamyang1991/VToonify/blob/01b383efc00007f9b069585db41a7d31a77a8806/model/stylegan/model.py#L11
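-
-For reference, here is a minimal sketch of the change in `util.py` (the imported symbols below are illustrative; keep whatever names your copy actually imports from `model.stylegan.op`):
-
-```python
-# Before (GPU build using the cpp/CUDA extension):
-# from model.stylegan.op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
-
-# After (CPU-compatible build):
-from model.stylegan.op_cpu import FusedLeakyReLU, fused_leaky_relu, upfirdn2d
-```
-
-The same one-line import change applies to `model/simple_augment.py` and `model/stylegan/model.py` linked above.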
diff --git a/spaces/2ndelement/voicevox/voicevox_engine/downloadable_library.py b/spaces/2ndelement/voicevox/voicevox_engine/downloadable_library.py
deleted file mode 100644
index e4abf88b9e4ec7d971d30bf0e226e1584c17c23b..0000000000000000000000000000000000000000
--- a/spaces/2ndelement/voicevox/voicevox_engine/downloadable_library.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import base64
-import json
-import zipfile
-from io import BytesIO
-from pathlib import Path
-from typing import List
-
-from fastapi import HTTPException
-
-from voicevox_engine.model import DownloadableLibrary
-
-__all__ = ["LibraryManager"]
-
-INFO_FILE = "metas.json"
-
-
-class LibraryManager:
- def __init__(self, library_root_dir: Path):
- self.library_root_dir = library_root_dir
- self.library_root_dir.mkdir(exist_ok=True)
-
- def downloadable_libraries(self):
-        # == If fetching the download info over the network
-        # url = "https://example.com/downloadable_libraries.json"
-        # response = requests.get(url)
-        # return list(map(DownloadableLibrary.parse_obj, response.json()))
-
-        # == If reading the download info from a json file
-        # with open(
-        #     self.root_dir / "engine_manifest_assets" / "downloadable_libraries.json",
-        #     encoding="utf-8",
-        # ) as f:
-        #     return list(map(DownloadableLibrary.parse_obj, json.load(f)))
-
-        # As a dummy, load the speaker_info assets instead
- with open(
- "./engine_manifest_assets/downloadable_libraries.json",
- encoding="utf-8",
- ) as f:
- libraries = json.load(f)
- speaker_info = libraries[0]["speakers"][0]["speaker_info"]
- mock_root_dir = Path("./speaker_info/7ffcb7ce-00ec-4bdc-82cd-45a8889e43ff")
- speaker_info["policy"] = (mock_root_dir / "policy.md").read_text()
- speaker_info["portrait"] = base64.b64encode(
- (mock_root_dir / "portrait.png").read_bytes()
- )
- for style_info in speaker_info["style_infos"]:
- style_id = style_info["id"]
- style_info["icon"] = base64.b64encode(
- (mock_root_dir / "icons" / f"{style_id}.png").read_bytes()
- )
- style_info["voice_samples"] = [
- base64.b64encode(
- (
- mock_root_dir / "voice_samples" / f"{style_id}_{i:0>3}.wav"
- ).read_bytes()
- )
- for i in range(1, 4)
- ]
- return list(map(DownloadableLibrary.parse_obj, libraries))
-
- def installed_libraries(self) -> List[DownloadableLibrary]:
- library = []
- for library_dir in self.library_root_dir.iterdir():
- if library_dir.is_dir():
- with open(library_dir / INFO_FILE, encoding="utf-8") as f:
- library.append(json.load(f))
- return library
-
- def install_library(self, library_id: str, file: BytesIO):
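-        # Look up the library metadata by uuid, write it to metas.json in the library's
-        # own directory, validate the uploaded ZIP, then unpack it under library_root_dir.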
- for downloadable_library in self.downloadable_libraries():
- if downloadable_library.uuid == library_id:
- library_info = downloadable_library.dict()
- break
- else:
- raise HTTPException(status_code=404, detail="指定された音声ライブラリが見つかりません。")
- library_dir = self.library_root_dir / library_id
- library_dir.mkdir(exist_ok=True)
- with open(library_dir / INFO_FILE, "w", encoding="utf-8") as f:
- json.dump(library_info, f, indent=4, ensure_ascii=False)
- with zipfile.ZipFile(file) as zf:
- if zf.testzip() is not None:
- raise HTTPException(status_code=422, detail="不正なZIPファイルです。")
-
- zf.extractall(library_dir)
- return library_dir
diff --git a/spaces/777DUKE/Ballin/README.md b/spaces/777DUKE/Ballin/README.md
deleted file mode 100644
index f446f58fa7b52e7e0474e61c078bac675263a025..0000000000000000000000000000000000000000
--- a/spaces/777DUKE/Ballin/README.md
+++ /dev/null
@@ -1,10 +0,0 @@
----
-title: Ballin
-emoji: 🐠
-colorFrom: green
-colorTo: indigo
-sdk: docker
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/adversarial/discriminators/msd.py b/spaces/AIConsultant/MusicGen/audiocraft/adversarial/discriminators/msd.py
deleted file mode 100644
index c4e67e29b46ab22f6ffeec85ffc64d8b99800b1b..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/adversarial/discriminators/msd.py
+++ /dev/null
@@ -1,126 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import typing as tp
-
-import numpy as np
-import torch
-import torch.nn as nn
-
-from ...modules import NormConv1d
-from .base import MultiDiscriminator, MultiDiscriminatorOutputType
-
-
-class ScaleDiscriminator(nn.Module):
- """Waveform sub-discriminator.
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- kernel_sizes (Sequence[int]): Kernel sizes for first and last convolutions.
- filters (int): Number of initial filters for convolutions.
- max_filters (int): Maximum number of filters.
- downsample_scales (Sequence[int]): Scale for downsampling implemented as strided convolutions.
- inner_kernel_sizes (Sequence[int] or None): Kernel sizes for inner convolutions.
- groups (Sequence[int] or None): Groups for inner convolutions.
- strides (Sequence[int] or None): Strides for inner convolutions.
- paddings (Sequence[int] or None): Paddings for inner convolutions.
- norm (str): Normalization method.
- activation (str): Activation function.
- activation_params (dict): Parameters to provide to the activation function.
- pad (str): Padding for initial convolution.
- pad_params (dict): Parameters to provide to the padding module.
- """
- def __init__(self, in_channels=1, out_channels=1, kernel_sizes: tp.Sequence[int] = [5, 3],
- filters: int = 16, max_filters: int = 1024, downsample_scales: tp.Sequence[int] = [4, 4, 4, 4],
- inner_kernel_sizes: tp.Optional[tp.Sequence[int]] = None, groups: tp.Optional[tp.Sequence[int]] = None,
- strides: tp.Optional[tp.Sequence[int]] = None, paddings: tp.Optional[tp.Sequence[int]] = None,
- norm: str = 'weight_norm', activation: str = 'LeakyReLU',
- activation_params: dict = {'negative_slope': 0.2}, pad: str = 'ReflectionPad1d',
- pad_params: dict = {}):
- super().__init__()
- assert len(kernel_sizes) == 2
- assert kernel_sizes[0] % 2 == 1
- assert kernel_sizes[1] % 2 == 1
- assert (inner_kernel_sizes is None or len(inner_kernel_sizes) == len(downsample_scales))
- assert (groups is None or len(groups) == len(downsample_scales))
- assert (strides is None or len(strides) == len(downsample_scales))
- assert (paddings is None or len(paddings) == len(downsample_scales))
- self.activation = getattr(torch.nn, activation)(**activation_params)
- self.convs = nn.ModuleList()
- self.convs.append(
- nn.Sequential(
- getattr(torch.nn, pad)((np.prod(kernel_sizes) - 1) // 2, **pad_params),
- NormConv1d(in_channels, filters, kernel_size=np.prod(kernel_sizes), stride=1, norm=norm)
- )
- )
-
- in_chs = filters
- for i, downsample_scale in enumerate(downsample_scales):
- out_chs = min(in_chs * downsample_scale, max_filters)
- default_kernel_size = downsample_scale * 10 + 1
- default_stride = downsample_scale
- default_padding = (default_kernel_size - 1) // 2
- default_groups = in_chs // 4
- self.convs.append(
- NormConv1d(in_chs, out_chs,
- kernel_size=inner_kernel_sizes[i] if inner_kernel_sizes else default_kernel_size,
- stride=strides[i] if strides else default_stride,
- groups=groups[i] if groups else default_groups,
- padding=paddings[i] if paddings else default_padding,
- norm=norm))
- in_chs = out_chs
-
- out_chs = min(in_chs * 2, max_filters)
- self.convs.append(NormConv1d(in_chs, out_chs, kernel_size=kernel_sizes[0], stride=1,
- padding=(kernel_sizes[0] - 1) // 2, norm=norm))
- self.conv_post = NormConv1d(out_chs, out_channels, kernel_size=kernel_sizes[1], stride=1,
- padding=(kernel_sizes[1] - 1) // 2, norm=norm)
-
- def forward(self, x: torch.Tensor):
- fmap = []
- for layer in self.convs:
- x = layer(x)
- x = self.activation(x)
- fmap.append(x)
- x = self.conv_post(x)
- fmap.append(x)
- # x = torch.flatten(x, 1, -1)
- return x, fmap
-
-
-class MultiScaleDiscriminator(MultiDiscriminator):
- """Multi-Scale (MSD) Discriminator,
-
- Args:
- in_channels (int): Number of input channels.
- out_channels (int): Number of output channels.
- downsample_factor (int): Downsampling factor between the different scales.
- scale_norms (Sequence[str]): Normalization for each sub-discriminator.
- **kwargs: Additional args for ScaleDiscriminator.
- """
- def __init__(self, in_channels: int = 1, out_channels: int = 1, downsample_factor: int = 2,
- scale_norms: tp.Sequence[str] = ['weight_norm', 'weight_norm', 'weight_norm'], **kwargs):
- super().__init__()
- self.discriminators = nn.ModuleList([
- ScaleDiscriminator(in_channels, out_channels, norm=norm, **kwargs) for norm in scale_norms
- ])
- self.downsample = nn.AvgPool1d(downsample_factor * 2, downsample_factor, padding=downsample_factor)
-
- @property
- def num_discriminators(self):
- return len(self.discriminators)
-
- def forward(self, x: torch.Tensor) -> MultiDiscriminatorOutputType:
- logits = []
- fmaps = []
- for i, disc in enumerate(self.discriminators):
- if i != 0:
-                x = self.downsample(x)  # feed each successive scale a downsampled signal
- logit, fmap = disc(x)
- logits.append(logit)
- fmaps.append(fmap)
- return logits, fmaps
diff --git a/spaces/AIConsultant/MusicGen/audiocraft/optim/cosine_lr_scheduler.py b/spaces/AIConsultant/MusicGen/audiocraft/optim/cosine_lr_scheduler.py
deleted file mode 100644
index 1e4f0bbf28f1ad893a301f1bfac1da8e97370337..0000000000000000000000000000000000000000
--- a/spaces/AIConsultant/MusicGen/audiocraft/optim/cosine_lr_scheduler.py
+++ /dev/null
@@ -1,48 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-import math
-
-from torch.optim import Optimizer
-from torch.optim.lr_scheduler import _LRScheduler
-
-
-class CosineLRScheduler(_LRScheduler):
- """Cosine LR scheduler.
-
- Args:
- optimizer (Optimizer): Torch optimizer.
- warmup_steps (int): Number of warmup steps.
- total_steps (int): Total number of steps.
-        lr_min_ratio (float): Minimum learning rate, expressed as a ratio of the base learning rate.
-        cycle_length (float): Cosine cycle length; 1.0 gives a single half-cosine over the steps after warmup.
- """
- def __init__(self, optimizer: Optimizer, total_steps: int, warmup_steps: int,
- lr_min_ratio: float = 0.0, cycle_length: float = 1.0):
- self.warmup_steps = warmup_steps
- assert self.warmup_steps >= 0
- self.total_steps = total_steps
- assert self.total_steps >= 0
- self.lr_min_ratio = lr_min_ratio
- self.cycle_length = cycle_length
- super().__init__(optimizer)
-
- def _get_sched_lr(self, lr: float, step: int):
- if step < self.warmup_steps:
- lr_ratio = step / self.warmup_steps
- lr = lr_ratio * lr
- elif step <= self.total_steps:
- s = (step - self.warmup_steps) / (self.total_steps - self.warmup_steps)
- lr_ratio = self.lr_min_ratio + 0.5 * (1 - self.lr_min_ratio) * \
- (1. + math.cos(math.pi * s / self.cycle_length))
- lr = lr_ratio * lr
- else:
- lr_ratio = self.lr_min_ratio
- lr = lr_ratio * lr
- return lr
-
- def get_lr(self):
- return [self._get_sched_lr(lr, self.last_epoch) for lr in self.base_lrs]
diff --git a/spaces/AONYLMR/White-box-Cartoonization/README.md b/spaces/AONYLMR/White-box-Cartoonization/README.md
deleted file mode 100644
index 9860239cf42c94e385faaaa75a85311e010d64f7..0000000000000000000000000000000000000000
--- a/spaces/AONYLMR/White-box-Cartoonization/README.md
+++ /dev/null
@@ -1,15 +0,0 @@
----
-python_version: 3.7
-title: White Box Cartoonization
-emoji: 📚
-colorFrom: purple
-colorTo: green
-sdk: gradio
-sdk_version: 2.9.4
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: hylee/White-box-Cartoonization
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/trainer.py b/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/trainer.py
deleted file mode 100644
index 748a21465d7c93ad8fdc374fbc6bd6d40a575ee7..0000000000000000000000000000000000000000
--- a/spaces/Ababababababbababa/Ashaar/poetry_diacritizer/trainer.py
+++ /dev/null
@@ -1,447 +0,0 @@
-import os
-from typing import Dict
-
-from diacritization_evaluation import der, wer
-import torch
-from torch import nn
-from torch import optim
-from torch.cuda.amp import autocast
-from torch.utils.tensorboard.writer import SummaryWriter
-from tqdm import tqdm
-from tqdm import trange
-
-from .config_manager import ConfigManager
-from dataset import load_iterators
-from diacritizer import CBHGDiacritizer, Seq2SeqDiacritizer, GPTDiacritizer
-from poetry_diacritizer.util.learning_rates import LearningRateDecay
-from poetry_diacritizer.options import OptimizerType
-from poetry_diacritizer.util.utils import (
- categorical_accuracy,
- count_parameters,
- initialize_weights,
- plot_alignment,
- repeater,
-)
-
-import wandb
-
-wandb.login()
-
-
-class Trainer:
- def run(self):
- raise NotImplementedError
-
-
-class GeneralTrainer(Trainer):
- def __init__(self, config_path: str, model_kind: str, model_desc: str) -> None:
- self.config_path = config_path
- self.model_kind = model_kind
- self.config_manager = ConfigManager(
- config_path=config_path, model_kind=model_kind
- )
- self.config = self.config_manager.config
- self.losses = []
- self.lr = 0
- self.pad_idx = 0
- self.criterion = nn.CrossEntropyLoss(ignore_index=self.pad_idx)
- self.set_device()
-
- self.config_manager.create_remove_dirs()
- self.text_encoder = self.config_manager.text_encoder
- self.start_symbol_id = self.text_encoder.start_symbol_id
- self.summary_manager = SummaryWriter(log_dir=self.config_manager.log_dir)
- if model_desc == "":
- model_desc = self.model_kind
- wandb.init(project="diacratization", name=model_desc, config=self.config)
- self.model = self.config_manager.get_model()
-
- self.optimizer = self.get_optimizer()
- self.model = self.model.to(self.device)
-
- self.load_model(model_path=self.config.get("train_resume_model_path"))
- self.load_diacritizer()
-
- self.initialize_model()
-
- self.print_config()
-
- def set_device(self):
- if self.config.get("device"):
- self.device = self.config["device"]
- else:
- self.device = "cuda" if torch.cuda.is_available() else "cpu"
-
- def print_config(self):
- self.config_manager.dump_config()
- self.config_manager.print_config()
-
- if self.global_step > 1:
- print(f"loaded form {self.global_step}")
-
- parameters_count = count_parameters(self.model)
- print(f"The model has {parameters_count} trainable parameters parameters")
-
- def load_diacritizer(self):
- if self.model_kind in ["cbhg", "baseline"]:
- self.diacritizer = CBHGDiacritizer(self.config_path, self.model_kind)
- elif self.model_kind in ["seq2seq", "tacotron_based"]:
- self.diacritizer = Seq2SeqDiacritizer(self.config_path, self.model_kind)
- elif self.model_kind in ["gpt"]:
- self.diacritizer = GPTDiacritizer(self.config_path, self.model_kind)
-
- def initialize_model(self):
- if self.global_step > 1:
- return
- if self.model_kind == "transformer":
- print("Initializing using xavier_uniform_")
- self.model.apply(initialize_weights)
-
- def print_losses(self, step_results, tqdm):
- self.summary_manager.add_scalar(
- "loss/loss", step_results["loss"], global_step=self.global_step
- )
-
- tqdm.display(f"loss: {step_results['loss']}", pos=3)
- for pos, n_steps in enumerate(self.config["n_steps_avg_losses"]):
- if len(self.losses) > n_steps:
-
- self.summary_manager.add_scalar(
- f"loss/loss-{n_steps}",
- sum(self.losses[-n_steps:]) / n_steps,
- global_step=self.global_step,
- )
- tqdm.display(
- f"{n_steps}-steps average loss: {sum(self.losses[-n_steps:]) / n_steps}",
- pos=pos + 4,
- )
-
- def evaluate(self, iterator, tqdm, use_target=True, log = True):
- epoch_loss = 0
- epoch_acc = 0
- self.model.eval()
- tqdm.set_description(f"Eval: {self.global_step}")
- with torch.no_grad():
- for batch_inputs in iterator:
- batch_inputs["src"] = batch_inputs["src"].to(self.device)
- batch_inputs["lengths"] = batch_inputs["lengths"].to("cpu")
- if use_target:
- batch_inputs["target"] = batch_inputs["target"].to(self.device)
- else:
- batch_inputs["target"] = None
-
- outputs = self.model(
- src=batch_inputs["src"],
- target=batch_inputs["target"],
- lengths=batch_inputs["lengths"],
- )
-
- predictions = outputs["diacritics"]
-
- predictions = predictions.view(-1, predictions.shape[-1])
- targets = batch_inputs["target"]
- targets = targets.view(-1)
- loss = self.criterion(predictions, targets.to(self.device))
- acc = categorical_accuracy(
- predictions, targets.to(self.device), self.pad_idx
- )
-
- epoch_loss += loss.item()
- epoch_acc += acc.item()
- if log:
- wandb.log({"evaluate_loss": loss.item(), "evaluate_acc": acc.item()})
- tqdm.update()
-
- tqdm.reset()
- return epoch_loss / len(iterator), epoch_acc / len(iterator)
-
- def evaluate_with_error_rates(self, iterator, tqdm, log = True):
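-        # Diacritize a limited number of batches, dump original vs. predicted text to disk,
-        # then compute diacritic (DER) and word (WER) error rates, with and without case endings.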
- all_orig = []
- all_predicted = []
- results = {}
- self.diacritizer.set_model(self.model)
- evaluated_batches = 0
- tqdm.set_description(f"Calculating DER/WER {self.global_step}: ")
- for i, batch in enumerate(iterator):
- if evaluated_batches > int(self.config["error_rates_n_batches"]):
- break
-
- predicted = self.diacritizer.diacritize_batch(batch)
- all_predicted += predicted
- all_orig += batch["original"]
- if i > self.config["max_eval_batches"]:
- break
- tqdm.update()
-
- summary_texts = []
- orig_path = os.path.join(self.config_manager.prediction_dir, f"original.txt")
- predicted_path = os.path.join(
- self.config_manager.prediction_dir, f"predicted.txt"
- )
-
- table = wandb.Table(columns=["original", "predicted"])
- with open(orig_path, "w", encoding="utf8") as file:
- for sentence in all_orig:
- file.write(f"{sentence}\n")
-
- with open(predicted_path, "w", encoding="utf8") as file:
- for sentence in all_predicted:
- file.write(f"{sentence}\n")
-
- for i in range(int(self.config["n_predicted_text_tensorboard"])):
- if i > len(all_predicted):
- break
-
- summary_texts.append(
- (f"eval-text/{i}", f"{ all_orig[i]} |-> {all_predicted[i]}")
- )
- if i < 10:
- table.add_data(all_orig[i], all_predicted[i])
-
- if log:
- wandb.log({f"prediction_{self.global_step}": table}, commit=False)
-
- results["DER"] = der.calculate_der_from_path(orig_path, predicted_path)
- results["DER*"] = der.calculate_der_from_path(
- orig_path, predicted_path, case_ending=False
- )
- results["WER"] = wer.calculate_wer_from_path(orig_path, predicted_path)
- results["WER*"] = wer.calculate_wer_from_path(
- orig_path, predicted_path, case_ending=False
- )
- if log:
- wandb.log(results)
- tqdm.reset()
- return results, summary_texts
-
- def run(self):
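-        # Main training loop: optional mixed precision, gradient clipping, periodic
-        # checkpointing, accuracy evaluation, and DER/WER error-rate reporting.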
- scaler = torch.cuda.amp.GradScaler()
- train_iterator, _, validation_iterator = load_iterators(self.config_manager)
- print("data loaded")
- print("----------------------------------------------------------")
- tqdm_eval = trange(0, len(validation_iterator), leave=True)
- tqdm_error_rates = trange(0, len(validation_iterator), leave=True)
- tqdm_eval.set_description("Eval")
- tqdm_error_rates.set_description("WER/DER : ")
- tqdm = trange(self.global_step, self.config["max_steps"] + 1, leave=True)
-
- for batch_inputs in repeater(train_iterator):
- tqdm.set_description(f"Global Step {self.global_step}")
- if self.config["use_decay"]:
- self.lr = self.adjust_learning_rate(
- self.optimizer, global_step=self.global_step
- )
- self.optimizer.zero_grad()
- if self.device == "cuda" and self.config["use_mixed_precision"]:
- with autocast():
- step_results = self.run_one_step(batch_inputs)
- scaler.scale(step_results["loss"]).backward()
- scaler.unscale_(self.optimizer)
- if self.config.get("CLIP"):
- torch.nn.utils.clip_grad_norm_(
- self.model.parameters(), self.config["CLIP"]
- )
-
- scaler.step(self.optimizer)
-
- scaler.update()
- else:
- step_results = self.run_one_step(batch_inputs)
-
- loss = step_results["loss"]
- loss.backward()
- if self.config.get("CLIP"):
- torch.nn.utils.clip_grad_norm_(
- self.model.parameters(), self.config["CLIP"]
- )
- self.optimizer.step()
-
- self.losses.append(step_results["loss"].item())
- wandb.log({"train_loss": step_results["loss"].item()})
-
- self.print_losses(step_results, tqdm)
-
- self.summary_manager.add_scalar(
- "meta/learning_rate", self.lr, global_step=self.global_step
- )
-
- if self.global_step % self.config["model_save_frequency"] == 0:
- torch.save(
- {
- "global_step": self.global_step,
- "model_state_dict": self.model.state_dict(),
- "optimizer_state_dict": self.optimizer.state_dict(),
- },
- os.path.join(
- self.config_manager.models_dir,
- f"{self.global_step}-snapshot.pt",
- ),
- )
-
- if self.global_step % self.config["evaluate_frequency"] == 0:
- loss, acc = self.evaluate(validation_iterator, tqdm_eval)
- self.summary_manager.add_scalar(
- "evaluate/loss", loss, global_step=self.global_step
- )
- self.summary_manager.add_scalar(
- "evaluate/acc", acc, global_step=self.global_step
- )
- tqdm.display(
- f"Evaluate {self.global_step}: accuracy, {acc}, loss: {loss}", pos=8
- )
- self.model.train()
-
- if (
- self.global_step % self.config["evaluate_with_error_rates_frequency"]
- == 0
- ):
- error_rates, summery_texts = self.evaluate_with_error_rates(
- validation_iterator, tqdm_error_rates
- )
- if error_rates:
- WER = error_rates["WER"]
- DER = error_rates["DER"]
- DER1 = error_rates["DER*"]
- WER1 = error_rates["WER*"]
-
- self.summary_manager.add_scalar(
- "error_rates/WER",
- WER / 100,
- global_step=self.global_step,
- )
- self.summary_manager.add_scalar(
- "error_rates/DER",
- DER / 100,
- global_step=self.global_step,
- )
- self.summary_manager.add_scalar(
- "error_rates/DER*",
- DER1 / 100,
- global_step=self.global_step,
- )
- self.summary_manager.add_scalar(
- "error_rates/WER*",
- WER1 / 100,
- global_step=self.global_step,
- )
-
- error_rates = f"DER: {DER}, WER: {WER}, DER*: {DER1}, WER*: {WER1}"
- tqdm.display(f"WER/DER {self.global_step}: {error_rates}", pos=9)
-
- for tag, text in summery_texts:
- self.summary_manager.add_text(tag, text)
-
- self.model.train()
-
- if self.global_step % self.config["train_plotting_frequency"] == 0:
- self.plot_attention(step_results)
-
- self.report(step_results, tqdm)
-
- self.global_step += 1
- if self.global_step > self.config["max_steps"]:
- print("Training Done.")
- return
-
- tqdm.update()
-
- def run_one_step(self, batch_inputs: Dict[str, torch.Tensor]):
- batch_inputs["src"] = batch_inputs["src"].to(self.device)
- batch_inputs["lengths"] = batch_inputs["lengths"].to("cpu")
- batch_inputs["target"] = batch_inputs["target"].to(self.device)
-
- outputs = self.model(
- src=batch_inputs["src"],
- target=batch_inputs["target"],
- lengths=batch_inputs["lengths"],
- )
-
- predictions = outputs["diacritics"].contiguous()
- targets = batch_inputs["target"].contiguous()
- predictions = predictions.view(-1, predictions.shape[-1])
- targets = targets.view(-1)
- loss = self.criterion(predictions.to(self.device), targets.to(self.device))
- outputs.update({"loss": loss})
- return outputs
-
- def predict(self, iterator):
- pass
-
- def load_model(self, model_path: str = None, load_optimizer: bool = True):
- with open(
- self.config_manager.base_dir / f"{self.model_kind}_network.txt", "w"
- ) as file:
- file.write(str(self.model))
-
- if model_path is None:
- last_model_path = self.config_manager.get_last_model_path()
- if last_model_path is None:
- self.global_step = 1
- return
- else:
- last_model_path = model_path
-
- print(f"loading from {last_model_path}")
- saved_model = torch.load(last_model_path)
- self.model.load_state_dict(saved_model["model_state_dict"])
- if load_optimizer:
- self.optimizer.load_state_dict(saved_model["optimizer_state_dict"])
- self.global_step = saved_model["global_step"] + 1
-
- def get_optimizer(self):
- if self.config["optimizer"] == OptimizerType.Adam:
- optimizer = optim.Adam(
- self.model.parameters(),
- lr=self.config["learning_rate"],
- betas=(self.config["adam_beta1"], self.config["adam_beta2"]),
- weight_decay=self.config["weight_decay"],
- )
- elif self.config["optimizer"] == OptimizerType.SGD:
- optimizer = optim.SGD(
- self.model.parameters(), lr=self.config["learning_rate"], momentum=0.9
- )
- else:
- raise ValueError("Optimizer option is not valid")
-
- return optimizer
-
- def get_learning_rate(self):
- return LearningRateDecay(
- lr=self.config["learning_rate"],
- warmup_steps=self.config.get("warmup_steps", 4000.0),
- )
-
- def adjust_learning_rate(self, optimizer, global_step):
- learning_rate = self.get_learning_rate()(global_step=global_step)
- for param_group in optimizer.param_groups:
- param_group["lr"] = learning_rate
- return learning_rate
-
- def plot_attention(self, results):
- pass
-
- def report(self, results, tqdm):
- pass
-
-
-class Seq2SeqTrainer(GeneralTrainer):
- def plot_attention(self, results):
- plot_alignment(
- results["attention"][0],
- str(self.config_manager.plot_dir),
- self.global_step,
- )
-
- self.summary_manager.add_image(
- "Train/attention",
- results["attention"][0].unsqueeze(0),
- global_step=self.global_step,
- )
-
-
-class GPTTrainer(GeneralTrainer):
- pass
-
-
-class CBHGTrainer(GeneralTrainer):
- pass
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/filedropzone/FileDropZone.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/filedropzone/FileDropZone.js
deleted file mode 100644
index daf209af32dbd77733298e37ac415438465ebc78..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/filedropzone/FileDropZone.js
+++ /dev/null
@@ -1,2 +0,0 @@
-import FileDropZone from '../../../plugins/filedropzone.js';
-export default FileDropZone;
\ No newline at end of file
diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/LayoutChildren.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/LayoutChildren.js
deleted file mode 100644
index bf68daddd4e91e74d1ee9fd8c03b42a4fa32132f..0000000000000000000000000000000000000000
--- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/overlapsizer/LayoutChildren.js
+++ /dev/null
@@ -1,60 +0,0 @@
-import ResizeGameObject from '../../../plugins/utils/size/ResizeGameObject.js';
-import PreLayoutChild from '../basesizer/utils/PreLayoutChild.js';
-import LayoutChild from '../basesizer/utils/LayoutChild.js';
-import CheckSize from '../basesizer/utils/CheckSize.js';
-
-var LayoutChildren = function () {
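-    // Lay out every visible child inside this sizer's inner area: nested sizers are
-    // re-laid-out recursively, plain game objects are resized when expandWidth/expandHeight
-    // is set, then each child is aligned within the padded align zone.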
- var child, childConfig, padding;
- var startX = this.innerLeft,
- startY = this.innerTop;
- var innerWidth = this.innerWidth,
- innerHeight = this.innerHeight;
- var x, y, width, height; // Align zone
- var childWidth, childHeight;
- // Layout current page
- var children = this.sizerChildren;
- for (var key in children) {
- child = children[key];
- if (child.rexSizer.hidden) {
- continue;
- }
-
- childConfig = child.rexSizer;
- padding = childConfig.padding;
-
- PreLayoutChild.call(this, child);
-
- // Set size
- if (child.isRexSizer) {
- child.runLayout(
- this,
- this.getExpandedChildWidth(child),
- this.getExpandedChildHeight(child)
- );
- CheckSize(child, this);
- } else {
- childWidth = undefined;
- childHeight = undefined;
- if (childConfig.expandWidth) { // Expand width
- childWidth = innerWidth - padding.left - padding.right;
- }
- if (childConfig.expandHeight) { // Expand height
- childHeight = innerHeight - padding.top - padding.bottom;
- }
- ResizeGameObject(child, childWidth, childHeight);
- }
-
- // Set position
- x = (startX + padding.left);
- width = innerWidth - padding.left - padding.right;
- y = (startY + padding.top);
- height = innerHeight - padding.top - padding.bottom;
-
- LayoutChild.call(this,
- child, x, y, width, height, childConfig.align,
- childConfig.alignOffsetX, childConfig.alignOffsetY
- );
- }
-}
-
-export default LayoutChildren;
\ No newline at end of file
diff --git a/spaces/AlexWang/lama/bin/paper_runfiles/generate_test_celeba-hq.sh b/spaces/AlexWang/lama/bin/paper_runfiles/generate_test_celeba-hq.sh
deleted file mode 100644
index 7e04bba426f1c6c0528d88a0e28a5da0dde7ca3e..0000000000000000000000000000000000000000
--- a/spaces/AlexWang/lama/bin/paper_runfiles/generate_test_celeba-hq.sh
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env bash
-
-# paths to data are valid for mml-ws01
-OUT_DIR="/media/inpainting/paper_data/CelebA-HQ_val_test"
-
-source "$(dirname $0)/env.sh"
-
-for datadir in "val" "test"
-do
- for conf in random_thin_256 random_medium_256 random_thick_256 random_thin_512 random_medium_512 random_thick_512
- do
- "$BINDIR/gen_mask_dataset_hydra.py" -cn $conf datadir=$datadir location=mml-ws01-celeba-hq \
- location.out_dir=$OUT_DIR cropping.out_square_crop=False
-
- "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats"
- done
-done
diff --git a/spaces/AlgoveraAI/web3-wallet/app.py b/spaces/AlgoveraAI/web3-wallet/app.py
deleted file mode 100644
index cbca271d630edf1a04a9342c195b3e0d6df5618e..0000000000000000000000000000000000000000
--- a/spaces/AlgoveraAI/web3-wallet/app.py
+++ /dev/null
@@ -1,106 +0,0 @@
-import gradio as gr
-from ocean_lib.config import Config
-from ocean_lib.models.btoken import BToken #BToken is ERC20
-from ocean_lib.ocean.ocean import Ocean
-from ocean_lib.web3_internal.wallet import Wallet
-from ocean_lib.web3_internal.currency import from_wei # wei is the smallest denomination of ether e.g. like cents
-# from ocean_lib.web3_internal.currency import pretty_ether_and_wei
-from wallet import get_wallet
-
-config = Config('config.ini')
-ocean = Ocean(config)
-
-def wallet(private_key):
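-    # If no private key is supplied, create a fresh account (and mnemonic) via get_wallet();
-    # otherwise reuse the given key. Then build an ocean-lib Wallet and report the address
-    # together with its ETH and OCEAN balances (converted from wei).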
-
- if private_key:
- mnemonic = None
- else:
- account, mnemonic = get_wallet()
-
- private_key = account.key.hex()
-
- wallet = Wallet(ocean.web3, private_key, transaction_timeout=20, block_confirmations=config.block_confirmations)
- address = wallet.address
-
- OCEAN_token = BToken(ocean.web3, ocean.OCEAN_address)
-
- eth_balance = from_wei(ocean.web3.eth.get_balance(address))
- ocean_balance = from_wei(OCEAN_token.balanceOf(address))
-
- return address, private_key, mnemonic, eth_balance, ocean_balance
-
-# def wallet(private_key, did):
-# wallet = Wallet(ocean.web3, private_key, transaction_timeout=20, block_confirmations=config.block_confirmations)
-# address = wallet.address
-# OCEAN_token = BToken(ocean.web3, ocean.OCEAN_address)
-
-# eth_balance = from_wei(ocean.web3.eth.get_balance(wallet.address))
-# ocean_balance = from_wei(OCEAN_token.balanceOf(wallet.address))
-
-# asset = ocean.assets.resolve(did)
-
-# ALG_ddo = ocean.assets.resolve(did)
-# alg_token = ocean.get_data_token(ALG_ddo.data_token_address)
-
-# alg_token_balance = pretty_ether_and_wei(alg_token.balanceOf(wallet.address))
-
-# return address, eth_balance, ocean_balance, alg_token_balance
-
-description = (
- "This demo shows the balance of tokens in your Web3 wallet. If you do not have a Web3 wallet, leave the input field empty when running and the app will create a wallet for you. "
- "A wallet consists of a public and private key. You can think of the public key like your email address and the private key like your password. "
- "The public key can be easily determined from the private key, but not vice versa. "
- "The private key is output in the form of both a hexadecimal number and the corresponding mnemonic phrase, which is easier to remember. "
- "If you want to continue to use the same wallet in future, you should store the private key (and/or the mnemonic phrase, which can be used to recover the private key). "
- "Then enter the private key to the input field when running the app. "
- "Do not give your private key to anyone ever. In fact, it is bad practice to store your private key on your PC for wallets that contain tokens with real value. "
- "However, we are using test tokens on the Ethereum test network (Rinkeby) where the tokens have no real value. "
- "Initially, your wallet should have no ETH and OCEAN tokens in it. You can then request ETH and OCEAN test tokens by entering your public address into faucets (follow the links at the bottom of the page). "
- "Then wait about 15 seconds and re-run the app for the same private key. "
- "This demo uses the Ocean Protocol Python library in the backend. For more information on the advantages of combinining Ocean and HuggingFace, check out the blog post link below. "
- ""
-)
-
-# description = (
-# "This demo shows the balance of algorithm tokens, as well as ETH and OCEAN, in your Web3 wallet (for a given private key). The algorithm tokens will be used to run Algovera apps on HF spaces in future. "
-# "Currently, you need to export your private key from a MetaMask wallet (we plan to randomly generate a private key in the app and bypass MetaMask in future). "
-# "For a guide on how to install MetaMask (an extension in your browser), check the link at the bottom of the page. "
-# "We highly recommend doing this with a wallet that has no real tokens in it. We use a test network (Rinkeby) where the tokens have no real value. "
-# "After an initial setup, your wallet should have no tokens. You can request ETH and OCEAN test tokens from faucets at the links at the bottom of the page. "
-# "To buy an algorithm token (using the OCEAN and ETH), you can search for algorithms on the Ocean marketplace (see link at bottom). Make sure to use algorithms that are on the Rinkeby test network (you need to select Rinkeby from the dropdown menu). "
-# "We have provided a link to our DCGAN model on the test network at the bottom. If you can't see it you are not on the test network. "
-# "After you buy an algorithm token, you need to locate the DID in the metadata on the marketplace. Then enter it into the input textbox. "
-# "Later we will add HF Spaces apps to search algorithms and buy algorithm tokens, which you can use to run demos of the algorithms. "
-# "This demo uses the Ocean Python library in the backend (see link below)."
-# )
-
-article = (
- "
"
-)
-
-
-interface = gr.Interface(
- wallet,
- [
- gr.inputs.Textbox(label="Private Key"),
- ],
- [
- #gr.outputs.Textbox(label="Public Key"),
- #gr.outputs.Textbox(label="Algorithm token balance"),
- gr.outputs.Textbox(label="Public Address"),
- gr.outputs.Textbox(label="Private Key"),
- gr.outputs.Textbox(label="Recovery Passphrase"),
- gr.outputs.Textbox(label="ETH balance"),
- gr.outputs.Textbox(label="OCEAN balance"),
- ],
- title="Web3 Wallet",
- description=description,
- article=article,
- theme="huggingface",
-)
-
-interface.launch()
\ No newline at end of file
diff --git a/spaces/Aloento/9Nine-PITS/yin.py b/spaces/Aloento/9Nine-PITS/yin.py
deleted file mode 100644
index 7266c67f51e2acdfe14c1921a7be1c526fa0413a..0000000000000000000000000000000000000000
--- a/spaces/Aloento/9Nine-PITS/yin.py
+++ /dev/null
@@ -1,165 +0,0 @@
-# remove np from https://github.com/dhchoi99/NANSY/blob/master/models/yin.py
-# adapted from https://github.com/patriceguyot/Yin
-# https://github.com/NVIDIA/mellotron/blob/master/yin.py
-
-import numpy as np  # needed by differenceFunction_np and the __main__ check below
-import torch
-import torch.nn.functional as F
-
-
-def differenceFunction(x, N, tau_max):
- """
- Compute difference function of data x. This corresponds to equation (6) in [1]
- This solution is implemented directly with torch rfft.
-
-
- :param x: audio data (Tensor)
- :param N: length of data
- :param tau_max: integration window size
- :return: difference function
- :rtype: list
- """
-
- # x = np.array(x, np.float64) #[B,T]
- assert x.dim() == 2
- b, w = x.shape
- if w < tau_max:
-        # pad the signal up to tau_max (zero padding) when it is shorter than the window
-        x = F.pad(x, (tau_max - w - (tau_max - w) // 2, (tau_max - w) // 2),
-                  mode='constant')
- w = tau_max
- # x_cumsum = np.concatenate((np.array([0.]), (x * x).cumsum()))
- x_cumsum = torch.cat(
- [torch.zeros([b, 1], device=x.device), (x * x).cumsum(dim=1)], dim=1)
- size = w + tau_max
- p2 = (size // 32).bit_length()
- # p2 = ceil(log2(size+1 // 32))
- nice_numbers = (16, 18, 20, 24, 25, 27, 30, 32)
- size_pad = min(n * 2 ** p2 for n in nice_numbers if n * 2 ** p2 >= size)
- fc = torch.fft.rfft(x, size_pad) # [B,F]
- conv = torch.fft.irfft(fc * fc.conj())[:, :tau_max]
- return x_cumsum[:, w:w - tau_max:
- -1] + x_cumsum[:, w] - x_cumsum[:, :tau_max] - 2 * conv
-
-
-def differenceFunction_np(x, N, tau_max):
- """
- Compute difference function of data x. This corresponds to equation (6) in [1]
- This solution is implemented directly with Numpy fft.
-
-
- :param x: audio data
- :param N: length of data
- :param tau_max: integration window size
- :return: difference function
- :rtype: list
- """
-
- x = np.array(x, np.float64)
- w = x.size
- tau_max = min(tau_max, w)
- x_cumsum = np.concatenate((np.array([0.]), (x * x).cumsum()))
- size = w + tau_max
- p2 = (size // 32).bit_length()
- nice_numbers = (16, 18, 20, 24, 25, 27, 30, 32)
- size_pad = min(x * 2 ** p2 for x in nice_numbers if x * 2 ** p2 >= size)
- fc = np.fft.rfft(x, size_pad)
- conv = np.fft.irfft(fc * fc.conjugate())[:tau_max]
- return x_cumsum[w:w -
- tau_max:-1] + x_cumsum[w] - x_cumsum[:tau_max] - 2 * conv
-
-
-def cumulativeMeanNormalizedDifferenceFunction(df, N, eps=1e-8):
- """
- Compute cumulative mean normalized difference function (CMND).
-
- This corresponds to equation (8) in [1]
-
- :param df: Difference function
- :param N: length of data
- :return: cumulative mean normalized difference function
- :rtype: list
- """
- # np.seterr(divide='ignore', invalid='ignore')
- # scipy method, assert df>0 for all element
- # cmndf = df[1:] * np.asarray(list(range(1, N))) / (np.cumsum(df[1:]).astype(float) + eps)
- B, _ = df.shape
- cmndf = df[:,
- 1:] * torch.arange(1, N, device=df.device, dtype=df.dtype).view(
- 1, -1) / (df[:, 1:].cumsum(dim=-1) + eps)
- return torch.cat(
- [torch.ones([B, 1], device=df.device, dtype=df.dtype), cmndf], dim=-1)
-
-
-def differenceFunctionTorch(xs: torch.Tensor, N, tau_max) -> torch.Tensor:
- """pytorch backend batch-wise differenceFunction
- has 1e-4 level error with input shape of (32, 22050*1.5)
- Args:
- xs:
- N:
- tau_max:
-
- Returns:
-
- """
- xs = xs.double()
- w = xs.shape[-1]
- tau_max = min(tau_max, w)
- zeros = torch.zeros((xs.shape[0], 1))
- x_cumsum = torch.cat((torch.zeros((xs.shape[0], 1), device=xs.device),
- (xs * xs).cumsum(dim=-1, dtype=torch.double)),
- dim=-1) # B x w
- size = w + tau_max
- p2 = (size // 32).bit_length()
- nice_numbers = (16, 18, 20, 24, 25, 27, 30, 32)
- size_pad = min(x * 2 ** p2 for x in nice_numbers if x * 2 ** p2 >= size)
-
- fcs = torch.fft.rfft(xs, n=size_pad, dim=-1)
- convs = torch.fft.irfft(fcs * fcs.conj())[:, :tau_max]
- y1 = torch.flip(x_cumsum[:, w - tau_max + 1:w + 1], dims=[-1])
- y = y1 + x_cumsum[:, w].unsqueeze(-1) - x_cumsum[:, :tau_max] - 2 * convs
- return y
-
-
-def cumulativeMeanNormalizedDifferenceFunctionTorch(dfs: torch.Tensor,
- N,
- eps=1e-8) -> torch.Tensor:
- arange = torch.arange(1, N, device=dfs.device, dtype=torch.float64)
- cumsum = torch.cumsum(dfs[:, 1:], dim=-1,
- dtype=torch.float64).to(dfs.device)
-
- cmndfs = dfs[:, 1:] * arange / (cumsum + eps)
- cmndfs = torch.cat(
- (torch.ones(cmndfs.shape[0], 1, device=dfs.device), cmndfs), dim=-1)
- return cmndfs
-
-
-if __name__ == '__main__':
- wav = torch.randn(32, int(22050 * 1.5)).cuda()
- wav_numpy = wav.detach().cpu().numpy()
- x = wav_numpy[0]
-
- w_len = 2048
- w_step = 256
- tau_max = 2048
- W = 2048
-
- startFrames = list(range(0, x.shape[-1] - w_len, w_step))
- startFrames = np.asarray(startFrames)
- # times = startFrames / sr
- frames = [x[..., t:t + W] for t in startFrames]
- frames = np.asarray(frames)
- frames_torch = torch.from_numpy(frames).cuda()
-
- cmndfs0 = []
- for idx, frame in enumerate(frames):
- df = differenceFunction(frame, frame.shape[-1], tau_max)
- cmndf = cumulativeMeanNormalizedDifferenceFunction(df, tau_max)
- cmndfs0.append(cmndf)
- cmndfs0 = np.asarray(cmndfs0)
-
- dfs = differenceFunctionTorch(frames_torch, frames_torch.shape[-1],
- tau_max)
- cmndfs1 = cumulativeMeanNormalizedDifferenceFunctionTorch(
- dfs, tau_max).detach().cpu().numpy()
- print(cmndfs0.shape, cmndfs1.shape)
- print(np.sum(np.abs(cmndfs0 - cmndfs1)))
diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/encoders/__init__.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_models/e4e/encoders/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/inpaint.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/inpaint.md
deleted file mode 100644
index 3646edb9a20da129d68032a541f184041acfe74e..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/ko/using-diffusers/inpaint.md
+++ /dev/null
@@ -1,75 +0,0 @@
-
-
-# Text-guided image inpainting
-
-[[Open in Colab]]
-
-[`StableDiffusionInpaintPipeline`] lets you edit specific parts of an image by providing a mask and a text prompt. It uses a version of Stable Diffusion specifically trained for inpainting tasks, such as [`runwayml/stable-diffusion-inpainting`](https://huggingface.co/runwayml/stable-diffusion-inpainting).
-
-First, load a [`StableDiffusionInpaintPipeline`] instance:
-
-```python
-import PIL
-import requests
-import torch
-from io import BytesIO
-
-from diffusers import StableDiffusionInpaintPipeline
-
-pipeline = StableDiffusionInpaintPipeline.from_pretrained(
- "runwayml/stable-diffusion-inpainting",
- torch_dtype=torch.float16,
-)
-pipeline = pipeline.to("cuda")
-```
-
-Download the dog image and the mask of the area that will be replaced:
-
-```python
-def download_image(url):
- response = requests.get(url)
- return PIL.Image.open(BytesIO(response.content)).convert("RGB")
-
-
-img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
-mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
-
-init_image = download_image(img_url).resize((512, 512))
-mask_image = download_image(mask_url).resize((512, 512))
-```
-
-Now you can create a prompt describing what to replace the masked area with:
-
-```python
-prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
-image = pipeline(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
-```
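-
-The call returns a standard `PIL.Image`; as a minimal follow-up sketch (reusing the variable names above), you could save the result to disk:
-
-```python
-image.save("inpainting_result.png")
-```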
-
-*(Example table omitted: it showed the input `image`, the `mask_image`, and the output generated for the prompt ***Face of a yellow cat, high resolution, sitting on a park bench***.)*
-
-
-
-A previous, experimental inpainting implementation used a different, lower-quality process. To preserve backwards compatibility, loading a pretrained pipeline that does not include the new model will still apply the old inpainting method.
-
-
-
-Try out image inpainting yourself in the Space below!
-
-
diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_stable_diffusion_checkpoint_to_onnx.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_stable_diffusion_checkpoint_to_onnx.py
deleted file mode 100644
index c527c8037b77d9fe9c10b0dabb505fb4a2657f0c..0000000000000000000000000000000000000000
--- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/scripts/convert_stable_diffusion_checkpoint_to_onnx.py
+++ /dev/null
@@ -1,265 +0,0 @@
-# Copyright 2023 The HuggingFace Team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import argparse
-import os
-import shutil
-from pathlib import Path
-
-import onnx
-import torch
-from packaging import version
-from torch.onnx import export
-
-from diffusers import OnnxRuntimeModel, OnnxStableDiffusionPipeline, StableDiffusionPipeline
-
-
-is_torch_less_than_1_11 = version.parse(version.parse(torch.__version__).base_version) < version.parse("1.11")
-
-
-def onnx_export(
- model,
- model_args: tuple,
- output_path: Path,
- ordered_input_names,
- output_names,
- dynamic_axes,
- opset,
- use_external_data_format=False,
-):
- output_path.parent.mkdir(parents=True, exist_ok=True)
- # PyTorch deprecated the `enable_onnx_checker` and `use_external_data_format` arguments in v1.11,
- # so we check the torch version for backwards compatibility
- if is_torch_less_than_1_11:
- export(
- model,
- model_args,
- f=output_path.as_posix(),
- input_names=ordered_input_names,
- output_names=output_names,
- dynamic_axes=dynamic_axes,
- do_constant_folding=True,
- use_external_data_format=use_external_data_format,
- enable_onnx_checker=True,
- opset_version=opset,
- )
- else:
- export(
- model,
- model_args,
- f=output_path.as_posix(),
- input_names=ordered_input_names,
- output_names=output_names,
- dynamic_axes=dynamic_axes,
- do_constant_folding=True,
- opset_version=opset,
- )
-
-
-@torch.no_grad()
-def convert_models(model_path: str, output_path: str, opset: int, fp16: bool = False):
- dtype = torch.float16 if fp16 else torch.float32
- if fp16 and torch.cuda.is_available():
- device = "cuda"
- elif fp16 and not torch.cuda.is_available():
- raise ValueError("`float16` model export is only supported on GPUs with CUDA")
- else:
- device = "cpu"
- pipeline = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=dtype).to(device)
- output_path = Path(output_path)
-
- # TEXT ENCODER
- num_tokens = pipeline.text_encoder.config.max_position_embeddings
- text_hidden_size = pipeline.text_encoder.config.hidden_size
- text_input = pipeline.tokenizer(
- "A sample prompt",
- padding="max_length",
- max_length=pipeline.tokenizer.model_max_length,
- truncation=True,
- return_tensors="pt",
- )
- onnx_export(
- pipeline.text_encoder,
- # casting to torch.int32 until the CLIP fix is released: https://github.com/huggingface/transformers/pull/18515/files
- model_args=(text_input.input_ids.to(device=device, dtype=torch.int32)),
- output_path=output_path / "text_encoder" / "model.onnx",
- ordered_input_names=["input_ids"],
- output_names=["last_hidden_state", "pooler_output"],
- dynamic_axes={
- "input_ids": {0: "batch", 1: "sequence"},
- },
- opset=opset,
- )
- del pipeline.text_encoder
-
- # UNET
- unet_in_channels = pipeline.unet.config.in_channels
- unet_sample_size = pipeline.unet.config.sample_size
- unet_path = output_path / "unet" / "model.onnx"
- onnx_export(
- pipeline.unet,
- model_args=(
- torch.randn(2, unet_in_channels, unet_sample_size, unet_sample_size).to(device=device, dtype=dtype),
- torch.randn(2).to(device=device, dtype=dtype),
- torch.randn(2, num_tokens, text_hidden_size).to(device=device, dtype=dtype),
- False,
- ),
- output_path=unet_path,
- ordered_input_names=["sample", "timestep", "encoder_hidden_states", "return_dict"],
- output_names=["out_sample"], # has to be different from "sample" for correct tracing
- dynamic_axes={
- "sample": {0: "batch", 1: "channels", 2: "height", 3: "width"},
- "timestep": {0: "batch"},
- "encoder_hidden_states": {0: "batch", 1: "sequence"},
- },
- opset=opset,
- use_external_data_format=True, # UNet is > 2GB, so the weights need to be split
- )
- unet_model_path = str(unet_path.absolute().as_posix())
- unet_dir = os.path.dirname(unet_model_path)
- unet = onnx.load(unet_model_path)
- # clean up existing tensor files
- shutil.rmtree(unet_dir)
- os.mkdir(unet_dir)
- # collate external tensor files into one
- onnx.save_model(
- unet,
- unet_model_path,
- save_as_external_data=True,
- all_tensors_to_one_file=True,
- location="weights.pb",
- convert_attribute=False,
- )
- del pipeline.unet
-
- # VAE ENCODER
- vae_encoder = pipeline.vae
- vae_in_channels = vae_encoder.config.in_channels
- vae_sample_size = vae_encoder.config.sample_size
- # need to get the raw tensor output (sample) from the encoder
- vae_encoder.forward = lambda sample, return_dict: vae_encoder.encode(sample, return_dict)[0].sample()
- onnx_export(
- vae_encoder,
- model_args=(
- torch.randn(1, vae_in_channels, vae_sample_size, vae_sample_size).to(device=device, dtype=dtype),
- False,
- ),
- output_path=output_path / "vae_encoder" / "model.onnx",
- ordered_input_names=["sample", "return_dict"],
- output_names=["latent_sample"],
- dynamic_axes={
- "sample": {0: "batch", 1: "channels", 2: "height", 3: "width"},
- },
- opset=opset,
- )
-
- # VAE DECODER
- vae_decoder = pipeline.vae
- vae_latent_channels = vae_decoder.config.latent_channels
- vae_out_channels = vae_decoder.config.out_channels
- # forward only through the decoder part (vae_decoder and vae_encoder are the same pipeline.vae module, so its decode method is reused)
- vae_decoder.forward = vae_encoder.decode
- onnx_export(
- vae_decoder,
- model_args=(
- torch.randn(1, vae_latent_channels, unet_sample_size, unet_sample_size).to(device=device, dtype=dtype),
- False,
- ),
- output_path=output_path / "vae_decoder" / "model.onnx",
- ordered_input_names=["latent_sample", "return_dict"],
- output_names=["sample"],
- dynamic_axes={
- "latent_sample": {0: "batch", 1: "channels", 2: "height", 3: "width"},
- },
- opset=opset,
- )
- del pipeline.vae
-
- # SAFETY CHECKER
- if pipeline.safety_checker is not None:
- safety_checker = pipeline.safety_checker
- clip_num_channels = safety_checker.config.vision_config.num_channels
- clip_image_size = safety_checker.config.vision_config.image_size
- safety_checker.forward = safety_checker.forward_onnx
- onnx_export(
- pipeline.safety_checker,
- model_args=(
- torch.randn(
- 1,
- clip_num_channels,
- clip_image_size,
- clip_image_size,
- ).to(device=device, dtype=dtype),
- torch.randn(1, vae_sample_size, vae_sample_size, vae_out_channels).to(device=device, dtype=dtype),
- ),
- output_path=output_path / "safety_checker" / "model.onnx",
- ordered_input_names=["clip_input", "images"],
- output_names=["out_images", "has_nsfw_concepts"],
- dynamic_axes={
- "clip_input": {0: "batch", 1: "channels", 2: "height", 3: "width"},
- "images": {0: "batch", 1: "height", 2: "width", 3: "channels"},
- },
- opset=opset,
- )
- del pipeline.safety_checker
- safety_checker = OnnxRuntimeModel.from_pretrained(output_path / "safety_checker")
- feature_extractor = pipeline.feature_extractor
- else:
- safety_checker = None
- feature_extractor = None
-
- onnx_pipeline = OnnxStableDiffusionPipeline(
- vae_encoder=OnnxRuntimeModel.from_pretrained(output_path / "vae_encoder"),
- vae_decoder=OnnxRuntimeModel.from_pretrained(output_path / "vae_decoder"),
- text_encoder=OnnxRuntimeModel.from_pretrained(output_path / "text_encoder"),
- tokenizer=pipeline.tokenizer,
- unet=OnnxRuntimeModel.from_pretrained(output_path / "unet"),
- scheduler=pipeline.scheduler,
- safety_checker=safety_checker,
- feature_extractor=feature_extractor,
- requires_safety_checker=safety_checker is not None,
- )
-
- onnx_pipeline.save_pretrained(output_path)
- print("ONNX pipeline saved to", output_path)
-
- del pipeline
- del onnx_pipeline
- _ = OnnxStableDiffusionPipeline.from_pretrained(output_path, provider="CPUExecutionProvider")
- print("ONNX pipeline is loadable")
-
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser()
-
- parser.add_argument(
- "--model_path",
- type=str,
- required=True,
- help="Path to the `diffusers` checkpoint to convert (either a local directory or on the Hub).",
- )
-
- parser.add_argument("--output_path", type=str, required=True, help="Path to the output model.")
-
- parser.add_argument(
- "--opset",
- default=14,
- type=int,
- help="The version of the ONNX operator set to use.",
- )
- parser.add_argument("--fp16", action="store_true", default=False, help="Export the models in `float16` mode")
-
- args = parser.parse_args()
-
- convert_models(args.model_path, args.output_path, args.opset, args.fp16)
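-
-# Example invocation (the model id and output directory below are illustrative):
-#   python convert_stable_diffusion_checkpoint_to_onnx.py \
-#       --model_path runwayml/stable-diffusion-v1-5 --output_path ./sd_onnx --opset 14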
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/pafpn/README.md b/spaces/Andy1621/uniformer_image_detection/configs/pafpn/README.md
deleted file mode 100644
index 03227e2644223c535e0608e4ddb16c7f26523b4c..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/pafpn/README.md
+++ /dev/null
@@ -1,26 +0,0 @@
-# Path Aggregation Network for Instance Segmentation
-
-## Introduction
-
-[ALGORITHM]
-
-```
-@inproceedings{liu2018path,
- author = {Shu Liu and
- Lu Qi and
- Haifang Qin and
- Jianping Shi and
- Jiaya Jia},
- title = {Path Aggregation Network for Instance Segmentation},
- booktitle = {Proceedings of IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
- year = {2018}
-}
-```
-
-## Results and Models
-
-| Backbone | style | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
-|:-------------:|:----------:|:-------:|:--------:|:--------------:|:------:|:-------:|:------:|:--------:|
-| R-50-FPN | pytorch | 1x | 4.0 | 17.2 | 37.5 | | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/pafpn/faster_rcnn_r50_pafpn_1x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/pafpn/faster_rcnn_r50_pafpn_1x_coco/faster_rcnn_r50_pafpn_1x_coco_bbox_mAP-0.375_20200503_105836-b7b4b9bd.pth) &#124; [log](http://download.openmmlab.com/mmdetection/v2.0/pafpn/faster_rcnn_r50_pafpn_1x_coco/faster_rcnn_r50_pafpn_1x_coco_20200503_105836.log.json) |
diff --git a/spaces/Andy1621/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r50_fpn_1x_coco.py b/spaces/Andy1621/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r50_fpn_1x_coco.py
deleted file mode 100644
index 512eca60b290854c5f42614c899b90bbbb735e24..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_detection/configs/sparse_rcnn/sparse_rcnn_r50_fpn_1x_coco.py
+++ /dev/null
@@ -1,95 +0,0 @@
-_base_ = [
- '../_base_/datasets/coco_detection.py',
- '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py'
-]
-num_stages = 6
-num_proposals = 100
-model = dict(
- type='SparseRCNN',
- pretrained='torchvision://resnet50',
- backbone=dict(
- type='ResNet',
- depth=50,
- num_stages=4,
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[256, 512, 1024, 2048],
- out_channels=256,
- start_level=0,
- add_extra_convs='on_input',
- num_outs=4),
- rpn_head=dict(
- type='EmbeddingRPNHead',
- num_proposals=num_proposals,
- proposal_feature_channel=256),
- roi_head=dict(
- type='SparseRoIHead',
- num_stages=num_stages,
- stage_loss_weights=[1] * num_stages,
- proposal_feature_channel=256,
- bbox_roi_extractor=dict(
- type='SingleRoIExtractor',
- roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=2),
- out_channels=256,
- featmap_strides=[4, 8, 16, 32]),
- bbox_head=[
- dict(
- type='DIIHead',
- num_classes=80,
- num_ffn_fcs=2,
- num_heads=8,
- num_cls_fcs=1,
- num_reg_fcs=3,
- feedforward_channels=2048,
- in_channels=256,
- dropout=0.0,
- ffn_act_cfg=dict(type='ReLU', inplace=True),
- dynamic_conv_cfg=dict(
- type='DynamicConv',
- in_channels=256,
- feat_channels=64,
- out_channels=256,
- input_feat_shape=7,
- act_cfg=dict(type='ReLU', inplace=True),
- norm_cfg=dict(type='LN')),
- loss_bbox=dict(type='L1Loss', loss_weight=5.0),
- loss_iou=dict(type='GIoULoss', loss_weight=2.0),
- loss_cls=dict(
- type='FocalLoss',
- use_sigmoid=True,
- gamma=2.0,
- alpha=0.25,
- loss_weight=2.0),
- bbox_coder=dict(
- type='DeltaXYWHBBoxCoder',
- clip_border=False,
- target_means=[0., 0., 0., 0.],
- target_stds=[0.5, 0.5, 1., 1.])) for _ in range(num_stages)
- ]),
- # training and testing settings
- train_cfg=dict(
- rpn=None,
- rcnn=[
- dict(
- assigner=dict(
- type='HungarianAssigner',
- cls_cost=dict(type='FocalLossCost', weight=2.0),
- reg_cost=dict(type='BBoxL1Cost', weight=5.0),
- iou_cost=dict(type='IoUCost', iou_mode='giou',
- weight=2.0)),
- sampler=dict(type='PseudoSampler'),
- pos_weight=1) for _ in range(num_stages)
- ]),
- test_cfg=dict(rpn=None, rcnn=dict(max_per_img=num_proposals)))
-
-# optimizer
-optimizer = dict(_delete_=True, type='AdamW', lr=0.000025, weight_decay=0.0001)
-optimizer_config = dict(_delete_=True, grad_clip=dict(max_norm=1, norm_type=2))
-# learning policy
-lr_config = dict(policy='step', step=[8, 11])
-runner = dict(type='EpochBasedRunner', max_epochs=12)
diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_512x512_40k_voc12aug.py b/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_512x512_40k_voc12aug.py
deleted file mode 100644
index 8a6968ea583758191fa8e94497c7186e653c7afb..0000000000000000000000000000000000000000
--- a/spaces/Andy1621/uniformer_image_segmentation/configs/encnet/encnet_r101-d8_512x512_40k_voc12aug.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './encnet_r50-d8_512x512_40k_voc12aug.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/README.md b/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/README.md
deleted file mode 100644
index a4c807b9d53faec2f8dae2be7ca8076e8bd48090..0000000000000000000000000000000000000000
--- a/spaces/AnishKumbhar/ChatBot/text-generation-webui-main/README.md
+++ /dev/null
@@ -1,420 +0,0 @@
-**Breaking change: WebUI now uses PyTorch 2.1.**
-
-* For one-click installer users: If you encounter problems after updating, rerun the update script. If issues persist, delete the `installer_files` folder and use the start script to reinstall requirements.
-* For manual installations, update PyTorch with the [provided command](https://github.com/oobabooga/text-generation-webui/#2-install-pytorch).
-
-# Text generation web UI
-
-A Gradio web UI for Large Language Models.
-
-Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) of text generation.
-
-
-## Features
-
-* 3 interface modes: default (two columns), notebook, and chat
-* Multiple model backends: [transformers](https://github.com/huggingface/transformers), [llama.cpp](https://github.com/ggerganov/llama.cpp), [ExLlama](https://github.com/turboderp/exllama), [ExLlamaV2](https://github.com/turboderp/exllamav2), [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa), [CTransformers](https://github.com/marella/ctransformers), [AutoAWQ](https://github.com/casper-hansen/AutoAWQ)
-* Dropdown menu for quickly switching between different models
-* LoRA: load and unload LoRAs on the fly, train a new LoRA using QLoRA
-* Precise instruction templates for chat mode, including Llama-2-chat, Alpaca, Vicuna, WizardLM, StableLM, and many others
-* 4-bit, 8-bit, and CPU inference through the transformers library
-* Use llama.cpp models with transformers samplers (`llamacpp_HF` loader)
-* [Multimodal pipelines, including LLaVA and MiniGPT-4](https://github.com/oobabooga/text-generation-webui/tree/main/extensions/multimodal)
-* [Extensions framework](docs/Extensions.md)
-* [Custom chat characters](docs/Chat-mode.md)
-* Very efficient text streaming
-* Markdown output with LaTeX rendering, to use for instance with [GALACTICA](https://github.com/paperswithcode/galai)
-* API, including endpoints for websocket streaming ([see the examples](https://github.com/oobabooga/text-generation-webui/blob/main/api-examples))
-
-To learn how to use the various features, check out the Documentation: https://github.com/oobabooga/text-generation-webui/tree/main/docs
-
-## Installation
-
-### One-click installers
-
-1) Clone or download the repository.
-2) Run the `start_linux.sh`, `start_windows.bat`, `start_macos.sh`, or `start_wsl.bat` script depending on your OS.
-3) Select your GPU vendor when asked.
-4) Have fun!
-
-#### How it works
-
-The script creates a folder called `installer_files` where it sets up a Conda environment using Miniconda. The installation is self-contained: if you want to reinstall, just delete `installer_files` and run the start script again.
-
-To launch the webui in the future after it is already installed, run the same `start` script.
-
-#### Getting updates
-
-Run `update_linux.sh`, `update_windows.bat`, `update_macos.sh`, or `update_wsl.bat`.
-
-#### Running commands
-
-If you ever need to install something manually in the `installer_files` environment, you can launch an interactive shell using the cmd script: `cmd_linux.sh`, `cmd_windows.bat`, `cmd_macos.sh`, or `cmd_wsl.bat`.
-
-#### Defining command-line flags
-
-To define persistent command-line flags like `--listen` or `--api`, edit the `CMD_FLAGS.txt` file with a text editor and add them there. Flags can also be provided directly to the start scripts, for instance, `./start_linux.sh --listen`.
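-
-For example, a `CMD_FLAGS.txt` that makes the UI reachable on your local network and enables the API extension could contain the single line below (the combination of flags is only an illustration):
-
-```
---listen --api
-```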
-
-#### Other info
-
-* There is no need to run any of those scripts as admin/root.
-* For additional instructions about AMD setup, WSL setup, and nvcc installation, consult [this page](https://github.com/oobabooga/text-generation-webui/blob/main/docs/One-Click-Installers.md).
-* The installer has been tested mostly on NVIDIA GPUs. If you can find a way to improve it for your AMD/Intel Arc/Mac Metal GPU, you are highly encouraged to submit a PR to this repository. The main file to be edited is `one_click.py`.
-* For automated installation, you can use the `GPU_CHOICE`, `LAUNCH_AFTER_INSTALL`, and `INSTALL_EXTENSIONS` environment variables. For instance: `GPU_CHOICE=A LAUNCH_AFTER_INSTALL=False INSTALL_EXTENSIONS=False ./start_linux.sh`.
-
-### Manual installation using Conda
-
-Recommended if you have some experience with the command-line.
-
-#### 0. Install Conda
-
-https://docs.conda.io/en/latest/miniconda.html
-
-On Linux or WSL, it can be automatically installed with these two commands ([source](https://educe-ubc.github.io/conda.html)):
-
-```
-curl -sL "https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh" > "Miniconda3.sh"
-bash Miniconda3.sh
-```
-
-#### 1. Create a new conda environment
-
-```
-conda create -n textgen python=3.10
-conda activate textgen
-```
-
-#### 2. Install Pytorch
-
-| System | GPU | Command |
-|--------|---------|---------|
-| Linux/WSL | NVIDIA | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118` |
-| Linux/WSL | CPU only | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu` |
-| Linux | AMD | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.6` |
-| MacOS + MPS | Any | `pip3 install torch torchvision torchaudio` |
-| Windows | NVIDIA | `pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118` |
-| Windows | CPU only | `pip3 install torch torchvision torchaudio` |
-
-The up-to-date commands can be found here: https://pytorch.org/get-started/locally/.
-
-#### 3. Install the web UI
-
-```
-git clone https://github.com/oobabooga/text-generation-webui
-cd text-generation-webui
-pip install -r requirements.txt
-```
-
-#### AMD, Metal, Intel Arc, and CPUs without AVX2
-
-1) Replace the last command above with
-
-```
-pip install -r requirements_nowheels.txt
-```
-
-2) Manually install llama-cpp-python using the appropriate command for your hardware: [Installation from PyPI](https://github.com/abetlen/llama-cpp-python#installation-from-pypi).
-
-3) Do the same for CTransformers: [Installation](https://github.com/marella/ctransformers#installation).
-
-4) AMD: Manually install AutoGPTQ: [Installation](https://github.com/PanQiWei/AutoGPTQ#installation).
-
-5) AMD: Manually install [ExLlama](https://github.com/turboderp/exllama) by simply cloning it into the `repositories` folder (it will be automatically compiled at runtime after that):
-
-```
-cd text-generation-webui
-git clone https://github.com/turboderp/exllama repositories/exllama
-```
-
-#### bitsandbytes on older NVIDIA GPUs
-
-bitsandbytes >= 0.39 may not work. In that case, to use `--load-in-8bit`, you may have to downgrade like this:
-
-* Linux: `pip install bitsandbytes==0.38.1`
-* Windows: `pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl`
-
-### Alternative: Docker
-
-```
-ln -s docker/{Dockerfile,docker-compose.yml,.dockerignore} .
-cp docker/.env.example .env
-# Edit .env and set TORCH_CUDA_ARCH_LIST based on your GPU model
-docker compose up --build
-```
-
-* You need to have docker compose v2.17 or higher installed. See [this guide](https://github.com/oobabooga/text-generation-webui/blob/main/docs/Docker.md) for instructions.
-* For additional docker files, check out [this repository](https://github.com/Atinoda/text-generation-webui-docker).
-
-### Updating the requirements
-
-From time to time, the `requirements.txt` changes. To update, use these commands:
-
-```
-conda activate textgen
-cd text-generation-webui
-pip install -r requirements.txt --upgrade
-```
-
-## Downloading models
-
-Models should be placed in the `text-generation-webui/models` folder. They are usually downloaded from [Hugging Face](https://huggingface.co/models?pipeline_tag=text-generation&sort=downloads).
-
-* Transformers or GPTQ models are made of several files and must be placed in a subfolder. Example:
-
-```
-text-generation-webui
-├── models
-│ ├── lmsys_vicuna-33b-v1.3
-│ │ ├── config.json
-│ │ ├── generation_config.json
-│ │ ├── pytorch_model-00001-of-00007.bin
-│ │ ├── pytorch_model-00002-of-00007.bin
-│ │ ├── pytorch_model-00003-of-00007.bin
-│ │ ├── pytorch_model-00004-of-00007.bin
-│ │ ├── pytorch_model-00005-of-00007.bin
-│ │ ├── pytorch_model-00006-of-00007.bin
-│ │ ├── pytorch_model-00007-of-00007.bin
-│ │ ├── pytorch_model.bin.index.json
-│ │ ├── special_tokens_map.json
-│ │ ├── tokenizer_config.json
-│ │ └── tokenizer.model
-```
-
-* GGUF models are a single file and should be placed directly into `models`. Example:
-
-```
-text-generation-webui
-├── models
-│ ├── llama-2-13b-chat.Q4_K_M.gguf
-```
-
-In both cases, you can use the "Model" tab of the UI to download the model from Hugging Face automatically. It is also possible to download via the command-line with `python download-model.py organization/model` (use `--help` to see all the options).
-
-#### GPT-4chan
-
-
-
-Instructions
-
-
-[GPT-4chan](https://huggingface.co/ykilcher/gpt-4chan) has been shut down from Hugging Face, so you need to download it elsewhere. You have two options:
-
-* Torrent: [16-bit](https://archive.org/details/gpt4chan_model_float16) / [32-bit](https://archive.org/details/gpt4chan_model)
-* Direct download: [16-bit](https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model_float16/) / [32-bit](https://theswissbay.ch/pdf/_notpdf_/gpt4chan_model/)
-
-The 32-bit version is only relevant if you intend to run the model in CPU mode. Otherwise, you should use the 16-bit version.
-
-After downloading the model, follow these steps:
-
-1. Place the files under `models/gpt4chan_model_float16` or `models/gpt4chan_model`.
-2. Place GPT-J 6B's config.json file in that same folder: [config.json](https://huggingface.co/EleutherAI/gpt-j-6B/raw/main/config.json).
-3. Download GPT-J 6B's tokenizer files (they will be automatically detected when you attempt to load GPT-4chan):
-
-```
-python download-model.py EleutherAI/gpt-j-6B --text-only
-```
-
-When you load this model in default or notebook modes, the "HTML" tab will show the generated text in 4chan format:
-
-
-
-
-
-## Starting the web UI
-
- conda activate textgen
- cd text-generation-webui
- python server.py
-
-Then browse to
-
-`http://localhost:7860/?__theme=dark`
-
-Optionally, you can use the following command-line flags:
-
-#### Basic settings
-
-| Flag | Description |
-|--------------------------------------------|-------------|
-| `-h`, `--help` | Show this help message and exit. |
-| `--multi-user` | Multi-user mode. Chat histories are not saved or automatically loaded. WARNING: this is highly experimental. |
-| `--character CHARACTER` | The name of the character to load in chat mode by default. |
-| `--model MODEL` | Name of the model to load by default. |
-| `--lora LORA [LORA ...]` | The list of LoRAs to load. If you want to load more than one LoRA, write the names separated by spaces. |
-| `--model-dir MODEL_DIR` | Path to directory with all the models. |
-| `--lora-dir LORA_DIR` | Path to directory with all the loras. |
-| `--model-menu` | Show a model menu in the terminal when the web UI is first launched. |
-| `--settings SETTINGS_FILE` | Load the default interface settings from this yaml file. See `settings-template.yaml` for an example. If you create a file called `settings.yaml`, this file will be loaded by default without the need to use the `--settings` flag. |
-| `--extensions EXTENSIONS [EXTENSIONS ...]` | The list of extensions to load. If you want to load more than one extension, write the names separated by spaces. |
-| `--verbose` | Print the prompts to the terminal. |
-| `--chat-buttons` | Show buttons on chat tab instead of hover menu. |
-
-#### Model loader
-
-| Flag | Description |
-|--------------------------------------------|-------------|
-| `--loader LOADER` | Choose the model loader manually, otherwise, it will get autodetected. Valid options: transformers, autogptq, gptq-for-llama, exllama, exllama_hf, llamacpp, rwkv, ctransformers |
-
-#### Accelerate/transformers
-
-| Flag | Description |
-|---------------------------------------------|-------------|
-| `--cpu` | Use the CPU to generate text. Warning: Training on CPU is extremely slow.|
-| `--auto-devices` | Automatically split the model across the available GPU(s) and CPU. |
-| `--gpu-memory GPU_MEMORY [GPU_MEMORY ...]` | Maximum GPU memory in GiB to be allocated per GPU. Example: `--gpu-memory 10` for a single GPU, `--gpu-memory 10 5` for two GPUs. You can also set values in MiB like `--gpu-memory 3500MiB`. |
-| `--cpu-memory CPU_MEMORY` | Maximum CPU memory in GiB to allocate for offloaded weights. Same as above.|
-| `--disk` | If the model is too large for your GPU(s) and CPU combined, send the remaining layers to the disk. |
-| `--disk-cache-dir DISK_CACHE_DIR` | Directory to save the disk cache to. Defaults to `cache/`. |
-| `--load-in-8bit` | Load the model with 8-bit precision (using bitsandbytes).|
-| `--bf16` | Load the model with bfloat16 precision. Requires NVIDIA Ampere GPU. |
-| `--no-cache` | Set `use_cache` to False while generating text. This reduces the VRAM usage a bit with a performance cost. |
-| `--xformers` | Use xformer's memory efficient attention. This should increase your tokens/s. |
-| `--sdp-attention` | Use torch 2.0's sdp attention. |
-| `--trust-remote-code` | Set trust_remote_code=True while loading a model. Necessary for ChatGLM and Falcon. |
-| `--use_fast` | Set use_fast=True while loading a tokenizer. |
-
-#### Accelerate 4-bit
-
-⚠️ Requires minimum compute of 7.0 on Windows at the moment.
-
-| Flag | Description |
-|---------------------------------------------|-------------|
-| `--load-in-4bit` | Load the model with 4-bit precision (using bitsandbytes). |
-| `--compute_dtype COMPUTE_DTYPE` | compute dtype for 4-bit. Valid options: bfloat16, float16, float32. |
-| `--quant_type QUANT_TYPE` | quant_type for 4-bit. Valid options: nf4, fp4. |
-| `--use_double_quant` | use_double_quant for 4-bit. |
-
-#### GGUF (for llama.cpp and ctransformers)
-
-| Flag | Description |
-|-------------|-------------|
-| `--threads` | Number of threads to use. |
-| `--threads-batch THREADS_BATCH` | Number of threads to use for batches/prompt processing. |
-| `--n_batch` | Maximum number of prompt tokens to batch together when calling llama_eval. |
-| `--n-gpu-layers N_GPU_LAYERS` | Number of layers to offload to the GPU. Only works if llama-cpp-python was compiled with BLAS. Set this to 1000000000 to offload all layers to the GPU. |
-| `--n_ctx N_CTX` | Size of the prompt context. |
-
-#### llama.cpp
-
-| Flag | Description |
-|---------------|---------------|
-| `--mul_mat_q` | Activate new mulmat kernels. |
-| `--tensor_split TENSOR_SPLIT` | Split the model across multiple GPUs, comma-separated list of proportions, e.g. 18,17 |
-| `--llama_cpp_seed SEED` | Seed for llama-cpp models. Default 0 (random). |
-| `--cache-capacity CACHE_CAPACITY` | Maximum cache capacity. Examples: 2000MiB, 2GiB. When provided without units, bytes will be assumed. |
-|`--cfg-cache` | llamacpp_HF: Create an additional cache for CFG negative prompts. |
-| `--no-mmap` | Prevent mmap from being used. |
-| `--mlock` | Force the system to keep the model in RAM. |
-| `--numa` | Activate NUMA task allocation for llama.cpp |
-| `--cpu` | Use the CPU version of llama-cpp-python instead of the GPU-accelerated version. |
-
-#### ctransformers
-
-| Flag | Description |
-|-------------|-------------|
-| `--model_type MODEL_TYPE` | Model type of pre-quantized model. Currently gpt2, gptj, gptneox, falcon, llama, mpt, starcoder (gptbigcode), dollyv2, and replit are supported. |
-
-#### AutoGPTQ
-
-| Flag | Description |
-|------------------|-------------|
-| `--triton` | Use triton. |
-| `--no_inject_fused_attention` | Disable the use of fused attention, which will use less VRAM at the cost of slower inference. |
-| `--no_inject_fused_mlp` | Triton mode only: disable the use of fused MLP, which will use less VRAM at the cost of slower inference. |
-| `--no_use_cuda_fp16` | This can make models faster on some systems. |
-| `--desc_act` | For models that don't have a quantize_config.json, this parameter is used to define whether to set desc_act or not in BaseQuantizeConfig. |
-| `--disable_exllama` | Disable ExLlama kernel, which can improve inference speed on some systems. |
-
-#### ExLlama
-
-| Flag | Description |
-|------------------|-------------|
-|`--gpu-split` | Comma-separated list of VRAM (in GB) to use per GPU device for model layers, e.g. `20,7,7` |
-|`--max_seq_len MAX_SEQ_LEN` | Maximum sequence length. |
-|`--cfg-cache` | ExLlama_HF: Create an additional cache for CFG negative prompts. Necessary to use CFG with that loader, but not necessary for CFG with base ExLlama. |
-
-#### GPTQ-for-LLaMa
-
-| Flag | Description |
-|---------------------------|-------------|
-| `--wbits WBITS` | Load a pre-quantized model with specified precision in bits. 2, 3, 4 and 8 are supported. |
-| `--model_type MODEL_TYPE` | Model type of pre-quantized model. Currently LLaMA, OPT, and GPT-J are supported. |
-| `--groupsize GROUPSIZE` | Group size. |
-| `--pre_layer PRE_LAYER [PRE_LAYER ...]` | The number of layers to allocate to the GPU. Setting this parameter enables CPU offloading for 4-bit models. For multi-gpu, write the numbers separated by spaces, eg `--pre_layer 30 60`. |
-| `--checkpoint CHECKPOINT` | The path to the quantized checkpoint file. If not specified, it will be automatically detected. |
-| `--monkey-patch` | Apply the monkey patch for using LoRAs with quantized models. |
-
-#### DeepSpeed
-
-| Flag | Description |
-|---------------------------------------|-------------|
-| `--deepspeed` | Enable the use of DeepSpeed ZeRO-3 for inference via the Transformers integration. |
-| `--nvme-offload-dir NVME_OFFLOAD_DIR` | DeepSpeed: Directory to use for ZeRO-3 NVME offloading. |
-| `--local_rank LOCAL_RANK` | DeepSpeed: Optional argument for distributed setups. |
-
-#### RWKV
-
-| Flag | Description |
-|---------------------------------|-------------|
-| `--rwkv-strategy RWKV_STRATEGY` | RWKV: The strategy to use while loading the model. Examples: "cpu fp32", "cuda fp16", "cuda fp16i8". |
-| `--rwkv-cuda-on` | RWKV: Compile the CUDA kernel for better performance. |
-
-#### RoPE (for llama.cpp, ExLlama, ExLlamaV2, and transformers)
-
-| Flag | Description |
-|------------------|-------------|
-| `--alpha_value ALPHA_VALUE` | Positional embeddings alpha factor for NTK RoPE scaling. Use either this or compress_pos_emb, not both. |
-| `--rope_freq_base ROPE_FREQ_BASE` | If greater than 0, will be used instead of alpha_value. Those two are related by rope_freq_base = 10000 * alpha_value ^ (64 / 63). |
-| `--compress_pos_emb COMPRESS_POS_EMB` | Positional embeddings compression factor. Should be set to (context length) / (model's original context length). Equal to 1/rope_freq_scale. |
-
-#### Gradio
-
-| Flag | Description |
-|---------------------------------------|-------------|
-| `--listen` | Make the web UI reachable from your local network. |
-| `--listen-host LISTEN_HOST` | The hostname that the server will use. |
-| `--listen-port LISTEN_PORT` | The listening port that the server will use. |
-| `--share` | Create a public URL. This is useful for running the web UI on Google Colab or similar. |
-| `--auto-launch` | Open the web UI in the default browser upon launch. |
-| `--gradio-auth USER:PWD` | set gradio authentication like "username:password"; or comma-delimit multiple like "u1:p1,u2:p2,u3:p3" |
-| `--gradio-auth-path GRADIO_AUTH_PATH` | Set the gradio authentication file path. The file should contain one or more user:password pairs in this format: "u1:p1,u2:p2,u3:p3" |
-| `--ssl-keyfile SSL_KEYFILE` | The path to the SSL certificate key file. |
-| `--ssl-certfile SSL_CERTFILE` | The path to the SSL certificate cert file. |
-
-#### API
-
-| Flag | Description |
-|---------------------------------------|-------------|
-| `--api` | Enable the API extension. |
-| `--public-api` | Create a public URL for the API using Cloudflare. |
-| `--public-api-id PUBLIC_API_ID` | Tunnel ID for named Cloudflare Tunnel. Use together with public-api option. |
-| `--api-blocking-port BLOCKING_PORT` | The listening port for the blocking API. |
-| `--api-streaming-port STREAMING_PORT` | The listening port for the streaming API. |
-
-#### Multimodal
-
-| Flag | Description |
-|---------------------------------------|-------------|
-| `--multimodal-pipeline PIPELINE` | The multimodal pipeline to use. Examples: `llava-7b`, `llava-13b`. |
-
-## Presets
-
-Inference settings presets can be created under `presets/` as yaml files. These files are detected automatically at startup.
-
-The presets that are included by default are the result of a contest that received 7215 votes. More details can be found [here](https://github.com/oobabooga/oobabooga.github.io/blob/main/arena/results.md).
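-
-For reference, a preset is simply a yaml file of sampling parameters. A hypothetical `presets/my_preset.yaml` (file name and values are purely illustrative) might look like this:
-
-```
-temperature: 0.7
-top_p: 0.9
-repetition_penalty: 1.15
-```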
-
-## Contributing
-
-If you would like to contribute to the project, check out the [Contributing guidelines](https://github.com/oobabooga/text-generation-webui/wiki/Contributing-guidelines).
-
-## Community
-
-* Subreddit: https://www.reddit.com/r/oobabooga/
-* Discord: https://discord.gg/jwZCF2dPQN
-
-## Acknowledgment
-
-In August 2023, [Andreessen Horowitz](https://a16z.com/) (a16z) provided a generous grant to encourage and support my independent work on this project. I am **extremely** grateful for their trust and recognition, which will allow me to dedicate more time towards realizing the full potential of text-generation-webui.
diff --git a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/gui/ui_model.py b/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/gui/ui_model.py
deleted file mode 100644
index 5a258d5f5ea04693aa5e4a497ae97a5f016dbbaa..0000000000000000000000000000000000000000
--- a/spaces/Anonymous-123/ImageNet-Editing/object_removal/TFill/gui/ui_model.py
+++ /dev/null
@@ -1,290 +0,0 @@
-from gui.ui_win import Ui_Form
-from gui.ui_draw import *
-from PIL import Image, ImageQt
-import numpy as np
-import random, io, os
-import torch
-import torch.nn.functional as F
-import torchvision.transforms as transforms
-from util import task, util
-from dataloader.image_folder import make_dataset
-from dataloader.data_loader import get_transform
-from model import create_model
-
-
-class ui_model(QtWidgets.QWidget, Ui_Form):
- """define the class of UI"""
- shape = 'line'
- CurrentWidth = 1
-
- def __init__(self, opt):
- super(ui_model, self).__init__()
-
- self.setupUi(self)
-
- self.opt = opt
- self.show_result_flag = False
- self.mask_type = None
- self.img_power = None
- self.model_names = ['celeba', 'ffhq', 'imagenet', 'places2']
- self.img_root = './examples/'
- self.img_files = ['celeba/img', 'ffhq/img', 'imagenet/img', 'places2/img']
-
- self.show_logo()
-
- self.comboBox.activated.connect(self.load_model) # select model
- self.pushButton_2.clicked.connect(self.select_image) # manually select an image
- self.pushButton_3.clicked.connect(self.random_image) # randomly select an image
- self.pushButton_4.clicked.connect(self.load_mask) # manually select a mask
- self.pushButton_5.clicked.connect(self.random_mask) # randomly select a mask
-
- # draw/erasure the mask
- self.radioButton.toggled.connect(lambda: self.draw_mask('line')) # draw the line
- self.radioButton_2.toggled.connect(lambda: self.draw_mask('rectangle')) # draw the rectangle
- self.radioButton_3.toggled.connect(lambda: self.draw_mask('center')) # center mask
- self.spinBox.valueChanged.connect(self.change_thickness)
- self.pushButton.clicked.connect(self.clear_mask)
-
- # fill image
- self.pushButton_6.clicked.connect(self.fill_image)
- self.comboBox_2.activated.connect(self.show_result)
- self.pushButton_7.clicked.connect(self.save_result)
-
- opt.preprocess = 'scale_shortside'
- self.transform_o = get_transform(opt, convert=False, augment=False)
- self.pil2tensor = transforms.ToTensor()
-
- def show_logo(self):
- """Show the logo of NTU and BTC"""
- img = QtWidgets.QLabel(self)
- img.setGeometry(1000, 10, 140, 50)
-
- pixmap = QtGui.QPixmap("./gui/logo/NTU_logo.jpg") # read examples
- pixmap = pixmap.scaled(140, 140, QtCore.Qt.KeepAspectRatio, QtCore.Qt.SmoothTransformation)
- img.setPixmap(pixmap)
- img.show()
- img1 = QtWidgets.QLabel(self)
- img1.setGeometry(1200, 10, 70, 50)
-
- pixmap1 = QtGui.QPixmap("./gui/logo/BTC_logo.png") # read examples
- pixmap1 = pixmap1.scaled(70, 70, QtCore.Qt.KeepAspectRatio, QtCore.Qt.SmoothTransformation)
- img1.setPixmap(pixmap1)
- img1.show()
-
- def show_image(self, img):
- """Show the masked examples"""
- show_img = img.copy()
- if self.mask_type == 'center':
- sub_img = Image.fromarray(np.uint8(255 * np.ones((int(self.pw/2), int(self.pw/2), 3))))
- mask = Image.fromarray(np.uint8(255 * np.ones((int(self.pw/2), int(self.pw/2)))))
- show_img.paste(sub_img, box=(int(self.pw/4), int(self.pw/4)), mask=mask)
- elif self.mask_type == 'external':
- mask = Image.open(self.mname).resize(self.img_power.size).convert('RGB')
- mask_L = Image.open(self.mname).resize(self.img_power.size).convert('L')
- show_img = Image.composite(mask, show_img, mask_L)
- self.new_painter(ImageQt.ImageQt(show_img))
-
- def show_result(self):
- """Show different kind examples"""
- value = self.comboBox_2.currentIndex()
- if value == 0:
- self.new_painter(ImageQt.ImageQt(self.img_power))
- elif value == 1:
- masked_img = torch.where(self.mask > 0, self.img_m, torch.ones_like(self.img_m))
- masked_img = Image.fromarray(util.tensor2im(masked_img.detach()))
- self.new_painter(ImageQt.ImageQt(masked_img))
- elif value == 2:
- if 'refine' in self.opt.coarse_or_refine:
- img_out = Image.fromarray(util.tensor2im(self.img_ref_out.detach()))
- else:
- img_out = Image.fromarray(util.tensor2im(self.img_out.detach()))
- self.new_painter(ImageQt.ImageQt(img_out))
-
- def save_result(self):
- """Save the results to the disk"""
- util.mkdir(self.opt.results_dir)
- img_name = self.fname.split('/')[-1]
- data_name = self.opt.img_file.split('/')[-1].split('.')[0]
-
- original_name = '%s_%s_%s' % ('original', data_name, img_name) # save the original image
- original_path = os.path.join(self.opt.results_dir, original_name)
- img_original = util.tensor2im(self.img_truth)
- util.save_image(img_original, original_path)
-
- mask_name = '%s_%s_%d_%s' % ('mask', data_name, self.PaintPanel.iteration, img_name)
- mask_path = os.path.join(self.opt.results_dir, mask_name)
- mask = self.mask.repeat(1, 3, 1, 1)
- img_mask = util.tensor2im(1-mask)
- util.save_image(img_mask, mask_path)
-
- #save masked image
- masked_img_name = '%s_%s_%d_%s' % ('masked_img', data_name, self.PaintPanel.iteration, img_name)
- img_path = os.path.join(self.opt.results_dir, masked_img_name)
- img = torch.where(self.mask < 0.2, torch.ones_like(self.img_truth), self.img_truth)
- masked_img = util.tensor2im(img)
- util.save_image(masked_img, img_path)
-
- # save the generated results
- img_g_name = '%s_%s_%d_%s' % ('g', data_name, self.PaintPanel.iteration, img_name)
- img_path = os.path.join(self.opt.results_dir, img_g_name)
- img_g = util.tensor2im(self.img_g)
- util.save_image(img_g, img_path)
-
- # save the results
- result_name = '%s_%s_%d_%s' % ('out', data_name, self.PaintPanel.iteration, img_name)
- result_path = os.path.join(self.opt.results_dir, result_name)
- img_result = util.tensor2im(self.img_out)
- util.save_image(img_result, result_path)
-
- # save the refined results
- if 'tc' in self.opt.model and 'refine' in self.opt.coarse_or_refine:
- result_name = '%s_%s_%d_%s' % ('ref', data_name, self.PaintPanel.iteration, img_name)
- result_path = os.path.join(self.opt.results_dir, result_name)
- img_result = util.tensor2im(self.img_ref_out)
- util.save_image(img_result, result_path)
-
- def load_model(self):
- """Load different kind models"""
- value = self.comboBox.currentIndex()
- if value == 0:
- raise NotImplementedError("Please choose a model")
- else:
- index = value-1 # define the model type and dataset type
- self.opt.name = self.model_names[index]
- self.opt.img_file = self.img_root + self.img_files[index % len(self.img_files)]
- self.model = create_model(self.opt)
- self.model.setup(self.opt)
-
- def load_image(self, fname):
- """Load the image"""
- self.img_o = Image.open(fname).convert('RGB')
- self.ow, self.oh = self.img_o.size
- self.img_power = self.transform_o(self.img_o)
- self.pw, self.ph = self.img_power.size
-
- return self.img_power
-
- def select_image(self):
- """Load the image"""
- self.fname, _ = QtWidgets.QFileDialog.getOpenFileName(self, 'select the image', self.opt.img_file, '*')
- img = self.load_image(self.fname)
-
- self.mask_type = 'none'
- self.show_image(img)
-
- def random_image(self):
- """Random load the test image"""
- image_paths, image_size = make_dataset(self.opt.img_file)
- item = random.randint(0, image_size-1)
- self.fname = image_paths[item]
- img = self.load_image(self.fname)
-
- self.mask_type = 'none'
- self.show_image(img)
-
- def load_mask(self):
- """Load a mask"""
- self.mask_type = 'external'
- self.mname, _ = QtWidgets.QFileDialog.getOpenFileName(self, 'select the mask', self.opt.mask_file,'*')
-
- self.show_image(self.img_power)
-
- def random_mask(self):
- """Random load the test mask"""
- if self.opt.mask_file == 'none':
- raise NotImplementedError("Please input the mask path")
- self.mask_type = 'external'
- mask_paths, mask_size = make_dataset(self.opt.mask_file)
- item = random.randint(0, mask_size - 1)
- self.mname = mask_paths[item]
-
- self.show_image(self.img_power)
-
- def read_mask(self):
- """Read the mask from the painted plain"""
- self.PaintPanel.saveDraw()
- buffer = QtCore.QBuffer()
- buffer.open(QtCore.QBuffer.ReadWrite)
- self.PaintPanel.map.save(buffer, 'PNG')
- pil_im = Image.open(io.BytesIO(buffer.data()))
-
- return pil_im
-
- def new_painter(self, image=None):
- """Build a painter to load and process the image"""
- # painter
- self.PaintPanel = painter(self, image)
- self.PaintPanel.close()
- if image is not None:
- w, h = image.size().width(), image.size().height()
- self.stackedWidget.setGeometry(QtCore.QRect(250+int(512-w/2), 100+int(128-h/8), w, h))
- self.stackedWidget.insertWidget(0, self.PaintPanel)
- self.stackedWidget.setCurrentWidget(self.PaintPanel)
-
- def change_thickness(self, num):
- """Change the width of the painter"""
- self.CurrentWidth = num
- self.PaintPanel.CurrentWidth = num
-
- def draw_mask(self, masktype):
- """Draw the mask"""
- if masktype == 'center':
- self.mask_type = 'center'
- if self.img_power is not None:
- self.show_image(self.img_power)
- else:
- self.mask_type = 'draw'
- self.shape = masktype
- self.PaintPanel.shape = masktype
-
- def clear_mask(self):
- """Clear the mask"""
- self.mask_type = 'draw'
- if self.PaintPanel.Brush:
- self.PaintPanel.Brush = False
- else:
- self.PaintPanel.Brush = True
-
- def set_input(self):
- """Set the input for the network"""
- img_o = self.pil2tensor(self.img_o).unsqueeze(0)
- img = self.pil2tensor(self.img_power).unsqueeze(0)
- if self.mask_type == 'draw':
- # get the test mask from painter
- mask = self.read_mask()
- mask = torch.autograd.Variable(self.pil2tensor(mask)).unsqueeze(0)[:, 0:1, :, :]
- elif self.mask_type == 'center':
- mask = torch.zeros_like(img)[:, 0:1, :, :]
- mask[:, :, int(self.pw/4):int(3*self.pw/4), int(self.ph/4):int(3*self.ph/4)] = 1
- elif self.mask_type == 'external':
- mask = self.pil2tensor(Image.open(self.mname).resize((self.pw, self.ph)).convert('L')).unsqueeze(0)
- mask = (mask < 0.5).float()
- if len(self.opt.gpu_ids) > 0:
- img = img.cuda(self.opt.gpu_ids[0])
- mask = mask.cuda(self.opt.gpu_ids[0])
- img_o = img_o.cuda(self.opt.gpu_ids[0])
-
- self.mask = mask
- self.img_org = img_o * 2 - 1
- self.img_truth = img * 2 - 1
- self.img_m = self.mask * self.img_truth
-
- def fill_image(self):
- """Forward to get the completed results"""
- self.set_input()
- if self.PaintPanel.iteration < 1:
- with torch.no_grad():
- fixed_img = F.interpolate(self.img_m, size=[self.opt.fixed_size, self.opt.fixed_size], mode='bicubic', align_corners=True).clamp(-1, 1)
- fixed_mask = (F.interpolate(self.mask, size=[self.opt.fixed_size, self.opt.fixed_size], mode='bicubic', align_corners=True) > 0.9).type_as(fixed_img)
- out, mask = self.model.netE(fixed_img, mask=fixed_mask, return_mask=True)
- out = self.model.netT(out, mask, bool_mask=False)
- self.img_g = self.model.netG(out)
- img_g_org = F.interpolate(self.img_g, size=self.img_truth.size()[2:], mode='bicubic', align_corners=True).clamp(-1, 1)
- self.img_out = self.mask * self.img_truth + (1 - self.mask) * img_g_org
- if 'refine' in self.opt.coarse_or_refine:
- img_ref = self.model.netG_Ref(self.img_out, mask=self.mask)
- self.img_ref_out = self.mask * self.img_truth + (1 - self.mask) * img_ref
- print('finish the completion')
-
- self.show_result_flag = True
- self.show_result()
\ No newline at end of file
diff --git a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/demo/create_coco_dataset.py b/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/demo/create_coco_dataset.py
deleted file mode 100644
index a0bb02a7e586d4fb4587635da545ff774f688f18..0000000000000000000000000000000000000000
--- a/spaces/Arulkumar03/GroundingDINO_SOTA_Zero_Shot_Model/demo/create_coco_dataset.py
+++ /dev/null
@@ -1,83 +0,0 @@
-import typer
-from groundingdino.util.inference import load_model, load_image, predict
-from tqdm import tqdm
-import torchvision
-import torch
-import fiftyone as fo
-
-
-def main(
- image_directory: str = 'test_grounding_dino',
- text_prompt: str = 'bus, car',
- box_threshold: float = 0.15,
- text_threshold: float = 0.10,
- export_dataset: bool = False,
- view_dataset: bool = False,
- export_annotated_images: bool = True,
- weights_path : str = "groundingdino_swint_ogc.pth",
- config_path: str = "../../GroundingDINO/groundingdino/config/GroundingDINO_SwinT_OGC.py",
- subsample: int = None,
- ):
-
- model = load_model(config_path, weights_path)
-
- dataset = fo.Dataset.from_images_dir(image_directory)
-
- samples = []
-
- if subsample is not None:
-
- if subsample < len(dataset):
- dataset = dataset.take(subsample).clone()
-
- for sample in tqdm(dataset):
-
- image_source, image = load_image(sample.filepath)
-
- boxes, logits, phrases = predict(
- model=model,
- image=image,
- caption=text_prompt,
- box_threshold=box_threshold,
- text_threshold=text_threshold,
- )
-
- detections = []
-
- for box, logit, phrase in zip(boxes, logits, phrases):
-
- rel_box = torchvision.ops.box_convert(box, 'cxcywh', 'xywh')
-
- detections.append(
- fo.Detection(
- label=phrase,
- bounding_box=rel_box,
- confidence=logit,
- ))
-
- # Store detections in a field name of your choice
- sample["detections"] = fo.Detections(detections=detections)
- sample.save()
-
- # loads the voxel fiftyone UI ready for viewing the dataset.
- if view_dataset:
- session = fo.launch_app(dataset)
- session.wait()
-
- # exports COCO dataset ready for training
- if export_dataset:
- dataset.export(
- 'coco_dataset',
- dataset_type=fo.types.COCODetectionDataset,
- )
-
- # saves bounding boxes plotted on the input images to disk
- if export_annotated_images:
- dataset.draw_labels(
- 'images_with_bounding_boxes',
- label_fields=['detections']
- )
-
-
-if __name__ == '__main__':
- typer.run(main)
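-
-# Example invocation (directory and prompt are illustrative; typer maps the function
-# arguments to CLI options such as --image-directory and --text-prompt):
-#   python create_coco_dataset.py --image-directory test_grounding_dino --text-prompt "bus, car" --export-dataset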
diff --git a/spaces/Audio-AGI/WavJourney/pipeline.py b/spaces/Audio-AGI/WavJourney/pipeline.py
deleted file mode 100644
index 9c93b09568d5ada73175c8c3ea6ab8046878a05b..0000000000000000000000000000000000000000
--- a/spaces/Audio-AGI/WavJourney/pipeline.py
+++ /dev/null
@@ -1,229 +0,0 @@
-import datetime
-import os
-from string import Template
-import openai
-import re
-import glob
-import pickle
-import time
-import json5
-from retrying import retry
-from code_generator import check_json_script, collect_and_check_audio_data
-import random
-import string
-
-import utils
-import voice_presets
-from code_generator import AudioCodeGenerator
-
-# Enable this for debugging
-USE_OPENAI_CACHE = False
-openai_cache = []
-if USE_OPENAI_CACHE:
- os.makedirs('cache', exist_ok=True)
- for cache_file in glob.glob('cache/*.pkl'):
- with open(cache_file, 'rb') as file:
- openai_cache.append(pickle.load(file))
-
-def chat_with_gpt(prompt, api_key):
- if USE_OPENAI_CACHE:
- filtered_object = list(filter(lambda x: x['prompt'] == prompt, openai_cache))
- if len(filtered_object) > 0:
- response = filtered_object[0]['response']
- return response
-
- try:
- openai.api_key = api_key
- chat = openai.ChatCompletion.create(
- # model="gpt-3.5-turbo",
- model="gpt-4",
- messages=[
- {
- "role": "system",
- "content": "You are a helpful assistant."
- },
- {
- "role": "user",
- "content": prompt
- }
- ]
- )
- finally:
- openai.api_key = ''
-
- if USE_OPENAI_CACHE:
- cache_obj = {
- 'prompt': prompt,
- 'response': chat['choices'][0]['message']['content']
- }
- with open(f'cache/{time.time()}.pkl', 'wb') as _openai_cache:
- pickle.dump(cache_obj, _openai_cache)
- openai_cache.append(cache_obj)
-
- return chat['choices'][0]['message']['content']
-
-
-def get_file_content(filename):
- with open(filename, 'r') as file:
- return file.read().strip()
-
-
-def write_to_file(filename, content):
- with open(filename, 'w') as file:
- file.write(content)
-
-
-def extract_substring_with_quotes(input_string, quotes="'''"):
- pattern = f"{quotes}(.*?){quotes}"
- matches = re.findall(pattern, input_string, re.DOTALL)
- return matches
-
-
-def try_extract_content_from_quotes(content):
- if "'''" in content:
- return extract_substring_with_quotes(content)[0]
- elif "```" in content:
- return extract_substring_with_quotes(content, quotes="```")[0]
- else:
- return content
-
-def maybe_get_content_from_file(content_or_filename):
- if os.path.exists(content_or_filename):
- with open(content_or_filename, 'r') as file:
- return file.read().strip()
- return content_or_filename
-
-
-
-# Pipeline Interface Guidelines:
-#
-# Init calls:
-# - Init calls must be called before running the actual steps
-# - init_session() is called every time a gradio webpage is loaded
-#
-# Single Step:
-# - takes input (file or content) and output path as input
-# - most of time just returns output content
-#
-# Compositional Step:
-# - takes session_id as input (you have session_id, you have all the paths)
-# - run a series of steps
-
-# This is called for every new gradio webpage
-
-def init_session(session_id=''):
- def uid8():
- return ''.join(random.choices(string.ascii_lowercase + string.digits, k=8))
-
- if session_id == '':
- session_id = f'{datetime.datetime.now().strftime("%Y%m%d%H%M%S")}_{uid8()}'
- # create the paths
- os.makedirs(utils.get_session_voice_preset_path(session_id))
- os.makedirs(utils.get_session_audio_path(session_id))
- print(f'New session created, session_id={session_id}')
- return session_id
-
-@retry(stop_max_attempt_number=3)
-def input_text_to_json_script_with_retry(complete_prompt_path, api_key):
- print(" trying ...")
- complete_prompt = get_file_content(complete_prompt_path)
- json_response = try_extract_content_from_quotes(chat_with_gpt(complete_prompt, api_key))
- json_data = json5.loads(json_response)
-
- try:
- check_json_script(json_data)
- collect_and_check_audio_data(json_data)
- except Exception as err:
- print(f'JSON ERROR: {err}')
- retry_complete_prompt = f'{complete_prompt}\n```\n{json_response}```\nThe script above has format error(s). Return the fixed script.\n\nScript:\n'
- write_to_file(complete_prompt_path, retry_complete_prompt)
- raise err
-
- return json_response
-
-# Step 1: input_text to json
-def input_text_to_json_script(input_text, output_path, api_key):
- input_text = maybe_get_content_from_file(input_text)
- text_to_audio_script_prompt = get_file_content('prompts/text_to_json.prompt')
- prompt = f'{text_to_audio_script_prompt}\n\nInput text: {input_text}\n\nScript:\n'
- complete_prompt_path = output_path / 'complete_input_text_to_audio_script.prompt'
- write_to_file(complete_prompt_path, prompt)
- audio_script_response = input_text_to_json_script_with_retry(complete_prompt_path, api_key)
- generated_audio_script_filename = output_path / 'audio_script.json'
- write_to_file(generated_audio_script_filename, audio_script_response)
- return audio_script_response
-
-# Step 2: json to char-voice map
-def json_script_to_char_voice_map(json_script, voices, output_path, api_key):
- json_script_content = maybe_get_content_from_file(json_script)
- prompt = get_file_content('prompts/audio_script_to_character_voice_map.prompt')
- presets_str = '\n'.join(f"{preset['id']}: {preset['desc']}" for preset in voices.values())
- prompt = Template(prompt).substitute(voice_and_desc=presets_str)
- prompt = f"{prompt}\n\nAudio script:\n'''\n{json_script_content}\n'''\n\noutput:\n"
- write_to_file(output_path / 'complete_audio_script_to_char_voice_map.prompt', prompt)
- char_voice_map_response = try_extract_content_from_quotes(chat_with_gpt(prompt, api_key))
- char_voice_map = json5.loads(char_voice_map_response)
- # enrich char_voice_map with voice preset metadata
- complete_char_voice_map = {c: voices[char_voice_map[c]] for c in char_voice_map}
- char_voice_map_filename = output_path / 'character_voice_map.json'
- write_to_file(char_voice_map_filename, json5.dumps(complete_char_voice_map))
- return complete_char_voice_map
-
-# Step 3: json to py code
-def json_script_and_char_voice_map_to_audio_gen_code(json_script_filename, char_voice_map_filename, output_path, result_filename):
- audio_code_generator = AudioCodeGenerator()
- code = audio_code_generator.parse_and_generate(
- json_script_filename,
- char_voice_map_filename,
- output_path,
- result_filename
- )
- write_to_file(output_path / 'audio_generation.py', code)
-
-# Step 4: py code to final wav
-def audio_code_gen_to_result(audio_gen_code_path):
- audio_gen_code_filename = audio_gen_code_path / 'audio_generation.py'
- os.system(f'PYTHONPATH=. python {audio_gen_code_filename}')
-
-# Function call used by Gradio: input_text to json
-def generate_json_file(session_id, input_text, api_key):
- output_path = utils.get_session_path(session_id)
- # Step 1
- print(f'session_id={session_id}, Step 1: Writing audio script based on text: {input_text} ...')
- return input_text_to_json_script(input_text, output_path, api_key)
-
-# Function call used by Gradio: json to result wav
-def generate_audio(session_id, json_script, api_key):
- def count_lines(content):
- # Split the string using the newline character and count the non-empty lines
- return sum(1 for line in content.split('\n') if line.strip())
-
- max_lines = utils.get_max_script_lines()
- if count_lines(json_script) > max_lines:
- raise ValueError(f'The number of lines of the JSON script has exceeded {max_lines}!')
-
- output_path = utils.get_session_path(session_id)
- output_audio_path = utils.get_session_audio_path(session_id)
- voices = voice_presets.get_merged_voice_presets(session_id)
-
- # Step 2
- print(f'session_id={session_id}, Step 2: Parsing character voice with LLM...')
- char_voice_map = json_script_to_char_voice_map(json_script, voices, output_path, api_key)
- # Step 3
- json_script_filename = output_path / 'audio_script.json'
- char_voice_map_filename = output_path / 'character_voice_map.json'
- result_wav_basename = f'res_{session_id}'
- print(f'session_id={session_id}, Step 3: Compiling audio script to Python program ...')
- json_script_and_char_voice_map_to_audio_gen_code(json_script_filename, char_voice_map_filename, output_path, result_wav_basename)
- # Step 4
- print(f'session_id={session_id}, Step 4: Start running Python program ...')
- audio_code_gen_to_result(output_path)
-
- result_wav_filename = output_audio_path / f'{result_wav_basename}.wav'
- print(f'Done all processes, result: {result_wav_filename}')
- return result_wav_filename, char_voice_map
-
-# Convenient function call used by wavjourney_cli
-def full_steps(session_id, input_text, api_key):
- json_script = generate_json_file(session_id, input_text, api_key)
- return generate_audio(session_id, json_script, api_key)
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/config.py b/spaces/Awiny/Image2Paragraph/models/grit_src/grit/config.py
deleted file mode 100644
index fabe7f0fbe1e41c6eb280f8f7d6ae2e9c4911135..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/config.py
+++ /dev/null
@@ -1,50 +0,0 @@
-from detectron2.config import CfgNode as CN
-
-
-def add_grit_config(cfg):
- _C = cfg
-
- _C.MODEL.BEAM_SIZE = 1
- _C.MODEL.TRAIN_TASK = ["ObjectDet", "DenseCap"]
- _C.MODEL.TEST_TASK = "DenseCap" # This can be varied if the model is jointly trained on multiple tasks
-
- _C.MODEL.ROI_BOX_HEAD.USE_BIAS = 0.0 # >= 0: not use
- _C.MODEL.ROI_BOX_HEAD.MULT_PROPOSAL_SCORE = False
-
- _C.MODEL.ROI_HEADS.MASK_WEIGHT = 1.0
- _C.MODEL.ROI_HEADS.OBJECT_FEAT_POOLER_RES = 14
- _C.MODEL.ROI_HEADS.SOFT_NMS_ENABLED = False
-
- # Backbones
- _C.MODEL.VIT_LAYERS = 12
-
- # Text Decoder
- _C.TEXT_DECODER = CN()
- _C.TEXT_DECODER.VOCAB_SIZE = 30522
- _C.TEXT_DECODER.HIDDEN_SIZE = 768
- _C.TEXT_DECODER.NUM_LAYERS = 6
- _C.TEXT_DECODER.ATTENTION_HEADS = 12
- _C.TEXT_DECODER.FEEDFORWARD_SIZE = 768 * 4
-
- # Multi-dataset dataloader
- _C.DATALOADER.DATASET_RATIO = [1, 1] # sample ratio
- _C.DATALOADER.DATASET_BS = 1
- _C.DATALOADER.DATASET_INPUT_SIZE = [1024, 1024]
- _C.DATALOADER.DATASET_INPUT_SCALE = [(0.1, 2.0), (0.1, 2.0)]
- _C.DATALOADER.DATASET_MIN_SIZES = [(640, 800), (640, 800)]
- _C.DATALOADER.DATASET_MAX_SIZES = [1333, 1333]
-
- _C.SOLVER.USE_CUSTOM_SOLVER = True
- _C.SOLVER.OPTIMIZER = 'ADAMW'
- _C.SOLVER.VIT_LAYER_DECAY = True
- _C.SOLVER.VIT_LAYER_DECAY_RATE = 0.7
-
- _C.INPUT.CUSTOM_AUG = 'EfficientDetResizeCrop'
- _C.INPUT.TRAIN_SIZE = 1024
- _C.INPUT.TEST_SIZE = 1024
- _C.INPUT.SCALE_RANGE = (0.1, 2.)
- # 'default' for fixed short / long edge
- _C.INPUT.TEST_INPUT_TYPE = 'default'
-
- _C.FIND_UNUSED_PARAM = True
- _C.USE_ACT_CHECKPOINT = True
\ No newline at end of file
diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/data/custom_dataset_mapper.py b/spaces/Awiny/Image2Paragraph/models/grit_src/grit/data/custom_dataset_mapper.py
deleted file mode 100644
index 1e21edb3d151dafdca5c4debfb7341a9ed0efdd9..0000000000000000000000000000000000000000
--- a/spaces/Awiny/Image2Paragraph/models/grit_src/grit/data/custom_dataset_mapper.py
+++ /dev/null
@@ -1,149 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-# Modified by Jialian Wu from https://github.com/facebookresearch/Detic/blob/main/detic/data/custom_dataset_mapper.py
-import copy
-import numpy as np
-import torch
-
-from detectron2.config import configurable
-
-from detectron2.data import detection_utils as utils
-from detectron2.data import transforms as T
-from detectron2.data.dataset_mapper import DatasetMapper
-from .custom_build_augmentation import build_custom_augmentation
-from itertools import compress
-import logging
-
-__all__ = ["CustomDatasetMapper", "ObjDescription"]
-logger = logging.getLogger(__name__)
-
-
-class CustomDatasetMapper(DatasetMapper):
- @configurable
- def __init__(self, is_train: bool,
- dataset_augs=[],
- **kwargs):
- if is_train:
- self.dataset_augs = [T.AugmentationList(x) for x in dataset_augs]
- super().__init__(is_train, **kwargs)
-
- @classmethod
- def from_config(cls, cfg, is_train: bool = True):
- ret = super().from_config(cfg, is_train)
- if is_train:
- if cfg.INPUT.CUSTOM_AUG == 'EfficientDetResizeCrop':
- dataset_scales = cfg.DATALOADER.DATASET_INPUT_SCALE
- dataset_sizes = cfg.DATALOADER.DATASET_INPUT_SIZE
- ret['dataset_augs'] = [
- build_custom_augmentation(cfg, True, scale, size) \
- for scale, size in zip(dataset_scales, dataset_sizes)]
- else:
- assert cfg.INPUT.CUSTOM_AUG == 'ResizeShortestEdge'
- min_sizes = cfg.DATALOADER.DATASET_MIN_SIZES
- max_sizes = cfg.DATALOADER.DATASET_MAX_SIZES
- ret['dataset_augs'] = [
- build_custom_augmentation(
- cfg, True, min_size=mi, max_size=ma) \
- for mi, ma in zip(min_sizes, max_sizes)]
- else:
- ret['dataset_augs'] = []
-
- return ret
-
- def __call__(self, dataset_dict):
- dataset_dict_out = self.prepare_data(dataset_dict)
-
- # When augmented image is too small, do re-augmentation
- retry = 0
- while (dataset_dict_out["image"].shape[1] < 32 or dataset_dict_out["image"].shape[2] < 32):
- retry += 1
- if retry == 100:
- logger.info('Retried augmentation 100 times. Make sure the image size is not too small.')
- logger.info('Image information is logged below for debugging:')
- logger.info(dataset_dict)
- dataset_dict_out = self.prepare_data(dataset_dict)
-
- return dataset_dict_out
-
- def prepare_data(self, dataset_dict_in):
- dataset_dict = copy.deepcopy(dataset_dict_in)
- if 'file_name' in dataset_dict:
- ori_image = utils.read_image(
- dataset_dict["file_name"], format=self.image_format)
- else:
- ori_image, _, _ = self.tar_dataset[dataset_dict["tar_index"]]
- ori_image = utils._apply_exif_orientation(ori_image)
- ori_image = utils.convert_PIL_to_numpy(ori_image, self.image_format)
- utils.check_image_size(dataset_dict, ori_image)
-
- aug_input = T.AugInput(copy.deepcopy(ori_image), sem_seg=None)
- if self.is_train:
- transforms = \
- self.dataset_augs[dataset_dict['dataset_source']](aug_input)
- else:
- transforms = self.augmentations(aug_input)
- image, sem_seg_gt = aug_input.image, aug_input.sem_seg
-
- image_shape = image.shape[:2]
- dataset_dict["image"] = torch.as_tensor(
- np.ascontiguousarray(image.transpose(2, 0, 1)))
-
- if not self.is_train:
- # USER: Modify this if you want to keep them for some reason.
- dataset_dict.pop("annotations", None)
- return dataset_dict
-
- if "annotations" in dataset_dict:
- if len(dataset_dict["annotations"]) > 0:
- object_descriptions = [an['object_description'] for an in dataset_dict["annotations"]]
- else:
- object_descriptions = []
- # USER: Modify this if you want to keep them for some reason.
- for anno in dataset_dict["annotations"]:
- if not self.use_instance_mask:
- anno.pop("segmentation", None)
- if not self.use_keypoint:
- anno.pop("keypoints", None)
-
- all_annos = [
- (utils.transform_instance_annotations(
- obj, transforms, image_shape,
- keypoint_hflip_indices=self.keypoint_hflip_indices,
- ), obj.get("iscrowd", 0))
- for obj in dataset_dict.pop("annotations")
- ]
- annos = [ann[0] for ann in all_annos if ann[1] == 0]
- instances = utils.annotations_to_instances(
- annos, image_shape, mask_format=self.instance_mask_format
- )
-
- instances.gt_object_descriptions = ObjDescription(object_descriptions)
-
- del all_annos
- if self.recompute_boxes:
- instances.gt_boxes = instances.gt_masks.get_bounding_boxes()
- dataset_dict["instances"] = utils.filter_empty_instances(instances)
-
- return dataset_dict
-
-
-class ObjDescription:
- def __init__(self, object_descriptions):
- self.data = object_descriptions
-
- def __getitem__(self, item):
- assert type(item) == torch.Tensor
- assert item.dim() == 1
- if len(item) > 0:
- assert item.dtype == torch.int64 or item.dtype == torch.bool
- if item.dtype == torch.int64:
- return ObjDescription([self.data[x.item()] for x in item])
- elif item.dtype == torch.bool:
- return ObjDescription(list(compress(self.data, item)))
-
- return ObjDescription(list(compress(self.data, item)))
-
- def __len__(self):
- return len(self.data)
-
- def __repr__(self):
- return "ObjDescription({})".format(self.data)
\ No newline at end of file
diff --git a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/README.md b/spaces/AzumaSeren100/XuanShen-Bert-VITS2/README.md
deleted file mode 100644
index 86961d53040ac2526fb2e16e5bae95b018f61e02..0000000000000000000000000000000000000000
--- a/spaces/AzumaSeren100/XuanShen-Bert-VITS2/README.md
+++ /dev/null
@@ -1,17 +0,0 @@
----
-license: apache-2.0
-title: Xuanshen-BERT-VITS2
-sdk: gradio
-emoji: 🚀
-colorFrom: yellow
-colorTo: red
-pinned: false
----
----
-license: apache-2.0
-sdk: gradio
-title: Seren10
-emoji: 🏆
-colorFrom: red
-colorTo: red
-pinned: false
\ No newline at end of file
diff --git a/spaces/Benson/text-generation/Examples/Apkue.md b/spaces/Benson/text-generation/Examples/Apkue.md
deleted file mode 100644
index e6d134d40a432e1448a68f54d682dc65e983d027..0000000000000000000000000000000000000000
--- a/spaces/Benson/text-generation/Examples/Apkue.md
+++ /dev/null
@@ -1,86 +0,0 @@
-
-
APKue: What is it and how do you use it?
-
Have you ever wanted to download an app or a game that isn't available on the Google Play Store? Or maybe you want to try an older or newer version of an app that isn't compatible with your device? If so, you may be interested in APKue, an alternative app store that lets you download all kinds of apps you can't find in the official store. In this article, we explain what APKue is, why you should use it, and how to use it to download apps and games on your Android device.
-
Introduction
-
What is APKue?
-
APKue is an app that lets you download and install Android apps and games from a third-party source. It is similar to other app stores such as Aptoide, Uptodown or APKMirror, but it has some unique features that make it stand out. For example, APKue has a simple, easy-to-use interface, a large and up-to-date catalog of apps and games, and a fast, secure download process. You can also use APKue to update your existing apps, uninstall unwanted apps, and manage your downloads.
There are many reasons why you might want to use APKue instead of the Google Play Store. Here are some of them:
-
-
You can access apps and games that are not available in your region or country.
-
You can download older or newer versions of apps and games that are not compatible with your device or that have bugs or issues.
-
You can try beta or modified versions of apps and games that have extra features or functions.
-
You can avoid ads, in-app purchases, or other restrictions that some apps and games have.
-
You can save storage space on your device by downloading only the APK files instead of the full app package.
-
-
-
How to download and install APKue on your Android device
-
Step 1: Enable unknown sources
-
Before you can install APKue on your device, you need to enable the option to install apps from unknown sources. This option is usually disabled by default for security reasons, but you can easily enable it by following these steps:
-
-
Go to your device settings and tap Security or Privacy.
-
Find the option that says Unknown sources or Install unknown apps and toggle it on.
-
A warning message will appear. Tap OK or Allow to confirm.
-
-
You are now ready to install APKue on your device.
-
Step 2: Download APKue from its official website
-
The next step is to download the APKue APK file from its official website. You can do this by following these steps:
-
-
Open your browser and go to [APKPure]( 1 ), the official APKue website.
-
Tap the Download button in the top right corner of the screen.
-
A pop-up window will appear. Tap Accept or Download to start the download process.
Step 3: Install APKue and launch it
-
Once the download is complete, you can install APKue on your device by following these steps:
-
-
Go to your device's file manager and locate the APKue APK file. It should be in the Downloads folder or the folder you chose to save it in.
-
Tap the APK file and a pop-up window will appear. Tap Install to start the installation process.
-
Wait a few seconds until the installation finishes. Tap Open to launch APKue or Done to exit.
-
-
Congratulations! You have successfully installed APKue on your device. You can now use it to download the apps and games you want.
-
How to use APKue to download apps and games
-
Step 1: Search for the app or game you want
-
-
-
Open APKue and tap the search icon in the top right corner of the screen.
-
Type the name of the app or game you want in the search box and tap the magnifying glass icon.
-
A list of results will appear. You can filter them by category, popularity, rating, or update date.
-
Tap the app or game you want to download. You will see its details, screenshots, reviews, and versions.
-
-
Step 2: Choose the version and download it
-
The next step is to choose the version of the app or game you want to download. You can do this by following these steps:
-
-
Scroll down to the Versions section and tap View available APKs.
-
A list of versions will appear. You can see their size, date, and compatibility.
-
Tap the version you want to download. A pop-up window will appear. Tap Download APK to start the download process.
-
A progress bar will show you the status of the download. You can pause, resume, or cancel it at any time.
-
-
Step 3: Install the app or game and enjoy it
-
The final step is to install the app or game you downloaded and enjoy it on your device. You can do this by following these steps:
-
-
-
Go to your device's file manager and locate the APK file of the app or game. It should be in the Downloads folder or the folder you chose to save it in.
-
Tap the APK file and a pop-up window will appear. Tap Install to start the installation process.
-
Wait a few seconds until the installation finishes. Tap Open to launch the app or game or Done to exit.
-
-
That's it! You have successfully downloaded and installed an app or game using APKue. You can now enjoy it on your device.
-
Conclusion
-
Summary of the main points
-
-
Call to action and final thoughts
-
If you are looking for an alternative app store that lets you download all kinds of apps you can't find on the Google Play Store, then APKue is a great option for you. It is simple, fast, secure, and up to date. You can use it to access apps and games that are not available in your region or country, download older or newer versions of apps and games that are not compatible with your device or that have bugs or issues, try beta or modified versions of apps and games that have extra features or functions, avoid ads, in-app purchases, or other restrictions that some apps and games have, save storage space on your device by downloading only the APK files instead of the full app package, update your existing apps, uninstall unwanted apps, and manage your downloads.
-
If you want to try APKue for yourself, you can download it from its official website [APKPure]. It is free and easy to use. Just remember to enable unknown sources before installing it, and scan any apps or games you download with antivirus software before installing them. Also, be respectful of the developers and owners of the apps and games you download and do not use them for illegal or unethical purposes.
-
Thank you for reading this article. We hope you learned something new and useful. If you have any questions, comments, or feedback, feel free to leave them below. We would love to hear from you. And if you liked this article, please share it with your friends and family who might be interested in APKue too. Happy downloading!
-
Frequently asked questions
-
Here are some frequently asked questions about APKue that you may find useful:
-
Is APKue safe to use?
-
-
Is APKue legal to use?
-
APKue is legal to use, since it does not host any apps or games on its own servers. It only provides links to download them from other sources. However, some of the apps and games you can download through APKue may be illegal or may infringe intellectual property rights, so you should always check the legality and legitimacy of the apps and games you download and use them at your own risk.
-
How can I update the apps and games I download from APKue?
-
You can update the apps and games you download from APKue by using the app itself. APKue will notify you when a new version is available for any app or game you have downloaded, and you can choose whether or not to update it. Alternatively, you can check for updates manually by going to the app or game's page and tapping the Update button.
-
How can I uninstall the apps and games I download from APKue?
-
You can uninstall the apps and games you download from APKue through your device settings or file manager. You can also use APKue to uninstall them. Just go to the app or game's page and tap the Uninstall button.
-
How can I contact the developers or owners of the apps and games I download from APKue?
-
You can contact the developers or owners of the apps and games you download from APKue by visiting their official websites or social media pages. You can usually find these links on the app or game's page in APKue. You can also leave a review or a comment on the app or game's page in APKue to share your feedback or report any issues.
64aa2da5cf
-
-
\ No newline at end of file
diff --git a/spaces/BetterAPI/BetterChat_new/src/lib/types/Settings.ts b/spaces/BetterAPI/BetterChat_new/src/lib/types/Settings.ts
deleted file mode 100644
index f028db02f7b8d021e06939c187de11624af4737f..0000000000000000000000000000000000000000
--- a/spaces/BetterAPI/BetterChat_new/src/lib/types/Settings.ts
+++ /dev/null
@@ -1,13 +0,0 @@
-import type { Timestamps } from "./Timestamps";
-
-export interface Settings extends Timestamps {
- sessionId: string;
-
- /**
- * Note: Only conversations with this setting explicitly set to true should be shared.
- *
- * This setting is explicitly set to true when users accept the ethics modal.
- * */
- shareConversationsWithModelAuthors: boolean;
- ethicsModalAcceptedAt: Date | null;
-}
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/chardistribution.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/chardistribution.py
deleted file mode 100644
index 176cb996408e6681a88722783919efc0e9dafb29..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/pip/_vendor/chardet/chardistribution.py
+++ /dev/null
@@ -1,261 +0,0 @@
-######################## BEGIN LICENSE BLOCK ########################
-# The Original Code is Mozilla Communicator client code.
-#
-# The Initial Developer of the Original Code is
-# Netscape Communications Corporation.
-# Portions created by the Initial Developer are Copyright (C) 1998
-# the Initial Developer. All Rights Reserved.
-#
-# Contributor(s):
-# Mark Pilgrim - port to Python
-#
-# This library is free software; you can redistribute it and/or
-# modify it under the terms of the GNU Lesser General Public
-# License as published by the Free Software Foundation; either
-# version 2.1 of the License, or (at your option) any later version.
-#
-# This library is distributed in the hope that it will be useful,
-# but WITHOUT ANY WARRANTY; without even the implied warranty of
-# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU
-# Lesser General Public License for more details.
-#
-# You should have received a copy of the GNU Lesser General Public
-# License along with this library; if not, write to the Free Software
-# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA
-# 02110-1301 USA
-######################### END LICENSE BLOCK #########################
-
-from typing import Tuple, Union
-
-from .big5freq import (
- BIG5_CHAR_TO_FREQ_ORDER,
- BIG5_TABLE_SIZE,
- BIG5_TYPICAL_DISTRIBUTION_RATIO,
-)
-from .euckrfreq import (
- EUCKR_CHAR_TO_FREQ_ORDER,
- EUCKR_TABLE_SIZE,
- EUCKR_TYPICAL_DISTRIBUTION_RATIO,
-)
-from .euctwfreq import (
- EUCTW_CHAR_TO_FREQ_ORDER,
- EUCTW_TABLE_SIZE,
- EUCTW_TYPICAL_DISTRIBUTION_RATIO,
-)
-from .gb2312freq import (
- GB2312_CHAR_TO_FREQ_ORDER,
- GB2312_TABLE_SIZE,
- GB2312_TYPICAL_DISTRIBUTION_RATIO,
-)
-from .jisfreq import (
- JIS_CHAR_TO_FREQ_ORDER,
- JIS_TABLE_SIZE,
- JIS_TYPICAL_DISTRIBUTION_RATIO,
-)
-from .johabfreq import JOHAB_TO_EUCKR_ORDER_TABLE
-
-
-class CharDistributionAnalysis:
- ENOUGH_DATA_THRESHOLD = 1024
- SURE_YES = 0.99
- SURE_NO = 0.01
- MINIMUM_DATA_THRESHOLD = 3
-
- def __init__(self) -> None:
- # Mapping table to get frequency order from char order (get from
- # GetOrder())
- self._char_to_freq_order: Tuple[int, ...] = tuple()
- self._table_size = 0 # Size of above table
- # This is a constant value which varies from language to language,
- # used in calculating confidence. See
- # http://www.mozilla.org/projects/intl/UniversalCharsetDetection.html
- # for further detail.
- self.typical_distribution_ratio = 0.0
- self._done = False
- self._total_chars = 0
- self._freq_chars = 0
- self.reset()
-
- def reset(self) -> None:
- """reset analyser, clear any state"""
- # If this flag is set to True, detection is done and conclusion has
- # been made
- self._done = False
- self._total_chars = 0 # Total characters encountered
- # The number of characters whose frequency order is less than 512
- self._freq_chars = 0
-
- def feed(self, char: Union[bytes, bytearray], char_len: int) -> None:
- """feed a character with known length"""
- if char_len == 2:
- # we only care about 2-byte characters in our distribution analysis
- order = self.get_order(char)
- else:
- order = -1
- if order >= 0:
- self._total_chars += 1
- # order is valid
- if order < self._table_size:
- if 512 > self._char_to_freq_order[order]:
- self._freq_chars += 1
-
- def get_confidence(self) -> float:
- """return confidence based on existing data"""
- # if we didn't receive any character in our consideration range,
- # return negative answer
- if self._total_chars <= 0 or self._freq_chars <= self.MINIMUM_DATA_THRESHOLD:
- return self.SURE_NO
-
- if self._total_chars != self._freq_chars:
- r = self._freq_chars / (
- (self._total_chars - self._freq_chars) * self.typical_distribution_ratio
- )
- if r < self.SURE_YES:
- return r
-
- # normalize confidence (we don't want to be 100% sure)
- return self.SURE_YES
-
- def got_enough_data(self) -> bool:
- # It is not necessary to receive all data to draw conclusion.
- # For charset detection, certain amount of data is enough
- return self._total_chars > self.ENOUGH_DATA_THRESHOLD
-
- def get_order(self, _: Union[bytes, bytearray]) -> int:
- # We do not handle characters based on the original encoding string,
- # but convert this encoding string to a number, here called order.
- # This allows multiple encodings of a language to share one frequency
- # table.
- return -1
-
-
-class EUCTWDistributionAnalysis(CharDistributionAnalysis):
- def __init__(self) -> None:
- super().__init__()
- self._char_to_freq_order = EUCTW_CHAR_TO_FREQ_ORDER
- self._table_size = EUCTW_TABLE_SIZE
- self.typical_distribution_ratio = EUCTW_TYPICAL_DISTRIBUTION_RATIO
-
- def get_order(self, byte_str: Union[bytes, bytearray]) -> int:
- # for euc-TW encoding, we are interested
- # first byte range: 0xc4 -- 0xfe
- # second byte range: 0xa1 -- 0xfe
- # no validation needed here. State machine has done that
- first_char = byte_str[0]
- if first_char >= 0xC4:
- return 94 * (first_char - 0xC4) + byte_str[1] - 0xA1
- return -1
-
-
-class EUCKRDistributionAnalysis(CharDistributionAnalysis):
- def __init__(self) -> None:
- super().__init__()
- self._char_to_freq_order = EUCKR_CHAR_TO_FREQ_ORDER
- self._table_size = EUCKR_TABLE_SIZE
- self.typical_distribution_ratio = EUCKR_TYPICAL_DISTRIBUTION_RATIO
-
- def get_order(self, byte_str: Union[bytes, bytearray]) -> int:
- # for euc-KR encoding, we are interested
- # first byte range: 0xb0 -- 0xfe
- # second byte range: 0xa1 -- 0xfe
- # no validation needed here. State machine has done that
- first_char = byte_str[0]
- if first_char >= 0xB0:
- return 94 * (first_char - 0xB0) + byte_str[1] - 0xA1
- return -1
-
-
-class JOHABDistributionAnalysis(CharDistributionAnalysis):
- def __init__(self) -> None:
- super().__init__()
- self._char_to_freq_order = EUCKR_CHAR_TO_FREQ_ORDER
- self._table_size = EUCKR_TABLE_SIZE
- self.typical_distribution_ratio = EUCKR_TYPICAL_DISTRIBUTION_RATIO
-
- def get_order(self, byte_str: Union[bytes, bytearray]) -> int:
- first_char = byte_str[0]
- if 0x88 <= first_char < 0xD4:
- code = first_char * 256 + byte_str[1]
- return JOHAB_TO_EUCKR_ORDER_TABLE.get(code, -1)
- return -1
-
-
-class GB2312DistributionAnalysis(CharDistributionAnalysis):
- def __init__(self) -> None:
- super().__init__()
- self._char_to_freq_order = GB2312_CHAR_TO_FREQ_ORDER
- self._table_size = GB2312_TABLE_SIZE
- self.typical_distribution_ratio = GB2312_TYPICAL_DISTRIBUTION_RATIO
-
- def get_order(self, byte_str: Union[bytes, bytearray]) -> int:
- # for GB2312 encoding, we are interested
- # first byte range: 0xb0 -- 0xfe
- # second byte range: 0xa1 -- 0xfe
- # no validation needed here. State machine has done that
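- # As a worked example (illustrative byte pairs only): (0xB0, 0xA1) maps to
- # order 94*0 + 0 = 0, and (0xB1, 0xA3) maps to 94*1 + 2 = 96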
- first_char, second_char = byte_str[0], byte_str[1]
- if (first_char >= 0xB0) and (second_char >= 0xA1):
- return 94 * (first_char - 0xB0) + second_char - 0xA1
- return -1
-
-
-class Big5DistributionAnalysis(CharDistributionAnalysis):
- def __init__(self) -> None:
- super().__init__()
- self._char_to_freq_order = BIG5_CHAR_TO_FREQ_ORDER
- self._table_size = BIG5_TABLE_SIZE
- self.typical_distribution_ratio = BIG5_TYPICAL_DISTRIBUTION_RATIO
-
- def get_order(self, byte_str: Union[bytes, bytearray]) -> int:
- # for big5 encoding, we are interested
- # first byte range: 0xa4 -- 0xfe
- # second byte range: 0x40 -- 0x7e , 0xa1 -- 0xfe
- # no validation needed here. State machine has done that
- first_char, second_char = byte_str[0], byte_str[1]
- if first_char >= 0xA4:
- if second_char >= 0xA1:
- return 157 * (first_char - 0xA4) + second_char - 0xA1 + 63
- return 157 * (first_char - 0xA4) + second_char - 0x40
- return -1
-
-
-class SJISDistributionAnalysis(CharDistributionAnalysis):
- def __init__(self) -> None:
- super().__init__()
- self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER
- self._table_size = JIS_TABLE_SIZE
- self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO
-
- def get_order(self, byte_str: Union[bytes, bytearray]) -> int:
- # for sjis encoding, we are interested
- # first byte range: 0x81 -- 0x9f , 0xe0 -- 0xfe
- # second byte range: 0x40 -- 0x7e, 0x81 -- 0xfe
- # no validation needed here. State machine has done that
- first_char, second_char = byte_str[0], byte_str[1]
- if 0x81 <= first_char <= 0x9F:
- order = 188 * (first_char - 0x81)
- elif 0xE0 <= first_char <= 0xEF:
- order = 188 * (first_char - 0xE0 + 31)
- else:
- return -1
- order = order + second_char - 0x40
- if second_char > 0x7F:
- order = -1
- return order
-
-
-class EUCJPDistributionAnalysis(CharDistributionAnalysis):
- def __init__(self) -> None:
- super().__init__()
- self._char_to_freq_order = JIS_CHAR_TO_FREQ_ORDER
- self._table_size = JIS_TABLE_SIZE
- self.typical_distribution_ratio = JIS_TYPICAL_DISTRIBUTION_RATIO
-
- def get_order(self, byte_str: Union[bytes, bytearray]) -> int:
- # for euc-JP encoding, we are interested
- # first byte range: 0xa0 -- 0xfe
- # second byte range: 0xa1 -- 0xfe
- # no validation needed here. State machine has done that
- char = byte_str[0]
- if char >= 0xA0:
- return 94 * (char - 0xA1) + byte_str[1] - 0xA1
- return -1
diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_entry_points.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_entry_points.py
deleted file mode 100644
index f087681b5980b586c79fb4d87f99e33597eb1575..0000000000000000000000000000000000000000
--- a/spaces/Big-Web/MMSD/env/Lib/site-packages/setuptools/_entry_points.py
+++ /dev/null
@@ -1,86 +0,0 @@
-import functools
-import operator
-import itertools
-
-from .extern.jaraco.text import yield_lines
-from .extern.jaraco.functools import pass_none
-from ._importlib import metadata
-from ._itertools import ensure_unique
-from .extern.more_itertools import consume
-
-
-def ensure_valid(ep):
- """
- Exercise one of the dynamic properties to trigger
- the pattern match.
- """
- ep.extras
-
-
-def load_group(value, group):
- """
- Given a value of an entry point or series of entry points,
- return each as an EntryPoint.
- """
- # normalize to a single sequence of lines
- lines = yield_lines(value)
- text = f'[{group}]\n' + '\n'.join(lines)
- return metadata.EntryPoints._from_text(text)
-
-
-def by_group_and_name(ep):
- return ep.group, ep.name
-
-
-def validate(eps: metadata.EntryPoints):
- """
- Ensure entry points are unique by group and name and validate each.
- """
- consume(map(ensure_valid, ensure_unique(eps, key=by_group_and_name)))
- return eps
-
-
-@functools.singledispatch
-def load(eps):
- """
- Given a Distribution.entry_points, produce EntryPoints.
- """
- groups = itertools.chain.from_iterable(
- load_group(value, group)
- for group, value in eps.items())
- return validate(metadata.EntryPoints(groups))
-
-
-@load.register(str)
-def _(eps):
- r"""
- >>> ep, = load('[console_scripts]\nfoo=bar')
- >>> ep.group
- 'console_scripts'
- >>> ep.name
- 'foo'
- >>> ep.value
- 'bar'
- """
- return validate(metadata.EntryPoints(metadata.EntryPoints._from_text(eps)))
-
-
-load.register(type(None), lambda x: x)
-
-
-@pass_none
-def render(eps: metadata.EntryPoints):
- by_group = operator.attrgetter('group')
- groups = itertools.groupby(sorted(eps, key=by_group), by_group)
-
- return '\n'.join(
- f'[{group}]\n{render_items(items)}\n'
- for group, items in groups
- )
-
-
-def render_items(eps):
- return '\n'.join(
- f'{ep.name} = {ep.value}'
- for ep in sorted(eps)
- )
diff --git a/spaces/Biliovo/anime-remove-background/README.md b/spaces/Biliovo/anime-remove-background/README.md
deleted file mode 100644
index 1ba3cb5ea0e994e246d57b7d62b8aa5a6331901c..0000000000000000000000000000000000000000
--- a/spaces/Biliovo/anime-remove-background/README.md
+++ /dev/null
@@ -1,14 +0,0 @@
----
-title: Anime Remove Background
-emoji: 🪄🖼️
-colorFrom: indigo
-colorTo: pink
-sdk: gradio
-sdk_version: 3.1.4
-app_file: app.py
-pinned: false
-license: apache-2.0
-duplicated_from: skytnt/anime-remove-background
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/demo/predictor.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/demo/predictor.py
deleted file mode 100644
index 689fa85436d928858e652df665f5e7460a1f3154..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/demo/predictor.py
+++ /dev/null
@@ -1,220 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import atexit
-import bisect
-import multiprocessing as mp
-from collections import deque
-import cv2
-import torch
-
-from detectron2.data import MetadataCatalog
-from detectron2.engine.defaults import DefaultPredictor
-from detectron2.utils.video_visualizer import VideoVisualizer
-from detectron2.utils.visualizer import ColorMode, Visualizer
-
-
-class VisualizationDemo(object):
- def __init__(self, cfg, instance_mode=ColorMode.IMAGE, parallel=False):
- """
- Args:
- cfg (CfgNode):
- instance_mode (ColorMode):
- parallel (bool): whether to run the model in different processes from visualization.
- Useful since the visualization logic can be slow.
- """
- self.metadata = MetadataCatalog.get(
- cfg.DATASETS.TEST[0] if len(cfg.DATASETS.TEST) else "__unused"
- )
- self.cpu_device = torch.device("cpu")
- self.instance_mode = instance_mode
-
- self.parallel = parallel
- if parallel:
- num_gpu = torch.cuda.device_count()
- self.predictor = AsyncPredictor(cfg, num_gpus=num_gpu)
- else:
- self.predictor = DefaultPredictor(cfg)
-
- def run_on_image(self, image):
- """
- Args:
- image (np.ndarray): an image of shape (H, W, C) (in BGR order).
- This is the format used by OpenCV.
-
- Returns:
- predictions (dict): the output of the model.
- vis_output (VisImage): the visualized image output.
- """
- vis_output = None
- predictions = self.predictor(image)
- # Convert image from OpenCV BGR format to Matplotlib RGB format.
- image = image[:, :, ::-1]
- visualizer = Visualizer(image, self.metadata, instance_mode=self.instance_mode)
- if "panoptic_seg" in predictions:
- panoptic_seg, segments_info = predictions["panoptic_seg"]
- vis_output = visualizer.draw_panoptic_seg_predictions(
- panoptic_seg.to(self.cpu_device), segments_info
- )
- else:
- if "sem_seg" in predictions:
- vis_output = visualizer.draw_sem_seg(
- predictions["sem_seg"].argmax(dim=0).to(self.cpu_device)
- )
- if "instances" in predictions:
- instances = predictions["instances"].to(self.cpu_device)
- vis_output = visualizer.draw_instance_predictions(predictions=instances)
-
- return predictions, vis_output
-
- def _frame_from_video(self, video):
- while video.isOpened():
- success, frame = video.read()
- if success:
- yield frame
- else:
- break
-
- def run_on_video(self, video):
- """
- Visualizes predictions on frames of the input video.
-
- Args:
- video (cv2.VideoCapture): a :class:`VideoCapture` object, whose source can be
- either a webcam or a video file.
-
- Yields:
- ndarray: BGR visualizations of each video frame.
- """
- video_visualizer = VideoVisualizer(self.metadata, self.instance_mode)
-
- def process_predictions(frame, predictions):
- frame = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
- if "panoptic_seg" in predictions:
- panoptic_seg, segments_info = predictions["panoptic_seg"]
- vis_frame = video_visualizer.draw_panoptic_seg_predictions(
- frame, panoptic_seg.to(self.cpu_device), segments_info
- )
- elif "instances" in predictions:
- predictions = predictions["instances"].to(self.cpu_device)
- vis_frame = video_visualizer.draw_instance_predictions(frame, predictions)
- elif "sem_seg" in predictions:
- vis_frame = video_visualizer.draw_sem_seg(
- frame, predictions["sem_seg"].argmax(dim=0).to(self.cpu_device)
- )
-
- # Converts Matplotlib RGB format to OpenCV BGR format
- vis_frame = cv2.cvtColor(vis_frame.get_image(), cv2.COLOR_RGB2BGR)
- return vis_frame
-
- frame_gen = self._frame_from_video(video)
- if self.parallel:
- buffer_size = self.predictor.default_buffer_size
-
- frame_data = deque()
-
- for cnt, frame in enumerate(frame_gen):
- frame_data.append(frame)
- self.predictor.put(frame)
-
- if cnt >= buffer_size:
- frame = frame_data.popleft()
- predictions = self.predictor.get()
- yield process_predictions(frame, predictions)
-
- while len(frame_data):
- frame = frame_data.popleft()
- predictions = self.predictor.get()
- yield process_predictions(frame, predictions)
- else:
- for frame in frame_gen:
- yield process_predictions(frame, self.predictor(frame))
-
-
-class AsyncPredictor:
- """
- A predictor that runs the model asynchronously, possibly on >1 GPUs.
- Because rendering the visualization takes a considerable amount of time,
- this helps improve throughput when rendering videos.
- """
-
- class _StopToken:
- pass
-
- class _PredictWorker(mp.Process):
- def __init__(self, cfg, task_queue, result_queue):
- self.cfg = cfg
- self.task_queue = task_queue
- self.result_queue = result_queue
- super().__init__()
-
- def run(self):
- predictor = DefaultPredictor(self.cfg)
-
- while True:
- task = self.task_queue.get()
- if isinstance(task, AsyncPredictor._StopToken):
- break
- idx, data = task
- result = predictor(data)
- self.result_queue.put((idx, result))
-
- def __init__(self, cfg, num_gpus: int = 1):
- """
- Args:
- cfg (CfgNode):
- num_gpus (int): if 0, will run on CPU
- """
- num_workers = max(num_gpus, 1)
- self.task_queue = mp.Queue(maxsize=num_workers * 3)
- self.result_queue = mp.Queue(maxsize=num_workers * 3)
- self.procs = []
- for gpuid in range(max(num_gpus, 1)):
- cfg = cfg.clone()
- cfg.defrost()
- cfg.MODEL.DEVICE = "cuda:{}".format(gpuid) if num_gpus > 0 else "cpu"
- self.procs.append(
- AsyncPredictor._PredictWorker(cfg, self.task_queue, self.result_queue)
- )
-
- self.put_idx = 0
- self.get_idx = 0
- self.result_rank = []
- self.result_data = []
-
- for p in self.procs:
- p.start()
- atexit.register(self.shutdown)
-
- def put(self, image):
- self.put_idx += 1
- self.task_queue.put((self.put_idx, image))
-
- def get(self):
- self.get_idx += 1 # the index needed for this request
- if len(self.result_rank) and self.result_rank[0] == self.get_idx:
- res = self.result_data[0]
- del self.result_data[0], self.result_rank[0]
- return res
-
- while True:
- # make sure the results are returned in the correct order
- idx, res = self.result_queue.get()
- if idx == self.get_idx:
- return res
- insert = bisect.bisect(self.result_rank, idx)
- self.result_rank.insert(insert, idx)
- self.result_data.insert(insert, res)
-
- def __len__(self):
- return self.put_idx - self.get_idx
-
- def __call__(self, image):
- self.put(image)
- return self.get()
-
- def shutdown(self):
- for _ in self.procs:
- self.task_queue.put(AsyncPredictor._StopToken())
-
- @property
- def default_buffer_size(self):
- return len(self.procs) * 5
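-
-# A rough usage sketch for AsyncPredictor (illustrative only; cfg comes from the
-# surrounding demo code). Frames are submitted with put() and results come back
-# from get() in submission order, or a single image can go through __call__:
-#
-#   predictor = AsyncPredictor(cfg, num_gpus=2)
-#   predictor.put(frame)
-#   outputs = predictor.get()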
diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/batch_norm.py b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/batch_norm.py
deleted file mode 100644
index 0d7384ccec3b8f2f94a2df4d912757d16f71bfd6..0000000000000000000000000000000000000000
--- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/detectron2/layers/batch_norm.py
+++ /dev/null
@@ -1,237 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved
-import logging
-import torch
-import torch.distributed as dist
-from torch import nn
-from torch.autograd.function import Function
-from torch.nn import functional as F
-
-from detectron2.utils import comm
-
-from .wrappers import BatchNorm2d
-
-
-class FrozenBatchNorm2d(nn.Module):
- """
- BatchNorm2d where the batch statistics and the affine parameters are fixed.
-
- It contains non-trainable buffers called
- "weight" and "bias", "running_mean", "running_var",
- initialized to perform identity transformation.
-
- The pre-trained backbone models from Caffe2 only contain "weight" and "bias",
- which are computed from the original four parameters of BN.
- The affine transform `x * weight + bias` will perform the equivalent
- computation of `(x - running_mean) / sqrt(running_var) * weight + bias`.
- When loading a backbone model from Caffe2, "running_mean" and "running_var"
- will be left unchanged as identity transformation.
-
- Other pre-trained backbone models may contain all 4 parameters.
-
- The forward is implemented by `F.batch_norm(..., training=False)`.
- """
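-
- # A small numeric check of the folding described above (illustrative values
- # only): with weight=2, bias=0.5, running_mean=0.1, running_var=0.04 and eps
- # ignored, scale = 2 / sqrt(0.04) = 10 and the folded bias is 0.5 - 0.1 * 10
- # = -0.5, so an input x = 1.0 gives 1.0 * 10 - 0.5 = 9.5, which matches
- # (1.0 - 0.1) / sqrt(0.04) * 2 + 0.5.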
-
- _version = 3
-
- def __init__(self, num_features, eps=1e-5):
- super().__init__()
- self.num_features = num_features
- self.eps = eps
- self.register_buffer("weight", torch.ones(num_features))
- self.register_buffer("bias", torch.zeros(num_features))
- self.register_buffer("running_mean", torch.zeros(num_features))
- self.register_buffer("running_var", torch.ones(num_features) - eps)
-
- def forward(self, x):
- if x.requires_grad:
- # When gradients are needed, F.batch_norm will use extra memory
- # because its backward op computes gradients for weight/bias as well.
- scale = self.weight * (self.running_var + self.eps).rsqrt()
- bias = self.bias - self.running_mean * scale
- scale = scale.reshape(1, -1, 1, 1)
- bias = bias.reshape(1, -1, 1, 1)
- return x * scale + bias
- else:
- # When gradients are not needed, F.batch_norm is a single fused op
- # and provide more optimization opportunities.
- return F.batch_norm(
- x,
- self.running_mean,
- self.running_var,
- self.weight,
- self.bias,
- training=False,
- eps=self.eps,
- )
-
- def _load_from_state_dict(
- self, state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
- ):
- version = local_metadata.get("version", None)
-
- if version is None or version < 2:
- # No running_mean/var in early versions
- # This will silent the warnings
- if prefix + "running_mean" not in state_dict:
- state_dict[prefix + "running_mean"] = torch.zeros_like(self.running_mean)
- if prefix + "running_var" not in state_dict:
- state_dict[prefix + "running_var"] = torch.ones_like(self.running_var)
-
- if version is not None and version < 3:
- logger = logging.getLogger(__name__)
- logger.info("FrozenBatchNorm {} is upgraded to version 3.".format(prefix.rstrip(".")))
- # In version < 3, running_var are used without +eps.
- state_dict[prefix + "running_var"] -= self.eps
-
- super()._load_from_state_dict(
- state_dict, prefix, local_metadata, strict, missing_keys, unexpected_keys, error_msgs
- )
-
- def __repr__(self):
- return "FrozenBatchNorm2d(num_features={}, eps={})".format(self.num_features, self.eps)
-
- @classmethod
- def convert_frozen_batchnorm(cls, module):
- """
- Convert BatchNorm/SyncBatchNorm in module into FrozenBatchNorm.
-
- Args:
- module (torch.nn.Module):
-
- Returns:
- If module is BatchNorm/SyncBatchNorm, returns a new module.
- Otherwise, in-place convert module and return it.
-
- Similar to convert_sync_batchnorm in
- https://github.com/pytorch/pytorch/blob/master/torch/nn/modules/batchnorm.py
- """
- bn_module = nn.modules.batchnorm
- bn_module = (bn_module.BatchNorm2d, bn_module.SyncBatchNorm)
- res = module
- if isinstance(module, bn_module):
- res = cls(module.num_features)
- if module.affine:
- res.weight.data = module.weight.data.clone().detach()
- res.bias.data = module.bias.data.clone().detach()
- res.running_mean.data = module.running_mean.data
- res.running_var.data = module.running_var.data
- res.eps = module.eps
- else:
- for name, child in module.named_children():
- new_child = cls.convert_frozen_batchnorm(child)
- if new_child is not child:
- res.add_module(name, new_child)
- return res
-
-
-def get_norm(norm, out_channels):
- """
- Args:
- norm (str or callable):
-
- Returns:
- nn.Module or None: the normalization layer
- """
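- # For example (illustrative calls only): get_norm("GN", 256) returns
- # nn.GroupNorm(32, 256), get_norm("BN", 256) returns BatchNorm2d(256),
- # and get_norm("", 256) returns None.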
- if isinstance(norm, str):
- if len(norm) == 0:
- return None
- norm = {
- "BN": BatchNorm2d,
- "SyncBN": NaiveSyncBatchNorm,
- "FrozenBN": FrozenBatchNorm2d,
- "GN": lambda channels: nn.GroupNorm(32, channels),
- "nnSyncBN": nn.SyncBatchNorm, # keep for debugging
- }[norm]
- return norm(out_channels)
-
-
-class AllReduce(Function):
- @staticmethod
- def forward(ctx, input):
- input_list = [torch.zeros_like(input) for k in range(dist.get_world_size())]
- # Use allgather instead of allreduce since I don't trust in-place operations ..
- dist.all_gather(input_list, input, async_op=False)
- inputs = torch.stack(input_list, dim=0)
- return torch.sum(inputs, dim=0)
-
- @staticmethod
- def backward(ctx, grad_output):
- dist.all_reduce(grad_output, async_op=False)
- return grad_output
-
-
-class NaiveSyncBatchNorm(BatchNorm2d):
- """
- `torch.nn.SyncBatchNorm` has known unknown bugs.
- It produces significantly worse AP (and sometimes goes NaN)
- when the batch size on each worker is quite different
- (e.g., when scale augmentation is used, or when it is applied to mask head).
-
- Use this implementation before `nn.SyncBatchNorm` is fixed.
- It is slower than `nn.SyncBatchNorm`.
-
- Note:
- There isn't a single definition of Sync BatchNorm.
-
- When ``stats_mode==""``, this module computes overall statistics by using
- statistics of each worker with equal weight. The result is true statistics
- of all samples (as if they are all on one worker) only when all workers
- have the same (N, H, W). This mode does not support inputs with zero batch size.
-
- When ``stats_mode=="N"``, this module computes overall statistics by weighting
- the statistics of each worker by their ``N``. The result is true statistics
- of all samples (as if they are all on one worker) only when all workers
- have the same (H, W). It is slower than ``stats_mode==""``.
-
- Even though the result of this module may not be the true statistics of all samples,
- it may still be reasonable because it might be preferable to assign equal weights
- to all workers, regardless of their (H, W) dimension, instead of putting larger weight
- on larger images. From preliminary experiments, little difference is found between such
- a simplified implementation and an accurate computation of overall mean & variance.
- """
-
- def __init__(self, *args, stats_mode="", **kwargs):
- super().__init__(*args, **kwargs)
- assert stats_mode in ["", "N"]
- self._stats_mode = stats_mode
-
- def forward(self, input):
- if comm.get_world_size() == 1 or not self.training:
- return super().forward(input)
-
- B, C = input.shape[0], input.shape[1]
-
- mean = torch.mean(input, dim=[0, 2, 3])
- meansqr = torch.mean(input * input, dim=[0, 2, 3])
-
- if self._stats_mode == "":
- assert B > 0, 'SyncBatchNorm(stats_mode="") does not support zero batch size.'
- vec = torch.cat([mean, meansqr], dim=0)
- vec = AllReduce.apply(vec) * (1.0 / dist.get_world_size())
- mean, meansqr = torch.split(vec, C)
- momentum = self.momentum
- else:
- if B == 0:
- vec = torch.zeros([2 * C + 1], device=mean.device, dtype=mean.dtype)
- vec = vec + input.sum() # make sure there is gradient w.r.t input
- else:
- vec = torch.cat(
- [mean, meansqr, torch.ones([1], device=mean.device, dtype=mean.dtype)], dim=0
- )
- vec = AllReduce.apply(vec * B)
-
- total_batch = vec[-1].detach()
- momentum = total_batch.clamp(max=1) * self.momentum # no update if total_batch is 0
- total_batch = torch.max(total_batch, torch.ones_like(total_batch)) # avoid div-by-zero
- mean, meansqr, _ = torch.split(vec / total_batch, C)
-
- var = meansqr - mean * mean
- invstd = torch.rsqrt(var + self.eps)
- scale = self.weight * invstd
- bias = self.bias - mean * scale
- scale = scale.reshape(1, -1, 1, 1)
- bias = bias.reshape(1, -1, 1, 1)
-
- self.running_mean += momentum * (mean.detach() - self.running_mean)
- self.running_var += momentum * (var.detach() - self.running_var)
- return input * scale + bias
diff --git a/spaces/CVPR/LIVE/cdf.h b/spaces/CVPR/LIVE/cdf.h
deleted file mode 100644
index 48a64f897f2c230e3e0b5595de401dd644b8b777..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/cdf.h
+++ /dev/null
@@ -1,29 +0,0 @@
-#pragma once
-
-#include "diffvg.h"
-
-DEVICE int sample(const float *cdf, int num_entries, float u, float *updated_u = nullptr) {
- // Binary search the cdf
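- // For example, with cdf = {0.1f, 0.4f, 1.0f}: u in [0, 0.1) returns 0,
- // u in [0.1, 0.4) returns 1, and u in [0.4, 1.0) returns 2 (illustrative values).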
- auto lb = 0;
- auto len = num_entries - 1 - lb;
- while (len > 0) {
- auto half_len = len / 2;
- auto mid = lb + half_len;
- assert(mid >= 0 && mid < num_entries);
- if (u < cdf[mid]) {
- len = half_len;
- } else {
- lb = mid + 1;
- len = len - half_len - 1;
- }
- }
- lb = clamp(lb, 0, num_entries - 1);
- if (updated_u != nullptr) {
- if (lb > 0) {
- *updated_u = (u - cdf[lb - 1]) / (cdf[lb] - cdf[lb - 1]);
- } else {
- *updated_u = u / cdf[lb];
- }
- }
- return lb;
-}
diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/type_traits/minimum_type.h b/spaces/CVPR/LIVE/thrust/thrust/detail/type_traits/minimum_type.h
deleted file mode 100644
index 7e34f4f8a533403afa945716a18418583e55d0cc..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/detail/type_traits/minimum_type.h
+++ /dev/null
@@ -1,162 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/type_traits.h>
-
-namespace thrust
-{
-
-namespace detail
-{
-
-namespace minimum_type_detail
-{
-
-//
-// Returns the minimum type or is empty
-// if T1 and T2 are unrelated.
-//
-template<typename T1, typename T2, bool GreaterEqual, bool LessEqual> struct minimum_type_impl {};
-
-template<typename T1, typename T2>
-struct minimum_type_impl<T1,T2,true,false>
-{
- typedef T2 type;
-}; // end minimum_type_impl
-
-template<typename T1, typename T2>
-struct minimum_type_impl<T1,T2,false,true>
-{
- typedef T1 type;
-}; // end minimum_type_impl
-
-template<typename T1, typename T2>
-struct minimum_type_impl<T1,T2,true,true>
-{
- typedef T1 type;
-}; // end minimum_type_impl
-
-template<typename T1, typename T2>
-struct primitive_minimum_type
- : minimum_type_detail::minimum_type_impl<
- T1,
- T2,
- ::thrust::detail::is_convertible<T1,T2>::value,
- ::thrust::detail::is_convertible<T2,T1>::value
- >
-{
-}; // end primitive_minimum_type
-
-// because some types are not convertible (even to themselves)
-// specialize primitive_minimum_type for when both types are identical
-template<typename T>
-struct primitive_minimum_type<T,T>
-{
- typedef T type;
-}; // end primitive_minimum_type
-
-// XXX this belongs somewhere more general
-struct any_conversion
-{
- template<typename T> operator T (void);
-};
-
-} // end minimum_type_detail
-
-template<typename T1,
- typename T2 = minimum_type_detail::any_conversion,
- typename T3 = minimum_type_detail::any_conversion,
- typename T4 = minimum_type_detail::any_conversion,
- typename T5 = minimum_type_detail::any_conversion,
- typename T6 = minimum_type_detail::any_conversion,
- typename T7 = minimum_type_detail::any_conversion,
- typename T8 = minimum_type_detail::any_conversion,
- typename T9 = minimum_type_detail::any_conversion,
- typename T10 = minimum_type_detail::any_conversion,
- typename T11 = minimum_type_detail::any_conversion,
- typename T12 = minimum_type_detail::any_conversion,
- typename T13 = minimum_type_detail::any_conversion,
- typename T14 = minimum_type_detail::any_conversion,
- typename T15 = minimum_type_detail::any_conversion,
- typename T16 = minimum_type_detail::any_conversion>
- struct minimum_type;
-
-// base case
-template<typename T1, typename T2>
- struct minimum_type<T1,T2>
- : minimum_type_detail::primitive_minimum_type<T1,T2>
-{};
-
-template<typename T1, typename T2>
- struct lazy_minimum_type
- : minimum_type<
- typename T1::type,
- typename T2::type
- >
-{};
-
-// carefully avoid referring to a nested ::type which may not exist
-template<typename T1, typename T2, typename T3, typename T4,
- typename T5, typename T6, typename T7, typename T8,
- typename T9, typename T10, typename T11, typename T12,
- typename T13, typename T14, typename T15, typename T16>
- struct minimum_type
- : lazy_minimum_type<
- lazy_minimum_type<
- lazy_minimum_type<
- minimum_type<
- T1,T2
- >,
- minimum_type<
- T3,T4
- >
- >,
- lazy_minimum_type<
- minimum_type<
- T5,T6
- >,
- minimum_type<
- T7,T8
- >
- >
- >,
- lazy_minimum_type<
- lazy_minimum_type<
- minimum_type<
- T9,T10
- >,
- minimum_type<
- T11,T12
- >
- >,
- lazy_minimum_type<
- minimum_type<
- T13,T14
- >,
- minimum_type<
- T15,T16
- >
- >
- >
- >
-{};
-
-} // end detail
-
-} // end thrust
-
diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/transform_scan.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/transform_scan.h
deleted file mode 100644
index c6ae90664ad9538e73febfde86c334011de417c8..0000000000000000000000000000000000000000
--- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/transform_scan.h
+++ /dev/null
@@ -1,22 +0,0 @@
-/*
- * Copyright 2008-2013 NVIDIA Corporation
- *
- * Licensed under the Apache License, Version 2.0 (the "License");
- * you may not use this file except in compliance with the License.
- * You may obtain a copy of the License at
- *
- * http://www.apache.org/licenses/LICENSE-2.0
- *
- * Unless required by applicable law or agreed to in writing, software
- * distributed under the License is distributed on an "AS IS" BASIS,
- * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
- * See the License for the specific language governing permissions and
- * limitations under the License.
- */
-
-#pragma once
-
-#include <thrust/detail/config.h>
-
-// this system has no special version of this algorithm
-
diff --git a/spaces/ChandraMohanNayal/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md b/spaces/ChandraMohanNayal/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md
deleted file mode 100644
index a4f28a3d27d66d79cb95f2b8b847832172bb5f11..0000000000000000000000000000000000000000
--- a/spaces/ChandraMohanNayal/AutoGPT/.github/PULL_REQUEST_TEMPLATE.md
+++ /dev/null
@@ -1,40 +0,0 @@
-
-
-
-
-### Background
-
-
-### Changes
-
-
-### Documentation
-
-
-### Test Plan
-
-
-### PR Quality Checklist
-- [ ] My pull request is atomic and focuses on a single change.
-- [ ] I have thoroughly tested my changes with multiple different prompts.
-- [ ] I have considered potential risks and mitigations for my changes.
-- [ ] I have documented my changes clearly and comprehensively.
-- [ ] I have not snuck in any "extra" small tweaks or unrelated changes
-
-
-
-
diff --git a/spaces/Chomkwoy/Nilkessye/image_text_align.py b/spaces/Chomkwoy/Nilkessye/image_text_align.py
deleted file mode 100644
index 73bf4518487f435b5d85d7f6d31dc1f48b434cb9..0000000000000000000000000000000000000000
--- a/spaces/Chomkwoy/Nilkessye/image_text_align.py
+++ /dev/null
@@ -1,111 +0,0 @@
-import pathlib
-from typing import List
-
-import Levenshtein
-import numpy as np
-from PIL import Image
-from tqdm.auto import tqdm
-
-import load_book
-import model
-import ocr_utils
-from syllable_model import SyllableRecognizer
-
-
-def image_text_align(
- filename: str,
- sentences: List[dict],
- cur_page: str,
- dgju_dict: dict,
- centernet: model.exkp,
- recog: SyllableRecognizer,
- display=None
-):
- # Detect page
- orig_image, orig_image_bbox, orig_size = load_book.process_page(filename, thresholding=False)
- pred_syllables, line_infos = ocr_utils.recognize_page(orig_image, centernet, recog, return_line_infos=True)
-
- pred_bboxes = [item for line in line_infos for item in line['line']]
-
- # Parse ground truth text
- cand_page_syllables = load_book.parse_book_text(sentences, cur_page, dgju_dict)
-
- # Construct candidate expected texts
- cand_expected_texts = []
- for cand in cand_page_syllables:
- expected_text = []
- for syllable in cand:
- if load_book.HANJA_RE.match(syllable['syllable']):
- expected_text.append('〓')
- elif syllable['syllable'] == '?' and len(syllable['possibilities']) > 0:
- expected_text.append(syllable['possibilities'][0])
- else:
- expected_text.append(syllable['syllable'])
- cand_expected_texts.append(expected_text)
-
- if display is not None:
- print("gt =", '.'.join(cand_expected_texts[0]))
- print("pred=", '.'.join(pred_syllables))
-
- # Find out which one is correct
- pred_text = '.'.join(pred_syllables)
- leven_dists = [
- Levenshtein.distance(pred_text, '.'.join(cand))
- for cand in cand_expected_texts
- ]
- gt_idx = np.argmin(leven_dists)
- gt_syllables = cand_page_syllables[gt_idx]
-
- avg_dist = leven_dists[gt_idx] / len(pred_syllables)
- if avg_dist > 2.0:
- print('WARNING: average levenshtein dist > 2.0')
- return False
-
- # Align text
- expected_text = cand_expected_texts[gt_idx]
- pred_syll_to_gt_syll = load_book.match_syllables(pred_syllables, expected_text)
-
- # Align text & image
- for pred_syll_idx, (pred, (bbox, _, _, cls)) in enumerate(zip(tqdm(pred_syllables), pred_bboxes)):
- (tlx, tly), (brx, bry) = bbox
- w, h = brx - tlx, bry - tly
- pw, ph = w / 5, h / 5
- tile = orig_image[
- max(0, int(tly - ph)):min(orig_image.shape[0], int(bry + ph)),
- max(0, int(tlx - pw)):min(orig_image.shape[1], int(brx + pw)),
- ]
-
- # Find corresponding ground truth syllable
- gt_syll_idx = pred_syll_to_gt_syll[pred_syll_idx]
- if gt_syll_idx is None:
- continue
- gt = gt_syllables[gt_syll_idx]
-
- if load_book.HANJA_RE.match(gt['syllable']):
- possibilities = [gt['syllable']]
- elif 'possibilities' in gt:
- possibilities = gt['possibilities']
- else:
- possibilities = [gt['syllable'] + 'L', gt['syllable'] + 'H', gt['syllable'] + 'R']
-
- # Skip unknown syllables
- if load_book.HANJA_RE.match(gt['syllable']) or (gt['syllable'] == '?' and len(gt['possibilities']) == 0):
- continue
-
- # Display syllable
- if display is not None:
- print(pred, possibilities)
- display(Image.fromarray(tile))
-
- # Predict syllable
- losses = recog.loss([tile] * len(possibilities), possibilities).numpy()
- pred_idx = np.argmin(losses)
- pred_output = possibilities[pred_idx]
-
- # Save image
- page_id = filename.replace('/', '_').split('.')[0]
- out_path = pathlib.Path(f"real_syllables/{page_id}/{pred_output}_{page_id}_i{pred_syll_idx}.png")
- out_path.parent.mkdir(parents=True, exist_ok=True)
- Image.fromarray(tile).save(out_path)
-
- return True
diff --git a/spaces/Comet/txt2im-models/comet.py b/spaces/Comet/txt2im-models/comet.py
deleted file mode 100644
index de15b3e2b68ece12d910d590babd9e1add335c69..0000000000000000000000000000000000000000
--- a/spaces/Comet/txt2im-models/comet.py
+++ /dev/null
@@ -1,76 +0,0 @@
-import comet_ml
-
-
-def start_experiment(
- comet_api_key,
- comet_workspace,
- comet_project_name,
- comet_experiment_name,
- experiment,
-):
- if comet_api_key is None:
- experiment = None
- return (
- experiment,
- """
- Please add your API key in order to log your predictions to a Comet Experiment.
- If you don't have a Comet account yet, you can sign up using the link below:
-
- https://www.comet.ml/signup
- """,
- )
-
- try:
- if comet_experiment_name:
- # Retrieve the Experiment if it already exists
- api_experiment = get_experiment(
- {
- "api_key": comet_api_key,
- "workspace": comet_workspace,
- "project_name": comet_project_name,
- "experiment": comet_experiment_name,
- }
- )
- else:
- # Create a new Experiment
- api_experiment = comet_ml.APIExperiment(
- api_key=comet_api_key,
- workspace=comet_workspace,
- project_name=comet_project_name,
- )
- api_experiment.log_other("Created from", "Spaces")
-
- experiment = {
- "api_key": comet_api_key,
- "workspace": comet_workspace,
- "project_name": comet_project_name,
- "experiment": api_experiment.name,
- }
-
- return experiment, f"Started {api_experiment.name}. Happy logging!😊"
-
- except Exception as e:
- return None, e
-
-
-def get_experiment(experiment_state):
- try:
- api_key = experiment_state.get("api_key")
- workspace = experiment_state.get("workspace")
- project = experiment_state.get("project_name")
- experiment_name = experiment_state.get("experiment")
-
- return comet_ml.API(api_key=api_key).get_experiment(
- workspace=workspace, project_name=project, experiment=experiment_name
- )
- except Exception as e:
- return None
-
-
-def get_experiment_status(experiment_state):
- experiment = get_experiment(experiment_state)
- if experiment is not None:
- name = experiment.name
- return experiment_state, f"Currently logging to: {name}"
-
- return experiment_state, f"No Experiments found"
diff --git a/spaces/Cong723/gpt-academic-public/crazy_functions/test_project/python/dqn/dqn.py b/spaces/Cong723/gpt-academic-public/crazy_functions/test_project/python/dqn/dqn.py
deleted file mode 100644
index 6cea64d39baa7ff4c1e549869aaa4b0ae17779a9..0000000000000000000000000000000000000000
--- a/spaces/Cong723/gpt-academic-public/crazy_functions/test_project/python/dqn/dqn.py
+++ /dev/null
@@ -1,245 +0,0 @@
-from typing import Any, Dict, List, Optional, Tuple, Type, Union
-
-import gym
-import numpy as np
-import torch as th
-from torch.nn import functional as F
-
-from stable_baselines3.common import logger
-from stable_baselines3.common.off_policy_algorithm import OffPolicyAlgorithm
-from stable_baselines3.common.preprocessing import maybe_transpose
-from stable_baselines3.common.type_aliases import GymEnv, MaybeCallback, Schedule
-from stable_baselines3.common.utils import get_linear_fn, is_vectorized_observation, polyak_update
-from stable_baselines3.dqn.policies import DQNPolicy
-
-
-class DQN(OffPolicyAlgorithm):
- """
- Deep Q-Network (DQN)
-
- Paper: https://arxiv.org/abs/1312.5602, https://www.nature.com/articles/nature14236
- Default hyperparameters are taken from the nature paper,
- except for the optimizer and learning rate that were taken from Stable Baselines defaults.
-
- :param policy: The policy model to use (MlpPolicy, CnnPolicy, ...)
- :param env: The environment to learn from (if registered in Gym, can be str)
- :param learning_rate: The learning rate, it can be a function
- of the current progress remaining (from 1 to 0)
- :param buffer_size: size of the replay buffer
- :param learning_starts: how many steps of the model to collect transitions for before learning starts
- :param batch_size: Minibatch size for each gradient update
- :param tau: the soft update coefficient ("Polyak update", between 0 and 1); default 1 for a hard update
- :param gamma: the discount factor
- :param train_freq: Update the model every ``train_freq`` steps. Alternatively pass a tuple of frequency and unit
- like ``(5, "step")`` or ``(2, "episode")``.
- :param gradient_steps: How many gradient steps to do after each rollout (see ``train_freq``)
- Set it to ``-1`` to do as many gradient steps as steps taken in the environment
- during the rollout.
- :param optimize_memory_usage: Enable a memory efficient variant of the replay buffer
- at a cost of more complexity.
- See https://github.com/DLR-RM/stable-baselines3/issues/37#issuecomment-637501195
- :param target_update_interval: update the target network every ``target_update_interval``
- environment steps.
- :param exploration_fraction: fraction of entire training period over which the exploration rate is reduced
- :param exploration_initial_eps: initial value of random action probability
- :param exploration_final_eps: final value of random action probability
- :param max_grad_norm: The maximum value for the gradient clipping
- :param tensorboard_log: the log location for tensorboard (if None, no logging)
- :param create_eval_env: Whether to create a second environment that will be
- used for evaluating the agent periodically. (Only available when passing a string for the environment)
- :param policy_kwargs: additional arguments to be passed to the policy on creation
- :param verbose: the verbosity level: 0 no output, 1 info, 2 debug
- :param seed: Seed for the pseudo random generators
- :param device: Device (cpu, cuda, ...) on which the code should be run.
- Setting it to auto, the code will be run on the GPU if possible.
- :param _init_setup_model: Whether or not to build the network at the creation of the instance
- """
-
- def __init__(
- self,
- policy: Union[str, Type[DQNPolicy]],
- env: Union[GymEnv, str],
- learning_rate: Union[float, Schedule] = 1e-4,
- buffer_size: int = 1000000,
- learning_starts: int = 50000,
- batch_size: Optional[int] = 32,
- tau: float = 1.0,
- gamma: float = 0.99,
- train_freq: Union[int, Tuple[int, str]] = 4,
- gradient_steps: int = 1,
- optimize_memory_usage: bool = False,
- target_update_interval: int = 10000,
- exploration_fraction: float = 0.1,
- exploration_initial_eps: float = 1.0,
- exploration_final_eps: float = 0.05,
- max_grad_norm: float = 10,
- tensorboard_log: Optional[str] = None,
- create_eval_env: bool = False,
- policy_kwargs: Optional[Dict[str, Any]] = None,
- verbose: int = 0,
- seed: Optional[int] = None,
- device: Union[th.device, str] = "auto",
- _init_setup_model: bool = True,
- ):
-
- super(DQN, self).__init__(
- policy,
- env,
- DQNPolicy,
- learning_rate,
- buffer_size,
- learning_starts,
- batch_size,
- tau,
- gamma,
- train_freq,
- gradient_steps,
- action_noise=None, # No action noise
- policy_kwargs=policy_kwargs,
- tensorboard_log=tensorboard_log,
- verbose=verbose,
- device=device,
- create_eval_env=create_eval_env,
- seed=seed,
- sde_support=False,
- optimize_memory_usage=optimize_memory_usage,
- supported_action_spaces=(gym.spaces.Discrete,),
- )
-
- self.exploration_initial_eps = exploration_initial_eps
- self.exploration_final_eps = exploration_final_eps
- self.exploration_fraction = exploration_fraction
- self.target_update_interval = target_update_interval
- self.max_grad_norm = max_grad_norm
- # "epsilon" for the epsilon-greedy exploration
- self.exploration_rate = 0.0
- # Linear schedule will be defined in `_setup_model()`
- self.exploration_schedule = None
- self.q_net, self.q_net_target = None, None
-
- if _init_setup_model:
- self._setup_model()
-
- def _setup_model(self) -> None:
- super(DQN, self)._setup_model()
- self._create_aliases()
- self.exploration_schedule = get_linear_fn(
- self.exploration_initial_eps, self.exploration_final_eps, self.exploration_fraction
- )
-
- def _create_aliases(self) -> None:
- self.q_net = self.policy.q_net
- self.q_net_target = self.policy.q_net_target
-
- def _on_step(self) -> None:
- """
- Update the exploration rate and target network if needed.
- This method is called in ``collect_rollouts()`` after each step in the environment.
- """
- if self.num_timesteps % self.target_update_interval == 0:
- polyak_update(self.q_net.parameters(), self.q_net_target.parameters(), self.tau)
-
- self.exploration_rate = self.exploration_schedule(self._current_progress_remaining)
- logger.record("rollout/exploration rate", self.exploration_rate)
-
- def train(self, gradient_steps: int, batch_size: int = 100) -> None:
- # Update learning rate according to schedule
- self._update_learning_rate(self.policy.optimizer)
-
- losses = []
- for _ in range(gradient_steps):
- # Sample replay buffer
- replay_data = self.replay_buffer.sample(batch_size, env=self._vec_normalize_env)
-
- with th.no_grad():
- # Compute the next Q-values using the target network
- next_q_values = self.q_net_target(replay_data.next_observations)
- # Follow greedy policy: use the one with the highest value
- next_q_values, _ = next_q_values.max(dim=1)
- # Avoid potential broadcast issue
- next_q_values = next_q_values.reshape(-1, 1)
- # 1-step TD target
- target_q_values = replay_data.rewards + (1 - replay_data.dones) * self.gamma * next_q_values
-
- # Get current Q-values estimates
- current_q_values = self.q_net(replay_data.observations)
-
- # Retrieve the q-values for the actions from the replay buffer
- current_q_values = th.gather(current_q_values, dim=1, index=replay_data.actions.long())
-
- # Compute Huber loss (less sensitive to outliers)
- loss = F.smooth_l1_loss(current_q_values, target_q_values)
- losses.append(loss.item())
-
- # Optimize the policy
- self.policy.optimizer.zero_grad()
- loss.backward()
- # Clip gradient norm
- th.nn.utils.clip_grad_norm_(self.policy.parameters(), self.max_grad_norm)
- self.policy.optimizer.step()
-
- # Increase update counter
- self._n_updates += gradient_steps
-
- logger.record("train/n_updates", self._n_updates, exclude="tensorboard")
- logger.record("train/loss", np.mean(losses))
-
- def predict(
- self,
- observation: np.ndarray,
- state: Optional[np.ndarray] = None,
- mask: Optional[np.ndarray] = None,
- deterministic: bool = False,
- ) -> Tuple[np.ndarray, Optional[np.ndarray]]:
- """
- Overrides the base_class predict function to include epsilon-greedy exploration.
-
- :param observation: the input observation
- :param state: The last states (can be None, used in recurrent policies)
- :param mask: The last masks (can be None, used in recurrent policies)
- :param deterministic: Whether or not to return deterministic actions.
- :return: the model's action and the next state
- (used in recurrent policies)
- """
- if not deterministic and np.random.rand() < self.exploration_rate:
- if is_vectorized_observation(maybe_transpose(observation, self.observation_space), self.observation_space):
- n_batch = observation.shape[0]
- action = np.array([self.action_space.sample() for _ in range(n_batch)])
- else:
- action = np.array(self.action_space.sample())
- else:
- action, state = self.policy.predict(observation, state, mask, deterministic)
- return action, state
-
- def learn(
- self,
- total_timesteps: int,
- callback: MaybeCallback = None,
- log_interval: int = 4,
- eval_env: Optional[GymEnv] = None,
- eval_freq: int = -1,
- n_eval_episodes: int = 5,
- tb_log_name: str = "DQN",
- eval_log_path: Optional[str] = None,
- reset_num_timesteps: bool = True,
- ) -> OffPolicyAlgorithm:
-
- return super(DQN, self).learn(
- total_timesteps=total_timesteps,
- callback=callback,
- log_interval=log_interval,
- eval_env=eval_env,
- eval_freq=eval_freq,
- n_eval_episodes=n_eval_episodes,
- tb_log_name=tb_log_name,
- eval_log_path=eval_log_path,
- reset_num_timesteps=reset_num_timesteps,
- )
-
- def _excluded_save_params(self) -> List[str]:
- return super(DQN, self)._excluded_save_params() + ["q_net", "q_net_target"]
-
- def _get_torch_save_params(self) -> Tuple[List[str], List[str]]:
- state_dicts = ["policy", "policy.optimizer"]
-
- return state_dicts, []
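-
-# A minimal usage sketch (an assumption, not part of the original file): the class above
-# mirrors stable-baselines3's DQN, so it can be driven like this; "CartPole-v1" is only an
-# illustrative discrete-action Gym environment.
-#
-# model = DQN("MlpPolicy", "CartPole-v1", learning_rate=1e-4, verbose=1)
-# model.learn(total_timesteps=10_000)
-# obs = model.get_env().reset()
-# action, _ = model.predict(obs, deterministic=True)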
diff --git a/spaces/Cpp4App/Cpp4App/SEM/clean_txt.py b/spaces/Cpp4App/Cpp4App/SEM/clean_txt.py
deleted file mode 100644
index bcfdf14e2fae48b026e3b3120f7adfb7bccd3e4a..0000000000000000000000000000000000000000
--- a/spaces/Cpp4App/Cpp4App/SEM/clean_txt.py
+++ /dev/null
@@ -1,32 +0,0 @@
-import os
-
-def cleaning_txt(path):
- # f = open("./txt/data_types.txt","r+")
- # f.truncate()
- # g = open("./txt/use_data.txt","r+")
- # g.truncate()
- # e = open("./txt/protect_information.txt","r+")
- # e.truncate()
- # h = open("./txt/children.txt","r+")
- # h.truncate()
- # j = open("./txt/data_retention.txt","r+")
- # j.truncate()
- # k = open("./txt/update.txt","r+")
- # k.truncate()
- # d = open("./txt/region.txt","r+")
- # d.truncate()
- # a = open("./txt/share_information.txt", "r+")
- # a.truncate()
- # b = open("./txt/thrid_party.txt", "r+")
- # b.truncate()
- # c = open("./txt/user_right.txt", "r+")
- # c.truncate()
-
- ls = os.listdir(path)
- for i in ls:
- c_path = os.path.join(path, i)
- if os.path.isdir(c_path):
- cleaning_txt(c_path)
- else:
- os.remove(c_path)
- os.removedirs(path)
diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/coco.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/coco.py
deleted file mode 100644
index d0e42b437db2fab29d4fab59a813c932c9355516..0000000000000000000000000000000000000000
--- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/coco.py
+++ /dev/null
@@ -1,101 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved.
-import torch
-import torchvision
-
-from maskrcnn_benchmark.structures.bounding_box import BoxList
-from maskrcnn_benchmark.structures.segmentation_mask import SegmentationMask
-from maskrcnn_benchmark.structures.keypoint import PersonKeypoints
-
-
-min_keypoints_per_image = 10
-
-
-def _count_visible_keypoints(anno):
- return sum(sum(1 for v in ann["keypoints"][2::3] if v > 0) for ann in anno)
-
-
-def _has_only_empty_bbox(anno):
- return all(any(o <= 1 for o in obj["bbox"][2:]) for obj in anno)
-
-
-def has_valid_annotation(anno):
- # if it's empty, there is no annotation
- if len(anno) == 0:
- return False
- # if all boxes have close to zero area, there is no annotation
- if _has_only_empty_bbox(anno):
- return False
- # keypoint tasks have slightly different criteria for considering
- # whether an annotation is valid
- if "keypoints" not in anno[0]:
- return True
- # for keypoint detection tasks, only consider valid images those
- # containing at least min_keypoints_per_image
- if _count_visible_keypoints(anno) >= min_keypoints_per_image:
- return True
- return False
-
-
-class COCODataset(torchvision.datasets.coco.CocoDetection):
- def __init__(
- self, ann_file, root, remove_images_without_annotations, transforms=None
- ):
- super(COCODataset, self).__init__(root, ann_file)
- # sort indices for reproducible results
- self.ids = sorted(self.ids)
-
- # filter images without detection annotations
- if remove_images_without_annotations:
- ids = []
- for img_id in self.ids:
- ann_ids = self.coco.getAnnIds(imgIds=img_id, iscrowd=None)
- anno = self.coco.loadAnns(ann_ids)
- if has_valid_annotation(anno):
- ids.append(img_id)
- self.ids = ids
-
- self.json_category_id_to_contiguous_id = {
- v: i + 1 for i, v in enumerate(self.coco.getCatIds())
- }
- self.contiguous_category_id_to_json_id = {
- v: k for k, v in self.json_category_id_to_contiguous_id.items()
- }
- self.id_to_img_map = {k: v for k, v in enumerate(self.ids)}
- self.transforms = transforms
-
- def __getitem__(self, idx):
- img, anno = super(COCODataset, self).__getitem__(idx)
-
- # filter crowd annotations
- # TODO might be better to add an extra field
- anno = [obj for obj in anno if obj["iscrowd"] == 0]
-
- boxes = [obj["bbox"] for obj in anno]
- boxes = torch.as_tensor(boxes).reshape(-1, 4) # guard against no boxes
- target = BoxList(boxes, img.size, mode="xywh").convert("xyxy")
-
- classes = [obj["category_id"] for obj in anno]
- classes = [self.json_category_id_to_contiguous_id[c] for c in classes]
- classes = torch.tensor(classes)
- target.add_field("labels", classes)
-
- masks = [obj["segmentation"] for obj in anno]
- masks = SegmentationMask(masks, img.size, mode='poly')
- target.add_field("masks", masks)
-
- if anno and "keypoints" in anno[0]:
- keypoints = [obj["keypoints"] for obj in anno]
- keypoints = PersonKeypoints(keypoints, img.size)
- target.add_field("keypoints", keypoints)
-
- target = target.clip_to_image(remove_empty=True)
-
- if self.transforms is not None:
- img, target = self.transforms(img, target)
-
- return img, target, idx
-
- def get_img_info(self, index):
- img_id = self.id_to_img_map[index]
- img_data = self.coco.imgs[img_id]
- return img_data
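-
-# A minimal usage sketch (an assumption, not part of the original file); the annotation
-# file and image root below are placeholders for a COCO-format dataset.
-#
-# dataset = COCODataset(
-#     ann_file="annotations/instances_train2017.json",
-#     root="train2017",
-#     remove_images_without_annotations=True,
-# )
-# img, target, idx = dataset[0]   # PIL image, BoxList target, sample index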
diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_l_o_c_a.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_l_o_c_a.py
deleted file mode 100644
index ad1b715133a9948b2e0da307b445a24be08bf0b2..0000000000000000000000000000000000000000
--- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/fontTools/ttLib/tables/_l_o_c_a.py
+++ /dev/null
@@ -1,66 +0,0 @@
-from . import DefaultTable
-import sys
-import array
-import logging
-
-
-log = logging.getLogger(__name__)
-
-
-class table__l_o_c_a(DefaultTable.DefaultTable):
-
- dependencies = ["glyf"]
-
- def decompile(self, data, ttFont):
- longFormat = ttFont["head"].indexToLocFormat
- if longFormat:
- format = "I"
- else:
- format = "H"
- locations = array.array(format)
- locations.frombytes(data)
- if sys.byteorder != "big":
- locations.byteswap()
- if not longFormat:
- l = array.array("I")
- for i in range(len(locations)):
- l.append(locations[i] * 2)
- locations = l
- if len(locations) < (ttFont["maxp"].numGlyphs + 1):
- log.warning(
- "corrupt 'loca' table, or wrong numGlyphs in 'maxp': %d %d",
- len(locations) - 1,
- ttFont["maxp"].numGlyphs,
- )
- self.locations = locations
-
- def compile(self, ttFont):
- try:
- max_location = max(self.locations)
- except AttributeError:
- self.set([])
- max_location = 0
- if max_location < 0x20000 and all(l % 2 == 0 for l in self.locations):
- locations = array.array("H")
- for i in range(len(self.locations)):
- locations.append(self.locations[i] // 2)
- ttFont["head"].indexToLocFormat = 0
- else:
- locations = array.array("I", self.locations)
- ttFont["head"].indexToLocFormat = 1
- if sys.byteorder != "big":
- locations.byteswap()
- return locations.tobytes()
-
- def set(self, locations):
- self.locations = array.array("I", locations)
-
- def toXML(self, writer, ttFont):
- writer.comment("The 'loca' table will be calculated by the compiler")
- writer.newline()
-
- def __getitem__(self, index):
- return self.locations[index]
-
- def __len__(self):
- return len(self.locations)
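-
-# A minimal usage sketch (an assumption, not part of the original file); "MyFont.ttf" is a
-# placeholder. The table is normally reached through fontTools' TTFont rather than built directly.
-#
-# from fontTools.ttLib import TTFont
-# font = TTFont("MyFont.ttf")
-# loca = font["loca"]            # instance of table__l_o_c_a
-# print(len(loca), loca[0])      # offset count and the first glyph offset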
diff --git a/spaces/Danielzero/GPT3.5/Dockerfile b/spaces/Danielzero/GPT3.5/Dockerfile
deleted file mode 100644
index 335c2dba28ba8c365de9306858462a59dea25f28..0000000000000000000000000000000000000000
--- a/spaces/Danielzero/GPT3.5/Dockerfile
+++ /dev/null
@@ -1,15 +0,0 @@
-FROM python:3.9 as builder
-RUN apt-get update && apt-get install -y build-essential
-COPY requirements.txt .
-COPY requirements_advanced.txt .
-RUN pip install --user -r requirements.txt
-# RUN pip install --user -r requirements_advanced.txt
-
-FROM python:3.9
-LABEL maintainer="iskoldt"
-COPY --from=builder /root/.local /root/.local
-ENV PATH=/root/.local/bin:$PATH
-COPY . /app
-WORKDIR /app
-ENV dockerrun yes
-CMD ["python3", "-u", "ChuanhuChatbot.py", "2>&1", "|", "tee", "/var/log/application.log"]
diff --git a/spaces/Daniil-plotnikov/Daniil-plotnikov-russian-vision-v5-beta-3/app.py b/spaces/Daniil-plotnikov/Daniil-plotnikov-russian-vision-v5-beta-3/app.py
deleted file mode 100644
index fda291382fc4774ab42bb6d1391ba9307abc6f70..0000000000000000000000000000000000000000
--- a/spaces/Daniil-plotnikov/Daniil-plotnikov-russian-vision-v5-beta-3/app.py
+++ /dev/null
@@ -1,3 +0,0 @@
-import gradio as gr
-
-gr.Interface.load("models/Daniil-plotnikov/russian-vision-v5-beta-3").launch()
\ No newline at end of file
diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/models/__init__.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/models/__init__.py
deleted file mode 100644
index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000
diff --git a/spaces/DragGan/DragGan/gui_utils/text_utils.py b/spaces/DragGan/DragGan/gui_utils/text_utils.py
deleted file mode 100644
index 35e5e4a16dc62c4be80df5432208bce5d386bf16..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/gui_utils/text_utils.py
+++ /dev/null
@@ -1,123 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-import functools
-from typing import Optional
-
-import dnnlib
-import numpy as np
-import PIL.Image
-import PIL.ImageFont
-import scipy.ndimage
-
-from . import gl_utils
-
-#----------------------------------------------------------------------------
-
-def get_default_font():
- url = 'http://fonts.gstatic.com/s/opensans/v17/mem8YaGs126MiZpBA-U1UpcaXcl0Aw.ttf' # Open Sans regular
- return dnnlib.util.open_url(url, return_filename=True)
-
-#----------------------------------------------------------------------------
-
-@functools.lru_cache(maxsize=None)
-def get_pil_font(font=None, size=32):
- if font is None:
- font = get_default_font()
- return PIL.ImageFont.truetype(font=font, size=size)
-
-#----------------------------------------------------------------------------
-
-def get_array(string, *, dropshadow_radius: int=None, **kwargs):
- if dropshadow_radius is not None:
- offset_x = int(np.ceil(dropshadow_radius*2/3))
- offset_y = int(np.ceil(dropshadow_radius*2/3))
- return _get_array_priv(string, dropshadow_radius=dropshadow_radius, offset_x=offset_x, offset_y=offset_y, **kwargs)
- else:
- return _get_array_priv(string, **kwargs)
-
-@functools.lru_cache(maxsize=10000)
-def _get_array_priv(
- string: str, *,
- size: int = 32,
- max_width: Optional[int]=None,
- max_height: Optional[int]=None,
- min_size=10,
- shrink_coef=0.8,
- dropshadow_radius: int=None,
- offset_x: int=None,
- offset_y: int=None,
- **kwargs
-):
- cur_size = size
- array = None
- while True:
- if dropshadow_radius is not None:
- # separate implementation for dropshadow text rendering
- array = _get_array_impl_dropshadow(string, size=cur_size, radius=dropshadow_radius, offset_x=offset_x, offset_y=offset_y, **kwargs)
- else:
- array = _get_array_impl(string, size=cur_size, **kwargs)
- height, width, _ = array.shape
- if (max_width is None or width <= max_width) and (max_height is None or height <= max_height) or (cur_size <= min_size):
- break
- cur_size = max(int(cur_size * shrink_coef), min_size)
- return array
-
-#----------------------------------------------------------------------------
-
-@functools.lru_cache(maxsize=10000)
-def _get_array_impl(string, *, font=None, size=32, outline=0, outline_pad=3, outline_coef=3, outline_exp=2, line_pad: int=None):
- pil_font = get_pil_font(font=font, size=size)
- lines = [pil_font.getmask(line, 'L') for line in string.split('\n')]
- lines = [np.array(line, dtype=np.uint8).reshape([line.size[1], line.size[0]]) for line in lines]
- width = max(line.shape[1] for line in lines)
- lines = [np.pad(line, ((0, 0), (0, width - line.shape[1])), mode='constant') for line in lines]
- line_spacing = line_pad if line_pad is not None else size // 2
- lines = [np.pad(line, ((0, line_spacing), (0, 0)), mode='constant') for line in lines[:-1]] + lines[-1:]
- mask = np.concatenate(lines, axis=0)
- alpha = mask
- if outline > 0:
- mask = np.pad(mask, int(np.ceil(outline * outline_pad)), mode='constant', constant_values=0)
- alpha = mask.astype(np.float32) / 255
- alpha = scipy.ndimage.gaussian_filter(alpha, outline)
- alpha = 1 - np.maximum(1 - alpha * outline_coef, 0) ** outline_exp
- alpha = (alpha * 255 + 0.5).clip(0, 255).astype(np.uint8)
- alpha = np.maximum(alpha, mask)
- return np.stack([mask, alpha], axis=-1)
-
-#----------------------------------------------------------------------------
-
-@functools.lru_cache(maxsize=10000)
-def _get_array_impl_dropshadow(string, *, font=None, size=32, radius: int, offset_x: int, offset_y: int, line_pad: int=None, **kwargs):
- assert (offset_x > 0) and (offset_y > 0)
- pil_font = get_pil_font(font=font, size=size)
- lines = [pil_font.getmask(line, 'L') for line in string.split('\n')]
- lines = [np.array(line, dtype=np.uint8).reshape([line.size[1], line.size[0]]) for line in lines]
- width = max(line.shape[1] for line in lines)
- lines = [np.pad(line, ((0, 0), (0, width - line.shape[1])), mode='constant') for line in lines]
- line_spacing = line_pad if line_pad is not None else size // 2
- lines = [np.pad(line, ((0, line_spacing), (0, 0)), mode='constant') for line in lines[:-1]] + lines[-1:]
- mask = np.concatenate(lines, axis=0)
- alpha = mask
-
- mask = np.pad(mask, 2*radius + max(abs(offset_x), abs(offset_y)), mode='constant', constant_values=0)
- alpha = mask.astype(np.float32) / 255
- alpha = scipy.ndimage.gaussian_filter(alpha, radius)
- alpha = 1 - np.maximum(1 - alpha * 1.5, 0) ** 1.4
- alpha = (alpha * 255 + 0.5).clip(0, 255).astype(np.uint8)
- alpha = np.pad(alpha, [(offset_y, 0), (offset_x, 0)], mode='constant')[:-offset_y, :-offset_x]
- alpha = np.maximum(alpha, mask)
- return np.stack([mask, alpha], axis=-1)
-
-#----------------------------------------------------------------------------
-
-@functools.lru_cache(maxsize=10000)
-def get_texture(string, bilinear=True, mipmap=True, **kwargs):
- return gl_utils.Texture(image=get_array(string, **kwargs), bilinear=bilinear, mipmap=mipmap)
-
-#----------------------------------------------------------------------------
diff --git a/spaces/DragGan/DragGan/training/__init__.py b/spaces/DragGan/DragGan/training/__init__.py
deleted file mode 100644
index 939e7c6c8f94c4ea1141885c3c3295fe083b06aa..0000000000000000000000000000000000000000
--- a/spaces/DragGan/DragGan/training/__init__.py
+++ /dev/null
@@ -1,9 +0,0 @@
-# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved.
-#
-# NVIDIA CORPORATION and its licensors retain all intellectual property
-# and proprietary rights in and to this software, related documentation
-# and any modifications thereto. Any use, reproduction, disclosure or
-# distribution of this software and related documentation without an express
-# license agreement from NVIDIA CORPORATION is strictly prohibited.
-
-# empty
diff --git a/spaces/ECCV2022/PSG/OpenPSG/configs/_base_/datasets/psg_panoptic.py b/spaces/ECCV2022/PSG/OpenPSG/configs/_base_/datasets/psg_panoptic.py
deleted file mode 100644
index 9e5ee5f27af854da81cc9b936a47d3ed7721502f..0000000000000000000000000000000000000000
--- a/spaces/ECCV2022/PSG/OpenPSG/configs/_base_/datasets/psg_panoptic.py
+++ /dev/null
@@ -1,72 +0,0 @@
-# dataset settings
-dataset_type = 'PanopticSceneGraphDataset'
-ann_file = './data/psg/psg.json'
-coco_root = './data/coco'
-
-img_norm_cfg = dict(mean=[123.675, 116.28, 103.53],
- std=[58.395, 57.12, 57.375],
- to_rgb=True)
-train_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='LoadPanopticSceneGraphAnnotations',
- with_bbox=True,
- with_mask=True,
- with_seg=True,
- ),
- dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
- dict(type='RandomFlip', flip_ratio=0.5),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='SegRescale', scale_factor=1 / 4),
- dict(type='DefaultFormatBundle'),
- dict(
- type='Collect',
- keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks', 'gt_semantic_seg'],
- ),
-]
-test_pipeline = [
- dict(type='LoadImageFromFile'),
- dict(
- type='MultiScaleFlipAug',
- img_scale=(1333, 800),
- flip=False,
- transforms=[
- dict(type='Resize', keep_ratio=True),
- dict(type='RandomFlip'),
- dict(type='Normalize', **img_norm_cfg),
- dict(type='Pad', size_divisor=32),
- dict(type='ImageToTensor', keys=['img']),
- dict(type='Collect', keys=['img']),
- ],
- ),
-]
-data = dict(
- samples_per_gpu=2,
- workers_per_gpu=2,
- train=dict(
- type=dataset_type,
- ann_file=ann_file,
- img_prefix=coco_root,
- seg_prefix=coco_root,
- pipeline=train_pipeline,
- split='train',
- ),
- val=dict(
- type=dataset_type,
- ann_file=ann_file,
- img_prefix=coco_root,
- seg_prefix=coco_root,
- pipeline=test_pipeline,
- split='test',
- ),
- test=dict(
- type=dataset_type,
- ann_file=ann_file,
- img_prefix=coco_root,
- seg_prefix=coco_root,
- pipeline=test_pipeline,
- split='test',
- ),
-)
-evaluation = dict(interval=1, metric='PQ')
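-
-# A minimal usage sketch (an assumption, not part of the original file): this base config is
-# normally included by a model config via `_base_`, or inspected directly with mmcv.
-#
-# from mmcv import Config
-# cfg = Config.fromfile('configs/_base_/datasets/psg_panoptic.py')
-# print(cfg.data.train.ann_file, cfg.evaluation.metric)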
diff --git a/spaces/ElainaFanBoy/MusicGen/audiocraft/utils/export.py b/spaces/ElainaFanBoy/MusicGen/audiocraft/utils/export.py
deleted file mode 100644
index b513b52267f7bf5aae09282c15b0a2e20c8a8fee..0000000000000000000000000000000000000000
--- a/spaces/ElainaFanBoy/MusicGen/audiocraft/utils/export.py
+++ /dev/null
@@ -1,56 +0,0 @@
-# Copyright (c) Meta Platforms, Inc. and affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-"""
-Utility to export a training checkpoint to a lightweight release checkpoint.
-"""
-
-from pathlib import Path
-import typing as tp
-
-from omegaconf import OmegaConf, DictConfig
-import torch
-
-
-def _clean_lm_cfg(cfg: DictConfig):
- OmegaConf.set_struct(cfg, False)
- # This used to be set automatically in the LM solver, need a more robust solution
- # for the future.
- cfg['transformer_lm']['card'] = 2048
- cfg['transformer_lm']['n_q'] = 4
- # Experimental params no longer supported.
- bad_params = ['spectral_norm_attn_iters', 'spectral_norm_ff_iters',
- 'residual_balancer_attn', 'residual_balancer_ff', 'layer_drop']
- for name in bad_params:
- del cfg['transformer_lm'][name]
- OmegaConf.set_struct(cfg, True)
- return cfg
-
-
-def export_encodec(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]):
- sig = Path(checkpoint_path).parent.name
- assert len(sig) == 8, "Not a valid Dora signature"
- pkg = torch.load(checkpoint_path, 'cpu')
- new_pkg = {
- 'best_state': pkg['ema']['state']['model'],
- 'xp.cfg': OmegaConf.to_yaml(pkg['xp.cfg']),
- }
- out_file = Path(out_folder) / f'{sig}.th'
- torch.save(new_pkg, out_file)
- return out_file
-
-
-def export_lm(checkpoint_path: tp.Union[Path, str], out_folder: tp.Union[Path, str]):
- sig = Path(checkpoint_path).parent.name
- assert len(sig) == 8, "Not a valid Dora signature"
- pkg = torch.load(checkpoint_path, 'cpu')
- new_pkg = {
- 'best_state': pkg['fsdp_best_state']['model'],
- 'xp.cfg': OmegaConf.to_yaml(_clean_lm_cfg(pkg['xp.cfg']))
- }
- out_file = Path(out_folder) / f'{sig}.th'
- torch.save(new_pkg, out_file)
- return out_file
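-
-# A minimal usage sketch (an assumption, not part of the original file); the checkpoint path
-# is a placeholder whose parent folder name must be the 8-character Dora signature.
-#
-# out = export_lm("/checkpoints/a1b2c3d4/checkpoint.th", "releases/")
-# print(out)   # releases/a1b2c3d4.th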
diff --git a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/data/sflckr.py b/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/data/sflckr.py
deleted file mode 100644
index 91101be5953b113f1e58376af637e43f366b3dee..0000000000000000000000000000000000000000
--- a/spaces/EleutherAI/VQGAN_CLIP/taming-transformers/taming/data/sflckr.py
+++ /dev/null
@@ -1,91 +0,0 @@
-import os
-import numpy as np
-import cv2
-import albumentations
-from PIL import Image
-from torch.utils.data import Dataset
-
-
-class SegmentationBase(Dataset):
- def __init__(self,
- data_csv, data_root, segmentation_root,
- size=None, random_crop=False, interpolation="bicubic",
- n_labels=182, shift_segmentation=False,
- ):
- self.n_labels = n_labels
- self.shift_segmentation = shift_segmentation
- self.data_csv = data_csv
- self.data_root = data_root
- self.segmentation_root = segmentation_root
- with open(self.data_csv, "r") as f:
- self.image_paths = f.read().splitlines()
- self._length = len(self.image_paths)
- self.labels = {
- "relative_file_path_": [l for l in self.image_paths],
- "file_path_": [os.path.join(self.data_root, l)
- for l in self.image_paths],
- "segmentation_path_": [os.path.join(self.segmentation_root, l.replace(".jpg", ".png"))
- for l in self.image_paths]
- }
-
- size = None if size is not None and size<=0 else size
- self.size = size
- if self.size is not None:
- self.interpolation = interpolation
- self.interpolation = {
- "nearest": cv2.INTER_NEAREST,
- "bilinear": cv2.INTER_LINEAR,
- "bicubic": cv2.INTER_CUBIC,
- "area": cv2.INTER_AREA,
- "lanczos": cv2.INTER_LANCZOS4}[self.interpolation]
- self.image_rescaler = albumentations.SmallestMaxSize(max_size=self.size,
- interpolation=self.interpolation)
- self.segmentation_rescaler = albumentations.SmallestMaxSize(max_size=self.size,
- interpolation=cv2.INTER_NEAREST)
- self.center_crop = not random_crop
- if self.center_crop:
- self.cropper = albumentations.CenterCrop(height=self.size, width=self.size)
- else:
- self.cropper = albumentations.RandomCrop(height=self.size, width=self.size)
- self.preprocessor = self.cropper
-
- def __len__(self):
- return self._length
-
- def __getitem__(self, i):
- example = dict((k, self.labels[k][i]) for k in self.labels)
- image = Image.open(example["file_path_"])
- if not image.mode == "RGB":
- image = image.convert("RGB")
- image = np.array(image).astype(np.uint8)
- if self.size is not None:
- image = self.image_rescaler(image=image)["image"]
- segmentation = Image.open(example["segmentation_path_"])
- assert segmentation.mode == "L", segmentation.mode
- segmentation = np.array(segmentation).astype(np.uint8)
- if self.shift_segmentation:
- # used to support segmentations containing unlabeled==255 label
- segmentation = segmentation+1
- if self.size is not None:
- segmentation = self.segmentation_rescaler(image=segmentation)["image"]
- if self.size is not None:
- processed = self.preprocessor(image=image,
- mask=segmentation
- )
- else:
- processed = {"image": image,
- "mask": segmentation
- }
- example["image"] = (processed["image"]/127.5 - 1.0).astype(np.float32)
- segmentation = processed["mask"]
- onehot = np.eye(self.n_labels)[segmentation]
- example["segmentation"] = onehot
- return example
-
-
-class Examples(SegmentationBase):
- def __init__(self, size=None, random_crop=False, interpolation="bicubic"):
- super().__init__(data_csv="data/sflckr_examples.txt",
- data_root="data/sflckr_images",
- segmentation_root="data/sflckr_segmentations",
- size=size, random_crop=random_crop, interpolation=interpolation)
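-
-# A minimal usage sketch (an assumption, not part of the original file); requires the example
-# lists and images referenced by the hard-coded paths in Examples to exist under data/.
-#
-# dataset = Examples(size=256)
-# sample = dataset[0]
-# print(sample["image"].shape, sample["segmentation"].shape)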
diff --git a/spaces/EronSamez/RVC_HFmeu/infer_uvr5.py b/spaces/EronSamez/RVC_HFmeu/infer_uvr5.py
deleted file mode 100644
index 8c8c05429a1d65dd8b198f16a8ea8c6e68991c07..0000000000000000000000000000000000000000
--- a/spaces/EronSamez/RVC_HFmeu/infer_uvr5.py
+++ /dev/null
@@ -1,363 +0,0 @@
-import os, sys, torch, warnings, pdb
-
-now_dir = os.getcwd()
-sys.path.append(now_dir)
-from json import load as ll
-
-warnings.filterwarnings("ignore")
-import librosa
-import importlib
-import numpy as np
-import hashlib, math
-from tqdm import tqdm
-from lib.uvr5_pack.lib_v5 import spec_utils
-from lib.uvr5_pack.utils import _get_name_params, inference
-from lib.uvr5_pack.lib_v5.model_param_init import ModelParameters
-import soundfile as sf
-from lib.uvr5_pack.lib_v5.nets_new import CascadedNet
-from lib.uvr5_pack.lib_v5 import nets_61968KB as nets
-
-
-class _audio_pre_:
- def __init__(self, agg, model_path, device, is_half):
- self.model_path = model_path
- self.device = device
- self.data = {
- # Processing Options
- "postprocess": False,
- "tta": False,
- # Constants
- "window_size": 512,
- "agg": agg,
- "high_end_process": "mirroring",
- }
- mp = ModelParameters("lib/uvr5_pack/lib_v5/modelparams/4band_v2.json")
- model = nets.CascadedASPPNet(mp.param["bins"] * 2)
- cpk = torch.load(model_path, map_location="cpu")
- model.load_state_dict(cpk)
- model.eval()
- if is_half:
- model = model.half().to(device)
- else:
- model = model.to(device)
-
- self.mp = mp
- self.model = model
-
- def _path_audio_(self, music_file, ins_root=None, vocal_root=None, format="flac"):
- if ins_root is None and vocal_root is None:
- return "No save root."
- name = os.path.basename(music_file)
- if ins_root is not None:
- os.makedirs(ins_root, exist_ok=True)
- if vocal_root is not None:
- os.makedirs(vocal_root, exist_ok=True)
- X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {}
- bands_n = len(self.mp.param["band"])
- # print(bands_n)
- for d in range(bands_n, 0, -1):
- bp = self.mp.param["band"][d]
- if d == bands_n: # high-end band
- (
- X_wave[d],
- _,
- ) = librosa.core.load(
- music_file,
- bp["sr"],
- False,
- dtype=np.float32,
- res_type=bp["res_type"],
- )
- if X_wave[d].ndim == 1:
- X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]])
- else: # lower bands
- X_wave[d] = librosa.core.resample(
- X_wave[d + 1],
- self.mp.param["band"][d + 1]["sr"],
- bp["sr"],
- res_type=bp["res_type"],
- )
- # Stft of wave source
- X_spec_s[d] = spec_utils.wave_to_spectrogram_mt(
- X_wave[d],
- bp["hl"],
- bp["n_fft"],
- self.mp.param["mid_side"],
- self.mp.param["mid_side_b2"],
- self.mp.param["reverse"],
- )
- # pdb.set_trace()
- if d == bands_n and self.data["high_end_process"] != "none":
- input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + (
- self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"]
- )
- input_high_end = X_spec_s[d][
- :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, :
- ]
-
- X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp)
- aggressive_set = float(self.data["agg"] / 100)
- aggressiveness = {
- "value": aggressive_set,
- "split_bin": self.mp.param["band"][1]["crop_stop"],
- }
- with torch.no_grad():
- pred, X_mag, X_phase = inference(
- X_spec_m, self.device, self.model, aggressiveness, self.data
- )
- # Postprocess
- if self.data["postprocess"]:
- pred_inv = np.clip(X_mag - pred, 0, np.inf)
- pred = spec_utils.mask_silence(pred, pred_inv)
- y_spec_m = pred * X_phase
- v_spec_m = X_spec_m - y_spec_m
-
- if ins_root is not None:
- if self.data["high_end_process"].startswith("mirroring"):
- input_high_end_ = spec_utils.mirroring(
- self.data["high_end_process"], y_spec_m, input_high_end, self.mp
- )
- wav_instrument = spec_utils.cmb_spectrogram_to_wave(
- y_spec_m, self.mp, input_high_end_h, input_high_end_
- )
- else:
- wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp)
- print("%s instruments done" % name)
- if format in ["wav", "flac"]:
- sf.write(
- os.path.join(
- ins_root,
- "instrument_{}_{}.{}".format(name, self.data["agg"], format),
- ),
- (np.array(wav_instrument) * 32768).astype("int16"),
- self.mp.param["sr"],
- ) #
- else:
- path = os.path.join(
- ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"])
- )
- sf.write(
- path,
- (np.array(wav_instrument) * 32768).astype("int16"),
- self.mp.param["sr"],
- )
- if os.path.exists(path):
- os.system(
- "ffmpeg -i %s -vn %s -q:a 2 -y"
- % (path, path[:-4] + ".%s" % format)
- )
- if vocal_root is not None:
- if self.data["high_end_process"].startswith("mirroring"):
- input_high_end_ = spec_utils.mirroring(
- self.data["high_end_process"], v_spec_m, input_high_end, self.mp
- )
- wav_vocals = spec_utils.cmb_spectrogram_to_wave(
- v_spec_m, self.mp, input_high_end_h, input_high_end_
- )
- else:
- wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp)
- print("%s vocals done" % name)
- if format in ["wav", "flac"]:
- sf.write(
- os.path.join(
- vocal_root,
- "vocal_{}_{}.{}".format(name, self.data["agg"], format),
- ),
- (np.array(wav_vocals) * 32768).astype("int16"),
- self.mp.param["sr"],
- )
- else:
- path = os.path.join(
- vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"])
- )
- sf.write(
- path,
- (np.array(wav_vocals) * 32768).astype("int16"),
- self.mp.param["sr"],
- )
- if os.path.exists(path):
- os.system(
- "ffmpeg -i %s -vn %s -q:a 2 -y"
- % (path, path[:-4] + ".%s" % format)
- )
-
-
-class _audio_pre_new:
- def __init__(self, agg, model_path, device, is_half):
- self.model_path = model_path
- self.device = device
- self.data = {
- # Processing Options
- "postprocess": False,
- "tta": False,
- # Constants
- "window_size": 512,
- "agg": agg,
- "high_end_process": "mirroring",
- }
- mp = ModelParameters("lib/uvr5_pack/lib_v5/modelparams/4band_v3.json")
- nout = 64 if "DeReverb" in model_path else 48
- model = CascadedNet(mp.param["bins"] * 2, nout)
- cpk = torch.load(model_path, map_location="cpu")
- model.load_state_dict(cpk)
- model.eval()
- if is_half:
- model = model.half().to(device)
- else:
- model = model.to(device)
-
- self.mp = mp
- self.model = model
-
- def _path_audio_(
- self, music_file, vocal_root=None, ins_root=None, format="flac"
- ): # for the 3 VR models, the vocal and instrumental outputs are swapped
- if ins_root is None and vocal_root is None:
- return "No save root."
- name = os.path.basename(music_file)
- if ins_root is not None:
- os.makedirs(ins_root, exist_ok=True)
- if vocal_root is not None:
- os.makedirs(vocal_root, exist_ok=True)
- X_wave, y_wave, X_spec_s, y_spec_s = {}, {}, {}, {}
- bands_n = len(self.mp.param["band"])
- # print(bands_n)
- for d in range(bands_n, 0, -1):
- bp = self.mp.param["band"][d]
- if d == bands_n: # high-end band
- (
- X_wave[d],
- _,
- ) = librosa.core.load(
- music_file,
- bp["sr"],
- False,
- dtype=np.float32,
- res_type=bp["res_type"],
- )
- if X_wave[d].ndim == 1:
- X_wave[d] = np.asfortranarray([X_wave[d], X_wave[d]])
- else: # lower bands
- X_wave[d] = librosa.core.resample(
- X_wave[d + 1],
- self.mp.param["band"][d + 1]["sr"],
- bp["sr"],
- res_type=bp["res_type"],
- )
- # Stft of wave source
- X_spec_s[d] = spec_utils.wave_to_spectrogram_mt(
- X_wave[d],
- bp["hl"],
- bp["n_fft"],
- self.mp.param["mid_side"],
- self.mp.param["mid_side_b2"],
- self.mp.param["reverse"],
- )
- # pdb.set_trace()
- if d == bands_n and self.data["high_end_process"] != "none":
- input_high_end_h = (bp["n_fft"] // 2 - bp["crop_stop"]) + (
- self.mp.param["pre_filter_stop"] - self.mp.param["pre_filter_start"]
- )
- input_high_end = X_spec_s[d][
- :, bp["n_fft"] // 2 - input_high_end_h : bp["n_fft"] // 2, :
- ]
-
- X_spec_m = spec_utils.combine_spectrograms(X_spec_s, self.mp)
- aggressive_set = float(self.data["agg"] / 100)
- aggressiveness = {
- "value": aggressive_set,
- "split_bin": self.mp.param["band"][1]["crop_stop"],
- }
- with torch.no_grad():
- pred, X_mag, X_phase = inference(
- X_spec_m, self.device, self.model, aggressiveness, self.data
- )
- # Postprocess
- if self.data["postprocess"]:
- pred_inv = np.clip(X_mag - pred, 0, np.inf)
- pred = spec_utils.mask_silence(pred, pred_inv)
- y_spec_m = pred * X_phase
- v_spec_m = X_spec_m - y_spec_m
-
- if ins_root is not None:
- if self.data["high_end_process"].startswith("mirroring"):
- input_high_end_ = spec_utils.mirroring(
- self.data["high_end_process"], y_spec_m, input_high_end, self.mp
- )
- wav_instrument = spec_utils.cmb_spectrogram_to_wave(
- y_spec_m, self.mp, input_high_end_h, input_high_end_
- )
- else:
- wav_instrument = spec_utils.cmb_spectrogram_to_wave(y_spec_m, self.mp)
- print("%s instruments done" % name)
- if format in ["wav", "flac"]:
- sf.write(
- os.path.join(
- ins_root,
- "instrument_{}_{}.{}".format(name, self.data["agg"], format),
- ),
- (np.array(wav_instrument) * 32768).astype("int16"),
- self.mp.param["sr"],
- ) #
- else:
- path = os.path.join(
- ins_root, "instrument_{}_{}.wav".format(name, self.data["agg"])
- )
- sf.write(
- path,
- (np.array(wav_instrument) * 32768).astype("int16"),
- self.mp.param["sr"],
- )
- if os.path.exists(path):
- os.system(
- "ffmpeg -i %s -vn %s -q:a 2 -y"
- % (path, path[:-4] + ".%s" % format)
- )
- if vocal_root is not None:
- if self.data["high_end_process"].startswith("mirroring"):
- input_high_end_ = spec_utils.mirroring(
- self.data["high_end_process"], v_spec_m, input_high_end, self.mp
- )
- wav_vocals = spec_utils.cmb_spectrogram_to_wave(
- v_spec_m, self.mp, input_high_end_h, input_high_end_
- )
- else:
- wav_vocals = spec_utils.cmb_spectrogram_to_wave(v_spec_m, self.mp)
- print("%s vocals done" % name)
- if format in ["wav", "flac"]:
- sf.write(
- os.path.join(
- vocal_root,
- "vocal_{}_{}.{}".format(name, self.data["agg"], format),
- ),
- (np.array(wav_vocals) * 32768).astype("int16"),
- self.mp.param["sr"],
- )
- else:
- path = os.path.join(
- vocal_root, "vocal_{}_{}.wav".format(name, self.data["agg"])
- )
- sf.write(
- path,
- (np.array(wav_vocals) * 32768).astype("int16"),
- self.mp.param["sr"],
- )
- if os.path.exists(path):
- os.system(
- "ffmpeg -i %s -vn %s -q:a 2 -y"
- % (path, path[:-4] + ".%s" % format)
- )
-
-
-if __name__ == "__main__":
- device = "cuda"
- is_half = True
- # model_path = "uvr5_weights/2_HP-UVR.pth"
- # model_path = "uvr5_weights/VR-DeEchoDeReverb.pth"
- # model_path = "uvr5_weights/VR-DeEchoNormal.pth"
- model_path = "uvr5_weights/DeEchoNormal.pth"
- # pre_fun = _audio_pre_(model_path=model_path, device=device, is_half=True,agg=10)
- pre_fun = _audio_pre_new(model_path=model_path, device=device, is_half=True, agg=10)
- audio_path = "雪雪伴奏对消HP5.wav"
- save_path = "opt"
- pre_fun._path_audio_(audio_path, save_path, save_path)
diff --git a/spaces/FoxMeo/fire-detector/test.py b/spaces/FoxMeo/fire-detector/test.py
deleted file mode 100644
index 17b48060bebca76ba19b5f456da16fcff9324824..0000000000000000000000000000000000000000
--- a/spaces/FoxMeo/fire-detector/test.py
+++ /dev/null
@@ -1,353 +0,0 @@
-import argparse
-import json
-import os
-from pathlib import Path
-from threading import Thread
-
-import numpy as np
-import torch
-import yaml
-from tqdm import tqdm
-
-from models.experimental import attempt_load
-from utils.datasets import create_dataloader
-from utils.general import coco80_to_coco91_class, check_dataset, check_file, check_img_size, check_requirements, \
- box_iou, non_max_suppression, scale_coords, xyxy2xywh, xywh2xyxy, set_logging, increment_path, colorstr
-from utils.metrics import ap_per_class, ConfusionMatrix
-from utils.plots import plot_images, output_to_target, plot_study_txt
-from utils.torch_utils import select_device, time_synchronized, TracedModel
-
-
-def test(data,
- weights=None,
- batch_size=32,
- imgsz=640,
- conf_thres=0.001,
- iou_thres=0.6, # for NMS
- save_json=False,
- single_cls=False,
- augment=False,
- verbose=False,
- model=None,
- dataloader=None,
- save_dir=Path(''), # for saving images
- save_txt=False, # for auto-labelling
- save_hybrid=False, # for hybrid auto-labelling
- save_conf=False, # save auto-label confidences
- plots=True,
- wandb_logger=None,
- compute_loss=None,
- half_precision=True,
- trace=False,
- is_coco=False,
- v5_metric=False):
- # Initialize/load model and set device
- training = model is not None
- if training: # called by train.py
- device = next(model.parameters()).device # get model device
-
- else: # called directly
- set_logging()
- device = select_device(opt.device, batch_size=batch_size)
-
- # Directories
- save_dir = Path(increment_path(Path(opt.project) / opt.name, exist_ok=opt.exist_ok)) # increment run
- (save_dir / 'labels' if save_txt else save_dir).mkdir(parents=True, exist_ok=True) # make dir
-
- # Load model
- model = attempt_load(weights, map_location=device) # load FP32 model
- gs = max(int(model.stride.max()), 32) # grid size (max stride)
- imgsz = check_img_size(imgsz, s=gs) # check img_size
-
- if trace:
- model = TracedModel(model, device, imgsz)
-
- # Half
- half = device.type != 'cpu' and half_precision # half precision only supported on CUDA
- if half:
- model.half()
-
- # Configure
- model.eval()
- if isinstance(data, str):
- is_coco = data.endswith('coco.yaml')
- with open(data) as f:
- data = yaml.load(f, Loader=yaml.SafeLoader)
- check_dataset(data) # check
- nc = 1 if single_cls else int(data['nc']) # number of classes
- iouv = torch.linspace(0.5, 0.95, 10).to(device) # iou vector for mAP@0.5:0.95
- niou = iouv.numel()
-
- # Logging
- log_imgs = 0
- if wandb_logger and wandb_logger.wandb:
- log_imgs = min(wandb_logger.log_imgs, 100)
- # Dataloader
- if not training:
- if device.type != 'cpu':
- model(torch.zeros(1, 3, imgsz, imgsz).to(device).type_as(next(model.parameters()))) # run once
- task = opt.task if opt.task in ('train', 'val', 'test') else 'val' # path to train/val/test images
- dataloader = create_dataloader(data[task], imgsz, batch_size, gs, opt, pad=0.5, rect=True,
- prefix=colorstr(f'{task}: '))[0]
-
- if v5_metric:
- print("Testing with YOLOv5 AP metric...")
-
- seen = 0
- confusion_matrix = ConfusionMatrix(nc=nc)
- names = {k: v for k, v in enumerate(model.names if hasattr(model, 'names') else model.module.names)}
- coco91class = coco80_to_coco91_class()
- s = ('%20s' + '%12s' * 6) % ('Class', 'Images', 'Labels', 'P', 'R', 'mAP@.5', 'mAP@.5:.95')
- p, r, f1, mp, mr, map50, map, t0, t1 = 0., 0., 0., 0., 0., 0., 0., 0., 0.
- loss = torch.zeros(3, device=device)
- jdict, stats, ap, ap_class, wandb_images = [], [], [], [], []
- for batch_i, (img, targets, paths, shapes) in enumerate(tqdm(dataloader, desc=s)):
- img = img.to(device, non_blocking=True)
- img = img.half() if half else img.float() # uint8 to fp16/32
- img /= 255.0 # 0 - 255 to 0.0 - 1.0
- targets = targets.to(device)
- nb, _, height, width = img.shape # batch size, channels, height, width
-
- with torch.no_grad():
- # Run model
- t = time_synchronized()
- out, train_out = model(img, augment=augment) # inference and training outputs
- t0 += time_synchronized() - t
-
- # Compute loss
- if compute_loss:
- loss += compute_loss([x.float() for x in train_out], targets)[1][:3] # box, obj, cls
-
- # Run NMS
- targets[:, 2:] *= torch.Tensor([width, height, width, height]).to(device) # to pixels
- lb = [targets[targets[:, 0] == i, 1:] for i in range(nb)] if save_hybrid else [] # for autolabelling
- t = time_synchronized()
- out = non_max_suppression(out, conf_thres=conf_thres, iou_thres=iou_thres, labels=lb, multi_label=True)
- t1 += time_synchronized() - t
-
- # Statistics per image
- for si, pred in enumerate(out):
- labels = targets[targets[:, 0] == si, 1:]
- nl = len(labels)
- tcls = labels[:, 0].tolist() if nl else [] # target class
- path = Path(paths[si])
- seen += 1
-
- if len(pred) == 0:
- if nl:
- stats.append((torch.zeros(0, niou, dtype=torch.bool), torch.Tensor(), torch.Tensor(), tcls))
- continue
-
- # Predictions
- predn = pred.clone()
- scale_coords(img[si].shape[1:], predn[:, :4], shapes[si][0], shapes[si][1]) # native-space pred
-
- # Append to text file
- if save_txt:
- gn = torch.tensor(shapes[si][0])[[1, 0, 1, 0]] # normalization gain whwh
- for *xyxy, conf, cls in predn.tolist():
- xywh = (xyxy2xywh(torch.tensor(xyxy).view(1, 4)) / gn).view(-1).tolist() # normalized xywh
- line = (cls, *xywh, conf) if save_conf else (cls, *xywh) # label format
- with open(save_dir / 'labels' / (path.stem + '.txt'), 'a') as f:
- f.write(('%g ' * len(line)).rstrip() % line + '\n')
-
- # W&B logging - Media Panel Plots
- if len(wandb_images) < log_imgs and wandb_logger.current_epoch > 0: # Check for test operation
- if wandb_logger.current_epoch % wandb_logger.bbox_interval == 0:
- box_data = [{"position": {"minX": xyxy[0], "minY": xyxy[1], "maxX": xyxy[2], "maxY": xyxy[3]},
- "class_id": int(cls),
- "box_caption": "%s %.3f" % (names[cls], conf),
- "scores": {"class_score": conf},
- "domain": "pixel"} for *xyxy, conf, cls in pred.tolist()]
- boxes = {"predictions": {"box_data": box_data, "class_labels": names}} # inference-space
- wandb_images.append(wandb_logger.wandb.Image(img[si], boxes=boxes, caption=path.name))
- wandb_logger.log_training_progress(predn, path, names) if wandb_logger and wandb_logger.wandb_run else None
-
- # Append to pycocotools JSON dictionary
- if save_json:
- # [{"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236}, ...
- image_id = int(path.stem) if path.stem.isnumeric() else path.stem
- box = xyxy2xywh(predn[:, :4]) # xywh
- box[:, :2] -= box[:, 2:] / 2 # xy center to top-left corner
- for p, b in zip(pred.tolist(), box.tolist()):
- jdict.append({'image_id': image_id,
- 'category_id': coco91class[int(p[5])] if is_coco else int(p[5]),
- 'bbox': [round(x, 3) for x in b],
- 'score': round(p[4], 5)})
-
- # Assign all predictions as incorrect
- correct = torch.zeros(pred.shape[0], niou, dtype=torch.bool, device=device)
- if nl:
- detected = [] # target indices
- tcls_tensor = labels[:, 0]
-
- # target boxes
- tbox = xywh2xyxy(labels[:, 1:5])
- scale_coords(img[si].shape[1:], tbox, shapes[si][0], shapes[si][1]) # native-space labels
- if plots:
- confusion_matrix.process_batch(predn, torch.cat((labels[:, 0:1], tbox), 1))
-
- # Per target class
- for cls in torch.unique(tcls_tensor):
- ti = (cls == tcls_tensor).nonzero(as_tuple=False).view(-1) # prediction indices
- pi = (cls == pred[:, 5]).nonzero(as_tuple=False).view(-1) # target indices
-
- # Search for detections
- if pi.shape[0]:
- # Prediction to target ious
- ious, i = box_iou(predn[pi, :4], tbox[ti]).max(1) # best ious, indices
-
- # Append detections
- detected_set = set()
- for j in (ious > iouv[0]).nonzero(as_tuple=False):
- d = ti[i[j]] # detected target
- if d.item() not in detected_set:
- detected_set.add(d.item())
- detected.append(d)
- correct[pi[j]] = ious[j] > iouv # iou_thres is 1xn
- if len(detected) == nl: # all targets already located in image
- break
-
- # Append statistics (correct, conf, pcls, tcls)
- stats.append((correct.cpu(), pred[:, 4].cpu(), pred[:, 5].cpu(), tcls))
-
- # Plot images
- if plots and batch_i < 3:
- f = save_dir / f'test_batch{batch_i}_labels.jpg' # labels
- Thread(target=plot_images, args=(img, targets, paths, f, names), daemon=True).start()
- f = save_dir / f'test_batch{batch_i}_pred.jpg' # predictions
- Thread(target=plot_images, args=(img, output_to_target(out), paths, f, names), daemon=True).start()
-
- # Compute statistics
- stats = [np.concatenate(x, 0) for x in zip(*stats)] # to numpy
- if len(stats) and stats[0].any():
- p, r, ap, f1, ap_class = ap_per_class(*stats, plot=plots, v5_metric=v5_metric, save_dir=save_dir, names=names)
- ap50, ap = ap[:, 0], ap.mean(1) # AP@0.5, AP@0.5:0.95
- mp, mr, map50, map = p.mean(), r.mean(), ap50.mean(), ap.mean()
- nt = np.bincount(stats[3].astype(np.int64), minlength=nc) # number of targets per class
- else:
- nt = torch.zeros(1)
-
- # Print results
- pf = '%20s' + '%12i' * 2 + '%12.3g' * 4 # print format
- print(pf % ('all', seen, nt.sum(), mp, mr, map50, map))
-
- # Print results per class
- if (verbose or (nc < 50 and not training)) and nc > 1 and len(stats):
- for i, c in enumerate(ap_class):
- print(pf % (names[c], seen, nt[c], p[i], r[i], ap50[i], ap[i]))
-
- # Print speeds
- t = tuple(x / seen * 1E3 for x in (t0, t1, t0 + t1)) + (imgsz, imgsz, batch_size) # tuple
- if not training:
- print('Speed: %.1f/%.1f/%.1f ms inference/NMS/total per %gx%g image at batch-size %g' % t)
-
- # Plots
- if plots:
- confusion_matrix.plot(save_dir=save_dir, names=list(names.values()))
- if wandb_logger and wandb_logger.wandb:
- val_batches = [wandb_logger.wandb.Image(str(f), caption=f.name) for f in sorted(save_dir.glob('test*.jpg'))]
- wandb_logger.log({"Validation": val_batches})
- if wandb_images:
- wandb_logger.log({"Bounding Box Debugger/Images": wandb_images})
-
- # Save JSON
- if save_json and len(jdict):
- w = Path(weights[0] if isinstance(weights, list) else weights).stem if weights is not None else '' # weights
- anno_json = './coco/annotations/instances_val2017.json' # annotations json
- pred_json = str(save_dir / f"{w}_predictions.json") # predictions json
- print('\nEvaluating pycocotools mAP... saving %s...' % pred_json)
- with open(pred_json, 'w') as f:
- json.dump(jdict, f)
-
- try: # https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocoEvalDemo.ipynb
- from pycocotools.coco import COCO
- from pycocotools.cocoeval import COCOeval
-
- anno = COCO(anno_json) # init annotations api
- pred = anno.loadRes(pred_json) # init predictions api
- eval = COCOeval(anno, pred, 'bbox')
- if is_coco:
- eval.params.imgIds = [int(Path(x).stem) for x in dataloader.dataset.img_files] # image IDs to evaluate
- eval.evaluate()
- eval.accumulate()
- eval.summarize()
- map, map50 = eval.stats[:2] # update results (mAP@0.5:0.95, mAP@0.5)
- except Exception as e:
- print(f'pycocotools unable to run: {e}')
-
- # Return results
- model.float() # for training
- if not training:
- s = f"\n{len(list(save_dir.glob('labels/*.txt')))} labels saved to {save_dir / 'labels'}" if save_txt else ''
- print(f"Results saved to {save_dir}{s}")
- maps = np.zeros(nc) + map
- for i, c in enumerate(ap_class):
- maps[c] = ap[i]
- return (mp, mr, map50, map, *(loss.cpu() / len(dataloader)).tolist()), maps, t
-
-
-if __name__ == '__main__':
- parser = argparse.ArgumentParser(prog='test.py')
- parser.add_argument('--weights', nargs='+', type=str, default='yolov7.pt', help='model.pt path(s)')
- parser.add_argument('--data', type=str, default='data/coco.yaml', help='*.data path')
- parser.add_argument('--batch-size', type=int, default=32, help='size of each image batch')
- parser.add_argument('--img-size', type=int, default=640, help='inference size (pixels)')
- parser.add_argument('--conf-thres', type=float, default=0.001, help='object confidence threshold')
- parser.add_argument('--iou-thres', type=float, default=0.65, help='IOU threshold for NMS')
- parser.add_argument('--task', default='val', help='train, val, test, speed or study')
- parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu')
- parser.add_argument('--single-cls', action='store_true', help='treat as single-class dataset')
- parser.add_argument('--augment', action='store_true', help='augmented inference')
- parser.add_argument('--verbose', action='store_true', help='report mAP by class')
- parser.add_argument('--save-txt', action='store_true', help='save results to *.txt')
- parser.add_argument('--save-hybrid', action='store_true', help='save label+prediction hybrid results to *.txt')
- parser.add_argument('--save-conf', action='store_true', help='save confidences in --save-txt labels')
- parser.add_argument('--save-json', action='store_true', help='save a cocoapi-compatible JSON results file')
- parser.add_argument('--project', default='runs/test', help='save to project/name')
- parser.add_argument('--name', default='exp', help='save to project/name')
- parser.add_argument('--exist-ok', action='store_true', help='existing project/name ok, do not increment')
- parser.add_argument('--no-trace', action='store_true', help="don't trace model")
- parser.add_argument('--v5-metric', action='store_true', help='assume maximum recall as 1.0 in AP calculation')
- opt = parser.parse_args()
- opt.save_json |= opt.data.endswith('coco.yaml')
- opt.data = check_file(opt.data) # check file
- print(opt)
- #check_requirements()
-
- if opt.task in ('train', 'val', 'test'): # run normally
- test(opt.data,
- opt.weights,
- opt.batch_size,
- opt.img_size,
- opt.conf_thres,
- opt.iou_thres,
- opt.save_json,
- opt.single_cls,
- opt.augment,
- opt.verbose,
- save_txt=opt.save_txt | opt.save_hybrid,
- save_hybrid=opt.save_hybrid,
- save_conf=opt.save_conf,
- trace=not opt.no_trace,
- v5_metric=opt.v5_metric
- )
-
- elif opt.task == 'speed': # speed benchmarks
- for w in opt.weights:
- test(opt.data, w, opt.batch_size, opt.img_size, 0.25, 0.45, save_json=False, plots=False, v5_metric=opt.v5_metric)
-
- elif opt.task == 'study': # run over a range of settings and save/plot
- # python test.py --task study --data coco.yaml --iou 0.65 --weights yolov7.pt
- x = list(range(256, 1536 + 128, 128)) # x axis (image sizes)
- for w in opt.weights:
- f = f'study_{Path(opt.data).stem}_{Path(w).stem}.txt' # filename to save to
- y = [] # y axis
- for i in x: # img-size
- print(f'\nRunning {f} point {i}...')
- r, _, t = test(opt.data, w, opt.batch_size, i, opt.conf_thres, opt.iou_thres, opt.save_json,
- plots=False, v5_metric=opt.v5_metric)
- y.append(r + t) # results and times
- np.savetxt(f, y, fmt='%10.4g') # save
- os.system('zip -r study.zip study_*.txt')
- plot_study_txt(x=x) # plot
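
The JSON export earlier in `test()` converts boxes from xyxy corners to the COCO xywh convention (top-left corner plus width and height) before writing `{image_id, category_id, bbox, score}` records. Below is a minimal, self-contained sketch of that conversion in plain PyTorch; it computes the top-left corner directly instead of going through the centre-based `xyxy2xywh` helper, and the helper name `xyxy_to_coco` and the sample values are illustrative only.

```python
import torch

def xyxy_to_coco(boxes: torch.Tensor) -> torch.Tensor:
    """Convert [x1, y1, x2, y2] boxes to COCO-style [x_min, y_min, width, height]."""
    out = boxes.clone()
    out[:, 2] = boxes[:, 2] - boxes[:, 0]  # width
    out[:, 3] = boxes[:, 3] - boxes[:, 1]  # height
    return out  # columns 0 and 1 already hold the top-left corner

box = xyxy_to_coco(torch.tensor([[258.15, 41.29, 606.41, 285.07]]))  # one xyxy box
record = {
    "image_id": 42,                                  # illustrative image id
    "category_id": 18,                               # illustrative COCO category id
    "bbox": [round(v, 3) for v in box[0].tolist()],  # -> [258.15, 41.29, 348.26, 243.78]
    "score": 0.236,                                  # illustrative confidence
}
print(record)  # matches the sample jdict entry near the top of test(), up to float rounding
```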
diff --git a/spaces/FoxMeo/fire-detector/utils/add_nms.py b/spaces/FoxMeo/fire-detector/utils/add_nms.py
deleted file mode 100644
index 0a1f7976a2051d07bb028f9fd68eb52f45234f43..0000000000000000000000000000000000000000
--- a/spaces/FoxMeo/fire-detector/utils/add_nms.py
+++ /dev/null
@@ -1,155 +0,0 @@
-import numpy as np
-import onnx
-from onnx import shape_inference
-try:
- import onnx_graphsurgeon as gs
-except Exception as e:
- print('Failed to import onnx_graphsurgeon: %s' % e)
-
-import logging
-
-LOGGER = logging.getLogger(__name__)
-
-class RegisterNMS(object):
- def __init__(
- self,
- onnx_model_path: str,
- precision: str = "fp32",
- ):
-
- self.graph = gs.import_onnx(onnx.load(onnx_model_path))
- assert self.graph
- LOGGER.info("ONNX graph created successfully")
- # Fold constants via ONNX-GS that PyTorch2ONNX may have missed
- self.graph.fold_constants()
- self.precision = precision
- self.batch_size = 1
- def infer(self):
- """
- Sanitize the graph by cleaning up any unconnected nodes, doing a topological re-sort,
- and folding constant input values. When possible, run shape inference on the
- ONNX graph to determine tensor shapes.
- """
- for _ in range(3):
- count_before = len(self.graph.nodes)
-
- self.graph.cleanup().toposort()
- try:
- for node in self.graph.nodes:
- for o in node.outputs:
- o.shape = None
- model = gs.export_onnx(self.graph)
- model = shape_inference.infer_shapes(model)
- self.graph = gs.import_onnx(model)
- except Exception as e:
- LOGGER.info(f"Shape inference could not be performed at this time:\n{e}")
- try:
- self.graph.fold_constants(fold_shapes=True)
- except TypeError as e:
- LOGGER.error(
- "This version of ONNX GraphSurgeon does not support folding shapes, "
- f"please upgrade your onnx_graphsurgeon module. Error:\n{e}"
- )
- raise
-
- count_after = len(self.graph.nodes)
- if count_before == count_after:
- # No new folding occurred in this iteration, so we can stop for now.
- break
-
- def save(self, output_path):
- """
- Save the ONNX model to the given location.
- Args:
- output_path: Path pointing to the location where to write
- out the updated ONNX model.
- """
- self.graph.cleanup().toposort()
- model = gs.export_onnx(self.graph)
- onnx.save(model, output_path)
- LOGGER.info(f"Saved ONNX model to {output_path}")
-
- def register_nms(
- self,
- *,
- score_thresh: float = 0.25,
- nms_thresh: float = 0.45,
- detections_per_img: int = 100,
- ):
- """
- Register the ``EfficientNMS_TRT`` plugin node.
- NMS expects these shapes for its input tensors:
- - box_net: [batch_size, number_boxes, 4]
- - class_net: [batch_size, number_boxes, number_labels]
- Args:
- score_thresh (float): The scalar threshold for score (low scoring boxes are removed).
- nms_thresh (float): The scalar threshold for IOU (new boxes that have high IOU
- overlap with previously selected boxes are removed).
- detections_per_img (int): Number of best detections to keep after NMS.
- """
-
- self.infer()
- # Use the current graph outputs as the inputs to the NMS plugin node
- op_inputs = self.graph.outputs
- op = "EfficientNMS_TRT"
- attrs = {
- "plugin_version": "1",
- "background_class": -1, # no background class
- "max_output_boxes": detections_per_img,
- "score_threshold": score_thresh,
- "iou_threshold": nms_thresh,
- "score_activation": False,
- "box_coding": 0,
- }
-
- if self.precision == "fp32":
- dtype_output = np.float32
- elif self.precision == "fp16":
- dtype_output = np.float16
- else:
- raise NotImplementedError(f"Currently not supports precision: {self.precision}")
-
- # NMS Outputs
- output_num_detections = gs.Variable(
- name="num_dets",
- dtype=np.int32,
- shape=[self.batch_size, 1],
- ) # A scalar indicating the number of valid detections per batch image.
- output_boxes = gs.Variable(
- name="det_boxes",
- dtype=dtype_output,
- shape=[self.batch_size, detections_per_img, 4],
- )
- output_scores = gs.Variable(
- name="det_scores",
- dtype=dtype_output,
- shape=[self.batch_size, detections_per_img],
- )
- output_labels = gs.Variable(
- name="det_classes",
- dtype=np.int32,
- shape=[self.batch_size, detections_per_img],
- )
-
- op_outputs = [output_num_detections, output_boxes, output_scores, output_labels]
-
- # Create the NMS Plugin node with the selected inputs. The outputs of the node will also
- # become the final outputs of the graph.
- self.graph.layer(op=op, name="batched_nms", inputs=op_inputs, outputs=op_outputs, attrs=attrs)
- LOGGER.info(f"Created NMS plugin '{op}' with attributes: {attrs}")
-
- self.graph.outputs = op_outputs
-
- self.infer()
-
diff --git a/spaces/FritsLyneborg/kunstnerfrits/src/dalle_mini/model/partitions.py b/spaces/FritsLyneborg/kunstnerfrits/src/dalle_mini/model/partitions.py
deleted file mode 100644
index 22b187d5874f0c9d7b04b009387d7ce9c339a01d..0000000000000000000000000000000000000000
--- a/spaces/FritsLyneborg/kunstnerfrits/src/dalle_mini/model/partitions.py
+++ /dev/null
@@ -1,67 +0,0 @@
-import re
-
-from flax.core.frozen_dict import freeze
-from flax.traverse_util import flatten_dict, unflatten_dict
-from jax.experimental import PartitionSpec as P
-
-# utils adapted from https://github.com/google-research/google-research/blob/master/flax_models/t5x/partitions.py
-# Sentinels
-_unmatched = object()
-
-# For specifying empty leaf dict `{}`
-empty_dict = object()
-
-
-def _match(qs, ks):
- """Return True if regexes in qs match any window of strings in tuple ks."""
- # compile regexes and force complete match
- qts = tuple(map(lambda x: re.compile(x + "$"), qs))
- for i in range(len(ks) - len(qs) + 1):
- matches = [x.match(y) for x, y in zip(qts, ks[i:])]
- if matches and all(matches):
- return True
- return False
-
-
-def _replacement_rules(rules):
- def replace(key, val):
- for rule, replacement in rules:
- if _match(rule, key):
- return replacement
- return val
-
- return replace
-
-
-def _get_partition_rules():
- return [
- # embeddings
- (("embed_positions", "embedding"), P("mp", None)),
- (("embed_tokens", "embedding"), P("mp", None)),
- (("rel_bias", "embedding"), P(None, "mp")),
- # attention
- (("(q_proj|k_proj|v_proj)", "kernel"), P(None, "mp")),
- (("out_proj", "kernel"), P("mp", None)),
- # FFN
- (("Dense_0", "kernel"), P(None, "mp")),
- (("GLU.*", "Dense_1", "kernel"), P(None, "mp")),
- (("GLU.*", "Dense_2", "kernel"), P("mp", None)),
- (("FFN.*", "Dense_1", "kernel"), P("mp", None)),
- # layer norms
- (("(bias|scale)",), None),
- (("lm_head", "kernel"), P(None, "mp")),
- # head scale and tau
- (("(head_scale|tau)",), None),
- ]
-
-
-def set_partitions(in_dict):
- rules = _get_partition_rules()
- replace = _replacement_rules(rules)
- initd = {k: _unmatched for k in flatten_dict(in_dict)}
- result = {k: replace(k, v) for k, v in initd.items()}
- for k, v in result.items():
- if v == _unmatched:
- print(f"Unmatched -> {k}")
- assert _unmatched not in result.values(), "Incomplete partition spec."
- return freeze(unflatten_dict(result))
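
A small usage sketch for `set_partitions` above. The nested parameter tree is a made-up stand-in for real DalleBart parameters (only the key names matter for rule matching), the import path is assumed from the file location, and it requires flax plus a JAX version that still exposes `jax.experimental.PartitionSpec`.

```python
import numpy as np
from dalle_mini.model.partitions import set_partitions  # import path assumed

toy_params = {
    "model": {
        "embed_tokens": {"embedding": np.zeros((8, 4))},      # embedding rule -> P("mp", None)
        "attn": {"out_proj": {"kernel": np.zeros((4, 4))}},   # attention output rule -> P("mp", None)
        "final_ln": {"scale": np.zeros((4,))},                # layer-norm rule -> None (replicated)
    }
}

spec = set_partitions(toy_params)   # frozen tree of PartitionSpec / None leaves
print(spec)
```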
diff --git a/spaces/Froleptan/stablediffusion-infinity/js/toolbar.js b/spaces/Froleptan/stablediffusion-infinity/js/toolbar.js
deleted file mode 100644
index 6c721bc84d3a41a0761ead58e6034ba4dfd4a6ef..0000000000000000000000000000000000000000
--- a/spaces/Froleptan/stablediffusion-infinity/js/toolbar.js
+++ /dev/null
@@ -1,581 +0,0 @@
-// import { w2ui,w2toolbar,w2field,query,w2alert, w2utils,w2confirm} from "https://rawgit.com/vitmalina/w2ui/master/dist/w2ui.es6.min.js"
-// import { w2ui,w2toolbar,w2field,query,w2alert, w2utils,w2confirm} from "https://cdn.jsdelivr.net/gh/vitmalina/w2ui@master/dist/w2ui.es6.min.js"
-
-// https://stackoverflow.com/questions/36280818/how-to-convert-file-to-base64-in-javascript
-function getBase64(file) {
- var reader = new FileReader();
- reader.readAsDataURL(file);
- reader.onload = function () {
- add_image(reader.result);
- // console.log(reader.result);
- };
- reader.onerror = function (error) {
- console.log("Error: ", error);
- };
-}
-
-function getText(file) {
- var reader = new FileReader();
- reader.readAsText(file);
- reader.onload = function () {
- window.postMessage(["load",reader.result],"*")
- // console.log(reader.result);
- };
- reader.onerror = function (error) {
- console.log("Error: ", error);
- };
-}
-
-document.querySelector("#upload_file").addEventListener("change", (event)=>{
- console.log(event);
- let file = document.querySelector("#upload_file").files[0];
- getBase64(file);
-})
-
-document.querySelector("#upload_state").addEventListener("change", (event)=>{
- console.log(event);
- let file = document.querySelector("#upload_state").files[0];
- getText(file);
-})
-
-open_setting = function() {
- if (!w2ui.foo) {
- new w2form({
- name: "foo",
- style: "border: 0px; background-color: transparent;",
- fields: [{
- field: "canvas_width",
- type: "int",
- required: true,
- html: {
- label: "Canvas Width"
- }
- },
- {
- field: "canvas_height",
- type: "int",
- required: true,
- html: {
- label: "Canvas Height"
- }
- },
- ],
- record: {
- canvas_width: 1200,
- canvas_height: 600,
- },
- actions: {
- Save() {
- this.validate();
- let record = this.getCleanRecord();
- window.postMessage(["resize",record.canvas_width,record.canvas_height],"*");
- w2popup.close();
- },
- custom: {
- text: "Cancel",
- style: "text-transform: uppercase",
- onClick(event) {
- w2popup.close();
- }
- }
- }
- });
- }
- w2popup.open({
- title: "Form in a Popup",
- body: "",
- style: "padding: 15px 0px 0px 0px",
- width: 500,
- height: 280,
- showMax: true,
- async onToggle(event) {
- await event.complete
- w2ui.foo.resize();
- }
- })
- .then((event) => {
- w2ui.foo.render("#form")
- });
-}
-
-var button_lst=["clear", "load", "save", "export", "upload", "selection", "canvas", "eraser", "outpaint", "accept", "cancel", "retry", "prev", "current", "next", "eraser_size_btn", "eraser_size", "resize_selection", "scale", "zoom_in", "zoom_out", "help"];
-var upload_button_lst=['clear', 'load', 'save', "upload", 'export', 'outpaint', 'resize_selection', 'help', "setting"];
-var resize_button_lst=['clear', 'load', 'save', "upload", 'export', "selection", "canvas", "eraser", 'outpaint', 'resize_selection',"zoom_in", "zoom_out", 'help', "setting"];
-var outpaint_button_lst=['clear', 'load', 'save', "canvas", "eraser", "upload", 'export', 'resize_selection', "zoom_in", "zoom_out",'help', "setting"];
-var outpaint_result_lst=["accept", "cancel", "retry", "prev", "current", "next"];
-var outpaint_result_func_lst=["accept", "retry", "prev", "current", "next"];
-
-function check_button(id,text="",checked=true,tooltip="")
-{
- return { type: "check", id: id, text: text, icon: checked?"fa-solid fa-square-check":"fa-regular fa-square", checked: checked, tooltip: tooltip };
-}
-
-var toolbar=new w2toolbar({
- box: "#toolbar",
- name: "toolbar",
- tooltip: "top",
- items: [
- { type: "button", id: "clear", text: "Reset", tooltip: "Reset Canvas", icon: "fa-solid fa-rectangle-xmark" },
- { type: "break" },
- { type: "button", id: "load", tooltip: "Load Canvas", icon: "fa-solid fa-file-import" },
- { type: "button", id: "save", tooltip: "Save Canvas", icon: "fa-solid fa-file-export" },
- { type: "button", id: "export", tooltip: "Export Image", icon: "fa-solid fa-floppy-disk" },
- { type: "break" },
- { type: "button", id: "upload", text: "Upload Image", icon: "fa-solid fa-upload" },
- { type: "break" },
- { type: "radio", id: "selection", group: "1", tooltip: "Selection", icon: "fa-solid fa-arrows-up-down-left-right", checked: true },
- { type: "radio", id: "canvas", group: "1", tooltip: "Canvas", icon: "fa-solid fa-image" },
- { type: "radio", id: "eraser", group: "1", tooltip: "Eraser", icon: "fa-solid fa-eraser" },
- { type: "break" },
- { type: "button", id: "outpaint", text: "Outpaint", tooltip: "Run Outpainting", icon: "fa-solid fa-brush" },
- { type: "break" },
- { type: "button", id: "accept", text: "Accept", tooltip: "Accept current result", icon: "fa-solid fa-check", hidden: true, disable:true,},
- { type: "button", id: "cancel", text: "Cancel", tooltip: "Cancel current outpainting/error", icon: "fa-solid fa-ban", hidden: true},
- { type: "button", id: "retry", text: "Retry", tooltip: "Retry", icon: "fa-solid fa-rotate", hidden: true, disable:true,},
- { type: "button", id: "prev", tooltip: "Prev Result", icon: "fa-solid fa-caret-left", hidden: true, disable:true,},
- { type: "html", id: "current", hidden: true, disable:true,
- async onRefresh(event) {
- await event.complete
- let fragment = query.html(`
-
- `)
- query(this.box).find("#tb_toolbar_item_scale").append(fragment)
- }
- },
- { type: "button", id: "zoom_in", tooltip: "Zoom In", icon: "fa-solid fa-magnifying-glass-plus" },
- { type: "button", id: "zoom_out", tooltip: "Zoom Out", icon: "fa-solid fa-magnifying-glass-minus" },
- { type: "break" },
- { type: "button", id: "help", tooltip: "Help", icon: "fa-solid fa-circle-info" },
- { type: "new-line"},
- { type: "button", id: "setting", text: "Canvas Setting", tooltip: "Resize Canvas Here", icon: "fa-solid fa-sliders" },
- { type: "break" },
- check_button("enable_img2img","Enable Img2Img",false),
- // check_button("use_correction","Photometric Correction",false),
- check_button("resize_check","Resize Small Input",true),
- check_button("enable_safety","Enable Safety Checker",true),
- check_button("square_selection","Square Selection Only",false),
- {type: "break"},
- check_button("use_seed","Use Seed:",false),
- { type: "html", id: "seed_val",
- async onRefresh(event) {
- await event.complete
- let fragment = query.html(`
- `)
- fragment.filter("input").on("change", event => {
- this.config_obj.seed_val = event.target.value;
- parent.config_obj=this.config_obj;
- this.refresh();
- })
- query(this.box).find("#tb_toolbar_item_seed_val").append(fragment)
- }
- },
- { type: "button", id: "random_seed", tooltip: "Set a random seed", icon: "fa-solid fa-dice" },
- ],
- onClick(event) {
- switch(event.target){
- case "setting":
- open_setting();
- break;
- case "upload":
- this.upload_mode=true
- document.querySelector("#overlay_container").style.pointerEvents="auto";
- this.click("canvas");
- this.click("selection");
- this.show("confirm","cancel_overlay","add_image","delete_image");
- this.enable("confirm","cancel_overlay","add_image","delete_image");
- this.disable(...upload_button_lst);
- query("#upload_file").click();
- if(this.upload_tip)
- {
- this.upload_tip=false;
- w2utils.notify("Note that only visible images will be added to canvas",{timeout:10000,where:query("#container")})
- }
- break;
- case "resize_selection":
- this.resize_mode=true;
- this.disable(...resize_button_lst);
- this.enable("confirm","cancel_overlay");
- this.show("confirm","cancel_overlay");
- window.postMessage(["resize_selection",""],"*");
- document.querySelector("#overlay_container").style.pointerEvents="auto";
- break;
- case "confirm":
- if(this.upload_mode)
- {
- export_image();
- }
- else
- {
- let sel_box=this.selection_box;
- window.postMessage(["resize_selection",sel_box.x,sel_box.y,sel_box.width,sel_box.height],"*");
- }
- case "cancel_overlay":
- end_overlay();
- this.hide("confirm","cancel_overlay","add_image","delete_image");
- if(this.upload_mode){
- this.enable(...upload_button_lst);
- }
- else
- {
- this.enable(...resize_button_lst);
- window.postMessage(["resize_selection","",""],"*");
- if(event.target=="cancel_overlay")
- {
- this.selection_box=this.selection_box_bak;
- }
- }
- if(this.selection_box)
- {
- this.setCount("resize_selection",`${Math.floor(this.selection_box.width/8)*8}x${Math.floor(this.selection_box.height/8)*8}`);
- }
- this.disable("confirm","cancel_overlay","add_image","delete_image");
- this.upload_mode=false;
- this.resize_mode=false;
- this.click("selection");
- break;
- case "add_image":
- query("#upload_file").click();
- break;
- case "delete_image":
- let active_obj = window.overlay.getActiveObject();
- if(active_obj)
- {
- window.overlay.remove(active_obj);
- window.overlay.renderAll();
- }
- else
- {
- w2utils.notify("You need to select an image first",{error:true,timeout:2000,where:query("#container")})
- }
- break;
- case "load":
- query("#upload_state").click();
- this.selection_box=null;
- this.setCount("resize_selection","");
- break;
- case "next":
- case "prev":
- window.postMessage(["outpaint", "", event.target], "*");
- break;
- case "outpaint":
- this.click("selection");
- this.disable(...outpaint_button_lst);
- this.show(...outpaint_result_lst);
- if(this.outpaint_tip)
- {
- this.outpaint_tip=false;
- w2utils.notify("The canvas stays locked until you accept/cancel current outpainting",{timeout:10000,where:query("#container")})
- }
- document.querySelector("#container").style.pointerEvents="none";
- case "retry":
- this.disable(...outpaint_result_func_lst);
- window.postMessage(["transfer",""],"*")
- break;
- case "accept":
- case "cancel":
- this.hide(...outpaint_result_lst);
- this.disable(...outpaint_result_func_lst);
- this.enable(...outpaint_button_lst);
- document.querySelector("#container").style.pointerEvents="auto";
- window.postMessage(["click", event.target],"*");
- let app=parent.document.querySelector("gradio-app");
- app=app.shadowRoot??app;
- app.querySelector("#cancel").click();
- break;
- case "eraser":
- case "selection":
- case "canvas":
- if(event.target=="eraser")
- {
- this.show("eraser_size","eraser_size_btn");
- window.overlay.freeDrawingBrush.width=this.eraser_size;
- window.overlay.isDrawingMode = true;
- }
- else
- {
- this.hide("eraser_size","eraser_size_btn");
- window.overlay.isDrawingMode = false;
- }
- if(this.upload_mode)
- {
- if(event.target=="canvas")
- {
- window.postMessage(["mode", event.target],"*")
- document.querySelector("#overlay_container").style.pointerEvents="none";
- document.querySelector("#overlay_container").style.opacity = 0.5;
- }
- else
- {
- document.querySelector("#overlay_container").style.pointerEvents="auto";
- document.querySelector("#overlay_container").style.opacity = 1.0;
- }
- }
- else
- {
- window.postMessage(["mode", event.target],"*")
- }
- break;
- case "help":
- w2popup.open({
- title: "Document",
- body: "Usage: https://github.com/lkwq007/stablediffusion-infinity/blob/master/docs/usage.md"
- })
- break;
- case "clear":
- w2confirm("Reset canvas?").yes(() => {
- window.postMessage(["click", event.target],"*");
- }).no(() => {})
- break;
- case "random_seed":
- this.config_obj.seed_val=Math.floor(Math.random() * 3000000000);
- parent.config_obj=this.config_obj;
- this.refresh();
- break;
- case "enable_img2img":
- case "use_correction":
- case "resize_check":
- case "enable_safety":
- case "use_seed":
- case "square_selection":
- let target=this.get(event.target);
- target.icon=target.checked?"fa-regular fa-square":"fa-solid fa-square-check";
- this.config_obj[event.target]=!target.checked;
- parent.config_obj=this.config_obj;
- this.refresh();
- break;
- case "save":
- case "export":
- ask_filename(event.target);
- break;
- default:
- // clear, save, export, outpaint, retry
- // break, save, export, accept, retry, outpaint
- window.postMessage(["click", event.target],"*")
- }
- console.log("Target: "+ event.target, event)
- }
-})
-window.w2ui=w2ui;
-w2ui.toolbar.config_obj={
- resize_check: true,
- enable_safety: true,
- use_correction: false,
- enable_img2img: false,
- use_seed: false,
- seed_val: 0,
- square_selection: false,
-};
-w2ui.toolbar.outpaint_tip=true;
-w2ui.toolbar.upload_tip=true;
-window.update_count=function(cur,total){
- w2ui.toolbar.sel_value=`${cur}/${total}`;
- w2ui.toolbar.refresh();
-}
-window.update_eraser=function(val,max_val){
- w2ui.toolbar.eraser_size=`${val}`;
- w2ui.toolbar.eraser_max=`${max_val}`;
- w2ui.toolbar.setCount("eraser_size_btn", `${val}`);
- w2ui.toolbar.refresh();
-}
-window.update_scale=function(val){
- w2ui.toolbar.scale_value=`${val}`;
- w2ui.toolbar.refresh();
-}
-window.enable_result_lst=function(){
- w2ui.toolbar.enable(...outpaint_result_lst);
-}
-function onObjectScaled(e)
-{
- let object = e.target;
- if(object.isType("rect"))
- {
- let width=object.getScaledWidth();
- let height=object.getScaledHeight();
- object.scale(1);
- width=Math.max(Math.min(width,window.overlay.width-object.left),256);
- height=Math.max(Math.min(height,window.overlay.height-object.top),256);
- let l=Math.max(Math.min(object.left,window.overlay.width-width-object.strokeWidth),0);
- let t=Math.max(Math.min(object.top,window.overlay.height-height-object.strokeWidth),0);
- if(window.w2ui.toolbar.config_obj.square_selection)
- {
- let max_val = Math.min(Math.max(width,height),window.overlay.width,window.overlay.height);
- width=max_val;
- height=max_val;
- }
- object.set({ width: width, height: height, left:l,top:t})
- window.w2ui.toolbar.selection_box={width: width, height: height, x:object.left, y:object.top};
- window.w2ui.toolbar.setCount("resize_selection",`${Math.floor(width/8)*8}x${Math.floor(height/8)*8}`);
- window.w2ui.toolbar.refresh();
- }
-}
-function onObjectMoved(e)
-{
- let object = e.target;
- if(object.isType("rect"))
- {
- let l=Math.max(Math.min(object.left,window.overlay.width-object.width-object.strokeWidth),0);
- let t=Math.max(Math.min(object.top,window.overlay.height-object.height-object.strokeWidth),0);
- object.set({left:l,top:t});
- window.w2ui.toolbar.selection_box={width: object.width, height: object.height, x:object.left, y:object.top};
- }
-}
-window.setup_overlay=function(width,height)
-{
- if(window.overlay)
- {
- window.overlay.setDimensions({width:width,height:height});
- let app=parent.document.querySelector("gradio-app");
- app=app.shadowRoot??app;
- app.querySelector("#sdinfframe").style.height=80+Number(height)+"px";
- document.querySelector("#container").style.height= height+"px";
- document.querySelector("#container").style.width = width+"px";
- }
- else
- {
- canvas=new fabric.Canvas("overlay_canvas");
- canvas.setDimensions({width:width,height:height});
- let app=parent.document.querySelector("gradio-app");
- app=app.shadowRoot??app;
- app.querySelector("#sdinfframe").style.height=80+Number(height)+"px";
- canvas.freeDrawingBrush = new fabric.EraserBrush(canvas);
- canvas.on("object:scaling", onObjectScaled);
- canvas.on("object:moving", onObjectMoved);
- window.overlay=canvas;
- }
- document.querySelector("#overlay_container").style.pointerEvents="none";
-}
-window.update_overlay=function(width,height)
-{
- window.overlay.setDimensions({width:width,height:height},{backstoreOnly:true});
- // document.querySelector("#overlay_container").style.pointerEvents="none";
-}
-window.adjust_selection=function(x,y,width,height)
-{
- var rect = new fabric.Rect({
- left: x,
- top: y,
- fill: "rgba(0,0,0,0)",
- strokeWidth: 3,
- stroke: "rgba(0,0,0,0.7)",
- cornerColor: "red",
- cornerStrokeColor: "red",
- borderColor: "rgba(255, 0, 0, 1.0)",
- width: width,
- height: height,
- lockRotation: true,
- });
- rect.setControlsVisibility({ mtr: false });
- window.overlay.add(rect);
- window.overlay.setActiveObject(window.overlay.item(0));
- window.w2ui.toolbar.selection_box={width: width, height: height, x:x, y:y};
- window.w2ui.toolbar.selection_box_bak={width: width, height: height, x:x, y:y};
-}
-function add_image(url)
-{
- fabric.Image.fromURL(url,function(img){
- window.overlay.add(img);
- window.overlay.setActiveObject(img);
- },{left:100,top:100});
-}
-function export_image()
-{
- data=window.overlay.toDataURL();
- document.querySelector("#upload_content").value=data;
- window.postMessage(["upload",""],"*");
- end_overlay();
-}
-function end_overlay()
-{
- window.overlay.clear();
- document.querySelector("#overlay_container").style.opacity = 1.0;
- document.querySelector("#overlay_container").style.pointerEvents="none";
-}
-function ask_filename(target)
-{
- w2prompt({
- label: "Enter filename",
- value: `outpaint_${((new Date(Date.now() -(new Date()).getTimezoneOffset() * 60000))).toISOString().replace("T","_").replace(/[^0-9_]/g, "").substring(0,15)}`,
- })
- .change((event) => {
- console.log("change", event.detail.originalEvent.target.value);
- })
- .ok((event) => {
- console.log("value=", event.detail.value);
- window.postMessage(["click",target,event.detail.value],"*");
- })
- .cancel((event) => {
- console.log("cancel");
- });
-}
-
-document.querySelector("#container").addEventListener("wheel",(e)=>{e.preventDefault()})
-window.setup_shortcut=function(json)
-{
- var config=JSON.parse(json);
- var key_map={};
- Object.keys(config.shortcut).forEach(k=>{
- key_map[config.shortcut[k]]=k;
- })
- document.addEventListener("keydown",(e)=>{
- if(e.target.tagName!="INPUT")
- {
- let key=e.key;
- if(e.ctrlKey)
- {
- key="Ctrl+"+e.key;
- if(key in key_map)
- {
- e.preventDefault();
- }
- }
- if(key in key_map)
- {
- w2ui.toolbar.click(key_map[key]);
- }
- }
- })
-}
\ No newline at end of file
diff --git a/spaces/GipAdonimus/Real-Time-Voice-Cloning/vocoder/vocoder_dataset.py b/spaces/GipAdonimus/Real-Time-Voice-Cloning/vocoder/vocoder_dataset.py
deleted file mode 100644
index 9eae1b5f20117feef0a06e264a99b3c0c6143bac..0000000000000000000000000000000000000000
--- a/spaces/GipAdonimus/Real-Time-Voice-Cloning/vocoder/vocoder_dataset.py
+++ /dev/null
@@ -1,84 +0,0 @@
-from torch.utils.data import Dataset
-from pathlib import Path
-from vocoder import audio
-import vocoder.hparams as hp
-import numpy as np
-import torch
-
-
-class VocoderDataset(Dataset):
- def __init__(self, metadata_fpath: Path, mel_dir: Path, wav_dir: Path):
- print("Using inputs from:\n\t%s\n\t%s\n\t%s" % (metadata_fpath, mel_dir, wav_dir))
-
- with metadata_fpath.open("r") as metadata_file:
- metadata = [line.split("|") for line in metadata_file]
-
- gta_fnames = [x[1] for x in metadata if int(x[4])]
- gta_fpaths = [mel_dir.joinpath(fname) for fname in gta_fnames]
- wav_fnames = [x[0] for x in metadata if int(x[4])]
- wav_fpaths = [wav_dir.joinpath(fname) for fname in wav_fnames]
- self.samples_fpaths = list(zip(gta_fpaths, wav_fpaths))
-
- print("Found %d samples" % len(self.samples_fpaths))
-
- def __getitem__(self, index):
- mel_path, wav_path = self.samples_fpaths[index]
-
- # Load the mel spectrogram and adjust its range to [-1, 1]
- mel = np.load(mel_path).T.astype(np.float32) / hp.mel_max_abs_value
-
- # Load the wav
- wav = np.load(wav_path)
- if hp.apply_preemphasis:
- wav = audio.pre_emphasis(wav)
- wav = np.clip(wav, -1, 1)
-
- # Fix for missing padding # TODO: settle on whether this is of any use
- r_pad = (len(wav) // hp.hop_length + 1) * hp.hop_length - len(wav)
- wav = np.pad(wav, (0, r_pad), mode='constant')
- assert len(wav) >= mel.shape[1] * hp.hop_length
- wav = wav[:mel.shape[1] * hp.hop_length]
- assert len(wav) % hp.hop_length == 0
-
- # Quantize the wav
- if hp.voc_mode == 'RAW':
- if hp.mu_law:
- quant = audio.encode_mu_law(wav, mu=2 ** hp.bits)
- else:
- quant = audio.float_2_label(wav, bits=hp.bits)
- elif hp.voc_mode == 'MOL':
- quant = audio.float_2_label(wav, bits=16)
-
- return mel.astype(np.float32), quant.astype(np.int64)
-
- def __len__(self):
- return len(self.samples_fpaths)
-
-
-def collate_vocoder(batch):
- mel_win = hp.voc_seq_len // hp.hop_length + 2 * hp.voc_pad
- max_offsets = [x[0].shape[-1] - 2 - (mel_win + 2 * hp.voc_pad) for x in batch]
- mel_offsets = [np.random.randint(0, offset) for offset in max_offsets]
- sig_offsets = [(offset + hp.voc_pad) * hp.hop_length for offset in mel_offsets]
-
- mels = [x[0][:, mel_offsets[i]:mel_offsets[i] + mel_win] for i, x in enumerate(batch)]
-
- labels = [x[1][sig_offsets[i]:sig_offsets[i] + hp.voc_seq_len + 1] for i, x in enumerate(batch)]
-
- mels = np.stack(mels).astype(np.float32)
- labels = np.stack(labels).astype(np.int64)
-
- mels = torch.tensor(mels)
- labels = torch.tensor(labels).long()
-
- x = labels[:, :hp.voc_seq_len]
- y = labels[:, 1:]
-
- bits = 16 if hp.voc_mode == 'MOL' else hp.bits
-
- x = audio.label_2_float(x.float(), bits)
-
- if hp.voc_mode == 'MOL':
- y = audio.label_2_float(y.float(), bits)
-
- return x, y, mels
\ No newline at end of file
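
A minimal wiring sketch for the dataset and collate function above. The metadata, mel and wav paths are placeholders that must point at preprocessed synthesizer outputs, and the import path assumes the repository layout of this file.

```python
from pathlib import Path
from torch.utils.data import DataLoader
from vocoder.vocoder_dataset import VocoderDataset, collate_vocoder

dataset = VocoderDataset(
    metadata_fpath=Path("synthesizer/train.txt"),   # placeholder metadata file
    mel_dir=Path("synthesizer/mels_gta"),           # placeholder GTA mel directory
    wav_dir=Path("synthesizer/audio"),              # placeholder wav directory
)
loader = DataLoader(dataset, batch_size=16, shuffle=True, collate_fn=collate_vocoder)

x, y, mels = next(iter(loader))   # coarse inputs, shifted targets, conditioning mels
print(x.shape, y.shape, mels.shape)
```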
diff --git a/spaces/Gradio-Blocks/latent_gpt2_story/README.md b/spaces/Gradio-Blocks/latent_gpt2_story/README.md
deleted file mode 100644
index 91d6bbc16731006cb281da14aaaabe3928d36587..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/latent_gpt2_story/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
----
-title: Latent GPT2 Story
-emoji: ⚡
-colorFrom: pink
-colorTo: purple
-sdk: gradio
-sdk_version: 3.0.2
-app_file: app.py
-pinned: false
----
-
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/schedules/schedule_2x.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/schedules/schedule_2x.py
deleted file mode 100644
index 69dc9ee8080649ce3646b5775b0ca2e9c863d0f5..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/_base_/schedules/schedule_2x.py
+++ /dev/null
@@ -1,11 +0,0 @@
-# optimizer
-optimizer = dict(type='SGD', lr=0.02, momentum=0.9, weight_decay=0.0001)
-optimizer_config = dict(grad_clip=None)
-# learning policy
-lr_config = dict(
- policy='step',
- warmup='linear',
- warmup_iters=500,
- warmup_ratio=0.001,
- step=[16, 22])
-runner = dict(type='EpochBasedRunner', max_epochs=24)
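
The config above encodes a "2x" schedule: SGD at a base learning rate of 0.02 with a 500-iteration linear warmup starting from 0.001 of the base rate, then step decays at epochs 16 and 22 over 24 total epochs. The sketch below is purely illustrative of that shape; MMDetection's runner and LR hooks apply it internally, and the 0.1 decay factor is an assumed mmcv default rather than something stated in this file.

```python
BASE_LR, GAMMA, STEPS = 0.02, 0.1, (16, 22)   # GAMMA is an assumed default step factor
WARMUP_ITERS, WARMUP_RATIO = 500, 0.001

def lr_at(epoch: int, iteration: int) -> float:
    """Approximate learning rate for a given epoch / global iteration under this schedule."""
    if iteration < WARMUP_ITERS:                       # linear warmup phase
        frac = iteration / WARMUP_ITERS
        return BASE_LR * (WARMUP_RATIO + (1.0 - WARMUP_RATIO) * frac)
    return BASE_LR * GAMMA ** sum(epoch >= s for s in STEPS)

for epoch, iteration in [(0, 100), (10, 5_000), (17, 50_000), (23, 90_000)]:
    print(epoch, round(lr_at(epoch, iteration), 6))    # 0.02 -> 0.002 -> 0.0002
```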
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/instaboost/README.md b/spaces/Gradio-Blocks/uniformer_image_detection/configs/instaboost/README.md
deleted file mode 100644
index 5ab74a1af13639fef753dbfd43f064400cba9129..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/instaboost/README.md
+++ /dev/null
@@ -1,44 +0,0 @@
-# InstaBoost for MMDetection
-
-[ALGORITHM]
-
-The configs in this directory are the implementation of the ICCV 2019 paper "InstaBoost: Boosting Instance Segmentation Via Probability Map Guided Copy-Pasting" and are provided by the authors of the paper. InstaBoost is a data augmentation method for object detection and instance segmentation. The paper has been released on [`arXiv`](https://arxiv.org/abs/1908.07801).
-
-```latex
-@inproceedings{fang2019instaboost,
- title={Instaboost: Boosting instance segmentation via probability map guided copy-pasting},
- author={Fang, Hao-Shu and Sun, Jianhua and Wang, Runzhong and Gou, Minghao and Li, Yong-Lu and Lu, Cewu},
- booktitle={Proceedings of the IEEE International Conference on Computer Vision},
- pages={682--691},
- year={2019}
-}
-```
-
-## Usage
-
-### Requirements
-
-You need to install `instaboostfast` before using it.
-
-```shell
-pip install instaboostfast
-```
-
-The code and more details can be found [here](https://github.com/GothicAi/Instaboost).
-
-### Integration with MMDetection
-
-InstaBoost has already been integrated into the data pipeline, so all you need to do is add or change the **InstaBoost** configuration after **LoadImageFromFile**. We have provided examples like [this](mask_rcnn_r50_fpn_instaboost_4x#L121), and a hedged pipeline sketch is shown after the results table below. You can refer to [`InstaBoostConfig`](https://github.com/GothicAi/InstaBoost-pypi#instaboostconfig) for more details.
-
-## Results and Models
-
-- All models were trained on `coco_2017_train` and tested on `coco_2017_val` for convenience of evaluation and comparison. In the paper, the results are obtained from `test-dev`.
-- To balance accuracy and training time when using InstaBoost, the models released on this page are all trained for 48 epochs. Other training and testing configs strictly follow the original framework.
-- For results and models in MMDetection V1.x, please refer to [Instaboost](https://github.com/GothicAi/Instaboost).
-
-| Network | Backbone | Lr schd | Mem (GB) | Inf time (fps) | box AP | mask AP | Config | Download |
-| :-------------: | :--------: | :-----: | :------: | :------------: | :------:| :-----: | :------: | :-----------------: |
-| Mask R-CNN | R-50-FPN | 4x | 4.4 | 17.5 | 40.6 | 36.6 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/instaboost/mask_rcnn_r50_fpn_instaboost_4x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/instaboost/mask_rcnn_r50_fpn_instaboost_4x_coco/mask_rcnn_r50_fpn_instaboost_4x_coco_20200307-d025f83a.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/instaboost/mask_rcnn_r50_fpn_instaboost_4x_coco/mask_rcnn_r50_fpn_instaboost_4x_coco_20200307_223635.log.json) |
-| Mask R-CNN | R-101-FPN | 4x | 6.4 | | 42.5 | 38.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/instaboost/mask_rcnn_r101_fpn_instaboost_4x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/instaboost/mask_rcnn_r101_fpn_instaboost_4x_coco/mask_rcnn_r101_fpn_instaboost_4x_coco_20200703_235738-f23f3a5f.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/instaboost/mask_rcnn_r101_fpn_instaboost_4x_coco/mask_rcnn_r101_fpn_instaboost_4x_coco_20200703_235738.log.json) |
-| Mask R-CNN | X-101-64x4d-FPN | 4x | 10.7 | | 44.7 | 39.7 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/instaboost/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/instaboost/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco_20200515_080947-8ed58c1b.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/instaboost/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco/mask_rcnn_x101_64x4d_fpn_instaboost_4x_coco_20200515_080947.log.json) |
-| Cascade R-CNN | R-101-FPN | 4x | 6.0 | 12.0 | 43.7 | 38.0 | [config](https://github.com/open-mmlab/mmdetection/tree/master/configs/instaboost/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco.py) | [model](http://download.openmmlab.com/mmdetection/v2.0/instaboost/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco_20200307-c19d98d9.pth) | [log](http://download.openmmlab.com/mmdetection/v2.0/instaboost/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco/cascade_mask_rcnn_r50_fpn_instaboost_4x_coco_20200307_223646.log.json) |
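
A hedged configuration sketch of the integration described above (see the usage note before the results table): InstaBoost is inserted into the training pipeline immediately after `LoadImageFromFile`. The `aug_ratio` value is illustrative, the remaining InstaBoost parameters are left at their `InstaBoostConfig` defaults, and the linked instaboost configs in the repository are the authoritative reference.

```python
# Sketch of an MMDetection train pipeline with InstaBoost enabled (values illustrative).
train_pipeline = [
    dict(type='LoadImageFromFile'),
    dict(type='InstaBoost', aug_ratio=0.5),   # probability-map guided copy-pasting
    dict(type='LoadAnnotations', with_bbox=True, with_mask=True),
    dict(type='Resize', img_scale=(1333, 800), keep_ratio=True),
    dict(type='RandomFlip', flip_ratio=0.5),
    dict(type='Normalize', mean=[123.675, 116.28, 103.53],
         std=[58.395, 57.12, 57.375], to_rgb=True),
    dict(type='Pad', size_divisor=32),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']),
]
```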
diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/retinanet_regnetx-800MF_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/retinanet_regnetx-800MF_fpn_1x_coco.py
deleted file mode 100644
index fe1d659f1a58ddb6e662d74a41c77005d2ee0638..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/retinanet_regnetx-800MF_fpn_1x_coco.py
+++ /dev/null
@@ -1,16 +0,0 @@
-_base_ = './retinanet_regnetx-3.2GF_fpn_1x_coco.py'
-model = dict(
- pretrained='open-mmlab://regnetx_800mf',
- backbone=dict(
- type='RegNet',
- arch='regnetx_800mf',
- out_indices=(0, 1, 2, 3),
- frozen_stages=1,
- norm_cfg=dict(type='BN', requires_grad=True),
- norm_eval=True,
- style='pytorch'),
- neck=dict(
- type='FPN',
- in_channels=[64, 128, 288, 672],
- out_channels=256,
- num_outs=5))
diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x1024_40k_cityscapes.py
deleted file mode 100644
index 7918dd10d05cd98dbc02f02ef1b93e3134f52357..0000000000000000000000000000000000000000
--- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/fcn/fcn_r101-d8_512x1024_40k_cityscapes.py
+++ /dev/null
@@ -1,2 +0,0 @@
-_base_ = './fcn_r50-d8_512x1024_40k_cityscapes.py'
-model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101))
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/language_model/prepare-wikitext-103.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/language_model/prepare-wikitext-103.sh
deleted file mode 100644
index 751302156f0a6829af9c2ee5e0e2ca62c2cd4187..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/language_model/prepare-wikitext-103.sh
+++ /dev/null
@@ -1,33 +0,0 @@
-#!/bin/bash
-# Adapted from https://github.com/facebookresearch/MIXER/blob/master/prepareData.sh
-
-URLS=(
- "https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-103-v1.zip"
-)
-FILES=(
- "wikitext-103-v1.zip"
-)
-
-for ((i=0;i<${#URLS[@]};++i)); do
- file=${FILES[i]}
- if [ -f $file ]; then
- echo "$file already exists, skipping download"
- else
- url=${URLS[i]}
- wget "$url"
- if [ -f $file ]; then
- echo "$url successfully downloaded."
- else
- echo "$url not successfully downloaded."
- exit -1
- fi
- if [ ${file: -4} == ".tgz" ]; then
- tar zxvf $file
- elif [ ${file: -4} == ".tar" ]; then
- tar xvf $file
- elif [ ${file: -4} == ".zip" ]; then
- unzip $file
- fi
- fi
-done
-cd ..
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/download_af_xh.sh b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/download_af_xh.sh
deleted file mode 100644
index a78fbbbbccb6f6ae005a1f03b97f083a2d958ebe..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/multilingual/data_scripts/download_af_xh.sh
+++ /dev/null
@@ -1,164 +0,0 @@
-#!/bin/bash
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-#
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-# set -x -e
-
-if [ -z "$WORKDIR_ROOT" ] ;
-then
- echo "please specify your working directory root in the environment variable WORKDIR_ROOT. Exiting..."
- exit
-fi
-
-
-# put intermediate files
-TMP_DIR=$WORKDIR_ROOT/temp/af_xhv2
-# output {train,valid,test} files to dest
-DEST=${WORKDIR_ROOT}/ML50/raw
-
-
-
-ROOT=${WORKDIR_ROOT}
-UTILS=$PWD/utils
-TMX2CORPUS="${UTILS}/tmx2corpus"
-TMX_TOOL="python ${TMX2CORPUS}/tmx2corpus.py"
-
-mkdir -p $TMP_DIR
-mkdir -p $DEST
-mkdir -p $UTILS
-
-function download_opus(){
- src=$1
- tgt=$2
- subset=$3
- url=$4
-
- mkdir extract_$subset.$src-$tgt
- pushd extract_$subset.$src-$tgt
- if [ ! -f "$subset.$src-$tgt.tmx.gz" ]; then
- wget $url -O "$subset.$src-$tgt.tmx.gz"
- gzip -d "$subset.$src-$tgt.tmx.gz"
- f=$subset.$src-$tgt.tmx
- $TMX_TOOL $f
- mv bitext.$src ../$subset.$src-$tgt.$src
- mv bitext.$tgt ../$subset.$src-$tgt.$tgt
- fi
- popd
-}
-
-function concat_subsets(){
- src=$1
- tgt=$2
- subsets=$3
- src_train=raw_train.$src-$tgt.$src
- tgt_train=raw_train.$src-$tgt.$tgt
- > $src_train
- > $tgt_train
- for subset in $subsets; do
- cat $subset.$src-$tgt.$src >> $src_train
- cat $subset.$src-$tgt.$tgt >> $tgt_train
- done
-}
-
-
-
-function get_seeded_random()
-{
- seed="$1"
- openssl enc -aes-256-ctr -pass pass:"$seed" -nosalt \
- </dev/zero 2>/dev/null
-}
-
-function split_train_valid(){
- src=$1
- tgt=$2
- raw_src_train=raw_train.$src-$tgt.$src
- raw_tgt_train=raw_train.$src-$tgt.$tgt
-
- shuf --random-source=<(get_seeded_random 43) $raw_src_train > shuffled.$src-$tgt.$src
- shuf --random-source=<(get_seeded_random 43) $raw_tgt_train > shuffled.$src-$tgt.$tgt
-
- head -n 1500 shuffled.$src-$tgt.$src > valid.$src-$tgt.$src
- head -n 1500 shuffled.$src-$tgt.$tgt > valid.$src-$tgt.$tgt
-
- tail +1501 shuffled.$src-$tgt.$src > train.$src-$tgt.$src
- tail +1501 shuffled.$src-$tgt.$tgt > train.$src-$tgt.$tgt
-}
-
-function copy2dst(){
- lsrc=$1
- ltgt=$2
- src=${lsrc:0:2}
- tgt=${ltgt:0:2}
-
-
- cp valid.$src-$tgt.$src $DEST/valid.$lsrc-$ltgt.$lsrc
- cp valid.$src-$tgt.$tgt $DEST/valid.$lsrc-$ltgt.$ltgt
-
- cp train.$src-$tgt.$src $DEST/train.$lsrc-$ltgt.$lsrc
- cp train.$src-$tgt.$tgt $DEST/train.$lsrc-$ltgt.$ltgt
-}
-
-
-
-
-#for xh-en
-declare -A xh_en_urls
-xh_en_urls=(
- [Tatoeba]=https://object.pouta.csc.fi/OPUS-Tatoeba/v20190709/tmx/en-xh.tmx.gz
- [wikimedia]=https://object.pouta.csc.fi/OPUS-wikimedia/v20190628/tmx/en-xh.tmx.gz
- [memat]=https://object.pouta.csc.fi/OPUS-memat/v1/tmx/en-xh.tmx.gz
- [uedin]=https://object.pouta.csc.fi/OPUS-bible-uedin/v1/tmx/en-xh.tmx.gz
- [GNOME]=https://object.pouta.csc.fi/OPUS-GNOME/v1/tmx/en-xh.tmx.gz
- [XhosaNavy]=https://object.pouta.csc.fi/OPUS-XhosaNavy/v1/tmx/en-xh.tmx.gz
- [KDE4]=https://object.pouta.csc.fi/OPUS-KDE4/v2/tmx/en-xh.tmx.gz
- [Ubuntu]=https://object.pouta.csc.fi/OPUS-Ubuntu/v14.10/tmx/en-xh.tmx.gz
-)
-
-mkdir $TMP_DIR/xh-en
-pushd $TMP_DIR/xh-en
-for k in "${!xh_en_urls[@]}"
-do
- name=$k
- url=${xh_en_urls[$k]}
- echo "$name: $url"
- download_opus xh en $name $url
-done
-concat_subsets xh en "${!xh_en_urls[@]}"
-split_train_valid xh en
-copy2dst xh_ZA en_XX
-popd
-
-
-##
-#for af-en
-declare -A af_en_urls
-af_en_urls=(
- [Tatoeba]=https://object.pouta.csc.fi/OPUS-Tatoeba/v20190709/tmx/af-en.tmx.gz
- [uedin]=https://object.pouta.csc.fi/OPUS-bible-uedin/v1/tmx/af-en.tmx.gz
- [GNOME]=https://object.pouta.csc.fi/OPUS-GNOME/v1/tmx/af-en.tmx.gz
- [QED]=https://object.pouta.csc.fi/OPUS-QED/v2.0a/tmx/af-en.tmx.gz
- [KDE4]=https://object.pouta.csc.fi/OPUS-KDE4/v2/tmx/af-en.tmx.gz
- [OpenSubtitles]=https://object.pouta.csc.fi/OPUS-OpenSubtitles/v2018/tmx/af-en.tmx.gz
- [SPC]=https://object.pouta.csc.fi/OPUS-SPC/v1/tmx/af-en.tmx.gz
- [Ubuntu]=https://object.pouta.csc.fi/OPUS-Ubuntu/v14.10/tmx/af-en.tmx.gz
-)
-
-mkdir $TMP_DIR/af-en
-pushd $TMP_DIR/af-en
-for k in "${!af_en_urls[@]}"
-do
- name=$k
- url=${af_en_urls[$k]}
- echo "$name: $url"
- download_opus af en $name $url
-done
-concat_subsets af en "${!af_en_urls[@]}"
-split_train_valid af en
-copy2dst af_ZA en_XX
-popd
-
-
diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py
deleted file mode 100644
index e7e597f4749c591b057d776aacec39b44d99c037..0000000000000000000000000000000000000000
--- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/modules/lightconv_layer/lightconv_layer.py
+++ /dev/null
@@ -1,137 +0,0 @@
-# Copyright (c) Facebook, Inc. and its affiliates.
-#
-# This source code is licensed under the MIT license found in the
-# LICENSE file in the root directory of this source tree.
-
-import lightconv_cuda
-import torch
-import torch.nn.functional as F
-from fairseq import utils
-from fairseq.incremental_decoding_utils import with_incremental_state
-from fairseq.modules.fairseq_dropout import FairseqDropout
-from torch import nn
-from torch.autograd import Function
-
-
-class lightconvFunction(Function):
- @staticmethod
- def forward(ctx, x, weights, padding_l):
- ctx.padding_l = padding_l
- outputs = lightconv_cuda.forward(x, weights, padding_l)
- variables = [x, weights]
- ctx.save_for_backward(*variables)
- return outputs[0]
-
- @staticmethod
- def backward(ctx, grad_output):
- outputs = lightconv_cuda.backward(
- grad_output.contiguous(), ctx.padding_l, *ctx.saved_tensors
- )
- grad_input, grad_weights = outputs
- return grad_input, grad_weights, None
-
-
-@with_incremental_state
-class LightconvLayer(nn.Module):
- def __init__(
- self,
- input_size,
- kernel_size=1,
- padding_l=None,
- weight_softmax=False,
- num_heads=1,
- weight_dropout=0.0,
- bias=False,
- ):
- super(LightconvLayer, self).__init__()
- self.input_size = input_size
- self.kernel_size = kernel_size
- self.padding_l = padding_l
- self.num_heads = num_heads
- self.weight_softmax = weight_softmax
- self.weight_dropout_module = FairseqDropout(
- weight_dropout, module_name=self.__class__.__name__
- )
-
- self.weight = nn.Parameter(torch.Tensor(num_heads, kernel_size))
- if bias:
- self.bias = nn.Parameter(torch.Tensor(input_size))
- else:
- self.bias = None
- self.reset_parameters()
-
- def upgrade_state_dict_named(self, state_dict, name):
- prefix = name + "." if name != "" else ""
- for k, v in state_dict.items():
- if k.endswith(prefix + "weight"):
- if v.dim() == 3 and v.size(1) == 1:
- state_dict[k] = v.squeeze(1)
-
- def reset_parameters(self):
- nn.init.xavier_uniform_(self.weight)
- if self.bias is not None:
- nn.init.constant_(self.bias, 0.0)
-
- def forward(self, x, incremental_state=None):
-
- # during inference time, incremental BMM is faster
- if incremental_state is not None:
- T, B, C = x.size()
- K, H = self.kernel_size, self.num_heads
- R = C // H
- input_buffer = self._get_input_buffer(incremental_state)
- if input_buffer is None:
- input_buffer = x.new()
- x_unfold = torch.cat([input_buffer, x.unsqueeze(3)], dim=3)
- if self.kernel_size > 1:
- self._set_input_buffer(
- incremental_state, x_unfold[:, :, :, -self.kernel_size + 1 :]
- )
- x_unfold = x_unfold.view(T * B * H, R, -1)
-
- weight = self.weight
- if self.weight_softmax:
- weight = F.softmax(weight.float(), dim=1).type_as(weight)
-
- weight = weight[:, -x_unfold.size(2) :]
-
- K = weight.size(1)
-
- weight = (
- weight.view(1, H, K)
- .expand(T * B, H, K)
- .contiguous()
- .view(T * B * H, K, 1)
- )
-
- weight = self.weight_dropout_module(weight)
- output = torch.bmm(x_unfold, weight) # T*B*H x R x 1
- output = output.view(T, B, C)
- return output
-
- # during training time, use CUDA kernel
- else:
- x = x.permute(1, 2, 0).contiguous()
- weight = self.weight
- if self.weight_softmax:
- weight = F.softmax(self.weight, -1)
- if self.weight_dropout_module.p:
- weight = self.weight_dropout_module(weight)
- return lightconvFunction.apply(x, weight, self.padding_l).permute(2, 0, 1)
-
- def reorder_incremental_state(self, incremental_state, new_order):
- input_buffer = self._get_input_buffer(incremental_state)
- if input_buffer is not None:
- input_buffer = input_buffer.index_select(1, new_order)
- self._set_input_buffer(incremental_state, input_buffer)
-
- def _get_input_buffer(self, incremental_state):
- return utils.get_incremental_state(self, incremental_state, "input_buffer")
-
- def _set_input_buffer(self, incremental_state, new_buffer):
- return utils.set_incremental_state(
- self, incremental_state, "input_buffer", new_buffer
- )
-
- def half(self):
- return self._apply(lambda t: t.half() if t.is_floating_point() else t)
diff --git a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/generate_mels.py b/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/generate_mels.py
deleted file mode 100644
index 6072459ee33bb8fd4ec1a8d0238c05d17fe33ce3..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Hindi_TTS/vakyansh_tts/src/glow_tts/generate_mels.py
+++ /dev/null
@@ -1,70 +0,0 @@
-import numpy as np
-import os
-import torch
-import commons
-
-import models
-import utils
-from argparse import ArgumentParser
-from tqdm import tqdm
-from processtext import text_to_sequence
-
-if __name__ == "__main__":
- parser = ArgumentParser()
- parser.add_argument("-m", "--model_dir", required=True, type=str)
- parser.add_argument("-s", "--mels_dir", required=True, type=str)
- args = parser.parse_args()
- MODEL_DIR = args.model_dir # path to model dir
- SAVE_MELS_DIR = args.mels_dir # path to save generated mels
-
- if not os.path.exists(SAVE_MELS_DIR):
- os.makedirs(SAVE_MELS_DIR)
-
- hps = utils.get_hparams_from_dir(MODEL_DIR)
- symbols = list(hps.data.punc) + list(hps.data.chars)
- checkpoint_path = utils.latest_checkpoint_path(MODEL_DIR)
- cleaner = hps.data.text_cleaners
-
- model = models.FlowGenerator(
- len(symbols) + getattr(hps.data, "add_blank", False),
- out_channels=hps.data.n_mel_channels,
- **hps.model
- ).to("cuda")
-
- utils.load_checkpoint(checkpoint_path, model)
- model.decoder.store_inverse() # do not calculate jacobians for fast decoding
- _ = model.eval()
-
- def get_mel(text, fpath):
- if getattr(hps.data, "add_blank", False):
- text_norm = text_to_sequence(symbols, text, cleaner)
- text_norm = commons.intersperse(text_norm, len(symbols))
- else: # If not using "add_blank" option during training, adding spaces at the beginning and the end of utterance improves quality
- text = " " + text.strip() + " "
- text_norm = text_to_sequence(symbols, text, cleaner)
-
- sequence = np.array(text_norm)[None, :]
-
- x_tst = torch.autograd.Variable(torch.from_numpy(sequence)).cuda().long()
- x_tst_lengths = torch.tensor([x_tst.shape[1]]).cuda()
-
- with torch.no_grad():
- noise_scale = 0.667
- length_scale = 1.0
- (y_gen_tst, *_), *_, (attn_gen, *_) = model(
- x_tst,
- x_tst_lengths,
- gen=True,
- noise_scale=noise_scale,
- length_scale=length_scale,
- )
-
- np.save(os.path.join(SAVE_MELS_DIR, fpath), y_gen_tst.cpu().detach().numpy())
-
- for f in [hps.data.training_files, hps.data.validation_files]:
- file_lines = open(f).read().splitlines()
-
- for line in tqdm(file_lines):
- fname, text = line.split("|")
- fname = os.path.basename(fname).replace(".wav", ".npy")
- get_mel(text, fname)
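
`get_mel` above relies on `commons.intersperse` when the model was trained with `add_blank`: a blank token id (the script uses `len(symbols)`) is placed between and around every symbol id. The sketch below reimplements that interspersing in isolation so the behaviour is visible; the token ids are made up, it mirrors how `intersperse` is typically implemented in these glow-TTS codebases, and the real script should keep using `commons.intersperse`.

```python
def intersperse(sequence, item):
    """Return `sequence` with `item` inserted between and around every element."""
    result = [item] * (len(sequence) * 2 + 1)
    result[1::2] = sequence
    return result

text_ids = [7, 3, 12]   # pretend symbol ids for a short utterance
blank_id = 40           # pretend blank id; the script uses len(symbols)
print(intersperse(text_ids, blank_id))   # [40, 7, 40, 3, 40, 12, 40]
```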
diff --git a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/hifi/env.py b/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/hifi/env.py
deleted file mode 100644
index 2bdbc95d4f7a8bad8fd4f5eef657e2b51d946056..0000000000000000000000000000000000000000
--- a/spaces/Harveenchadha/Vakyansh-Hindi-TTS/ttsv/src/glow_tts/hifi/env.py
+++ /dev/null
@@ -1,15 +0,0 @@
-import os
-import shutil
-
-
-class AttrDict(dict):
- def __init__(self, *args, **kwargs):
- super(AttrDict, self).__init__(*args, **kwargs)
- self.__dict__ = self
-
-
-def build_env(config, config_name, path):
- t_path = os.path.join(path, config_name)
- if config != t_path:
- os.makedirs(path, exist_ok=True)
- shutil.copyfile(config, os.path.join(path, config_name))
diff --git a/spaces/Hila/RobustViT/SegmentationTest/imagenet_seg_eval.py b/spaces/Hila/RobustViT/SegmentationTest/imagenet_seg_eval.py
deleted file mode 100644
index f20c23fe9c2c798d2b8ce47681c1fc4be9a68deb..0000000000000000000000000000000000000000
--- a/spaces/Hila/RobustViT/SegmentationTest/imagenet_seg_eval.py
+++ /dev/null
@@ -1,319 +0,0 @@
-import numpy as np
-import torch
-import torchvision.transforms as transforms
-from torch.utils.data import DataLoader
-from numpy import *
-import argparse
-from PIL import Image
-import imageio
-import os
-from tqdm import tqdm
-from SegmentationTest.utils.metrices import *
-
-from SegmentationTest.utils import render
-from SegmentationTest.utils.saver import Saver
-from SegmentationTest.utils.iou import IoU
-
-from SegmentationTest.data.Imagenet import Imagenet_Segmentation
-
-# Uncomment the expected model below
-
-# ViT
-from ViT.ViT import vit_base_patch16_224 as vit
-# from ViT.ViT import vit_large_patch16_224 as vit
-
-# ViT-AugReg
-# from ViT.ViT_new import vit_small_patch16_224 as vit
-# from ViT.ViT_new import vit_base_patch16_224 as vit
-# from ViT.ViT_new import vit_large_patch16_224 as vit
-
-# DeiT
-# from ViT.ViT import deit_base_patch16_224 as vit
-# from ViT.ViT import deit_small_patch16_224 as vit
-
-
-from ViT.explainer import generate_relevance, get_image_with_relevance
-
-from sklearn.metrics import precision_recall_curve
-import matplotlib.pyplot as plt
-
-import torch.nn.functional as F
-
-import warnings
-warnings.filterwarnings("ignore")
-
-plt.switch_backend('agg')
-
-# hyperparameters
-num_workers = 0
-batch_size = 1
-
-cls = ['airplane',
- 'bicycle',
- 'bird',
- 'boat',
- 'bottle',
- 'bus',
- 'car',
- 'cat',
- 'chair',
- 'cow',
- 'dining table',
- 'dog',
- 'horse',
- 'motorbike',
- 'person',
- 'potted plant',
- 'sheep',
- 'sofa',
- 'train',
- 'tv'
- ]
-
-# Args
-parser = argparse.ArgumentParser(description='Training multi-class classifier')
-parser.add_argument('--arc', type=str, default='vgg', metavar='N',
- help='Model architecture')
-parser.add_argument('--train_dataset', type=str, default='imagenet', metavar='N',
- help='Testing Dataset')
-parser.add_argument('--method', type=str,
- default='grad_rollout',
- choices=['rollout', 'lrp', 'transformer_attribution', 'full_lrp', 'lrp_last_layer',
- 'attn_last_layer', 'attn_gradcam'],
- help='')
-parser.add_argument('--thr', type=float, default=0.,
- help='threshold')
-parser.add_argument('--K', type=int, default=1,
- help='new - top K results')
-parser.add_argument('--save-img', action='store_true',
- default=False,
- help='')
-parser.add_argument('--no-ia', action='store_true',
- default=False,
- help='')
-parser.add_argument('--no-fx', action='store_true',
- default=False,
- help='')
-parser.add_argument('--no-fgx', action='store_true',
- default=False,
- help='')
-parser.add_argument('--no-m', action='store_true',
- default=False,
- help='')
-parser.add_argument('--no-reg', action='store_true',
- default=False,
- help='')
-parser.add_argument('--is-ablation', type=bool,
- default=False,
- help='')
-parser.add_argument('--imagenet-seg-path', type=str, required=True)
-parser.add_argument('--checkpoint', default='', type=str, metavar='PATH',
- help='path to latest checkpoint (default: none)')
-args = parser.parse_args()
-
-args.checkname = args.method + '_' + args.arc
-
-alpha = 2
-
-cuda = torch.cuda.is_available()
-device = torch.device("cuda" if cuda else "cpu")
-
-# Define Saver
-saver = Saver(args)
-saver.results_dir = os.path.join(saver.experiment_dir, 'results')
-if not os.path.exists(saver.results_dir):
- os.makedirs(saver.results_dir)
-if not os.path.exists(os.path.join(saver.results_dir, 'input')):
- os.makedirs(os.path.join(saver.results_dir, 'input'))
-if not os.path.exists(os.path.join(saver.results_dir, 'explain')):
- os.makedirs(os.path.join(saver.results_dir, 'explain'))
-
-args.exp_img_path = os.path.join(saver.results_dir, 'explain/img')
-if not os.path.exists(args.exp_img_path):
- os.makedirs(args.exp_img_path)
-args.exp_np_path = os.path.join(saver.results_dir, 'explain/np')
-if not os.path.exists(args.exp_np_path):
- os.makedirs(args.exp_np_path)
-
-# Data
-normalize = transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])
-test_img_trans = transforms.Compose([
- transforms.Resize((224, 224)),
- transforms.ToTensor(),
- normalize,
-])
-test_lbl_trans = transforms.Compose([
- transforms.Resize((224, 224), Image.NEAREST),
-])
-
-ds = Imagenet_Segmentation(args.imagenet_seg_path,
- transform=test_img_trans, target_transform=test_lbl_trans)
-dl = DataLoader(ds, batch_size=batch_size, shuffle=False, num_workers=1, drop_last=False)
-
-# Model
-if args.checkpoint:
- print(f"loading model from checkpoint {args.checkpoint}")
- model = vit().cuda()
- checkpoint = torch.load(args.checkpoint)
- model.load_state_dict(checkpoint['state_dict'])
-else:
- model = vit(pretrained=True).cuda()
-
-metric = IoU(2, ignore_index=-1)
-
-iterator = tqdm(dl)
-
-model.eval()
-
-
-def compute_pred(output):
- pred = output.data.max(1, keepdim=True)[1] # get the index of the max log-probability
- # pred[0, 0] = 282
- # print('Pred cls : ' + str(pred))
- T = pred.squeeze().cpu().numpy()
- T = np.expand_dims(T, 0)
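-    # One-hot encode the predicted class indices over the 1000 ImageNet categories and move them to the GPU.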
- T = (T[:, np.newaxis] == np.arange(1000)) * 1.0
- T = torch.from_numpy(T).type(torch.FloatTensor)
- Tt = T.cuda()
-
- return Tt
-
-
-def eval_batch(image, labels, evaluator, index):
- evaluator.zero_grad()
- # Save input image
- if args.save_img:
- img = image[0].permute(1, 2, 0).data.cpu().numpy()
- img = 255 * (img - img.min()) / (img.max() - img.min())
- img = img.astype('uint8')
- Image.fromarray(img, 'RGB').save(os.path.join(saver.results_dir, 'input/{}_input.png'.format(index)))
- Image.fromarray((labels.repeat(3, 1, 1).permute(1, 2, 0).data.cpu().numpy() * 255).astype('uint8'), 'RGB').save(
- os.path.join(saver.results_dir, 'input/{}_mask.png'.format(index)))
-
- image.requires_grad = True
-
- image = image.requires_grad_()
- predictions = evaluator(image)
- Res = generate_relevance(model, image.cuda())
-
- # threshold between FG and BG is the mean
- Res = (Res - Res.min()) / (Res.max() - Res.min())
-
- ret = Res.mean()
-
- Res_1 = Res.gt(ret).type(Res.type())
- Res_0 = Res.le(ret).type(Res.type())
-
- Res_1_AP = Res
- Res_0_AP = 1 - Res
-
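-    # x != x is true only for NaN, so these lines replace any NaNs in the masks with zeros.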
- Res_1[Res_1 != Res_1] = 0
- Res_0[Res_0 != Res_0] = 0
- Res_1_AP[Res_1_AP != Res_1_AP] = 0
- Res_0_AP[Res_0_AP != Res_0_AP] = 0
-
- # TEST
- pred = Res.clamp(min=args.thr) / Res.max()
- pred = pred.view(-1).data.cpu().numpy()
- target = labels.view(-1).data.cpu().numpy()
- # print("target", target.shape)
-
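-    # Stack the background/foreground maps into 2-channel predictions for the segmentation metrics below.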
- output = torch.cat((Res_0, Res_1), 1)
- output_AP = torch.cat((Res_0_AP, Res_1_AP), 1)
-
- if args.save_img:
- # Save predicted mask
- mask = F.interpolate(Res_1, [64, 64], mode='bilinear')
- mask = mask[0].squeeze().data.cpu().numpy()
- # mask = Res_1[0].squeeze().data.cpu().numpy()
- mask = 255 * mask
- mask = mask.astype('uint8')
- imageio.imsave(os.path.join(args.exp_img_path, 'mask_' + str(index) + '.jpg'), mask)
-
- relevance = F.interpolate(Res, [64, 64], mode='bilinear')
- relevance = relevance[0].permute(1, 2, 0).data.cpu().numpy()
- # relevance = Res[0].permute(1, 2, 0).data.cpu().numpy()
- hm = np.sum(relevance, axis=-1)
- maps = (render.hm_to_rgb(hm, scaling=3, sigma=1, cmap='seismic') * 255).astype(np.uint8)
- imageio.imsave(os.path.join(args.exp_img_path, 'heatmap_' + str(index) + '.jpg'), maps)
-
- # Evaluate Segmentation
- batch_inter, batch_union, batch_correct, batch_label = 0, 0, 0, 0
- batch_ap, batch_f1 = 0, 0
-
-    # Segmentation results
- correct, labeled = batch_pix_accuracy(output[0].data.cpu(), labels[0])
- inter, union = batch_intersection_union(output[0].data.cpu(), labels[0], 2)
- batch_correct += correct
- batch_label += labeled
- batch_inter += inter
- batch_union += union
- # print("output", output.shape)
- # print("ap labels", labels.shape)
- # ap = np.nan_to_num(get_ap_scores(output, labels))
- ap = np.nan_to_num(get_ap_scores(output_AP, labels))
- # f1 = np.nan_to_num(get_f1_scores(output[0, 1].data.cpu(), labels[0]))
- batch_ap += ap
- # batch_f1 += f1
-
- # return batch_correct, batch_label, batch_inter, batch_union, batch_ap, batch_f1, pred, target
- return batch_correct, batch_label, batch_inter, batch_union, batch_ap, pred, target
-
-
-total_inter, total_union, total_correct, total_label = np.int64(0), np.int64(0), np.int64(0), np.int64(0)
-total_ap, total_f1 = [], []
-
-predictions, targets = [], []
-for batch_idx, (image, labels) in enumerate(iterator):
-
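-    # The "blur" method supplies a pair of tensors; every other method uses a single image tensor on the GPU.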
- if args.method == "blur":
- images = (image[0].cuda(), image[1].cuda())
- else:
- images = image.cuda()
- labels = labels.cuda()
- # print("image", image.shape)
- # print("lables", labels.shape)
-
- # correct, labeled, inter, union, ap, f1, pred, target = eval_batch(images, labels, model, batch_idx)
- correct, labeled, inter, union, ap, pred, target = eval_batch(images, labels, model, batch_idx)
-
- predictions.append(pred)
- targets.append(target)
-
- total_correct += correct.astype('int64')
- total_label += labeled.astype('int64')
- total_inter += inter.astype('int64')
- total_union += union.astype('int64')
- total_ap += [ap]
- # total_f1 += [f1]
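-    # Running pixel accuracy, mIoU and mAP over all batches processed so far (shown in the progress bar).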
- pixAcc = np.float64(1.0) * total_correct / (np.spacing(1, dtype=np.float64) + total_label)
- IoU = np.float64(1.0) * total_inter / (np.spacing(1, dtype=np.float64) + total_union)
- mIoU = IoU.mean()
- mAp = np.mean(total_ap)
- # mF1 = np.mean(total_f1)
- # iterator.set_description('pixAcc: %.4f, mIoU: %.4f, mAP: %.4f, mF1: %.4f' % (pixAcc, mIoU, mAp, mF1))
- iterator.set_description('pixAcc: %.4f, mIoU: %.4f, mAP: %.4f' % (pixAcc, mIoU, mAp))
-
-predictions = np.concatenate(predictions)
-targets = np.concatenate(targets)
-pr, rc, thr = precision_recall_curve(targets, predictions)
-np.save(os.path.join(saver.experiment_dir, 'precision.npy'), pr)
-np.save(os.path.join(saver.experiment_dir, 'recall.npy'), rc)
-
-plt.figure()
-plt.plot(rc, pr)
-plt.savefig(os.path.join(saver.experiment_dir, 'PR_curve_{}.png'.format(args.method)))
-
-txtfile = os.path.join(saver.experiment_dir, 'result_mIoU_%.4f.txt' % mIoU)
-# txtfile = 'result_mIoU_%.4f.txt' % mIoU
-fh = open(txtfile, 'w')
-print("Mean IoU over %d classes: %.4f\n" % (2, mIoU))
-print("Pixel-wise Accuracy: %2.2f%%\n" % (pixAcc * 100))
-print("Mean AP over %d classes: %.4f\n" % (2, mAp))
-# print("Mean F1 over %d classes: %.4f\n" % (2, mF1))
-
-fh.write("Mean IoU over %d classes: %.4f\n" % (2, mIoU))
-fh.write("Pixel-wise Accuracy: %2.2f%%\n" % (pixAcc * 100))
-fh.write("Mean AP over %d classes: %.4f\n" % (2, mAp))
-# fh.write("Mean F1 over %d classes: %.4f\n" % (2, mF1))
-fh.close()
\ No newline at end of file
diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/clib/libnat_cuda/binding.cpp b/spaces/ICML2022/OFA/fairseq/fairseq/clib/libnat_cuda/binding.cpp
deleted file mode 100644
index ced91c0d0afab9071842911d9876e6360d90284a..0000000000000000000000000000000000000000
--- a/spaces/ICML2022/OFA/fairseq/fairseq/clib/libnat_cuda/binding.cpp
+++ /dev/null
@@ -1,67 +0,0 @@
-/**
- * Copyright 2017-present, Facebook, Inc.
- * All rights reserved.
- *
- * This source code is licensed under the license found in the
- * LICENSE file in the root directory of this source tree.
- */
-
-/*
- This code is partially adapted from
- https://github.com/1ytic/pytorch-edit-distance
- */
-
-#include <torch/torch.h>
-#include "edit_dist.h"
-
-#ifndef TORCH_CHECK
-#define TORCH_CHECK AT_CHECK
-#endif
-
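-// Input validation helpers: every tensor argument must be a contiguous CUDA tensor.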
-#define CHECK_CUDA(x) \
- TORCH_CHECK(x.type().is_cuda(), #x " must be a CUDA tensor")
-#define CHECK_CONTIGUOUS(x) \
- TORCH_CHECK(x.is_contiguous(), #x " must be contiguous")
-#define CHECK_INPUT(x) \
- CHECK_CUDA(x); \
- CHECK_CONTIGUOUS(x)
-
-torch::Tensor LevenshteinDistance(
- torch::Tensor source,
- torch::Tensor target,
- torch::Tensor source_length,
- torch::Tensor target_length) {
- CHECK_INPUT(source);
- CHECK_INPUT(target);
- CHECK_INPUT(source_length);
- CHECK_INPUT(target_length);
- return LevenshteinDistanceCuda(source, target, source_length, target_length);
-}
-
-torch::Tensor GenerateDeletionLabel(
- torch::Tensor source,
- torch::Tensor operations) {
- CHECK_INPUT(source);
- CHECK_INPUT(operations);
- return GenerateDeletionLabelCuda(source, operations);
-}
-
-std::pair<torch::Tensor, torch::Tensor> GenerateInsertionLabel(
- torch::Tensor target,
- torch::Tensor operations) {
- CHECK_INPUT(target);
- CHECK_INPUT(operations);
- return GenerateInsertionLabelCuda(target, operations);
-}
-
-PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) {
- m.def("levenshtein_distance", &LevenshteinDistance, "Levenshtein distance");
- m.def(
- "generate_deletion_labels",
- &GenerateDeletionLabel,
- "Generate Deletion Label");
- m.def(
- "generate_insertion_labels",
- &GenerateInsertionLabel,
- "Generate Insertion Label");
-}
diff --git a/spaces/Jackflack09/diffuse-custom/Waifu2x/utils/Img_to_H5.py b/spaces/Jackflack09/diffuse-custom/Waifu2x/utils/Img_to_H5.py
deleted file mode 100644
index d7c565599f7178f8ac0d3e26da7cabee8444e1ed..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/Waifu2x/utils/Img_to_H5.py
+++ /dev/null
@@ -1,50 +0,0 @@
-import glob
-
-import h5py
-from PIL import Image
-from torchvision.transforms import RandomCrop
-from torchvision.transforms.functional import to_tensor
-from tqdm import tqdm
-
-from Dataloader import ImageAugment
-
-patch_size = 128
-shrink_size = 2
-noise_level = 1
-patches_per_img = 20
-images = glob.glob("dataset/train/*")
-
-database = h5py.File("train_images.hdf5", 'w')
-
-dat_group = database.create_group("shrink_2_noise_level_1_downsample_random_rgb")
-# del database['shrink_2_noise_level_1_downsample_random']
-storage_lr = dat_group.create_dataset("train_lr", shape=(patches_per_img * len(images), 3,
- patch_size // shrink_size,
- patch_size // shrink_size),
- dtype='float32',
- # compression='lzf',
- )
-storage_hr = dat_group.create_dataset("train_hr", shape=(patches_per_img * len(images), 3,
- patch_size, patch_size),
- # compression='lzf',
- dtype='float32')
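-# Index i of train_lr / train_hr holds the i-th (low-res, high-res) patch pair, stored as CHW float32.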
-
-random_cropper = RandomCrop(size=patch_size)
-img_augmenter = ImageAugment(shrink_size, noise_level, down_sample_method=None)
-
-
-def get_img_patches(img_pil):
- img_patch = random_cropper(img_pil)
- lr_hr_patches = img_augmenter.process(img_patch)
- return lr_hr_patches
-
-
-counter = 0
-for img in tqdm(images):
- img_pil = Image.open(img).convert("RGB")
- for i in range(patches_per_img):
- patch = get_img_patches(img_pil)
- storage_lr[counter] = to_tensor(patch[0].convert("RGB")).numpy()
- storage_hr[counter] = to_tensor(patch[1].convert("RGB")).numpy()
- counter += 1
-database.close()
diff --git a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/latent_diffusion/__init__.py b/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/latent_diffusion/__init__.py
deleted file mode 100644
index 5544527ff5877bb2c725c8b375cd5b03060d6a21..0000000000000000000000000000000000000000
--- a/spaces/Jackflack09/diffuse-custom/diffusers/pipelines/latent_diffusion/__init__.py
+++ /dev/null
@@ -1,7 +0,0 @@
-# flake8: noqa
-from ...utils import is_transformers_available
-from .pipeline_latent_diffusion_superresolution import LDMSuperResolutionPipeline
-
-
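-# LDMBertModel and LDMTextToImagePipeline depend on transformers, so they are only exported when it is installed.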
-if is_transformers_available():
- from .pipeline_latent_diffusion import LDMBertModel, LDMTextToImagePipeline
diff --git a/spaces/KPCGD/bingo/src/components/tone-selector.tsx b/spaces/KPCGD/bingo/src/components/tone-selector.tsx
deleted file mode 100644
index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000
--- a/spaces/KPCGD/bingo/src/components/tone-selector.tsx
+++ /dev/null
@@ -1,43 +0,0 @@
-import React from 'react'
-import { BingConversationStyle } from '@/lib/bots/bing/types'
-import { cn } from '@/lib/utils'
-
-type ToneItem = {
- type: BingConversationStyle,
- name: string
-}
-
-const ToneList: ToneItem[] = [
- { name: '有创造力', type: BingConversationStyle.Creative },
- { name: '更平衡', type: BingConversationStyle.Balanced },
- { name: '更精确', type: BingConversationStyle.Precise }
-]
-
-interface ToneSelectorProps {
- type: BingConversationStyle | ''
- onChange?: (type: BingConversationStyle) => void
-}
-
-export function ToneSelector({ type, onChange }: ToneSelectorProps) {
-  return (
-    <div>
-      <div>
-        选择对话样式
-      </div>
-      <div>
-        <ul>
-          {
-            ToneList.map(tone => (
-              <li key={tone.type} onClick={() => onChange?.(tone.type)}>
-                <button className={cn({ selected: tone.type === type })}>
-                  {tone.name}
-                </button>
-              </li>
-            ))
-          }
-        </ul>
-      </div>
-    </div>
-  )
-}
diff --git a/spaces/Kangarroar/streamlit-docker-example/congratulations.py b/spaces/Kangarroar/streamlit-docker-example/congratulations.py
deleted file mode 100644
index 6cc676e89a1284ab3347d4fe4f4c0d4f03950dc6..0000000000000000000000000000000000000000
--- a/spaces/Kangarroar/streamlit-docker-example/congratulations.py
+++ /dev/null
@@ -1,10 +0,0 @@
-import streamlit as st
-
-def main():
- st.set_page_config(page_title="Congratulations!", page_icon=":confetti_ball:", layout="wide")
- st.title("Congratulations!")
- st.write("You have successfully completed the terms of service agreement!")
- st.write("You may now access the rest of the platform.")
-
-if __name__ == "__main__":
- main()
diff --git a/spaces/Kevin676/AutoGPT/ui/app.py b/spaces/Kevin676/AutoGPT/ui/app.py
deleted file mode 100644
index c95ecd1523e1e01f3f7af8d879edb445cfc945b3..0000000000000000000000000000000000000000
--- a/spaces/Kevin676/AutoGPT/ui/app.py
+++ /dev/null
@@ -1,160 +0,0 @@
-import gradio as gr
-import utils
-from api import AutoAPI, get_openai_api_key
-import os, shutil
-import json
-
-FILE_DIR = os.path.dirname(os.path.abspath(__file__))
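-# Workspace where Auto-GPT writes generated files; it is created next to the ui/ directory.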
-OUTPUT_DIR = os.path.join(os.path.dirname(FILE_DIR), "auto_gpt_workspace")
-if not os.path.exists(OUTPUT_DIR):
- os.mkdir(OUTPUT_DIR)
-
-CSS = """
-#chatbot {font-family: monospace;}
-#files .generating {display: none;}
-#files .min {min-height: 0px;}
-"""
-
-with gr.Blocks(css=CSS) as app:
- with gr.Column() as setup_pane:
-        gr.Markdown(f""" # 🥳💬💕 - TalktoAI，随时随地，谈天说地！
-        ### 🤖 - 让有人文关怀的AI造福每一个人！AI向善，文明璀璨！TalktoAI - Enable the future!
- 1. Duplicate this Space: This will **NOT** work without duplication!
- 2. Enter your OpenAI API Key below.
- 3. Powered by [Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT). Thanks to [SigGravitas](https://github.com/Significant-Gravitas) and [Ali Abid](https://huggingface.co/aliabid94).
- """)
- with gr.Row():
- open_ai_key = gr.Textbox(
- value=get_openai_api_key(),
- label="OpenAI API Key",
- type="password",
- )
- gr.Markdown(
-            "4. Fill the values below, then click 'Start'. There are example values you can load at the bottom of this page."
- )
- with gr.Row():
- ai_name = gr.Textbox(label="AI Name", placeholder="e.g. Entrepreneur-GPT")
- ai_role = gr.Textbox(
- label="AI Role",
- placeholder="e.g. an AI designed to autonomously develop and run businesses with the sole goal of increasing your net worth.",
- )
- top_5_goals = gr.Dataframe(
- row_count=(5, "fixed"),
- col_count=(1, "fixed"),
- headers=["AI Goals - Enter up to 5"],
- type="array"
- )
- start_btn = gr.Button("Start", variant="primary")
- with open(os.path.join(FILE_DIR, "examples.json"), "r") as f:
- example_values = json.load(f)
- gr.Examples(
- example_values,
- [ai_name, ai_role, top_5_goals],
- )
- with gr.Column(visible=False) as main_pane:
- with gr.Row():
- with gr.Column(scale=2):
- chatbot = gr.Chatbot(elem_id="chatbot")
- with gr.Row():
- yes_btn = gr.Button("Yes", variant="primary", interactive=False)
- consecutive_yes = gr.Slider(
- 1, 10, 1, step=1, label="Consecutive Yes", interactive=False
- )
- custom_response = gr.Textbox(
- label="Custom Response",
- placeholder="Press 'Enter' to Submit.",
- interactive=False,
- )
- with gr.Column(scale=1):
- gr.HTML(
- lambda: f"""
-                        Generated Files
-