diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL .rar Whats New and Whats Improved in This Update.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL .rar Whats New and Whats Improved in This Update.md deleted file mode 100644 index 35941c19f6268c2d57019dd1d19a9c2aff80dbb1..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL .rar Whats New and Whats Improved in This Update.md +++ /dev/null @@ -1,144 +0,0 @@ -
-
- Overview: What is the 2018.0.2 version and what are its improvements and fixes
- Download: How to download and install the full .rar file

H2: What is AutoCAD Civil 3D and what are its features and benefits
- Definition: A software for civil engineering design and documentation
- Features: A list of some of the main features of AutoCAD Civil 3D such as dynamic models, object-oriented environment, BIM tools, etc.
- Benefits: A list of some of the benefits of using AutoCAD Civil 3D such as efficiency, accuracy, collaboration, etc.

H2: What is the 2018.0.2 version and what are its improvements and fixes
- Release date: When was the 2018.0.2 version released and by whom
- Improvements: A list of some of the improvements made in the 2018.0.2 version such as performance, stability, compatibility, etc.
- Fixes: A list of some of the fixes made in the 2018.0.2 version such as bugs, errors, issues, etc.

H2: How to download and install the full .rar file
- Requirements: What are the system requirements for running AutoCAD Civil 3D 2018.0.2
- Sources: Where can you find the full .rar file for download
- Steps: How to download and install the full .rar file step by step

H1: Conclusion
- Summary: A brief summary of the main points of the article
- Recommendation: A recommendation to download and use AutoCAD Civil 3D 2018.0.2
- Call to action: A call to action to visit a website or contact a service for more information or assistance

H1: FAQs
- Q1: What is the difference between AutoCAD and AutoCAD Civil 3D?
- A1: AutoCAD is a general-purpose CAD software that can be used for various design and drafting applications, while AutoCAD Civil 3D is a specialized software that focuses on civil engineering design and documentation.
- Q2: What are some of the applications of AutoCAD Civil 3D?
- A2: Some of the applications of AutoCAD Civil 3D are surveying, land development, transportation engineering, water resources engineering, environmental engineering, etc.
- Q3: How much does AutoCAD Civil 3D cost?
- A3: AutoCAD Civil 3D is available as a subscription-based service that costs $2,155 per year or $270 per month.
- Q4: How can I learn AutoCAD Civil 3D?
- A4: You can learn AutoCAD Civil 3D by taking online courses, watching tutorials, reading manuals, joining forums, or hiring a trainer.
- Q5: How can I get support for AutoCAD Civil 3D?
- A5: You can get support for AutoCAD Civil 3D by visiting the official website, contacting the customer service, accessing the knowledge base, or joining the community.

# Article with HTML formatting

Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL .rar: What is it and why you need it

-

If you are a civil engineer or a civil engineering student, you have probably heard of AutoCAD Civil 3D, one of the most popular and powerful software packages for civil engineering design and documentation. But do you know what Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL .rar is and why you might need it? In this article, we will explain what this software is, what its features and benefits are, what improvements and fixes the latest version brings, and how to download and install it.

-

What is AutoCAD Civil 3D and what are its features and benefits

-

AutoCAD Civil 3D is developed by Autodesk, a leading company in design and engineering software solutions. It allows you to create civil engineering designs and documentation using dynamic models, an object-oriented environment, and powerful tools for building information modeling (BIM).

-

Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL .rar


Download Zip –––––>>> https://byltly.com/2uKwsP



-

Some of the main features of AutoCAD Civil 3D are:

- -

Some of the benefits of using AutoCAD Civil 3D are:

- -

What is the 2018.0.2 version and what are its improvements and fixes

-

The 2018.0.2 version of AutoCAD Civil 3D is the latest update from Autodesk, released on November 6th. It includes several improvements and fixes that enhance performance, stability, and compatibility.

-

The following table summarizes some of the main improvements made in this version:

| Area | Description |
| --- | --- |
| Civil View | The performance has been improved when importing large quantities of objects into Autodesk InfraWorks. |
| Data Shortcuts | The performance has been improved when creating data shortcuts for corridors with large quantities of baselines. |
| Drawing Management | The stability has been improved when opening drawings containing data shortcuts. |
| Pipes | The stability has been improved when editing pipe networks in section views. |
| Railings | The stability has been improved when editing railings in profile views. |
| Roadway Design | The stability has been improved when editing corridors with large quantities of regions. |
| User Interface | The compatibility has been improved with high resolution monitors. |
| Xref | The performance has been improved when opening drawings containing xrefs. |
-

The following table summarizes some of the main fixes made in this version:

| Bug ID | Description |
| --- | --- |
| CIVIL-12900 | An issue where corridor solids were not created correctly for some corridors has been resolved. |
| CIVIL-13076 | An issue where corridor feature lines were not created correctly for some corridors has been resolved. |
| CIVIL-13107 | An issue where corridor solids were not displayed correctly in section views has been resolved. |
| CIVIL-13108 | An issue where corridor feature lines were not displayed correctly in section views has been resolved. |
| CIVIL-13109 | An issue where corridor solids were not displayed correctly in plan views has been resolved. |
| CIVIL-13111 | An issue where corridor solids were not displayed correctly in 3D views has been resolved. |
| CIVIL-13112 | An issue where corridor feature lines were not displayed correctly in 3D views has been resolved. |
| CIVIL-13113 | An issue where corridor solids were not exported correctly to Autodesk InfraWorks has been resolved. |
| CIVIL-13114 | An issue where corridor feature lines were not exported correctly to Autodesk InfraWorks has been resolved. |
| CIVIL-13115 | An issue where corridor solids were not exported correctly to Autodesk Navisworks has been resolved. |
| CIVIL-13116 | An issue where corridor feature lines were not exported correctly to Autodesk Navisworks has been resolved. |
| CIVIL-13117 | An issue where corridor solids were not exported correctly to Autodesk Revit has been resolved. |
| CIVIL-13118 | An issue where corridor feature lines were not exported correctly to Autodesk Revit has been resolved. |
-

How to download and install the full .rar file

-

If you want to download and install the full .rar file of AutoCAD Civil 3D 2018.0.2 (x64), you need to make sure that your system meets the following requirements:

- -

Once you have checked your system requirements, you can find the full .rar file for download from various sources on the internet, such as 4shared, SolidTorrents, or Archive.org. However, be careful of the potential risks of downloading files from unverified or untrusted sources, such as viruses, malware, or corrupted files.

-


-

To download and install the full .rar file, follow these steps:

-
    -
  1. Download the full .rar file from your preferred source and save it to your computer.
  2. -
  3. Extract the .rar file using a software such as WinRAR or 7-Zip.
  4. -
  5. Run the setup.exe file as administrator and follow the instructions on the screen.
  6. -
  7. Enter your serial number and product key when prompted. You can find them on your Autodesk Account or on the packaging of your product.
  8. -
  9. Select your installation options, such as language, components, and location.
  10. -
  11. Click Install and wait for the installation to complete.
  12. -
  13. Restart your computer if required.
  14. -
  15. Launch AutoCAD Civil 3D 2018.0.2 (x64) and enjoy!
  16. -
-

Conclusion

-

In this article, we have explained what Autodesk AutoCAD Civil 3D 2018.0.2 (x64) FULL .rar is, what its features and benefits are, what improvements and fixes the latest version brings, and how to download and install it. We hope you have found this article useful and informative.

-

If you are looking for software that can help you create civil engineering designs and documentation using dynamic models, an object-oriented environment, and powerful BIM tools, we recommend downloading and using AutoCAD Civil 3D 2018.0.2 (x64). It is a comprehensive solution that can improve efficiency, accuracy, and collaboration in your civil engineering projects.

-

If you want to learn more about AutoCAD Civil 3D 2018.0.2 (x64), you can visit the official website, contact the customer service, access the knowledge base, or join the community. You can also find more resources such as tutorials, manuals, forums, or trainers online.

-

FAQs

-

Q1: What is the difference between AutoCAD and AutoCAD Civil 3D?

-

A1: AutoCAD is a general-purpose CAD software that can be used for various design and drafting applications, while AutoCAD Civil 3D is a specialized software that focuses on civil engineering design and documentation.

-

Q2: What are some of the applications of AutoCAD Civil 3D?

-

A2: Some of the applications of AutoCAD Civil 3D are surveying, land development, transportation engineering, water resources engineering, environmental engineering, etc.

-

Q3: How much does AutoCAD Civil 3D cost?

-

A3: AutoCAD Civil 3D is available as a subscription-based service that costs $2,155 per year or $270 per month.

-

Q4: How can I learn AutoCAD Civil 3D?

-

A4: You can learn AutoCAD Civil 3D by taking online courses, watching tutorials, reading manuals, joining forums, or hiring a trainer.

-

Q5: How can I get support for AutoCAD Civil 3D?

-

A5: You can get support for AutoCAD Civil 3D by visiting the official website, contacting the customer service, accessing the knowledge base, or joining the community.

-

-
-
\ No newline at end of file diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cutmaster2dprov1331fullcrackserialkeygenfree The Benefits and Features of This Powerful Software.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cutmaster2dprov1331fullcrackserialkeygenfree The Benefits and Features of This Powerful Software.md deleted file mode 100644 index d80a0a16d44927bd313dad9713b7844472fcd85c..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Cutmaster2dprov1331fullcrackserialkeygenfree The Benefits and Features of This Powerful Software.md +++ /dev/null @@ -1,21 +0,0 @@ - -

What is CutMaster 2D Pro v1.3.3.1?

-

If you are looking for a professional and powerful software program for cutting and slicing up images and videos, you might want to check out CutMaster 2D Pro v1.3.3.1. This software is a highly responsive application that allows users to slice images like a professional.

-

CutMaster 2D Pro v1.3.3.1 is a very versatile program that can handle any type of image or video format, such as JPG, PNG, BMP, GIF, MP4, AVI, MOV, etc.

-

cutmaster2dprov1331fullcrackserialkeygenfree


Download ★★★ https://byltly.com/2uKxqd



-

With CutMaster 2D Pro v1.3.3.1, you can quickly and easily create professional style cuts for your projects, such as banners, logos, posters, flyers, brochures, etc.

-

You can also use it to edit your personal photos and videos, such as cropping, rotating, resizing, adding effects, etc.

-

CutMaster 2D Pro v1.3.3.1 has a user-friendly interface that makes it easy to navigate and operate.

-

It also has a lot of features and tools that make it stand out from other similar programs.

-

Why do you need CutMaster 2D Pro v1.3.3.1?

-

There are many reasons why you might need CutMaster 2D Pro v1.3.3.1 for your image and video cutting needs.

-

Some of them are:

- -

How to download CutMaster 2D Pro v1.3.3.1?

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Altiumfiletypenotrecognized.md b/spaces/1gistliPinn/ChatGPT4/Examples/Altiumfiletypenotrecognized.md deleted file mode 100644 index 0cc60207e7730876c3466c8bcb09be7e11ac7297..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Altiumfiletypenotrecognized.md +++ /dev/null @@ -1,54 +0,0 @@ -

altiumfiletypenotrecognized


DOWNLOAD ✏ ✏ ✏ https://imgfil.com/2uxZLY



... but when I want to save a change in the second project it says, "are not allowed to save".

I think the problem is not in the second project, because the second project has a lot of .J3 files to simulate and verify the .sch file. So I ask you if I can read the .sch file in the first project and save it in the second project, or if I have to read the .sch file in the second project. Thank you very much in advance.

A:

Sorry, I'm a little late with this answer, but in the meantime I have just had a similar issue.

In order to "share" a project between multiple Altium instances, you need to make sure that:

- The .sch project file is saved in the "Save As..." dialog (not the "Copy" dialog).
- The new .sch project file is saved in the same folder as the .sch file of the original project.

I'm sure this has been covered elsewhere on the internet, but here are the links I found through a quick google search:

Q:

Map not applied to ArrayList inside a class

I have an application that reads about 1000 lines from a file and uses the information to make a list of customers. I am trying to print the last name of the customer to console but when I try to use my map I get an error.

My Customer class:

public class Customer {

    private String lastName;
    private String firstName;
    private String address;

    public Customer(String firstName, String lastName, String address) {
        this.firstName = firstName;
        this.lastName = lastName;
        this.address = address;
    }

    public String getLastName() {
        return this.lastName;
    }
}
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Autoclosets 80 Con Serial.md b/spaces/1gistliPinn/ChatGPT4/Examples/Autoclosets 80 Con Serial.md deleted file mode 100644 index b5da8335327c3a5bca9514f1163bd9ef86b4df2f..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Autoclosets 80 Con Serial.md +++ /dev/null @@ -1,6 +0,0 @@ -

Autoclosets 80 Con Serial


DOWNLOAD >>> https://imgfil.com/2uxYR1



- -暖剪釣潗! ... Online discounts to 80%! ... microcad software s l autoclosets dise o de armarios ... serial killers http://legalmusicworld.in/pop/pop-bubble-game number one ... 1fdad05405
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR R33p Setup Free.md b/spaces/1gistliPinn/ChatGPT4/Examples/BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR R33p Setup Free.md deleted file mode 100644 index 102438d6e74affd7cec7b2fbb7b73b98713abae7..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR R33p Setup Free.md +++ /dev/null @@ -1,6 +0,0 @@ -

BBE.Sonic.Sweet.Bundle.VST.RTAS.v1.0-AiR r33p setup free


DOWNLOAD ✓✓✓ https://imgfil.com/2uxZth



- -Version: 4.26- File size: 4.22MB- Date added: March 19, 2016- Price: Free- Operating system: ... Bysoft Internet Remote Control 2 6 4 957, 0.00 KB, 570, 0. ... Encoder v1.1.0.44 )2011(,Townopolis,MakeMusic Finale CR13 2012 Setup + ... Pro 6.5.08,BBE Sonic Sweet Bundle VST RTAS (v1.0)- AiR r33p,Ua ... 4d29de3e1b
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Brawlhalla Como Conseguir MonedasMammoth Glory Coins UPDATED.md b/spaces/1gistliPinn/ChatGPT4/Examples/Brawlhalla Como Conseguir MonedasMammoth Glory Coins UPDATED.md deleted file mode 100644 index 653d61cab7aa1e56e805894a91903ea711fa08e0..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Brawlhalla Como Conseguir MonedasMammoth Glory Coins UPDATED.md +++ /dev/null @@ -1,9 +0,0 @@ -

brawlhalla como conseguir monedasMammoth Glory Coins


DOWNLOADhttps://imgfil.com/2uxYxx



-
-18 Jul 2020 - Race Man 3 Full Movie In Hindi Hd 720p Download Free - brawlhalla como conseguir monedasMammoth Glory Coins. Download Race Man 3 In Hindi Hd 720p Pc... -Apr 19, 2019 - Download Race Man 3 Full Movie In Hindi Hd 720p... -Download Race Man 3 In Hindi Hd 720p Pc free race man 3 in english download race man 3 in english watch race man 3 in english full movie download race man 3 in english download movie race man 3 in english watch race man 3 in english full movie download race man 3 in english -in english download 8a78ff9644
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Durgasaptashatibeejmantrasadhanapdf35 [UPD].md b/spaces/1gistliPinn/ChatGPT4/Examples/Durgasaptashatibeejmantrasadhanapdf35 [UPD].md deleted file mode 100644 index 257e8b0184b05d95e70b2580db02d6e969bbce5b..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Durgasaptashatibeejmantrasadhanapdf35 [UPD].md +++ /dev/null @@ -1,6 +0,0 @@ -

durgasaptashatibeejmantrasadhanapdf35


Download === https://imgfil.com/2uy1y0



-
-An Introduction To Bunraku [HOT] · Free E Book Download __HOT__ In Pdf Lang Lang Piano Book · Durgasaptashatibeejmantrasadhanapdf35. 4d29de3e1b
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/El Kulubud Daria Mecmuatul-Ahzabn Dzeltme ve Snflandrmasyla Oluturulan Du Kitab PDF ndir.md b/spaces/1gistliPinn/ChatGPT4/Examples/El Kulubud Daria Mecmuatul-Ahzabn Dzeltme ve Snflandrmasyla Oluturulan Du Kitab PDF ndir.md deleted file mode 100644 index 42c2cf82fad7d9d07b38f4650e2b07e08c625292..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/El Kulubud Daria Mecmuatul-Ahzabn Dzeltme ve Snflandrmasyla Oluturulan Du Kitab PDF ndir.md +++ /dev/null @@ -1,6 +0,0 @@ -

elkulubuddariaindirpdfdownload


DOWNLOAD ○○○ https://imgfil.com/2uxZrg



- -
-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator APK for Android 6.0 The Best Way to Enjoy Retro Games.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator APK for Android 6.0 The Best Way to Enjoy Retro Games.md deleted file mode 100644 index c4aba6688cbf2378f7aab85c15cdb32c15ec469c..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Dolphin Emulator APK for Android 6.0 The Best Way to Enjoy Retro Games.md +++ /dev/null @@ -1,228 +0,0 @@ - -

Introduction

-

If you are a fan of Nintendo GameCube and Wii games, you might have wished to play them on your Android device. Well, thanks to Dolphin Emulator, you can do just that! Dolphin Emulator is a free and open-source software that allows you to run GameCube and Wii games on your Android device in full HD (1080p) with several enhancements, such as compatibility with all PC controllers, turbo speed, networked multiplayer, and even more.

-

dolphin emulator android 6.0 apk


DOWNLOAD >>> https://urlin.us/2uSSy6



-

Dolphin Emulator has been around since 2003 as a desktop application for Windows, Linux, and macOS. It was the first GameCube emulator that could successfully run commercial games. Later on, it also gained support for Wii emulation. In 2013, Dolphin Emulator was ported to Android as a beta version, and since then it has been updated regularly with new features and bug fixes.

-

However, Dolphin Emulator is not a perfect emulator. It has some requirements and challenges that you need to be aware of before using it on your Android device. For example, you need a powerful device that can handle the emulation workload, you need to obtain the GameCube and Wii games legally from your own discs or backups, you need to install the app manually from an external source, you need to configure the settings and preferences according to your device and game compatibility, and you need to troubleshoot some errors and issues that may arise during the emulation process.

-

In this article, I will provide you with a comprehensive guide on how to download, install, and use Dolphin Emulator Android 6.0 APK on your Android device. I will also answer some frequently asked questions and share some user reviews about this emulator.

-

Downloading Dolphin Emulator Android 6.0 APK

-

The first step to use Dolphin Emulator on your Android device is to download the APK file from a reliable source. The APK file is an executable file that contains the app's code and resources. You can download Dolphin Emulator Android 6.0 APK from either the official website or other sources.

-

How to download Dolphin Emulator Android 6.0 APK from the official website?

-

The official website of Dolphin Emulator is https://dolphin-emu.org. Here you can find the latest news, updates, downloads, and documentation about the emulator. You can also join the community forums, chat rooms, and social media pages to interact with other users and developers.

-

To download Dolphin Emulator Android 6.0 APK from the official website, follow these steps:

-
    -
  1. Go to https://dolphin-emu.org on your web browser.
  2. -
  3. Click on the Download button on the top right corner of the homepage.
  4. -
  5. Select Android from the drop-down menu.
  6. -
  7. You will be redirected to a page with a list of available versions of Dolphin Emulator for Android. The latest version is usually at the top of the list.
  8. -
  9. Click on the Download APK button next to the version you want to download. You can also check the release notes, changelog, and compatibility list for each version by clicking on the respective links.
  10. -
  11. A pop-up window will appear asking you to confirm your download. Click on OK to proceed.
  12. -
  13. The APK file will be downloaded to your device's default download folder. You can check the progress and status of your download on your notification bar or download manager app.
  14. -
-

How to download Dolphin Emulator Android 6.0 APK from other sources?

-

If you cannot access the official website of Dolphin Emulator for some reason, or if you want to download an older or modified version of Dolphin Emulator Android 6.0 APK, you can also find it on other sources, such as third-party websites, app stores, file hosting services, or torrent sites. However, you need to be careful when downloading from these sources, as they may not be trustworthy or safe. Some of them may contain malware, viruses, spyware, adware, or other unwanted programs that can harm your device or compromise your privacy. Some of them may also provide fake or corrupted files that may not work properly or cause errors and issues with your emulator.

-

-

To download Dolphin Emulator Android 6.0 APK from other sources, follow these steps:

-
    -
  1. Search for "Dolphin Emulator Android 6.0 APK" on your preferred search engine or app store. You can also use keywords such as "download", "free", "latest", "modded", "cracked", "unlocked", etc. to narrow down your search results.
  2. -
  3. Browse through the results and select a source that looks reliable and reputable. You can check the ratings, reviews, comments, feedback, and reputation of the source before downloading from it. You can also use tools such as VirusTotal, Malwarebytes, or Norton to scan the URL or file for any potential threats.
  4. -
  5. Click on the Download button or link on the source's page. You may have to go through some ads, pop-ups, surveys, or captcha verification before you can access the download link. Be careful not to click on any suspicious or misleading links or buttons that may redirect you to unwanted sites or install unwanted programs on your device.
  6. -
  7. The APK file will be downloaded to your device's default download folder. You can check the progress and status of your download on your notification bar or download manager app.
  8. -
-

How to verify the integrity and safety of the downloaded file?

-

After downloading Dolphin Emulator Android 6.0 APK from any source, you should always verify the integrity and safety of the downloaded file before installing it on your device. This is to ensure that the file is authentic, complete, and free from any malicious code or modification that may affect its performance or functionality.

-

To verify the integrity and safety of the downloaded file, follow these steps:

-
    -
  1. Check the file size and name of the downloaded file. Compare it with the original file size and name from the official website or source. If there is a significant difference in size or name, it may indicate that the file is fake or corrupted.
  2. -
  3. Check the file extension of the downloaded file. It should be ".apk" which stands for Android Package Kit. If it is anything else, such as ".zip", ".rar", ".exe", ".bin", etc., it may indicate that the file is not an APK file or that it contains other files that may be harmful or unnecessary.
  4. -
  5. Check the file signature or checksum of the downloaded file. This is a unique code that identifies and verifies the authenticity and integrity of a file. You can use tools such as MD5 & SHA Checksum Utility, HashTab, or Checksum Calculator to generate and compare the file signature or checksum of the downloaded file with the original one from the official website or source. If they match, it means that the file is authentic and intact. If they don't match, it means that the file is fake or corrupted.
  6. -
  7. Scan the file with a reputable antivirus or anti-malware program, such as Avast, Malwarebytes, or Norton. These programs can detect and remove any malicious code or modification that may be hidden in the file. They can also protect your device from any potential threats that may arise from installing or running the file.
  8. -
-

If the downloaded file passes all these checks, you can proceed to install it on your device. If not, you should delete it immediately and download it again from a different source.
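
As a concrete illustration of the checksum step above, here is a minimal Python sketch that computes a SHA-256 hash locally and compares it with a published value. The file name and checksum below are placeholders, not real values; substitute the path of your downloaded APK and the hash actually published by the source you downloaded from, if one is provided.

```python
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hash of a file, reading it in chunks to limit memory use."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder values -- replace with your real file path and the checksum
# published alongside the download you used.
apk_path = "dolphin-emu.apk"
published_sha256 = "0000000000000000000000000000000000000000000000000000000000000000"

if sha256_of(apk_path) == published_sha256.lower():
    print("Checksum matches: the file appears intact.")
else:
    print("Checksum mismatch: the file may be corrupted or tampered with.")
```

The same comparison can be done with the graphical checksum tools mentioned in the list; the point is simply that the hash you compute locally must match the one published by the source.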

-

Installing Dolphin Emulator Android 6.0 APK

-

The next step to use Dolphin Emulator on your Android device is to install the APK file on your device. However, since Dolphin Emulator is not available on the Google Play Store, you need to install it manually from an external source. This means that you need to grant permissions and overcome security restrictions that may prevent you from installing apps from unknown sources.

-

How to install Dolphin Emulator Android 6.0 APK on your Android device?

-

To install Dolphin Emulator Android 6.0 APK on your Android device, follow these steps:

-
    -
  1. Locate the downloaded APK file on your device's file manager app or download manager app. You can also use a third-party file manager app, such as ES File Explorer, File Manager, or Solid Explorer to locate the file.
  2. -
  3. Tap on the APK file to open it. A pop-up window will appear asking you to confirm your installation. Tap on Install to proceed.
  4. -
  5. If you see a message saying "For your security, your phone is not allowed to install unknown apps from this source", tap on Settings. This will take you to a screen where you can enable the option to allow installing apps from unknown sources. Depending on your device model and Android version, this option may be called "Unknown sources", "Install unknown apps", "Allow app installs", or something similar. Toggle the switch or check the box next to this option to enable it.
  6. -
  7. Go back to the installation screen and tap on Install again. The installation process will begin and may take a few seconds or minutes depending on your device's speed and performance.
  8. -
  9. Once the installation is complete, you will see a message saying "App installed". Tap on Open to launch Dolphin Emulator on your device. You can also tap on Done to close the installation screen and find Dolphin Emulator on your app drawer or home screen.
  10. -
-

How to grant permissions and overcome security restrictions?

-

Dolphin Emulator requires some permissions and access to certain features and functions of your device in order to work properly. For example, it needs access to your storage, camera, microphone, location, network, etc. You need to grant these permissions and overcome any security restrictions that may prevent Dolphin Emulator from accessing these features and functions.

-

To grant permissions and overcome security restrictions, follow these steps:

-
    -
  1. The first time you launch Dolphin Emulator on your device, you will see a series of pop-up windows asking you to grant various permissions to the app. Tap on Allow or Accept for each permission request. You can also tap on Deny or Reject if you don't want to grant a certain permission, but this may affect the performance or functionality of the app.
  2. -
  3. If you want to change or manage the permissions for Dolphin Emulator later, go to your device's settings app and look for the option called "Apps", "Applications", "App Manager", or something similar. Tap on this option and find Dolphin Emulator from the list of installed apps. Tap on Dolphin Emulator and then tap on Permissions. Here you can see all the permissions that Dolphin Emulator has requested and whether they are granted or denied. You can toggle the switch or check the box next to each permission to grant or revoke it.
  4. -
  5. Some features and functions of Dolphin Emulator may be blocked or restricted by your device's security settings, such as battery optimization, data usage, background activity, overlay, etc. These settings may prevent Dolphin Emulator from running smoothly or at all. To overcome these security restrictions, go to your device's settings app and look for the option called "Security", "Privacy", "Battery", "Data", or something similar. Tap on this option and find Dolphin Emulator from the list of apps or features. Tap on Dolphin Emulator and then tap on the option that allows you to disable or bypass the security restriction. For example, you may need to disable battery optimization, allow unrestricted data usage, enable background activity, allow overlay, etc.
  6. -
-

By granting permissions and overcoming security restrictions, you can ensure that Dolphin Emulator can access all the features and functions it needs to run GameCube and Wii games on your Android device.

-

Using Dolphin Emulator Android 6.0 APK

-

After installing Dolphin Emulator Android 6.0 APK on your Android device, you can start using it to play GameCube and Wii games on your device. However, before you can do that, you need to obtain and load the games on the emulator. You also need to customize the graphics and audio settings of the emulator according to your device and game compatibility. You also need to connect and use controllers with the emulator if you prefer to play with physical buttons and joysticks. You also need to play online multiplayer games with the emulator if you want to enjoy the social aspect of gaming.

-

How to obtain and load GameCube and Wii games on Dolphin Emulator Android 6.0 APK?

-

Dolphin Emulator does not come with any GameCube or Wii games pre-installed or included in the app. You need to obtain the games legally from your own discs or backups and load them on the emulator. The games are usually in the form of ISO or WBFS files that contain the game data and can be read by the emulator.

-

To obtain and load GameCube and Wii games on Dolphin Emulator Android 6.0 APK, follow these steps:

-
    -
  1. If you have the original GameCube or Wii discs, you can use a disc drive and a software tool, such as CleanRip, RawDump, or FriiDump to rip the discs and create ISO or WBFS files on your computer. You can also use a modded Wii console and a software tool, such as USB Loader GX, WiiFlow, or CFG USB Loader to rip the discs and create ISO or WBFS files on a USB drive.
  2. -
  3. If you have backup copies of GameCube or Wii games, you can use a software tool, such as Wii Backup Manager, Witgui, or Wii Backup Fusion to convert them into ISO or WBFS files on your computer.
  4. -
  5. Once you have the ISO or WBFS files of the games you want to play, you need to transfer them to your Android device's storage. You can use a USB cable, a microSD card, a cloud service, or a wireless method to do so.
  6. -
  7. On your Android device, launch Dolphin Emulator and tap on the Add Folder button on the top right corner of the screen. This will allow you to browse your device's storage and select the folder where you stored your ISO or WBFS files.
  8. -
  9. Dolphin Emulator will scan the folder and display all the games that it can recognize in a grid view. You can tap on any game to see more details about it, such as title, region, size, rating, etc.
  10. -
  11. To load a game, simply tap on its icon and wait for Dolphin Emulator to launch it. You will see a loading screen with some information about the game and the emulator's status.
  12. -
  13. Once the game is loaded, you can start playing it on your Android device using either touch controls or physical controllers.
  14. -
-

How to customize the graphics and audio settings of Dolphin Emulator Android 6.0 APK?

-

Dolphin Emulator allows you to customize the graphics and audio settings of each game according to your device's capabilities and preferences. You can adjust the resolution, aspect ratio, anti-aliasing, anisotropic filtering, texture scaling, frame rate, sound volume, and other options to enhance or optimize your gaming experience. However, you should also be aware that some of these settings may affect the performance or compatibility of the emulator or the game. You may need to experiment with different settings to find the best balance between quality and speed.

-

To customize the graphics and audio settings of Dolphin Emulator Android 6.0 APK, follow these steps:

-
    -
  1. On your Android device, launch Dolphin Emulator and tap on the Menu button on the top left corner of the screen. This will open a sidebar with various options.
  2. -
  3. Tap on Settings to access the emulator's settings menu.
  4. -
  5. Tap on Graphics to access the graphics settings menu. Here you can see four tabs: General, Enhancements, Hacks, and Advanced. Each tab contains different options that you can tweak according to your needs and preferences.
  6. -
  7. The General tab allows you to change the basic graphics settings, such as video backend, aspect ratio, resolution, vsync, etc.
  8. -
  9. The Enhancements tab allows you to change the advanced graphics settings, such as anti-aliasing, anisotropic filtering, texture scaling, post-processing effects, etc.
  10. -
  11. The Hacks tab allows you to change the performance-related graphics settings, such as skip EFB access, ignore format changes, store EFB copies to texture only, etc.
  12. -
  13. The Advanced tab allows you to change the experimental graphics settings, such as shader compilation mode, asynchronous shader compilation, etc.
  14. -
  15. To change any of these settings, simply tap on the option and select the value or toggle the switch that suits your needs and preferences. You can also tap on the i icon next to each option to see a brief explanation of what it does and how it affects the emulation.
  16. -
  17. If you want to reset all the graphics settings to their default values, tap on the Reset All Settings button at the bottom of the screen.
  18. -
  19. To save your changes and exit the graphics settings menu, tap on the Back button on your device or emulator.
  20. -
  21. To access the audio settings menu, tap on Audio from the settings menu. Here you can see two options: Enable Sound Output and Volume.
  22. -
  23. To enable or disable sound output from the emulator, toggle the switch next to Enable Sound Output. If you disable sound output, you will not hear any sound from the emulator or the game.
  24. -
  25. To adjust the volume of the sound output from the emulator, drag the slider next to Volume. You can also use your device's volume buttons to adjust the volume.
  26. -
  27. To save your changes and exit the audio settings menu, tap on the Back button on your device or emulator.
  28. -
-

Troubleshooting Dolphin Emulator Android 6.0 APK

-

Dolphin Emulator is a complex piece of software that may encounter errors and issues during operation. Some of these are caused by factors such as device specifications, game compatibility, app configuration, or network connection. Some are easy to fix by following a few simple steps or tips, while others require more advanced or technical solutions, or assistance from the developers or support team.

-

How to fix common errors and issues with Dolphin Emulator Android 6.0 APK?

-

To fix common errors and issues with Dolphin Emulator Android 6.0 APK, follow these steps:

-
    -
  1. If you experience crashes, freezes, slowdowns, glitches, or other performance problems with Dolphin Emulator or a game, try these tips:
  2. - -
  3. If you experience problems with downloading, installing, updating, or uninstalling Dolphin Emulator Android 6.0 APK, try these tips:
  4. - -
-

Conclusion

-

Dolphin Emulator Android 6.0 APK is a great way to play Nintendo GameCube and Wii games on your Android device. It has many features and benefits that make it one of the best emulators available for Android. However, it also has some requirements and challenges that you need to be aware of before using it on your device. You need to download, install, and use it properly according to your device's specifications and preferences. You also need to troubleshoot some errors and issues that may arise during its operation.

-

In this article, I have provided you with a comprehensive guide on how to download, install, and use Dolphin Emulator Android 6.0 APK on your Android device. I have also answered some frequently asked questions and shared some user reviews about this emulator. I hope this article has been helpful and informative for you.

-

If you have any questions, comments, feedback, or suggestions about this article or Dolphin Emulator Android 6.0 APK, please feel free to leave them below. I would love to hear from you and help you out. Thank you for reading and happy gaming!

-

Frequently Asked Questions

-

Here are some of the most frequently asked questions about Dolphin Emulator Android 6.0 APK:

-

Is Dolphin Emulator Android 6.0 APK legal?

-

Dolphin Emulator Android 6.0 APK is legal as long as you use it for personal and non-commercial purposes. Dolphin Emulator is a free and open-source software that does not violate any intellectual property rights or laws. However, the games that you play on Dolphin Emulator may be subject to copyright and licensing restrictions. You should only play games that you own legally from your own discs or backups. You should not download, share, or distribute games that you do not own or have permission to use.

-

Is Dolphin Emulator Android 6.0 APK safe?

-

Dolphin Emulator Android 6.0 APK is safe as long as you download it from a reliable and reputable source, such as the official website or source. You should also verify the integrity and safety of the downloaded file before installing it on your device. You should also scan the file with a reputable antivirus or anti-malware program to detect and remove any malicious code or modification that may be hidden in the file. You should also grant permissions and overcome security restrictions that may prevent Dolphin Emulator from accessing certain features and functions of your device.

-

Is Dolphin Emulator Android 6.0 APK compatible with my device?

-

Dolphin Emulator Android 6.0 APK is compatible with most Android devices that run on Android 5.0 (Lollipop) or higher and have a 64-bit processor (ARMv8 or x86_64). However, some devices may not be able to run Dolphin Emulator or some games smoothly or at all due to their hardware limitations or software issues. You should check your device's specifications and compare them with the minimum system requirements for Dolphin Emulator and the game you want to play. You should also check the compatibility list on the official website or the game wiki to see if your device and game are compatible with Dolphin Emulator.

-

How can I improve the performance of Dolphin Emulator Android 6.0 APK?

-

You can improve the performance of Dolphin Emulator Android 6.0 APK by following these tips:

- -

How can I get more games for Dolphin Emulator Android 6.0 APK?

-

You can get more games for Dolphin Emulator Android 6.0 APK by following these steps:

-
    -
  1. If you have the original GameCube or Wii discs, you can use a disc drive and a software tool, such as CleanRip, RawDump, or FriiDump to rip the discs and create ISO or WBFS files on your computer. You can also use a modded Wii console and a software tool, such as USB Loader GX, WiiFlow, or CFG USB Loader to rip the discs and create ISO or WBFS files on a USB drive.
  2. -
  3. If you have backup copies of GameCube or Wii games, you can use a software tool, such as Wii Backup Manager, Witgui, or Wii Backup Fusion to convert them into ISO or WBFS files on your computer.
  4. -
  5. If you want to download GameCube or Wii games from the internet, you can use various websites or sources that offer them legally and safely. However, you should be careful when downloading from these sources, as they may not be trustworthy or safe. Some of them may contain malware, viruses, spyware, adware, or other unwanted programs that can harm your device or compromise your privacy. Some of them may also provide fake or corrupted files that may not work properly or cause errors and issues with your emulator.
  6. -
  7. Once you have the ISO or WBFS files of the games you want to play, you need to transfer them to your Android device's storage. You can use a USB cable, a microSD card, a cloud service, or a wireless method to do so.
  8. -
  9. On your Android device, launch Dolphin Emulator and tap on the Add Folder button on the top right corner of the screen. This will allow you to browse your device's storage and select the folder where you stored your ISO or WBFS files.
  10. -
  11. Dolphin Emulator will scan the folder and display all the games that it can recognize in a grid view. You can tap on any game to see more details about it, such as title, region, size, rating, etc.
  12. -
  13. To load a game, simply tap on its icon and wait for Dolphin Emulator to launch it. You will see a loading screen with some information about the game and the emulator's status.
  14. -
  15. Once the game is loaded, you can start playing it on your Android device using either touch controls or physical controllers.
  16. -
-

User Reviews

-

Here are some of the user reviews about Dolphin Emulator Android 6.0 APK from various sources:

-

Positive Reviews

- -

Negative Reviews

- -

-

This is the end of the article. I hope you enjoyed reading it and learned something new about Dolphin Emulator Android 6.0 APK. If you did, please share it with your friends and family who may also be interested in this topic. If you have any questions, comments, feedback, or suggestions about this article or Dolphin Emulator Android 6.0 APK, please leave them below. I would love to hear from you and help you out. Thank you for reading and happy gaming!

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Download Age of Conquest IV and Create Your Own Custom Maps and Scenarios.md b/spaces/1phancelerku/anime-remove-background/Download Age of Conquest IV and Create Your Own Custom Maps and Scenarios.md deleted file mode 100644 index d85ea6f06cb3f606f588d19a67988028108f3b15..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Download Age of Conquest IV and Create Your Own Custom Maps and Scenarios.md +++ /dev/null @@ -1,190 +0,0 @@ - -

Age of Conquest IV: A Turn-Based Grand Strategy Wargame

-

Do you love strategy games that let you command your armies in historical and fictional scenarios? Do you enjoy playing solo or with your friends in cross-platform multiplayer matches? Do you want to create your own custom maps and scenarios with a map editor? If you answered yes to any of these questions, then you might want to check out Age of Conquest IV, a turn-based grand strategy wargame that offers all these features and more.

-

age of conquest 4 download


Download Zip ★★★★★ https://jinyurl.com/2uNUfM



-

What is Age of Conquest IV?

-

Age of Conquest IV is a game developed and published by Noble Master LLC, a small indie studio based in Hawaii. It was released in 2016 for Windows, Mac, Linux, Android, iOS, and web browsers. It is the fourth installment in the Age of Conquest series, which started in 2002 as a Java applet game.

-

Age of Conquest IV is a game that lets you create your own warring experience by choosing from hundreds of factions and maps that span from ancient to modern times. You can play as the Roman Empire, the Inca, France, Russia, Japan, or the Chinese Dynasties, among many others. You can also play on maps that depict Europe, Colonization, Asian Empires, American Wars, World Conquest, and more.

-

The game is turn-based, meaning that you and your opponents take turns to move your units, build your economy, conduct diplomacy, and wage war. The game has a streamlined user interface that makes it easy to learn and play. You can play against the computer AI, which has different difficulty levels and personalities. You can also play online or locally with other players in cross-platform multiplayer matches. You can form alliances and fight co-op style with the AI and other players for ultimate victory.

-

Features of Age of Conquest IV

-

Age of Conquest IV has many features that make it a fun and challenging game for strategy lovers. Here are some of them:

-

Ancient to Modern

-

The game offers a variety of map scenarios that cover different time periods and regions of the world. You can play on historical maps that depict real events and conflicts, such as the Rise of Rome, the Hundred Years' War, the Napoleonic Wars, or the Cold War. You can also play on fictional maps that imagine alternative scenarios or fantasy worlds, such as Middle Earth, Westeros, or Atlantis.

-

Diplomacy & Economy

-

The game also features a diplomacy and economy system that adds depth and realism to the gameplay. You can negotiate with other factions for peace, trade, alliances, or war. You can also manage your population, happiness, and taxes in each province. You have to balance your income and expenses, as well as deal with rebellions and revolts if your people are unhappy.

-

Single & Multiplayer

-

The game allows you to play solo or with others in various modes. You can play skirmish matches against the AI or hotseat with friends and family on the same device. You can also play online with other players from around the world in cross-platform multiplayer matches. The game has a ranking and rating system that tracks your performance and skill level. You can also chat with other players and join clans for more social interaction.

-

Modding

-

The game also supports modding, which means that you can create your own custom maps and scenarios with a map editor. You can use the built-in tools to design your own terrain, provinces, factions , and units. You can also import and export your maps and share them with other players. You can also download and play maps created by other players from the online map store. You can rate and comment on the maps you play and give feedback to the creators.

-

How to Download Age of Conquest IV?

-

If you are interested in playing Age of Conquest IV, you have several options to download the game. Here are some of them:

-


-

Direct Downloads

-

You can download the game directly from the official website of Noble Master LLC. The website offers downloads for Windows, Mac, Linux, Android, and iOS devices. You can also play the game online on your web browser without downloading anything. The direct downloads are free, but they have some limitations, such as fewer maps and factions, and no multiplayer mode. You can unlock the full version of the game by purchasing a license key for $4.99 USD.

-

3rd Party Downloads

-

You can also download the game from 3rd party platforms, such as Steam, Google Play, App Store, or Amazon. These platforms offer the full version of the game for a similar price as the direct downloads. You can also enjoy some additional features, such as achievements, leaderboards, cloud saves, and more. However, you may need to create an account and install additional software to use these platforms.

-

What are the System Requirements for Age of Conquest IV?

-

Age of Conquest IV is a relatively low-spec game that can run on most devices. However, you may still want to check the system requirements before downloading the game to ensure a smooth gameplay experience. Here are the minimum and recommended requirements for the game:

-

Minimum Requirements


Recommended Requirements

-
| OS | CPU | RAM | Graphics | Storage |
| --- | --- | --- | --- | --- |
| Windows XP or later | 1 GHz single-core processor | 512 MB | OpenGL 2.0 compatible with 128 MB VRAM | 150 MB |
| Mac OS X 10.7 or later | 1 GHz single-core processor | 512 MB | OpenGL 2.0 compatible with 128 MB VRAM | 150 MB |
| Linux (Ubuntu 12.04 or later) | 1 GHz single-core processor | 512 MB | OpenGL 2.0 compatible with 128 MB VRAM | 150 MB |
| Android 4.0 or later | 1 GHz single-core processor | 512 MB | N/A | N/A |
| iOS 8.0 or later | N/A | N/A | N/A | N/A |
| Web Browser (Chrome, Firefox, Safari, Edge) | N/A | N/A | N/A | N/A |

How to Play Age of Conquest IV?

-

If you have downloaded and installed Age of Conquest IV, you may be wondering how to play the game. Here are some steps to help you get started:

-

Tutorial

-

The game has a tutorial mode that teaches you the basics of the game, such as how to move your units, build your economy, conduct diplomacy, and wage war. The tutorial mode consists of several missions that guide you through different aspects of the game. You can access the tutorial mode from the main menu by clicking on the "Tutorial" button. You can also watch video tutorials on the official website or YouTube channel of Noble Master LLC.

-

Tips and Tricks

-

The game also has a tips and tricks section that gives you some useful advice and information on how to play the game better. You can access the tips and tricks section from the main menu by clicking on the "Tips & Tricks" button. You can also find more tips and tricks on the official forum or wiki of Noble Master LLC.

-

Conclusion

-

Age of Conquest IV is a turn-based grand strategy wargame that lets you create your own warring experience by choosing from hundreds of factions and maps that span from ancient to modern times. You can play solo or with others in cross-platform multiplayer matches. You can also create your own custom maps and scenarios with a map editor. The game is easy to learn and play, but challenging and rewarding to master. If you are a fan of strategy games, you should definitely give Age of Conquest IV a try.

-

If you have any questions or feedback about the game, you can contact Noble Master LLC through their official website, email, or social media accounts. You can also join their community of players and modders on their forum, wiki, Discord, or Reddit.

-

Frequently Asked Questions (FAQs)

-

Here are some common questions and answers about Age of Conquest IV:

-
  1. Is Age of Conquest IV free?

     The game is free to download and play, but it has some limitations, such as fewer maps and factions, and no multiplayer mode. You can unlock the full version of the game by purchasing a license key for $4.99 USD.

  2. Is Age of Conquest IV online?

     The game has an online mode that allows you to play with other players from around the world in cross-platform multiplayer matches. You need an internet connection and an account to play online.

  3. Is Age of Conquest IV offline?

     The game has an offline mode that allows you to play solo or hotseat with friends and family on the same device. You do not need an internet connection or an account to play offline.

  4. Is Age of Conquest IV historical?

     The game has historical maps that depict real events and conflicts, such as the Rise of Rome, the Hundred Years' War, the Napoleonic Wars, or the Cold War. The game also has fictional maps that imagine alternative scenarios or fantasy worlds, such as Middle Earth, Westeros, or Atlantis.

  5. Is Age of Conquest IV realistic?

     The game is not meant to be a realistic simulation of history or warfare, but rather a fun and challenging strategy game that offers a variety of map scenarios and gameplay options. The game does not aim to be historically accurate or politically correct, but rather to provide an enjoyable, creative, and diverse warring experience.

    -
    -
    \ No newline at end of file diff --git a/spaces/1toTree/lora_test/env.py b/spaces/1toTree/lora_test/env.py deleted file mode 100644 index 29997bf1a7590c3d3e44aa85fa0948565a123e60..0000000000000000000000000000000000000000 --- a/spaces/1toTree/lora_test/env.py +++ /dev/null @@ -1,13 +0,0 @@ -############################################################################################################################ -# 修改下面的参数 -# (1)BASE_MODEL_NAME 代表你训练的基础模型 -BASE_MODEL_NAME = "runwayml/stable-diffusion-v1-5" - -# 是否开启lora -# (2)LORA_WEIGHTS_PATH 代码你上传到huggingface后的lora权重。 -# LORA_WEIGHTS_PATH = None 表示不适应lora -LORA_WEIGHTS_PATH = "1toTree/demo_test" - -# (3)PROMPTS 需要展示的prompt文本 -PROMPTS = "cartoon face" -############################################################################################################################ \ No newline at end of file diff --git a/spaces/2023Liu2023/bingo/src/components/chat-panel.tsx b/spaces/2023Liu2023/bingo/src/components/chat-panel.tsx deleted file mode 100644 index 1fbc3c2bf05b914e0c229661832fbb560745f488..0000000000000000000000000000000000000000 --- a/spaces/2023Liu2023/bingo/src/components/chat-panel.tsx +++ /dev/null @@ -1,153 +0,0 @@ -'use client' - -import * as React from 'react' -import Image from 'next/image' -import Textarea from 'react-textarea-autosize' -import { useAtomValue } from 'jotai' -import { useEnterSubmit } from '@/lib/hooks/use-enter-submit' -import { cn } from '@/lib/utils' - -import BrushIcon from '@/assets/images/brush.svg' -import ChatIcon from '@/assets/images/chat.svg' -import VisualSearchIcon from '@/assets/images/visual-search.svg' -import SendIcon from '@/assets/images/send.svg' -import PinIcon from '@/assets/images/pin.svg' -import PinFillIcon from '@/assets/images/pin-fill.svg' - -import { useBing } from '@/lib/hooks/use-bing' -import { voiceListenAtom } from '@/state' -import Voice from './voice' -import { ChatImage } from './chat-image' -import { ChatAttachments } from './chat-attachments' - -export interface ChatPanelProps - extends Pick< - ReturnType, - | 'generating' - | 'input' - | 'setInput' - | 'sendMessage' - | 'resetConversation' - | 'isSpeaking' - | 'attachmentList' - | 'uploadImage' - | 'setAttachmentList' - > { - id?: string - className?: string -} - -export function ChatPanel({ - isSpeaking, - generating, - input, - setInput, - className, - sendMessage, - resetConversation, - attachmentList, - uploadImage, - setAttachmentList -}: ChatPanelProps) { - const inputRef = React.useRef(null) - const {formRef, onKeyDown} = useEnterSubmit() - const [focused, setFocused] = React.useState(false) - const [active, setActive] = React.useState(false) - const [pin, setPin] = React.useState(false) - const [tid, setTid] = React.useState() - const voiceListening = useAtomValue(voiceListenAtom) - - const setBlur = React.useCallback(() => { - clearTimeout(tid) - setActive(false) - const _tid = setTimeout(() => setFocused(false), 2000); - setTid(_tid) - }, [tid]) - - const setFocus = React.useCallback(() => { - setFocused(true) - setActive(true) - clearTimeout(tid) - inputRef.current?.focus() - }, [tid]) - - React.useEffect(() => { - if (input) { - setFocus() - } - }, [input]) - - return ( -
    { - e.preventDefault() - if (generating) { - return; - } - if (!input?.trim()) { - return - } - setInput('') - setPin(false) - await sendMessage(input) - }} - ref={formRef} - > -
    -
    -
    -
    -
    -
    -
    - -
    -
    -
    -
    - - \n\n
    \n\n Automatically save\n values to localStorage on exit.\n\n
    The values saved to localStorage will\n override those passed to dat.GUI\'s constructor. This makes it\n easier to work incrementally, but localStorage is fragile,\n and your friends may not see the same values you do.\n\n
    \n\n
    \n\n
    ',ControllerFactory=function(e,t){var o=e[t];return Common.isArray(arguments[2])||Common.isObject(arguments[2])?new OptionController(e,t,arguments[2]):Common.isNumber(o)?Common.isNumber(arguments[2])&&Common.isNumber(arguments[3])?Common.isNumber(arguments[4])?new NumberControllerSlider(e,t,arguments[2],arguments[3],arguments[4]):new NumberControllerSlider(e,t,arguments[2],arguments[3]):Common.isNumber(arguments[4])?new NumberControllerBox(e,t,{min:arguments[2],max:arguments[3],step:arguments[4]}):new NumberControllerBox(e,t,{min:arguments[2],max:arguments[3]}):Common.isString(o)?new StringController(e,t):Common.isFunction(o)?new FunctionController(e,t,""):Common.isBoolean(o)?new BooleanController(e,t):null};function requestAnimationFrame(e){setTimeout(e,1e3/60)}var requestAnimationFrame$1=window.requestAnimationFrame||window.webkitRequestAnimationFrame||window.mozRequestAnimationFrame||window.oRequestAnimationFrame||window.msRequestAnimationFrame||requestAnimationFrame,CenteredDiv=function(){function e(){classCallCheck(this,e),this.backgroundElement=document.createElement("div"),Common.extend(this.backgroundElement.style,{backgroundColor:"rgba(0,0,0,0.8)",top:0,left:0,display:"none",zIndex:"1000",opacity:0,WebkitTransition:"opacity 0.2s linear",transition:"opacity 0.2s linear"}),dom.makeFullscreen(this.backgroundElement),this.backgroundElement.style.position="fixed",this.domElement=document.createElement("div"),Common.extend(this.domElement.style,{position:"fixed",display:"none",zIndex:"1001",opacity:0,WebkitTransition:"-webkit-transform 0.2s ease-out, opacity 0.2s linear",transition:"transform 0.2s ease-out, opacity 0.2s linear"}),document.body.appendChild(this.backgroundElement),document.body.appendChild(this.domElement);var t=this;dom.bind(this.backgroundElement,"click",function(){t.hide()})}return createClass(e,[{key:"show",value:function(){var e=this;this.backgroundElement.style.display="block",this.domElement.style.display="block",this.domElement.style.opacity=0,this.domElement.style.webkitTransform="scale(1.1)",this.layout(),Common.defer(function(){e.backgroundElement.style.opacity=1,e.domElement.style.opacity=1,e.domElement.style.webkitTransform="scale(1)"})}},{key:"hide",value:function(){var e=this,t=function t(){e.domElement.style.display="none",e.backgroundElement.style.display="none",dom.unbind(e.domElement,"webkitTransitionEnd",t),dom.unbind(e.domElement,"transitionend",t),dom.unbind(e.domElement,"oTransitionEnd",t)};dom.bind(this.domElement,"webkitTransitionEnd",t),dom.bind(this.domElement,"transitionend",t),dom.bind(this.domElement,"oTransitionEnd",t),this.backgroundElement.style.opacity=0,this.domElement.style.opacity=0,this.domElement.style.webkitTransform="scale(1.1)"}},{key:"layout",value:function(){this.domElement.style.left=window.innerWidth/2-dom.getWidth(this.domElement)/2+"px",this.domElement.style.top=window.innerHeight/2-dom.getHeight(this.domElement)/2+"px"}}]),e}(),styleSheet=___$insertStyle(".dg ul{list-style:none;margin:0;padding:0;width:100%;clear:both}.dg.ac{position:fixed;top:0;left:0;right:0;height:0;z-index:0}.dg:not(.ac) .main{overflow:hidden}.dg.main{-webkit-transition:opacity .1s linear;-o-transition:opacity .1s linear;-moz-transition:opacity .1s linear;transition:opacity .1s linear}.dg.main.taller-than-window{overflow-y:auto}.dg.main.taller-than-window .close-button{opacity:1;margin-top:-1px;border-top:1px solid #2c2c2c}.dg.main ul.closed .close-button{opacity:1 !important}.dg.main:hover .close-button,.dg.main 
.close-button.drag{opacity:1}.dg.main .close-button{-webkit-transition:opacity .1s linear;-o-transition:opacity .1s linear;-moz-transition:opacity .1s linear;transition:opacity .1s linear;border:0;line-height:19px;height:20px;cursor:pointer;text-align:center;background-color:#000}.dg.main .close-button.close-top{position:relative}.dg.main .close-button.close-bottom{position:absolute}.dg.main .close-button:hover{background-color:#111}.dg.a{float:right;margin-right:15px;overflow-y:visible}.dg.a.has-save>ul.close-top{margin-top:0}.dg.a.has-save>ul.close-bottom{margin-top:27px}.dg.a.has-save>ul.closed{margin-top:0}.dg.a .save-row{top:0;z-index:1002}.dg.a .save-row.close-top{position:relative}.dg.a .save-row.close-bottom{position:fixed}.dg li{-webkit-transition:height .1s ease-out;-o-transition:height .1s ease-out;-moz-transition:height .1s ease-out;transition:height .1s ease-out;-webkit-transition:overflow .1s linear;-o-transition:overflow .1s linear;-moz-transition:overflow .1s linear;transition:overflow .1s linear}.dg li:not(.folder){cursor:auto;height:27px;line-height:27px;padding:0 4px 0 5px}.dg li.folder{padding:0;border-left:4px solid rgba(0,0,0,0)}.dg li.title{cursor:pointer;margin-left:-4px}.dg .closed li:not(.title),.dg .closed ul li,.dg .closed ul li>*{height:0;overflow:hidden;border:0}.dg .cr{clear:both;padding-left:3px;height:27px;overflow:hidden}.dg .property-name{cursor:default;float:left;clear:left;width:40%;overflow:hidden;text-overflow:ellipsis}.dg .c{float:left;width:60%;position:relative}.dg .c input[type=text]{border:0;margin-top:4px;padding:3px;width:100%;float:right}.dg .has-slider input[type=text]{width:30%;margin-left:0}.dg .slider{float:left;width:66%;margin-left:-5px;margin-right:0;height:19px;margin-top:4px}.dg .slider-fg{height:100%}.dg .c input[type=checkbox]{margin-top:7px}.dg .c select{margin-top:5px}.dg .cr.function,.dg .cr.function .property-name,.dg .cr.function *,.dg .cr.boolean,.dg .cr.boolean *{cursor:pointer}.dg .cr.color{overflow:visible}.dg .selector{display:none;position:absolute;margin-left:-9px;margin-top:23px;z-index:10}.dg .c:hover .selector,.dg .selector.drag{display:block}.dg li.save-row{padding:0}.dg li.save-row .button{display:inline-block;padding:0px 6px}.dg.dialogue{background-color:#222;width:460px;padding:15px;font-size:13px;line-height:15px}#dg-new-constructor{padding:10px;color:#222;font-family:Monaco, monospace;font-size:10px;border:0;resize:none;box-shadow:inset 1px 1px 1px #888;word-wrap:break-word;margin:12px 0;display:block;width:440px;overflow-y:scroll;height:100px;position:relative}#dg-local-explain{display:none;font-size:11px;line-height:17px;border-radius:3px;background-color:#333;padding:8px;margin-top:10px}#dg-local-explain code{font-size:10px}#dat-gui-save-locally{display:none}.dg{color:#eee;font:11px 'Lucida Grande', sans-serif;text-shadow:0 -1px 0 #111}.dg.main::-webkit-scrollbar{width:5px;background:#1a1a1a}.dg.main::-webkit-scrollbar-corner{height:0;display:none}.dg.main::-webkit-scrollbar-thumb{border-radius:5px;background:#676767}.dg li:not(.folder){background:#1a1a1a;border-bottom:1px solid #2c2c2c}.dg li.save-row{line-height:25px;background:#dad5cb;border:0}.dg li.save-row select{margin-left:5px;width:108px}.dg li.save-row .button{margin-left:5px;margin-top:1px;border-radius:2px;font-size:9px;line-height:7px;padding:4px 4px 5px 4px;background:#c5bdad;color:#fff;text-shadow:0 1px 0 #b0a58f;box-shadow:0 -1px 0 #b0a58f;cursor:pointer}.dg li.save-row .button.gears{background:#c5bdad 
url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAsAAAANCAYAAAB/9ZQ7AAAAGXRFWHRTb2Z0d2FyZQBBZG9iZSBJbWFnZVJlYWR5ccllPAAAAQJJREFUeNpiYKAU/P//PwGIC/ApCABiBSAW+I8AClAcgKxQ4T9hoMAEUrxx2QSGN6+egDX+/vWT4e7N82AMYoPAx/evwWoYoSYbACX2s7KxCxzcsezDh3evFoDEBYTEEqycggWAzA9AuUSQQgeYPa9fPv6/YWm/Acx5IPb7ty/fw+QZblw67vDs8R0YHyQhgObx+yAJkBqmG5dPPDh1aPOGR/eugW0G4vlIoTIfyFcA+QekhhHJhPdQxbiAIguMBTQZrPD7108M6roWYDFQiIAAv6Aow/1bFwXgis+f2LUAynwoIaNcz8XNx3Dl7MEJUDGQpx9gtQ8YCueB+D26OECAAQDadt7e46D42QAAAABJRU5ErkJggg==) 2px 1px no-repeat;height:7px;width:8px}.dg li.save-row .button:hover{background-color:#bab19e;box-shadow:0 -1px 0 #b0a58f}.dg li.folder{border-bottom:0}.dg li.title{padding-left:16px;background:#000 url(data:image/gif;base64,R0lGODlhBQAFAJEAAP////Pz8////////yH5BAEAAAIALAAAAAAFAAUAAAIIlI+hKgFxoCgAOw==) 6px 10px no-repeat;cursor:pointer;border-bottom:1px solid rgba(255,255,255,0.2)}.dg .closed li.title{background-image:url(data:image/gif;base64,R0lGODlhBQAFAJEAAP////Pz8////////yH5BAEAAAIALAAAAAAFAAUAAAIIlGIWqMCbWAEAOw==)}.dg .cr.boolean{border-left:3px solid #806787}.dg .cr.color{border-left:3px solid}.dg .cr.function{border-left:3px solid #e61d5f}.dg .cr.number{border-left:3px solid #2FA1D6}.dg .cr.number input[type=text]{color:#2FA1D6}.dg .cr.string{border-left:3px solid #1ed36f}.dg .cr.string input[type=text]{color:#1ed36f}.dg .cr.function:hover,.dg .cr.boolean:hover{background:#111}.dg .c input[type=text]{background:#303030;outline:none}.dg .c input[type=text]:hover{background:#3c3c3c}.dg .c input[type=text]:focus{background:#494949;color:#fff}.dg .c .slider{background:#303030;cursor:ew-resize}.dg .c .slider-fg{background:#2FA1D6;max-width:100%}.dg .c .slider:hover{background:#3c3c3c}.dg .c .slider:hover .slider-fg{background:#44abda}\n");css.inject(styleSheet);var CSS_NAMESPACE="dg",HIDE_KEY_CODE=72,CLOSE_BUTTON_HEIGHT=20,DEFAULT_DEFAULT_PRESET_NAME="Default",SUPPORTS_LOCAL_STORAGE=function(){try{return!!window.localStorage}catch(e){return!1}}(),SAVE_DIALOGUE=void 0,autoPlaceVirgin=!0,autoPlaceContainer=void 0,hide=!1,hideableGuis=[],GUI=function e(t){var o=this,n=t||{};this.domElement=document.createElement("div"),this.__ul=document.createElement("ul"),this.domElement.appendChild(this.__ul),dom.addClass(this.domElement,CSS_NAMESPACE),this.__folders={},this.__controllers=[],this.__rememberedObjects=[],this.__rememberedObjectIndecesToControllers=[],this.__listening=[],n=Common.defaults(n,{closeOnTop:!1,autoPlace:!0,width:e.DEFAULT_WIDTH}),n=Common.defaults(n,{resizable:n.autoPlace,hideable:n.autoPlace}),Common.isUndefined(n.load)?n.load={preset:DEFAULT_DEFAULT_PRESET_NAME}:n.preset&&(n.load.preset=n.preset),Common.isUndefined(n.parent)&&n.hideable&&hideableGuis.push(this),n.resizable=Common.isUndefined(n.parent)&&n.resizable,n.autoPlace&&Common.isUndefined(n.scrollable)&&(n.scrollable=!0);var i,r=SUPPORTS_LOCAL_STORAGE&&"true"===localStorage.getItem(getLocalStorageHash(this,"isLocal")),s=void 0,a=void 0;if(Object.defineProperties(this,{parent:{get:function(){return n.parent}},scrollable:{get:function(){return n.scrollable}},autoPlace:{get:function(){return n.autoPlace}},closeOnTop:{get:function(){return n.closeOnTop}},preset:{get:function(){return o.parent?o.getRoot().preset:n.load.preset},set:function(e){o.parent?o.getRoot().preset=e:n.load.preset=e,setPresetSelectIndex(this),o.revert()}},width:{get:function(){return n.width},set:function(e){n.width=e,setWidth(o,e)}},name:{get:function(){return n.name},set:function(e){n.name=e,a&&(a.innerHTML=n.name)}},closed:{get:function(){return 
n.closed},set:function(t){n.closed=t,n.closed?dom.addClass(o.__ul,e.CLASS_CLOSED):dom.removeClass(o.__ul,e.CLASS_CLOSED),this.onResize(),o.__closeButton&&(o.__closeButton.innerHTML=t?e.TEXT_OPEN:e.TEXT_CLOSED)}},load:{get:function(){return n.load}},useLocalStorage:{get:function(){return r},set:function(e){SUPPORTS_LOCAL_STORAGE&&(r=e,e?dom.bind(window,"unload",s):dom.unbind(window,"unload",s),localStorage.setItem(getLocalStorageHash(o,"isLocal"),e))}}}),Common.isUndefined(n.parent)){if(this.closed=n.closed||!1,dom.addClass(this.domElement,e.CLASS_MAIN),dom.makeSelectable(this.domElement,!1),SUPPORTS_LOCAL_STORAGE&&r){o.useLocalStorage=!0;var l=localStorage.getItem(getLocalStorageHash(this,"gui"));l&&(n.load=JSON.parse(l))}this.__closeButton=document.createElement("div"),this.__closeButton.innerHTML=e.TEXT_CLOSED,dom.addClass(this.__closeButton,e.CLASS_CLOSE_BUTTON),n.closeOnTop?(dom.addClass(this.__closeButton,e.CLASS_CLOSE_TOP),this.domElement.insertBefore(this.__closeButton,this.domElement.childNodes[0])):(dom.addClass(this.__closeButton,e.CLASS_CLOSE_BOTTOM),this.domElement.appendChild(this.__closeButton)),dom.bind(this.__closeButton,"click",function(){o.closed=!o.closed})}else{void 0===n.closed&&(n.closed=!0);var d=document.createTextNode(n.name);dom.addClass(d,"controller-name"),a=addRow(o,d);dom.addClass(this.__ul,e.CLASS_CLOSED),dom.addClass(a,"title"),dom.bind(a,"click",function(e){return e.preventDefault(),o.closed=!o.closed,!1}),n.closed||(this.closed=!1)}n.autoPlace&&(Common.isUndefined(n.parent)&&(autoPlaceVirgin&&(autoPlaceContainer=document.createElement("div"),dom.addClass(autoPlaceContainer,CSS_NAMESPACE),dom.addClass(autoPlaceContainer,e.CLASS_AUTO_PLACE_CONTAINER),document.body.appendChild(autoPlaceContainer),autoPlaceVirgin=!1),autoPlaceContainer.appendChild(this.domElement),dom.addClass(this.domElement,e.CLASS_AUTO_PLACE)),this.parent||setWidth(o,n.width)),this.__resizeHandler=function(){o.onResizeDebounced()},dom.bind(window,"resize",this.__resizeHandler),dom.bind(this.__ul,"webkitTransitionEnd",this.__resizeHandler),dom.bind(this.__ul,"transitionend",this.__resizeHandler),dom.bind(this.__ul,"oTransitionEnd",this.__resizeHandler),this.onResize(),n.resizable&&addResizeHandle(this),s=function(){SUPPORTS_LOCAL_STORAGE&&"true"===localStorage.getItem(getLocalStorageHash(o,"isLocal"))&&localStorage.setItem(getLocalStorageHash(o,"gui"),JSON.stringify(o.getSaveObject()))},this.saveToLocalStorageIfPossible=s,n.parent||((i=o.getRoot()).width+=1,Common.defer(function(){i.width-=1}))};function addRow(e,t,o){var n=document.createElement("li");return t&&n.appendChild(t),o?e.__ul.insertBefore(n,o):e.__ul.appendChild(n),e.onResize(),n}function removeListeners(e){dom.unbind(window,"resize",e.__resizeHandler),e.saveToLocalStorageIfPossible&&dom.unbind(window,"unload",e.saveToLocalStorageIfPossible)}function markPresetModified(e,t){var o=e.__preset_select[e.__preset_select.selectedIndex];o.innerHTML=t?o.value+"*":o.value}function augmentController(e,t,o){if(o.__li=t,o.__gui=e,Common.extend(o,{options:function(t){if(arguments.length>1){var n=o.__li.nextElementSibling;return o.remove(),_add(e,o.object,o.property,{before:n,factoryArgs:[Common.toArray(arguments)]})}if(Common.isArray(t)||Common.isObject(t)){var i=o.__li.nextElementSibling;return o.remove(),_add(e,o.object,o.property,{before:i,factoryArgs:[t]})}},name:function(e){return o.__li.firstElementChild.firstElementChild.innerHTML=e,o},listen:function(){return o.__gui.listen(o),o},remove:function(){return o.__gui.remove(o),o}}),o 
instanceof NumberControllerSlider){var n=new NumberControllerBox(o.object,o.property,{min:o.__min,max:o.__max,step:o.__step});Common.each(["updateDisplay","onChange","onFinishChange","step","min","max"],function(e){var t=o[e],i=n[e];o[e]=n[e]=function(){var e=Array.prototype.slice.call(arguments);return i.apply(n,e),t.apply(o,e)}}),dom.addClass(t,"has-slider"),o.domElement.insertBefore(n.domElement,o.domElement.firstElementChild)}else if(o instanceof NumberControllerBox){var i=function(t){if(Common.isNumber(o.__min)&&Common.isNumber(o.__max)){var n=o.__li.firstElementChild.firstElementChild.innerHTML,i=o.__gui.__listening.indexOf(o)>-1;o.remove();var r=_add(e,o.object,o.property,{before:o.__li.nextElementSibling,factoryArgs:[o.__min,o.__max,o.__step]});return r.name(n),i&&r.listen(),r}return t};o.min=Common.compose(i,o.min),o.max=Common.compose(i,o.max)}else o instanceof BooleanController?(dom.bind(t,"click",function(){dom.fakeEvent(o.__checkbox,"click")}),dom.bind(o.__checkbox,"click",function(e){e.stopPropagation()})):o instanceof FunctionController?(dom.bind(t,"click",function(){dom.fakeEvent(o.__button,"click")}),dom.bind(t,"mouseover",function(){dom.addClass(o.__button,"hover")}),dom.bind(t,"mouseout",function(){dom.removeClass(o.__button,"hover")})):o instanceof ColorController&&(dom.addClass(t,"color"),o.updateDisplay=Common.compose(function(e){return t.style.borderLeftColor=o.__color.toString(),e},o.updateDisplay),o.updateDisplay());o.setValue=Common.compose(function(t){return e.getRoot().__preset_select&&o.isModified()&&markPresetModified(e.getRoot(),!0),t},o.setValue)}function recallSavedValue(e,t){var o=e.getRoot(),n=o.__rememberedObjects.indexOf(t.object);if(-1!==n){var i=o.__rememberedObjectIndecesToControllers[n];if(void 0===i&&(i={},o.__rememberedObjectIndecesToControllers[n]=i),i[t.property]=t,o.load&&o.load.remembered){var r=o.load.remembered,s=void 0;if(r[e.preset])s=r[e.preset];else{if(!r[DEFAULT_DEFAULT_PRESET_NAME])return;s=r[DEFAULT_DEFAULT_PRESET_NAME]}if(s[n]&&void 0!==s[n][t.property]){var a=s[n][t.property];t.initialValue=a,t.setValue(a)}}}}function _add(e,t,o,n){if(void 0===t[o])throw new Error('Object "'+t+'" has no property "'+o+'"');var i=void 0;if(n.color)i=new ColorController(t,o);else{var r=[t,o].concat(n.factoryArgs);i=ControllerFactory.apply(e,r)}n.before instanceof Controller&&(n.before=n.before.__li),recallSavedValue(e,i),dom.addClass(i.domElement,"c");var s=document.createElement("span");dom.addClass(s,"property-name"),s.innerHTML=i.property;var a=document.createElement("div");a.appendChild(s),a.appendChild(i.domElement);var l=addRow(e,a,n.before);return dom.addClass(l,GUI.CLASS_CONTROLLER_ROW),i instanceof ColorController?dom.addClass(l,"color"):dom.addClass(l,_typeof(i.getValue())),augmentController(e,l,i),e.__controllers.push(i),i}function getLocalStorageHash(e,t){return document.location.href+"."+t}function addPresetOption(e,t,o){var n=document.createElement("option");n.innerHTML=t,n.value=t,e.__preset_select.appendChild(n),o&&(e.__preset_select.selectedIndex=e.__preset_select.length-1)}function showHideExplain(e,t){t.style.display=e.useLocalStorage?"block":"none"}function addSaveMenu(e){var t=e.__save_row=document.createElement("li");dom.addClass(e.domElement,"has-save"),e.__ul.insertBefore(t,e.__ul.firstChild),dom.addClass(t,"save-row");var o=document.createElement("span");o.innerHTML=" ",dom.addClass(o,"button gears");var n=document.createElement("span");n.innerHTML="Save",dom.addClass(n,"button"),dom.addClass(n,"save");var 
i=document.createElement("span");i.innerHTML="New",dom.addClass(i,"button"),dom.addClass(i,"save-as");var r=document.createElement("span");r.innerHTML="Revert",dom.addClass(r,"button"),dom.addClass(r,"revert");var s=e.__preset_select=document.createElement("select");if(e.load&&e.load.remembered?Common.each(e.load.remembered,function(t,o){addPresetOption(e,o,o===e.preset)}):addPresetOption(e,DEFAULT_DEFAULT_PRESET_NAME,!1),dom.bind(s,"change",function(){for(var t=0;t0&&(e.preset=this.preset,e.remembered||(e.remembered={}),e.remembered[this.preset]=getCurrentPreset(this)),e.folders={},Common.each(this.__folders,function(t,o){e.folders[o]=t.getSaveObject()}),e},save:function(){this.load.remembered||(this.load.remembered={}),this.load.remembered[this.preset]=getCurrentPreset(this),markPresetModified(this,!1),this.saveToLocalStorageIfPossible()},saveAs:function(e){this.load.remembered||(this.load.remembered={},this.load.remembered[DEFAULT_DEFAULT_PRESET_NAME]=getCurrentPreset(this,!0)),this.load.remembered[e]=getCurrentPreset(this),this.preset=e,addPresetOption(this,e,!0),this.saveToLocalStorageIfPossible()},revert:function(e){Common.each(this.__controllers,function(t){this.getRoot().load.remembered?recallSavedValue(e||this.getRoot(),t):t.setValue(t.initialValue),t.__onFinishChange&&t.__onFinishChange.call(t,t.getValue())},this),Common.each(this.__folders,function(e){e.revert(e)}),e||markPresetModified(this.getRoot(),!1)},listen:function(e){var t=0===this.__listening.length;this.__listening.push(e),t&&updateDisplays(this.__listening)},updateDisplay:function(){Common.each(this.__controllers,function(e){e.updateDisplay()}),Common.each(this.__folders,function(e){e.updateDisplay()})}});var color={Color:Color,math:ColorMath,interpret:interpret},controllers={Controller:Controller,BooleanController:BooleanController,OptionController:OptionController,StringController:StringController,NumberController:NumberController,NumberControllerBox:NumberControllerBox,NumberControllerSlider:NumberControllerSlider,FunctionController:FunctionController,ColorController:ColorController},dom$1={dom:dom},gui={GUI:GUI},GUI$1=GUI,index={color:color,controllers:controllers,dom:dom$1,gui:gui,GUI:GUI};export{color,controllers,dom$1 as dom,gui,GUI$1 as GUI};export default index; \ No newline at end of file diff --git a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/infer.py b/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/infer.py deleted file mode 100644 index 4474e972a7c2ad25e1078b6549805dc26164fdbb..0000000000000000000000000000000000000000 --- a/spaces/bigjoker/stable-diffusion-webui/extensions/deforum/scripts/deforum_helpers/src/infer.py +++ /dev/null @@ -1,165 +0,0 @@ -import glob -import os - -import numpy as np -import torch -import torch.nn as nn -from PIL import Image -from torchvision import transforms -from tqdm import tqdm - -import model_io -import utils -from adabins import UnetAdaptiveBins - - -def _is_pil_image(img): - return isinstance(img, Image.Image) - - -def _is_numpy_image(img): - return isinstance(img, np.ndarray) and (img.ndim in {2, 3}) - - -class ToTensor(object): - def __init__(self): - self.normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) - - def __call__(self, image, target_size=(640, 480)): - # image = image.resize(target_size) - image = self.to_tensor(image) - image = self.normalize(image) - return image - - def to_tensor(self, pic): - if not (_is_pil_image(pic) or 
_is_numpy_image(pic)): - raise TypeError( - 'pic should be PIL Image or ndarray. Got {}'.format(type(pic))) - - if isinstance(pic, np.ndarray): - img = torch.from_numpy(pic.transpose((2, 0, 1))) - return img - - # handle PIL Image - if pic.mode == 'I': - img = torch.from_numpy(np.array(pic, np.int32, copy=False)) - elif pic.mode == 'I;16': - img = torch.from_numpy(np.array(pic, np.int16, copy=False)) - else: - img = torch.ByteTensor(torch.ByteStorage.from_buffer(pic.tobytes())) - # PIL image mode: 1, L, P, I, F, RGB, YCbCr, RGBA, CMYK - if pic.mode == 'YCbCr': - nchannel = 3 - elif pic.mode == 'I;16': - nchannel = 1 - else: - nchannel = len(pic.mode) - img = img.view(pic.size[1], pic.size[0], nchannel) - - img = img.transpose(0, 1).transpose(0, 2).contiguous() - if isinstance(img, torch.ByteTensor): - return img.float() - else: - return img - - -class InferenceHelper: - def __init__(self, models_path, dataset='nyu', device='cuda:0'): - self.toTensor = ToTensor() - self.device = device - if dataset == 'nyu': - self.min_depth = 1e-3 - self.max_depth = 10 - self.saving_factor = 1000 # used to save in 16 bit - model = UnetAdaptiveBins.build(n_bins=256, min_val=self.min_depth, max_val=self.max_depth) - pretrained_path = os.path.join(models_path, "AdaBins_nyu.pt") - elif dataset == 'kitti': - self.min_depth = 1e-3 - self.max_depth = 80 - self.saving_factor = 256 - model = UnetAdaptiveBins.build(n_bins=256, min_val=self.min_depth, max_val=self.max_depth) - pretrained_path = os.path.join(models_path, "AdaBins_kitti.pt") - else: - raise ValueError("dataset can be either 'nyu' or 'kitti' but got {}".format(dataset)) - - model, _, _ = model_io.load_checkpoint(pretrained_path, model) - model.eval() - self.model = model.to(self.device) - - @torch.no_grad() - def predict_pil(self, pil_image, visualized=False): - # pil_image = pil_image.resize((640, 480)) - img = np.asarray(pil_image) / 255. - - img = self.toTensor(img).unsqueeze(0).float().to(self.device) - bin_centers, pred = self.predict(img) - - if visualized: - viz = utils.colorize(torch.from_numpy(pred).unsqueeze(0), vmin=None, vmax=None, cmap='magma') - # pred = np.asarray(pred*1000, dtype='uint16') - viz = Image.fromarray(viz) - return bin_centers, pred, viz - return bin_centers, pred - - @torch.no_grad() - def predict(self, image): - bins, pred = self.model(image) - pred = np.clip(pred.cpu().numpy(), self.min_depth, self.max_depth) - - # Flip - image = torch.Tensor(np.array(image.cpu().numpy())[..., ::-1].copy()).to(self.device) - pred_lr = self.model(image)[-1] - pred_lr = np.clip(pred_lr.cpu().numpy()[..., ::-1], self.min_depth, self.max_depth) - - # Take average of original and mirror - final = 0.5 * (pred + pred_lr) - final = nn.functional.interpolate(torch.Tensor(final), image.shape[-2:], - mode='bilinear', align_corners=True).cpu().numpy() - - final[final < self.min_depth] = self.min_depth - final[final > self.max_depth] = self.max_depth - final[np.isinf(final)] = self.max_depth - final[np.isnan(final)] = self.min_depth - - centers = 0.5 * (bins[:, 1:] + bins[:, :-1]) - centers = centers.cpu().squeeze().numpy() - centers = centers[centers > self.min_depth] - centers = centers[centers < self.max_depth] - - return centers, final - - @torch.no_grad() - def predict_dir(self, test_dir, out_dir): - os.makedirs(out_dir, exist_ok=True) - transform = ToTensor() - all_files = glob.glob(os.path.join(test_dir, "*")) - self.model.eval() - for f in tqdm(all_files): - image = np.asarray(Image.open(f), dtype='float32') / 255. 
- image = transform(image).unsqueeze(0).to(self.device) - - centers, final = self.predict(image) - # final = final.squeeze().cpu().numpy() - - final = (final * self.saving_factor).astype('uint16') - basename = os.path.basename(f).split('.')[0] - save_path = os.path.join(out_dir, basename + ".png") - - Image.fromarray(final.squeeze()).save(save_path) - - def to(self, device): - self.device = device - self.model.to(device) - - -if __name__ == '__main__': - import matplotlib.pyplot as plt - from time import time - - img = Image.open("test_imgs/classroom__rgb_00283.jpg") - start = time() - inferHelper = InferenceHelper() - centers, pred = inferHelper.predict_pil(img) - print(f"took :{time() - start}s") - plt.imshow(pred.squeeze(), cmap='magma_r') - plt.show() diff --git a/spaces/bioriAsaeru/text-to-voice/10 Tips for Writing SEO-Friendly Titles That Rank Well.md b/spaces/bioriAsaeru/text-to-voice/10 Tips for Writing SEO-Friendly Titles That Rank Well.md deleted file mode 100644 index df9b61dc5154733efeb5e5e8d7a96c4bfd6dadad..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/10 Tips for Writing SEO-Friendly Titles That Rank Well.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Chai Pani Etc full movie in telugu hd 1080p


    Download Zip »»» https://urloso.com/2uyOXS



    - -
    -
    -
    -

    diff --git a/spaces/bioriAsaeru/text-to-voice/Free Gateway Factory Recovery Disc FREE.md b/spaces/bioriAsaeru/text-to-voice/Free Gateway Factory Recovery Disc FREE.md deleted file mode 100644 index 392bfa5a1ddbf85a3105bf883d44e3fe0f108aee..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Free Gateway Factory Recovery Disc FREE.md +++ /dev/null @@ -1,16 +0,0 @@ - -

    Your computer may still have a recovery partition intact (undamaged or uncorrupted) that can be used to restore your computer to its factory conditions. You can check if the partition is still intact by:

    -

    The above steps should boot your computer into the recovery software program made by Gateway for Windows Vista users. Follow any instructions given by the wizard to restore your computer to factory conditions.

    -

    Free Gateway Factory Recovery Disc


    DOWNLOADhttps://urloso.com/2uyQlq



    -

A recovery disk is also known as a repair disc, boot disc, rescue disk, or restore disc. When your system cannot boot normally, you can access system recovery options from a recovery disk and restore your computer to an earlier date. Want to know how to create a system recovery disk for your Gateway? Just keep reading.

    -

AOMEI Backupper Professional is a powerful piece of software that can create a system repair disc in a few simple steps. You can use it to create a system recovery disk for Gateway, HP, Dell, Toshiba, Asus, etc., and it works with all Windows PCs, including Windows 11/10/8/7/Vista/XP. Download this Gateway recovery disk creator for free and give it a try!

    -

Creating a Gateway system recovery disk is simple with AOMEI Backupper. You can use the recovery disk to boot any Windows computer. In addition, AOMEI Backupper allows you to create a system image backup on the recovery disk or another storage device, so that you can boot your Gateway (or another computer) from the system repair disc and restore it from the system image.

    -

    If your Gateway laptop is crashing frequently, or won't boot into Windows, it may be time for a reset. You can try a System Restore first, which will attempt to roll your laptop back to a time when it was functioning properly. It's recommended that you try this first, as you won't lose any of your data. If that doesn't work, you can use the Recovery Manager or a Windows installation disc to perform a factory reset of your Gateway.

    -

    If you don't have access to a recovery disc or you just don't know what it is, then this is for you. Since the below steps will wipe the data on your laptop, make sure you have backups for your important files before you begin.

    -

If the reason you want to factory reset your Gateway laptop is a forgotten user account password, you don't need to reset the whole machine. There is a simpler way to reset the password than resetting your entire laptop. All you need is a Windows password recovery tool; PassFab 4WinKey provides this by creating a Windows password recovery disk.

    -

    Otherwise, you may be able to boot into the BIOS and directly load the recovery partition on your hard drive, if your PC manufacturer included one. However, if you factory reset with this method, you'll reinstall all the manufacturer bloatware. While it's not ideal, it can work if you have no other options.

    -

    -

    "Hello,
    I have a gateway computer that's filled with spyware, no pop up killer gets rid of it. cant restore using the system restore cause it doesn't work. I can't remember how to factory restore our Gateway computer. Anyone know how to?"

    -

IMPORTANT: This article applies only to specific products and/or operating systems. Check Applicable Products and Categories for details.

The following steps allow you to recover the computer to its default factory settings using the recovery partition shipped with the computer instead of having to use the recovery discs.

WARNING: There is a risk of data loss. All data, settings, and programs added to the computer will be deleted. During this procedure you will have a chance to back up your data. Sony does not guarantee your personal data can be backed up or restored properly.

IMPORTANT:

- Before performing a recovery, download and install all software and driver updates. Downloads are posted on your model support page.
- This procedure does not require the use of Recovery Discs since the Recovery data is included on a special partition of the hard drive. That said, it is highly recommended to create a set of Recovery Discs. It is possible to create your own Recovery Media on most computers; instructions can be found in your product manual.
- Sony Electronics is no longer permitted to sell system recovery media for VAIO PCs that shipped with operating systems released prior to Windows 7.
- If your computer shipped with the Windows 7 operating system (or later), a Recovery Media Kit can be purchased through Encompass Supply Chain Solutions at https://sony.encompass.com or 1-866-779-5153.
- If your computer shipped with any operating system released prior to the Windows 7 operating system, please contact Best Buy for VAIO PC repair and recovery options at 1-800-433-5778.
- If the operating system will not start and no other troubleshooting can be performed, a system recovery can be performed by starting the computer and pressing the F10 key.
- The computer must be connected to the AC adapter to ensure power is not interrupted.
- Before beginning this procedure, disconnect all external devices such as printers, network cables, cameras, external hard drives or flash drives.

NOTES:

- Because the hidden Recovery partition contains the data needed to perform a Recovery, you will only be able to perform these steps if the computer has a functioning Recovery partition on the hard drive. If the Recovery partition is corrupted or has been removed, you will need to perform a Recovery using a set of Recovery Discs.
- It is recommended that you either print these instructions or use a different computer to view this solution. This will allow you to follow the steps as you perform the procedure to reset the default factory settings.

1. Restart the computer and immediately begin tapping the F10 key until the Edit Boot Options screen is displayed.
2. At the Edit Boot Options screen, press the Enter key. NOTE: The VAIO Care Rescue window will now be displayed. This indicates that you have accessed the hidden Recovery partition.
3. In the VAIO Care Rescue window, click Start recovery wizard.
4. In the Do you need to rescue your data? window, click Yes, I'd like to rescue my data. NOTE: If you do not have any data on your computer that you need to back up, click the Skip Rescue button and proceed to step 10.
5. Connect an external storage device to the computer, such as a USB hard drive or flash drive.
6. In the Select Rescue type window, click Easy rescue. NOTES: If you would like to manually select files to back up, click Custom Rescue and then follow the on-screen instructions. The software will automatically detect your data.
7. Click to select the drive where you would like to back up your data, and then click the Next button. NOTE: This screen displays the amount of disc space required and the amount of free space on the external drive.
8. In the Confirm options and start rescue window, click the Start Rescue button. NOTE: A progress window will show the status of the backup.
9. In the Rescue has completed successfully window, click the Next button.
10. In the Are you sure you want to start recovery? window, click to select Yes, I'm sure and then click the Start Recovery button. IMPORTANT: A status window will be displayed indicating the recovery progress of the different applications. No action is required. This process may take up to 2 hours and the computer may restart several times.
11. In the Recovery complete window, click the Restart button. NOTE: The computer will restart to complete the Recovery process.
12. Follow the on-screen instructions to set up the operating system and complete the system recovery.
Africa","BG.displayName":"Bulgaria","pl_PL.displayName":"","BH.displayName":"Bahrain","flowplayer.language.en":"English","flowplayer.language.bg":"Bulgarian","FI.displayName":"Finland","CH.displayName":"Switzerland","JP.displayName":"","BY.displayName":"Belarus","BR.displayName":"Brazil","TR.displayName":"Türkiye","fr_BE.displayName":"Belgium","IE.displayName":"Republic of Ireland","en_EE.displayName":"Estonia","sv_SE.displayName":"","recycling_cost_5Euro.text":"","BE.displayName":"Belgium","LU.displayName":"Luxembourg","IS.displayName":"Iceland","flowplayer.language.kk":"Kazakh","RU.displayName":"Russia","buy.button.text":"Where To Buy","dynamic.product_count.12":"Products","CZ.displayName":"Czech Republic","MD.displayName":"Moldova","CN.region_displayName":"China Region","dynamic.product_count.13":"Products","product_count.3":"0 Products","AL.displayName":"Albania","XM.displayName":"Middle East","en_ID.displayName":"Indonesia","IN.displayName":"India","dynamic.product_count.2":"Products","dynamic.product_count.23":"Products","MC.displayName":"Monaco","flowplayer.language.it":"Italian","US.region_displayName":"Pan America Region","applicable_details.information":"This information is for the following models:","it_IT.displayName":"","fr_CH.displayName":"Switzerland","meganav.viewMore":"View More","violators.topPick":"Top Pick","MK.displayName":"Macedonia","AP.displayName":"Others","HK.displayName":"Hong Kong","ro_RO.displayName":"","product_count.23":"0 Products","bg_BG.displayName":"","en_US.displayName":"USA","AU.displayName":"Australia","VA.displayName":"Vatican City","product_count.74":"","vi_VN.displayName":"","PH.displayName":"Philippines","NZ.displayName":"New Zealand","product_count.34":"0 Products","SA.displayName":"Kingdom of Saudi Arabia","de_AT.displayName":"Austria","product_count.12":"0 Products","flowplayer.language.sl":"Slovene","KR.displayName":"Korea","SG.displayName":"Singapore","flowplayer.language.es":"Spanish","sk_SK.displayName":"","ID.displayName":"Indonesia","en_SG.displayName":"Singapore","ru_RU.displayName":"Russia","cs_CZ.displayName":"","de_DE.displayName":"Germany","MY.displayName":"Malaysia","dynamic.product_count.31":"Products","related_products_link.text":"Related Products","DE.displayName":"Germany","en_CA.displayName":"Canada","es_ES.displayName":"Spain","favorites.tooltip.add_action":"Add to Favorites","flowplayer.language.no":"Norwegian","en_LV.displayName":"Latvia","product_count.2":"0 Products","GR.displayName":"Greece","favorites.tooltip.header":"Favorites","NO.displayName":"Norway","fr_CA.displayName":"Canada","en_TH.displayName":"Thailand","notify_me.text":"Notify Me","th_TH.displayName":"","sr_YU.displayName":"","dynamic.product_count.22":"Products","product.specifications_page_description":"Get the detailed list of specifications for the Sony 0 & see which 1 fit your needs.","dynamic.product_count.11":"Products","flowplayer.language.ru":"Russian","HU.displayName":"Hungary","product_count.64":"","en_MY.displayName":"Malaysia","applicable_details.products-title":"Applicable Model","HR.displayName":"Croatia","IT.displayName":"Italy","consent_warning.description":"Access your cookie preferences below and make sure to switch on the Youtube cookie under the 'Functional' section.","product.specifications_page_title":"0 Specifications","flowplayer.language.ar":"Arabic","applicable_details.show_more":"View all applicable models","AE.displayName":"United Arab Emirates","product.reviews_page_title":"0 Reviews & Ratings","product_count.32":"0 
Products","sr_RS.displayName":"","favorites.tooltip.on_add":"Added","SM.displayName":"San Marino","flowplayer.language.pl":"Polish","accessory_finder.label":"Accessory Finder","dynamic.product_count.1":"Product","aria.slider.next":"","AP.region_displayName":"Asia Pacific Region","product.details_page_description":"Discover the 0 from Sony & explore all the 1 features.","product_count.21":"0 Products","additional_cta.text":"","EE.displayName":"Estonia","mk_MK.displayName":"","product_count.33":"0 Products","flowplayer.language.sv":"Slovenian","TH.displayName":"Thailand","tr_TR.displayName":"Türkiye","JO.displayName":"Jordan","hr_BA.displayName":"","favorites.tooltip.remove_action":"Remove Favorite","brexitTVEnergyLink.text":"","VN.displayName":"Vietnam","es_EC.displayName":"Ecuador","CY.displayName":"Cyprus","product_count.22":"0 Products","de_CH.displayName":"Switzerland","en_NZ.displayName":"New Zealand","eula_title":"EULA (End-User-License-Agreement)","product_count.default":"0 Products","violators.newProduct":"New","meganav.viewall":"View All","accessory_finder.help_label":"How do I find my model number?","CN.displayName":"China","share.text":"Share","dynamic.product_count.32":"Products","da_DK.displayName":"","PK.displayName":"Pakistan","pricing.rrp":"","UA.displayName":"Ukraine","consent_warning.title":"Please accept Youtube cookies to watch this video","pricing.starting.at":"Starting at","product_count.11":"0 Products","US.displayName":"United States","es_MX.displayName":"Mexico","buyButton.static_text.text":"","DK.displayName":"Denmark","reviews_listing.text":"Reviews and Ratings","es_PE.displayName":"Peru","hu_HU.displayName":"","aria.slider.thumbs":"","ES.displayName":"Spain","en_IN.displayName":"India","pricing.suffix":"for 0 model","NL.displayName":"Netherlands","accessory_finder.placeholder":"Enter your model number","de_LU.displayName":"","product_count.13":"0 Products","flowplayer.language.fr":"French","el_GR.displayName":"","product.productShots.alt_text.template":"Images of 0","flowplayer.language.el":"Greek","uk_UA.displayName":"Ukraine","product_count.24":"0 Products","favorites.tooltip.on_remove":"Removed","product.reviews_page_description":"Read the latest user reviews and ratings of the Sony 0 and explore the 1.","header.typeAheadMarketingResultsTitle":"Products","download_productInfoSheet_label":"Product Information Sheet","available_soon":"Available soon","sl_SI.displayName":"","EG.displayName":"Egypt","product.lifestyleShots.alt_text.template":"0 in action","IR.displayName":"Iran","AT.displayName":"Austria","product_count.1":"0 Product","flowplayer.language.hu":"Hungary","LM.displayName":"Latin America","product_count.31":"0 Products","MT.displayName":"Malta","nl_BE.displayName":"","flowplayer.language.da":"Danish","download_button.text":"Download","fr_FR.displayName":"France","SK.displayName":"Slovakia","accessory_finder.subhead":"Select your model to find compatible accessories:","flowplayer.language.tr":"Turkish","flowplayer.language.zh":"Traditional Chinese","eula_scroll_info":"Please read and scroll through the entire End User License Agreement (EULA) to enable the Download button.","product.primaryShot.alt_text.template":"Picture of 0","JP.region_displayName":"","buy.button.disabled.text":"Currently not available","header.typeAheadSupportResultsTitle":"Support","flowplayer.language.bs":"Bosnian","flowplayer.language.xz":"Simplified Chinese","en_AU.displayName":"Australia","back_to_top.label":"Back to 
Top","AZ.displayName":"Azerbaijan","dynamic.accessory_count.default":"0 Accessories","flowplayer.language.cs":"Czech","violators.awardWinning":"Award Winner","LI.displayName":"Liechtenstein","flowplayer.language.de":"German","en_GB.displayName":"United Kingdom","dynamic.product_count.14":"Products","flowplayer.language.uk":"Ukrainian","product.detailsDimensionShots.alt_text.template":"Dimensions of 0","dynamic.product_count.33":"Products","flowplayer.language.pt":"Portuguese","hr_HR.displayName":"","brexitTVEnergyLabel.text":"","XA.displayName":"North Africa","dynamic.product_count.4":"Products","aria.slider.thumb":"","dynamic.product_count.21":"Products","tax_disclaimer.text":"","product.dimensionShots.alt_text.template":"Dimensions of 0","PT.displayName":"Portugal","RO.displayName":"Romania","es_CO.displayName":"Colombia";window.__PRELOADED_STATE__ = window.__PRELOADED_STATE__ || ;window.__PRELOADED_STATE__.location ="pathname":"\/electronics\/support\/articles\/00017843","query":;window.__PRELOADED_STATE__.page ="type":"articleDetails","searchByType":"none","typeAlias":null,"origin":"server","hasError":false,"isLoading":false,"isLoadingPartialContent":false,"location":"pathname":"\/electronics\/support\/articles\/00017843","query":,"params":"articleFamilyId":"00017843","cookie":"","locale":"en_US";window.__PRELOADED_STATE__.specialmessage ="params":"locale":"en_US","isFetching":false,"hasError":false,"specialMessageResponse":"error":"No Special Message found for the locale en_US","description":"Element corresponding to query \"$and\":[\"locale\":\"en_US\",] not found in collection support_special_messages.";window.__CTX__ = window.__CTX__ || ;window.__COMPONENT_CONFIG__ = window.__COMPONENT_CONFIG__ || ;window.__I18N__ = window.__I18N__ || ;window.__CTX__.special_message ="module":"isCritical":false;window.__COMPONENT_CONFIG__.special_message ="support_specialMessageValidityInDays":7,"support_showSpecialMessage":true,"support_additionalVisibleModulesInLiteMode":[];window.__I18N__.special_message ="GB.displayName":"United Kingdom","SE.displayName":"Sweden","related_products_curated_link.text":"Related Products","flowplayer.language.ja":"Japanese","idk.text":"Sorry, this data isn't available","pt_PT.displayName":"Portugal","AD.displayName":"Andorra","no_NO.displayName":"","dynamic.product_count.default":"Products","support_link.text":"Support","TW.displayName":"Taiwan","product_count.14":"0 Products","YU.displayName":"Serbia","en_HK.displayName":"","flowplayer.language.fi":"Finnish","es_CL.displayName":"Chile","ME.displayName":"Montenegro","FR.displayName":"France","BA.displayName":"Bosnia and Herzegovina","flowplayer.language.ko":"Korean","EU.region_displayName":"Europe Region","CA.displayName":"Canada","pricing.starting.at_succeeding":"","SI.displayName":"Slovenia","product_count.72":"","es_AR.displayName":"Argentina","dynamic.product_count.34":"Products","NG.displayName":"Nigeria","sony.text":"Sony US","product_count.4":"0 Products","aria.slider.previous":"","KZ.displayName":"Kazakhstan","flowplayer.language.nl":"Dutch","fi_FI.displayName":"","en_PH.displayName":"Philippines","KW.displayName":"Kuwait","flowplayer.language.al":"Albanian","dynamic.accessory_count.1":"0 
Accessory","MA.displayName":"Morocco","flowplayer.language.mk":"Macedonian","nl_NL.displayName":"","dynamic.product_count.24":"Products","fr_LU.displayName":"","LV.displayName":"Latvia","lt_LT.displayName":"","dynamic.product_count.3":"Products","flowplayer.language.ro":"Romanian","GE.displayName":"Georgia","consent_warning.button_text":"Manage cookies","favorite.text":"Favorite","productInformationSheet.text":"Product Information Sheet","flowplayer.language.et":"Estonian","flowplayer.language.sk":"Slovak","LT.displayName":"Lithuania","en_IE.displayName":"Ireland","PL.displayName":"Poland","ZA.displayName":"South Africa","BG.displayName":"Bulgaria","pl_PL.displayName":"","BH.displayName":"Bahrain","flowplayer.language.en":"English","flowplayer.language.bg":"Bulgarian","FI.displayName":"Finland","CH.displayName":"Switzerland","JP.displayName":"","BY.displayName":"Belarus","BR.displayName":"Brazil","TR.displayName":"Türkiye","fr_BE.displayName":"Belgium","IE.displayName":"Republic of Ireland","en_EE.displayName":"Estonia","sv_SE.displayName":"","recycling_cost_5Euro.text":"","BE.displayName":"Belgium","LU.displayName":"Luxembourg","IS.displayName":"Iceland","flowplayer.language.kk":"Kazakh","RU.displayName":"Russia","buy.button.text":"Where To Buy","dynamic.product_count.12":"Products","CZ.displayName":"Czech Republic","MD.displayName":"Moldova","CN.region_displayName":"China Region","dynamic.product_count.13":"Products","product_count.3":"0 Products","AL.displayName":"Albania","XM.displayName":"Middle East","en_ID.displayName":"Indonesia","IN.displayName":"India","dynamic.product_count.2":"Products","dynamic.product_count.23":"Products","MC.displayName":"Monaco","flowplayer.language.it":"Italian","US.region_displayName":"Pan America Region","it_IT.displayName":"","fr_CH.displayName":"Switzerland","meganav.viewMore":"View More","violators.topPick":"Top Pick","MK.displayName":"Macedonia","AP.displayName":"Others","HK.displayName":"Hong Kong","ro_RO.displayName":"","product_count.23":"0 Products","bg_BG.displayName":"","en_US.displayName":"USA","AU.displayName":"Australia","VA.displayName":"Vatican City","product_count.74":"","vi_VN.displayName":"","PH.displayName":"Philippines","NZ.displayName":"New Zealand","product_count.34":"0 Products","SA.displayName":"Kingdom of Saudi Arabia","de_AT.displayName":"Austria","product_count.12":"0 Products","flowplayer.language.sl":"Slovene","KR.displayName":"Korea","SG.displayName":"Singapore","flowplayer.language.es":"Spanish","sk_SK.displayName":"","ID.displayName":"Indonesia","en_SG.displayName":"Singapore","ru_RU.displayName":"Russia","cs_CZ.displayName":"","de_DE.displayName":"Germany","MY.displayName":"Malaysia","dynamic.product_count.31":"Products","related_products_link.text":"Related Products","DE.displayName":"Germany","en_CA.displayName":"Canada","es_ES.displayName":"Spain","favorites.tooltip.add_action":"Add to Favorites","flowplayer.language.no":"Norwegian","en_LV.displayName":"Latvia","product_count.2":"0 Products","GR.displayName":"Greece","favorites.tooltip.header":"Favorites","NO.displayName":"Norway","fr_CA.displayName":"Canada","en_TH.displayName":"Thailand","notify_me.text":"Notify Me","th_TH.displayName":"","sr_YU.displayName":"","dynamic.product_count.22":"Products","product.specifications_page_description":"Get the detailed list of specifications for the Sony 0 & see which 1 fit your 
needs.","dynamic.product_count.11":"Products","flowplayer.language.ru":"Russian","HU.displayName":"Hungary","product_count.64":"","en_MY.displayName":"Malaysia","HR.displayName":"Croatia","IT.displayName":"Italy","consent_warning.description":"Access your cookie preferences below and make sure to switch on the Youtube cookie under the 'Functional' section.","product.specifications_page_title":"0 Specifications","flowplayer.language.ar":"Arabic","AE.displayName":"United Arab Emirates","product.reviews_page_title":"0 Reviews & Ratings","product_count.32":"0 Products","sr_RS.displayName":"","favorites.tooltip.on_add":"Added","SM.displayName":"San Marino","flowplayer.language.pl":"Polish","accessory_finder.label":"Accessory Finder","dynamic.product_count.1":"Product","aria.slider.next":"","AP.region_displayName":"Asia Pacific Region","product.details_page_description":"Discover the 0 from Sony & explore all the 1 features.","product_count.21":"0 Products","additional_cta.text":"","EE.displayName":"Estonia","mk_MK.displayName":"","product_count.33":"0 Products","flowplayer.language.sv":"Slovenian","TH.displayName":"Thailand","tr_TR.displayName":"Türkiye","JO.displayName":"Jordan","hr_BA.displayName":"","favorites.tooltip.remove_action":"Remove Favorite","brexitTVEnergyLink.text":"","VN.displayName":"Vietnam","es_EC.displayName":"Ecuador","CY.displayName":"Cyprus","product_count.22":"0 Products","de_CH.displayName":"Switzerland","en_NZ.displayName":"New Zealand","product_count.default":"0 Products","violators.newProduct":"New","meganav.viewall":"View All","accessory_finder.help_label":"How do I find my model number?","CN.displayName":"China","share.text":"Share","dynamic.product_count.32":"Products","da_DK.displayName":"","PK.displayName":"Pakistan","pricing.rrp":"","UA.displayName":"Ukraine","consent_warning.title":"Please accept Youtube cookies to watch this video","pricing.starting.at":"Starting at","product_count.11":"0 Products","US.displayName":"United States","es_MX.displayName":"Mexico","buyButton.static_text.text":"","DK.displayName":"Denmark","reviews_listing.text":"Reviews and Ratings","es_PE.displayName":"Peru","hu_HU.displayName":"","aria.slider.thumbs":"","ES.displayName":"Spain","en_IN.displayName":"India","pricing.suffix":"for 0 model","NL.displayName":"Netherlands","accessory_finder.placeholder":"Enter your model number","de_LU.displayName":"","product_count.13":"0 Products","flowplayer.language.fr":"French","el_GR.displayName":"","product.productShots.alt_text.template":"Images of 0","flowplayer.language.el":"Greek","uk_UA.displayName":"Ukraine","product_count.24":"0 Products","favorites.tooltip.on_remove":"Removed","product.reviews_page_description":"Read the latest user reviews and ratings of the Sony 0 and explore the 1.","header.typeAheadMarketingResultsTitle":"Products","download_productInfoSheet_label":"Product Information Sheet","available_soon":"Available soon","sl_SI.displayName":"","EG.displayName":"Egypt","product.lifestyleShots.alt_text.template":"0 in action","IR.displayName":"Iran","AT.displayName":"Austria","product_count.1":"0 Product","flowplayer.language.hu":"Hungary","LM.displayName":"Latin America","product_count.31":"0 Products","MT.displayName":"Malta","nl_BE.displayName":"","flowplayer.language.da":"Danish","fr_FR.displayName":"France","SK.displayName":"Slovakia","accessory_finder.subhead":"Select your model to find compatible accessories:","flowplayer.language.tr":"Turkish","flowplayer.language.zh":"Traditional 
Chinese","product.primaryShot.alt_text.template":"Picture of 0","JP.region_displayName":"","buy.button.disabled.text":"Currently not available","header.typeAheadSupportResultsTitle":"Support","flowplayer.language.bs":"Bosnian","flowplayer.language.xz":"Simplified Chinese","en_AU.displayName":"Australia","back_to_top.label":"Back to Top","AZ.displayName":"Azerbaijan","dynamic.accessory_count.default":"0 Accessories","flowplayer.language.cs":"Czech","violators.awardWinning":"Award Winner","LI.displayName":"Liechtenstein","flowplayer.language.de":"German","en_GB.displayName":"United Kingdom","dynamic.product_count.14":"Products","flowplayer.language.uk":"Ukrainian","product.detailsDimensionShots.alt_text.template":"Dimensions of 0","support.close":"Close","dynamic.product_count.33":"Products","flowplayer.language.pt":"Portuguese","hr_HR.displayName":"","brexitTVEnergyLabel.text":"","XA.displayName":"North Africa","dynamic.product_count.4":"Products","aria.slider.thumb":"","dynamic.product_count.21":"Products","tax_disclaimer.text":"","product.dimensionShots.alt_text.template":"Dimensions of 0","PT.displayName":"Portugal","RO.displayName":"Romania","es_CO.displayName":"Colombia";window.__PRELOADED_STATE__ = window.__PRELOADED_STATE__ || ;window.__PRELOADED_STATE__.location ="pathname":"\/electronics\/support\/articles\/00017843","query":;window.__PRELOADED_STATE__.page ="type":"articleDetails","searchByType":"none","typeAlias":null,"origin":"server","hasError":false,"isLoading":false,"isLoadingPartialContent":false,"location":"pathname":"\/electronics\/support\/articles\/00017843","query":,"params":"articleFamilyId":"00017843","cookie":"","locale":"en_US";window.__PRELOADED_STATE__.externalhtml ="isFetching":false,"isFetched":true,"fileContent":"\n\n\n\n\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\/\/check for preview domain\nif($(\"body\").attr(\"data-locale\") == \"en_US\")\n\tvar the_domain = window.location.hostname;\n\tif(the_domain.indexOf(\"preview-production-pdp\") >= 0)\n\t\t\n\t\tinit();\n\t\t\n\telse\n\t\t\n\t\tinit();\n\t\n\t\n\n\/\/end preview-check\n\/\/=============================\/\/\n\n\n\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\/\/smarthelp debugger on production\nfunction debug_smarthelp()\n\tvar the_href = window.location.href;\n\tif(the_href.indexOf(\"smarthelp_testing\") >= 0)\n\n\t\t$(\"body\").prepend(\"Debugger\")\n\n\t\t$(\"*\").bind(\"focus\", function()\n\t\t\tvar debug = $('.debug_smarthelp');\n\t\t\tdebug.empty();\n\t\t\tvar type = $(this).prop(\"nodeName\");\n\t\t\tvar prevSibling = $(this).prev().prop(\"nodeName\");\n\t\t\tvar id = $(this).attr(\"id\");\n\t\t\tvar theClass = $(this).attr(\"class\");\n\t\t\tvar theParent = $(this).parent().prop(\"nodeName\");\n\t\t\tvar theParentClass = $(this).parent().attr(\"class\");\n\t\t\tdebug.append(\"Type: \" + type + \"\");\n\t\t\tdebug.append(\"Previous sibling: \" + prevSibling + \"\");\n\t\t\tdebug.append(\"Id: \" + id + \"\");\n\t\t\tdebug.append(\"Class: \" + theClass + \"\");\n\t\t\tdebug.append(\"Parent Type: \" + theParent + \"\");\n\t\t\tdebug.append(\"Parent Class: \" + theParentClass + \"\");\n\t\t\t\n\t\t);\n\t\n\n\n\/\/toggle debugger\n\/\/debug_smarthelp();\n\/\/END toggle debugger\n\/\/=============================\/\/\n\n\n\n\n\nfunction init()\n\n\t\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\t\/\/Initial JS\n\tconsole.log(\"update7.1\");\n\n\tvar smarthelp_ua = 
navigator.userAgent.search(\"SmartHelp_WebViewer\");\n\tif(smarthelp_ua > -1)\n\t\tvar article_body = document.getElementsByTagName(\"BODY\")[0];\n\t\tarticle_body.tabIndex = -1;\n\t\tdo_smarthelp_styles();\n\telse\n\t\tdocument.addEventListener(\"page_state_updated\", function (e) page_state_updated() , false);\n\t\tnot_smarthelp();\n\t\n\n\t\/\/END Initial JS\n\t\/\/=============================\/\/\n\n\n\n\n\n\n\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\/\/Function that fires with event listener, and DOM Ready, for non-smarthelp browsers\nfunction page_state_updated()\n\tnot_smarthelp();\n\n\/\/=============================\/\/\n\n\n\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\/\/Update the elements hidden by default in custom stylesheet for non-smarthelp browsers\nfunction not_smarthelp()\n\tif($(\".search-eyebrow\").is(\":hidden\") === false)\n\t\t$(\"head\").append(\"body[data-locale='en_US'] .search-article-details-print-button, body[data-locale='en_US'] .search-article-details-wrapper, body[data-locale='en_US'] .search-eyebrow, body[data-locale='en_US'] .article-details-applicable-details-wrapper, body[data-locale='en_US'] .icon-list-wrapper, body[data-locale='en_US'] .var_suptype_link, body[data-locale='en_US'] .smarthelp_hide, body[data-locale='en_US'] .smh_hide visibility:visible !important; body[data-locale='en_US'] .article-details-content a, body[data-locale='en_US'] .article-details-applicable-details-wrapper a visibility:visible !important body[data-locale='en_US'] .smarthelp_hide, body[data-locale='en_US'] .smh_hide, body[data-locale='en_US'] .icon-list-wrapper, body[data-locale='en_US'] .icon-banner-wrapper.js-icon-banner-wrapper, iframe[src*='youtube'] display: block !important\");\n\t\n\n\/\/=============================\/\/\n\n\n\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\/\/Function to modify content outside of the article body, for smarthelp browser.\nfunction do_smarthelp_styles()\n\t\/\/$(\".article-details-content a\").hide()\n\t$(\".article-details-applicable-details-wrapper\").slideUp(\"fast\");\n\n\/\/=============================\/\/\n\n\n\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\n\/\/This is for Smarthelp Javascript Article Body manipulation\n\/\/Smarthelp never navigates SWT site (single-page-app), instead it accesses KB articles with direct link. 
\n\/\/Therefore the article content manipulation needs to occur after dom loaded.\n\/\/\n(function()\n\t\/\/console.log(\"ready\");\n\n\tif($(\"body\").attr(\"data-locale\") == \"en_US\")\n\t\t\/\/start preview-check\n\t\tvar the_domain = window.location.hostname;\n\t\tif(the_domain.indexOf(\"preview-production-pdp\") >= 0)\n\t\t\t\/\/console.log(\"Domain: Preview\");\n\t\t\tdoReady();\n\n\t\telse\n\t\t\t\/\/console.log(\"Domain: Production\");\n\t\t\tdoReady();\n\t\t\n\t\t\/\/end preview-check\n\t\n\t\n\t\n\n\t\n\tfunction doReady()\n\t\n\t\tvar smarthelp_ua = navigator.userAgent.search(\"SmartHelp_WebViewer\");\n\t\tif(smarthelp_ua > -1)\n\t\t\t\/\/console.log(\"sh-update3\")\n\t\t\tvar c_val = '';\n\t\t\tpersistExpand()\n\t\t\t\n\t\t\t$('.article-details-content a[href*=\".pdf\"], .article-details-content a[href*=\".PDF\"]').each(function()\n\t\t\t\t$(this).replaceWith('' + $(this).text() + '');\n\t\t\t);\n\t\t\t$.each($(\".article-details-content a[href*='\/external-link?url']\"), function()\n\t\t\t\t$(this).replaceWith(\"\" + $(this).text() + \"\")\n\t\t\t)\n\t\t\t\/\/:not([href^=#])\n\t\t\t$.each($(\".article-details-content a:not(.expand_parent):not(.expand_parent_dev):not(.back_to_top):not(.var_imageX)\").not('.article-details-content a[href*=\"\/sna\/graphics\/\"]').not('.article-details-content a[href*=\"docs.sony.com\"]').not('.article-details-content a[href*=\"\/articleimage\/servlet\/\"]'), function(i)\n\t\t\t\tvar that = $(this);\n\t\t\t\tvar href = that.attr(\"href\");\n\t\t\t\tif(href)\n\t\t\t\t\tif(href.indexOf(\"\/electronics\/support\/articles\/\") < 0)\n\t\t\t\t\t\tthat.replaceWith(\"\" + that.text() + \"\")\n\t\t\t\t\t\n\t\t\t\telse\n\t\t\t\t\tthat.css(\"visibility\", \"visible\")\n\t\t\t\t\n\t\t\t);\n\t\t\tconsole.log(\"sh-here\")\n\t\t\t$(\".article-details-content a\").css(\"visibility\", \"visible\")\n\t\t\t$(\".article-details-content a\").show().css('display', 'inline-block');\n\t\t\t$('.var_suptype_link, .smarthelp_hide, .smh_hide').remove();\n\t\t\t$(\"head\").append(\"#search-compact display: none;\")\n\t\t\t\n\t\t\t\/\/Webview bug fix-\n\t\t\t\/\/When page loads, if the first focusable element (a link) is beyond the fold, when you first start scrolling down, webview will skip all content before the first link.\n\t\t\t\/\/Added a tabindex to the first targetable element, the page title\n\t\t\t$('h1.search-article-title').css('outline', 'none');\n\t\t\t\n\t\t\t$(\".article-details-content\").prop('tabIndex', -1)\n\t\t\t$(\".article-details-content > div\").prop('tabIndex', -1)\n\t\t\t$('h1.search-article-title').prop('tabindex', 0)\n\t\t\t$('.expand_child').prop('tabIndex', -1);\n\t\t\t$('.expand_child_dev').prop('tabIndex', -1);\n\t\t\t$(\".article-details-content a\").show();\n\t\t\t\n\t\t\t\n\t\t\t\n\t\t\t\/\/Create\/update a cookie every time expand\/collapse is clicked \n function setPersistCookie()\n var date = new Date();\n \/\/One day cookie (change the 1 below to desired days)\n date.setTime(date.getTime() + 1*24*60*60*1000);\n \/\/Each cookie is only accessible by the page it was created on... 
each page where a user toggles expand\/collapse\n var thispage = window.location.pathname;\n \/\/Remove trailing comma in c_val\n c_val = c_val.replace(\/,\\s*$\/, \"\");\n document.cookie = 'persist_c='+c_val+';expires=' + date.toGMTString() + ';path='+ thispage;\n \n\t\t\t\n\t\t\t\/\/ Get Cookie\n\t\t\tfunction getCookie(name) \n\t\t\t\tvar value = \"; \" + document.cookie;\n\t\t\t\tvar parts = value.split(\"; \" + name + \"=\");\n\t\t\t\tif (parts.length == 2) return parts.pop().split(\";\").shift();\n\t\t\t\n\t\t\t\n\t\t\t\/\/Check for cookie on load, then open expand\/collapse that were previously opened by the user\n function persistExpand()\n if(getCookie('persist_c'))\n var array = getCookie('persist_c').split(',');\n $.each(array, function(index, value)\n \n $(\"a.expand_parent_dev\").eq(value).addClass('toggleFocus');\n\t\t\t\t\t\t$(\"a.expand_parent_dev\").eq(value).parent().nextAll('.expand_child_dev').first().show().removeAttr('tabindex');\n\t\t\t\t\t\t$(\"a.expand_parent_dev\").eq(value).nextAll('.expand_child_dev').first().show().removeAttr('tabindex');\n );\n \n \n \n\n\t\t\t\/\/Bind events to Expand\/Collapse links\n\t\t\t$('.expand_parent_dev').bind('click', function()\n\t\t\t\tcheckToggles();\n\t\t\t).keyup(function(e));\n\t\t\t\n\t\t\t\/\/Check for \"open state\" class on all expand parent elements, append its DOM index to a variable that will be used for the cookie value\n\t\t\tfunction checkToggles()\n\t\t\t\tconsole.log(\"clicked\")\n\t\t\t\tc_val = '';\n\t\t\t\t$('a.expand_parent_dev').each(function(i)\n\t\t\t\t\tif($(this).hasClass('toggleFocus'))\n\t\t\t\t\t\tc_val = c_val + i + ',';\n\t\t\t\t\t\t\/\/c_val example value: \"0,2,3,7,\"\n\t\t\t\t\t\n\t\t\t\t)\n\t\t\t\tsetPersistCookie();\n\t\t\t\n\t\t\t\n\t\t\t\n\t\telse\n\t\t\tpage_state_updated();\n\t\t\n\t\n\t\t\n)()\n\n\n\n\n\/\/=============================\/\/\n\/\/=============================\/\/\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n";window.__PRELOADED_STATE__.ascportal ="isAscPortal":false,"isFetching":false;window.__CTX__ = window.__CTX__ || ;window.__COMPONENT_CONFIG__ = window.__COMPONENT_CONFIG__ || ;window.__I18N__ = window.__I18N__ || ;window.__CTX__.support_external_html ="module":"isCritical":false;window.__COMPONENT_CONFIG__.support_external_html ="support_accessLevelContent":true,"support_asc_embedLinkHeader":false,"support_additionalVisibleModulesInLiteMode":[];window.__I18N__.support_external_html ="GB.displayName":"United Kingdom","SE.displayName":"Sweden","related_products_curated_link.text":"Related Products","flowplayer.language.ja":"Japanese","idk.text":"Sorry, this data isn't available","pt_PT.displayName":"Portugal","AD.displayName":"Andorra","no_NO.displayName":"","dynamic.product_count.default":"Products","support_link.text":"Support","TW.displayName":"Taiwan","product_count.14":"0 Products","YU.displayName":"Serbia","en_HK.displayName":"","flowplayer.language.fi":"Finnish","es_CL.displayName":"Chile","ME.displayName":"Montenegro","FR.displayName":"France","BA.displayName":"Bosnia and Herzegovina","flowplayer.language.ko":"Korean","EU.region_displayName":"Europe Region","CA.displayName":"Canada","pricing.starting.at_succeeding":"","SI.displayName":"Slovenia","product_count.72":"","es_AR.displayName":"Argentina","dynamic.product_count.34":"Products","NG.displayName":"Nigeria","sony.text":"Sony US","product_count.4":"0 
Products","aria.slider.previous":"","KZ.displayName":"Kazakhstan","flowplayer.language.nl":"Dutch","fi_FI.displayName":"","en_PH.displayName":"Philippines","KW.displayName":"Kuwait","flowplayer.language.al":"Albanian","dynamic.accessory_count.1":"0 Accessory","MA.displayName":"Morocco","flowplayer.language.mk":"Macedonian","nl_NL.displayName":"","dynamic.product_count.24":"Products","fr_LU.displayName":"","LV.displayName":"Latvia","lt_LT.displayName":"","dynamic.product_count.3":"Products","flowplayer.language.ro":"Romanian","GE.displayName":"Georgia","consent_warning.button_text":"Manage cookies","favorite.text":"Favorite","productInformationSheet.text":"Product Information Sheet","flowplayer.language.et":"Estonian","flowplayer.language.sk":"Slovak","LT.displayName":"Lithuania","en_IE.displayName":"Ireland","PL.displayName":"Poland","ZA.displayName":"South Africa","BG.displayName":"Bulgaria","pl_PL.displayName":"","BH.displayName":"Bahrain","flowplayer.language.en":"English","flowplayer.language.bg":"Bulgarian","FI.displayName":"Finland","CH.displayName":"Switzerland","JP.displayName":"","BY.displayName":"Belarus","BR.displayName":"Brazil","TR.displayName":"Türkiye","fr_BE.displayName":"Belgium","IE.displayName":"Republic of Ireland","en_EE.displayName":"Estonia","sv_SE.displayName":"","recycling_cost_5Euro.text":"","BE.displayName":"Belgium","LU.displayName":"Luxembourg","IS.displayName":"Iceland","flowplayer.language.kk":"Kazakh","RU.displayName":"Russia","buy.button.text":"Where To Buy","dynamic.product_count.12":"Products","CZ.displayName":"Czech Republic","MD.displayName":"Moldova","CN.region_displayName":"China Region","dynamic.product_count.13":"Products","product_count.3":"0 Products","AL.displayName":"Albania","XM.displayName":"Middle East","en_ID.displayName":"Indonesia","IN.displayName":"India","dynamic.product_count.2":"Products","dynamic.product_count.23":"Products","MC.displayName":"Monaco","flowplayer.language.it":"Italian","US.region_displayName":"Pan America Region","it_IT.displayName":"","fr_CH.displayName":"Switzerland","meganav.viewMore":"View More","violators.topPick":"Top Pick","MK.displayName":"Macedonia","AP.displayName":"Others","HK.displayName":"Hong Kong","ro_RO.displayName":"","product_count.23":"0 Products","bg_BG.displayName":"","en_US.displayName":"USA","AU.displayName":"Australia","VA.displayName":"Vatican City","product_count.74":"","vi_VN.displayName":"","PH.displayName":"Philippines","NZ.displayName":"New Zealand","product_count.34":"0 Products","SA.displayName":"Kingdom of Saudi Arabia","de_AT.displayName":"Austria","product_count.12":"0 Products","flowplayer.language.sl":"Slovene","KR.displayName":"Korea","SG.displayName":"Singapore","flowplayer.language.es":"Spanish","sk_SK.displayName":"","ID.displayName":"Indonesia","en_SG.displayName":"Singapore","ru_RU.displayName":"Russia","cs_CZ.displayName":"","de_DE.displayName":"Germany","MY.displayName":"Malaysia","dynamic.product_count.31":"Products","related_products_link.text":"Related Products","DE.displayName":"Germany","en_CA.displayName":"Canada","es_ES.displayName":"Spain","favorites.tooltip.add_action":"Add to Favorites","flowplayer.language.no":"Norwegian","en_LV.displayName":"Latvia","product_count.2":"0 Products","GR.displayName":"Greece","favorites.tooltip.header":"Favorites","NO.displayName":"Norway","fr_CA.displayName":"Canada","en_TH.displayName":"Thailand","notify_me.text":"Notify 
Me","th_TH.displayName":"","sr_YU.displayName":"","dynamic.product_count.22":"Products","product.specifications_page_description":"Get the detailed list of specifications for the Sony 0 & see which 1 fit your needs.","dynamic.product_count.11":"Products","flowplayer.language.ru":"Russian","HU.displayName":"Hungary","product_count.64":"","en_MY.displayName":"Malaysia","HR.displayName":"Croatia","IT.displayName":"Italy","consent_warning.description":"Access your cookie preferences below and make sure to switch on the Youtube cookie under the 'Functional' section.","product.specifications_page_title":"0 Specifications","flowplayer.language.ar":"Arabic","AE.displayName":"United Arab Emirates","product.reviews_page_title":"0 Reviews & Ratings","product_count.32":"0 Products","sr_RS.displayName":"","favorites.tooltip.on_add":"Added","SM.displayName":"San Marino","flowplayer.language.pl":"Polish","accessory_finder.label":"Accessory Finder","dynamic.product_count.1":"Product","aria.slider.next":"","AP.region_displayName":"Asia Pacific Region","product.details_page_description":"Discover the 0 from Sony & explore all the 1 features.","product_count.21":"0 Products","additional_cta.text":"","EE.displayName":"Estonia","mk_MK.displayName":"","product_count.33":"0 Products","flowplayer.language.sv":"Slovenian","TH.displayName":"Thailand","tr_TR.displayName":"Türkiye","JO.displayName":"Jordan","hr_BA.displayName":"","favorites.tooltip.remove_action":"Remove Favorite","brexitTVEnergyLink.text":"","VN.displayName":"Vietnam","es_EC.displayName":"Ecuador","CY.displayName":"Cyprus","product_count.22":"0 Products","de_CH.displayName":"Switzerland","en_NZ.displayName":"New Zealand","product_count.default":"0 Products","violators.newProduct":"New","meganav.viewall":"View All","accessory_finder.help_label":"How do I find my model number?","CN.displayName":"China","share.text":"Share","dynamic.product_count.32":"Products","da_DK.displayName":"","PK.displayName":"Pakistan","pricing.rrp":"","UA.displayName":"Ukraine","consent_warning.title":"Please accept Youtube cookies to watch this video","pricing.starting.at":"Starting at","product_count.11":"0 Products","US.displayName":"United States","es_MX.displayName":"Mexico","buyButton.static_text.text":"","DK.displayName":"Denmark","reviews_listing.text":"Reviews and Ratings","es_PE.displayName":"Peru","hu_HU.displayName":"","aria.slider.thumbs":"","ES.displayName":"Spain","en_IN.displayName":"India","pricing.suffix":"for 0 model","NL.displayName":"Netherlands","accessory_finder.placeholder":"Enter your model number","de_LU.displayName":"","product_count.13":"0 Products","flowplayer.language.fr":"French","el_GR.displayName":"","product.productShots.alt_text.template":"Images of 0","flowplayer.language.el":"Greek","uk_UA.displayName":"Ukraine","product_count.24":"0 Products","favorites.tooltip.on_remove":"Removed","product.reviews_page_description":"Read the latest user reviews and ratings of the Sony 0 and explore the 1.","header.typeAheadMarketingResultsTitle":"Products","download_productInfoSheet_label":"Product Information Sheet","available_soon":"Available soon","sl_SI.displayName":"","EG.displayName":"Egypt","product.lifestyleShots.alt_text.template":"0 in action","IR.displayName":"Iran","AT.displayName":"Austria","product_count.1":"0 Product","flowplayer.language.hu":"Hungary","LM.displayName":"Latin America","product_count.31":"0 
Products","MT.displayName":"Malta","nl_BE.displayName":"","flowplayer.language.da":"Danish","fr_FR.displayName":"France","SK.displayName":"Slovakia","accessory_finder.subhead":"Select your model to find compatible accessories:","flowplayer.language.tr":"Turkish","flowplayer.language.zh":"Traditional Chinese","product.primaryShot.alt_text.template":"Picture of 0","JP.region_displayName":"","buy.button.disabled.text":"Currently not available","header.typeAheadSupportResultsTitle":"Support","flowplayer.language.bs":"Bosnian","flowplayer.language.xz":"Simplified Chinese","en_AU.displayName":"Australia","back_to_top.label":"Back to Top","AZ.displayName":"Azerbaijan","dynamic.accessory_count.default":"0 Accessories","flowplayer.language.cs":"Czech","violators.awardWinning":"Award Winner","LI.displayName":"Liechtenstein","flowplayer.language.de":"German","en_GB.displayName":"United Kingdom","dynamic.product_count.14":"Products","flowplayer.language.uk":"Ukrainian","product.detailsDimensionShots.alt_text.template":"Dimensions of 0","dynamic.product_count.33":"Products","flowplayer.language.pt":"Portuguese","hr_HR.displayName":"","brexitTVEnergyLabel.text":"","XA.displayName":"North Africa","dynamic.product_count.4":"Products","aria.slider.thumb":"","dynamic.product_count.21":"Products","tax_disclaimer.text":"","product.dimensionShots.alt_text.template":"Dimensions of 0","PT.displayName":"Portugal","RO.displayName":"Romania","es_CO.displayName":"Colombia";window.__PRELOADED_STATE__ = window.__PRELOADED_STATE__ || ;window.__PRELOADED_STATE__.location ="pathname":"\/electronics\/support\/articles\/00017843","query":;window.__PRELOADED_STATE__.page ="type":"articleDetails","searchByType":"none","typeAlias":null,"origin":"server","hasError":false,"isLoading":false,"isLoadingPartialContent":false,"location":"pathname":"\/electronics\/support\/articles\/00017843","query":,"params":"articleFamilyId":"00017843","cookie":"","locale":"en_US";window.__CTX__ = window.__CTX__ || ;window.__COMPONENT_CONFIG__ = window.__COMPONENT_CONFIG__ || ;window.__I18N__ = window.__I18N__ || ;window.__CTX__.cc_agent ="module":"isCritical":false;window.__COMPONENT_CONFIG__.cc_agent ="support_ccAgentTag":true,"support_additionalVisibleModulesInLiteMode":[];window.__I18N__.cc_agent ="GB.displayName":"United Kingdom","SE.displayName":"Sweden","related_products_curated_link.text":"Related Products","flowplayer.language.ja":"Japanese","idk.text":"Sorry, this data isn't available","pt_PT.displayName":"Portugal","AD.displayName":"Andorra","no_NO.displayName":"","dynamic.product_count.default":"Products","support_link.text":"Support","TW.displayName":"Taiwan","product_count.14":"0 Products","YU.displayName":"Serbia","en_HK.displayName":"","flowplayer.language.fi":"Finnish","es_CL.displayName":"Chile","ME.displayName":"Montenegro","FR.displayName":"France","BA.displayName":"Bosnia and Herzegovina","flowplayer.language.ko":"Korean","EU.region_displayName":"Europe Region","CA.displayName":"Canada","pricing.starting.at_succeeding":"","SI.displayName":"Slovenia","product_count.72":"","es_AR.displayName":"Argentina","dynamic.product_count.34":"Products","NG.displayName":"Nigeria","sony.text":"Sony US","product_count.4":"0 Products","aria.slider.previous":"","KZ.displayName":"Kazakhstan","flowplayer.language.nl":"Dutch","fi_FI.displayName":"","en_PH.displayName":"Philippines","KW.displayName":"Kuwait","flowplayer.language.al":"Albanian","dynamic.accessory_count.1":"0 
Accessory","MA.displayName":"Morocco","flowplayer.language.mk":"Macedonian","nl_NL.displayName":"","dynamic.product_count.24":"Products","fr_LU.displayName":"","LV.displayName":"Latvia","lt_LT.displayName":"","dynamic.product_count.3":"Products","flowplayer.language.ro":"Romanian","GE.displayName":"Georgia","consent_warning.button_text":"Manage cookies","favorite.text":"Favorite","ccAgent_text":"CC Agent Portal for Internal Search","productInformationSheet.text":"Product Information Sheet","flowplayer.language.et":"Estonian","flowplayer.language.sk":"Slovak","LT.displayName":"Lithuania","en_IE.displayName":"Ireland","PL.displayName":"Poland","ZA.displayName":"South Africa","BG.displayName":"Bulgaria","pl_PL.displayName":"","BH.displayName":"Bahrain","flowplayer.language.en":"English","flowplayer.language.bg":"Bulgarian","FI.displayName":"Finland","CH.displayName":"Switzerland","JP.displayName":"","BY.displayName":"Belarus","BR.displayName":"Brazil","TR.displayName":"Türkiye","fr_BE.displayName":"Belgium","IE.displayName":"Republic of Ireland","en_EE.displayName":"Estonia","sv_SE.displayName":"","recycling_cost_5Euro.text":"","BE.displayName":"Belgium","LU.displayName":"Luxembourg","IS.displayName":"Iceland","flowplayer.language.kk":"Kazakh","RU.displayName":"Russia","buy.button.text":"Where To Buy","dynamic.product_count.12":"Products","CZ.displayName":"Czech Republic","MD.displayName":"Moldova","CN.region_displayName":"China Region","dynamic.product_count.13":"Products","product_count.3":"0 Products","AL.displayName":"Albania","XM.displayName":"Middle East","en_ID.displayName":"Indonesia","IN.displayName":"India","dynamic.product_count.2":"Products","dynamic.product_count.23":"Products","MC.displayName":"Monaco","flowplayer.language.it":"Italian","US.region_displayName":"Pan America Region","it_IT.displayName":"","fr_CH.displayName":"Switzerland","meganav.viewMore":"View More","violators.topPick":"Top Pick","MK.displayName":"Macedonia","AP.displayName":"Others","HK.displayName":"Hong Kong","ro_RO.displayName":"","myFeedbacks_button":"My Feedbacks","product_count.23":"0 Products","bg_BG.displayName":"","en_US.displayName":"USA","AU.displayName":"Australia","VA.displayName":"Vatican City","product_count.74":"","vi_VN.displayName":"","PH.displayName":"Philippines","NZ.displayName":"New Zealand","product_count.34":"0 Products","SA.displayName":"Kingdom of Saudi Arabia","de_AT.displayName":"Austria","product_count.12":"0 Products","flowplayer.language.sl":"Slovene","KR.displayName":"Korea","SG.displayName":"Singapore","flowplayer.language.es":"Spanish","sk_SK.displayName":"","ID.displayName":"Indonesia","en_SG.displayName":"Singapore","ru_RU.displayName":"Russia","cs_CZ.displayName":"","de_DE.displayName":"Germany","MY.displayName":"Malaysia","dynamic.product_count.31":"Products","related_products_link.text":"Related Products","DE.displayName":"Germany","en_CA.displayName":"Canada","es_ES.displayName":"Spain","favorites.tooltip.add_action":"Add to Favorites","flowplayer.language.no":"Norwegian","en_LV.displayName":"Latvia","product_count.2":"0 Products","GR.displayName":"Greece","favorites.tooltip.header":"Favorites","NO.displayName":"Norway","fr_CA.displayName":"Canada","en_TH.displayName":"Thailand","notify_me.text":"Notify Me","th_TH.displayName":"","sr_YU.displayName":"","dynamic.product_count.22":"Products","product.specifications_page_description":"Get the detailed list of specifications for the Sony 0 & see which 1 fit your 
needs.","dynamic.product_count.11":"Products","flowplayer.language.ru":"Russian","HU.displayName":"Hungary","product_count.64":"","en_MY.displayName":"Malaysia","HR.displayName":"Croatia","IT.displayName":"Italy","consent_warning.description":"Access your cookie preferences below and make sure to switch on the Youtube cookie under the 'Functional' section.","product.specifications_page_title":"0 Specifications","flowplayer.language.ar":"Arabic","AE.displayName":"United Arab Emirates","product.reviews_page_title":"0 Reviews & Ratings","product_count.32":"0 Products","sr_RS.displayName":"","favorites.tooltip.on_add":"Added","SM.displayName":"San Marino","flowplayer.language.pl":"Polish","accessory_finder.label":"Accessory Finder","dynamic.product_count.1":"Product","aria.slider.next":"","AP.region_displayName":"Asia Pacific Region","product.details_page_description":"Discover the 0 from Sony & explore all the 1 features.","product_count.21":"0 Products","additional_cta.text":"","EE.displayName":"Estonia","mk_MK.displayName":"","product_count.33":"0 Products","flowplayer.language.sv":"Slovenian","TH.displayName":"Thailand","tr_TR.displayName":"Türkiye","JO.displayName":"Jordan","hr_BA.displayName":"","favorites.tooltip.remove_action":"Remove Favorite","brexitTVEnergyLink.text":"","VN.displayName":"Vietnam","es_EC.displayName":"Ecuador","CY.displayName":"Cyprus","product_count.22":"0 Products","de_CH.displayName":"Switzerland","en_NZ.displayName":"New Zealand","product_count.default":"0 Products","violators.newProduct":"New","meganav.viewall":"View All","accessory_finder.help_label":"How do I find my model number?","CN.displayName":"China","share.text":"Share","dynamic.product_count.32":"Products","da_DK.displayName":"","PK.displayName":"Pakistan","pricing.rrp":"","UA.displayName":"Ukraine","consent_warning.title":"Please accept Youtube cookies to watch this video","pricing.starting.at":"Starting at","product_count.11":"0 Products","US.displayName":"United States","es_MX.displayName":"Mexico","buyButton.static_text.text":"","DK.displayName":"Denmark","reviews_listing.text":"Reviews and Ratings","es_PE.displayName":"Peru","hu_HU.displayName":"","aria.slider.thumbs":"","ES.displayName":"Spain","en_IN.displayName":"India","pricing.suffix":"for 0 model","NL.displayName":"Netherlands","accessory_finder.placeholder":"Enter your model number","de_LU.displayName":"","product_count.13":"0 Products","flowplayer.language.fr":"French","el_GR.displayName":"","product.productShots.alt_text.template":"Images of 0","flowplayer.language.el":"Greek","uk_UA.displayName":"Ukraine","product_count.24":"0 Products","favorites.tooltip.on_remove":"Removed","product.reviews_page_description":"Read the latest user reviews and ratings of the Sony 0 and explore the 1.","header.typeAheadMarketingResultsTitle":"Products","download_productInfoSheet_label":"Product Information Sheet","available_soon":"Available soon","sl_SI.displayName":"","EG.displayName":"Egypt","product.lifestyleShots.alt_text.template":"0 in action","IR.displayName":"Iran","AT.displayName":"Austria","product_count.1":"0 Product","flowplayer.language.hu":"Hungary","LM.displayName":"Latin America","product_count.31":"0 Products","MT.displayName":"Malta","nl_BE.displayName":"","flowplayer.language.da":"Danish","fr_FR.displayName":"France","SK.displayName":"Slovakia","accessory_finder.subhead":"Select your model to find compatible accessories:","flowplayer.language.tr":"Turkish","flowplayer.language.zh":"Traditional 
Chinese","product.primaryShot.alt_text.template":"Picture of 0","JP.region_displayName":"","buy.button.disabled.text":"Currently not available","header.typeAheadSupportResultsTitle":"Support","flowplayer.language.bs":"Bosnian","flowplayer.language.xz":"Simplified Chinese","en_AU.displayName":"Australia","back_to_top.label":"Back to Top","AZ.displayName":"Azerbaijan","dynamic.accessory_count.default":"0 Accessories","flowplayer.language.cs":"Czech","violators.awardWinning":"Award Winner","LI.displayName":"Liechtenstein","flowplayer.language.de":"German","en_GB.displayName":"United Kingdom","dynamic.product_count.14":"Products","flowplayer.language.uk":"Ukrainian","product.detailsDimensionShots.alt_text.template":"Dimensions of 0","dynamic.product_count.33":"Products","flowplayer.language.pt":"Portuguese","hr_HR.displayName":"","brexitTVEnergyLabel.text":"","XA.displayName":"North Africa","dynamic.product_count.4":"Products","aria.slider.thumb":"","dynamic.product_count.21":"Products","tax_disclaimer.text":"","product.dimensionShots.alt_text.template":"Dimensions of 0","PT.displayName":"Portugal","RO.displayName":"Romania","es_CO.displayName":"Colombia";Sony Support

            aaccfb2cb3
            -
            -
            \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Fujitsu Ten Nd3t W54 Toyota Car Dvd User Manual Pdf.md b/spaces/bioriAsaeru/text-to-voice/Fujitsu Ten Nd3t W54 Toyota Car Dvd User Manual Pdf.md deleted file mode 100644 index d6f094b3bbd5446ed120193aa4b5f5729f4c860b..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Fujitsu Ten Nd3t W54 Toyota Car Dvd User Manual Pdf.md +++ /dev/null @@ -1,34 +0,0 @@ -
            -

            How to Download and Use the Fujitsu Ten ND3T W54 Toyota Car DVD User Manual PDF

            - -

            If you own a Toyota car with a Fujitsu Ten ND3T W54 DVD player, you might be wondering how to download and use the user manual PDF. The user manual PDF is a handy guide that explains the features and functions of the DVD player, as well as how to troubleshoot common problems. In this article, we will show you how to download and use the Fujitsu Ten ND3T W54 Toyota car DVD user manual PDF in a few simple steps.

            -

            fujitsu ten nd3t w54 toyota car dvd user manual pdf


            Download Zip »»» https://urloso.com/2uyPsk



            - -

            Step 1: Download the User Manual PDF

            - -

            The first step is to download the user manual PDF from the official website of Fujitsu Ten. To do this, you need to have the serial number of your DVD player, which is usually located on the back or bottom of the device. You can also find it on the receipt or warranty card that came with your purchase.

            - -

            Once you have the serial number, go to https://www.fujitsu-ten.com/support/manual/ and enter the serial number in the search box. Then click the "Search" button to see a list of available manuals for your device. Choose the one that matches your model and preferred language, and click the "Download" button. You will need to agree to the terms and conditions before the download starts.

            - -

            The user manual PDF will be downloaded to your computer or mobile device. You can save it in a folder of your choice or open it directly with a PDF reader application.
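            If you prefer to save the file from a script instead of the browser, the sketch below shows one way to do it (this example is not from the original article). The URL is a placeholder (copy the real download link from the Fujitsu Ten search results), the local filename is hypothetical, and the third-party requests package is assumed to be installed.

```python
# Hedged sketch: save the manual PDF to disk with Python.
# MANUAL_URL is a placeholder; paste the actual link from the Fujitsu Ten search results.
import requests

MANUAL_URL = "https://www.fujitsu-ten.com/support/manual/<your-download-link>.pdf"  # placeholder

response = requests.get(MANUAL_URL, timeout=30)
response.raise_for_status()  # fail early if the link is wrong or the server refuses

with open("nd3t_w54_manual.pdf", "wb") as f:  # hypothetical local filename
    f.write(response.content)
print("Saved nd3t_w54_manual.pdf")
```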

            -

            - -

            Step 2: Use the User Manual PDF

            - -

            The second step is to use the user manual PDF to learn more about your DVD player. The manual is divided into several sections, such as:

            - -
            • Introduction: This section gives an overview of the DVD player and its main features.
            • Operation: This section explains how to operate the DVD player, such as how to insert and eject discs, how to use the remote control, how to adjust the settings, etc.
            • Functions: This section describes the various functions of the DVD player, such as how to play different types of discs, how to use the navigation system, how to connect external devices, etc.
            • Troubleshooting: This section provides solutions for common problems that you might encounter with your DVD player, such as error messages, poor sound quality, disc compatibility issues, etc.
            • Specifications: This section lists the technical specifications of your DVD player, such as power consumption, dimensions, weight, etc.
            - -

            You can use the table of contents or the index at the beginning or end of the user manual PDF to find the section that you need. You can also use the search function of your PDF reader application to look for specific keywords or phrases. You can zoom in or out of the pages, print them out, or bookmark them for future reference.
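            If you would rather search the manual programmatically than in a PDF reader, the sketch below shows one way to do it (again, not part of the original article). It assumes the manual was saved locally as nd3t_w54_manual.pdf (a hypothetical filename) and that the third-party pypdf package is installed.

```python
# Hedged sketch: list the pages of the manual whose text contains a keyword.
# Requires: pip install pypdf
from pypdf import PdfReader

def find_keyword(pdf_path: str, keyword: str) -> list[int]:
    """Return 1-based page numbers whose extracted text contains the keyword."""
    reader = PdfReader(pdf_path)
    hits = []
    for page_number, page in enumerate(reader.pages, start=1):
        text = page.extract_text() or ""  # image-only pages may yield no text
        if keyword.lower() in text.lower():
            hits.append(page_number)
    return hits

if __name__ == "__main__":
    print(find_keyword("nd3t_w54_manual.pdf", "error message"))
```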

            - -

            Conclusion

            - -

            The Fujitsu Ten ND3T W54 Toyota car DVD user manual PDF is a useful resource that can help you get the most out of your DVD player. By following these steps, you can easily download it and consult it whenever you need to. If you have any questions or issues with your DVD player, contact Fujitsu Ten customer service or visit their website for more information.

            -
            -
            \ No newline at end of file diff --git a/spaces/birdortyedi/instagram-filter-removal/modeling/base.py b/spaces/birdortyedi/instagram-filter-removal/modeling/base.py deleted file mode 100644 index 546427a1e9f91fceecea94913b23e46fc1787289..0000000000000000000000000000000000000000 --- a/spaces/birdortyedi/instagram-filter-removal/modeling/base.py +++ /dev/null @@ -1,60 +0,0 @@ -from torch import nn - - -class BaseNetwork(nn.Module): - def __init__(self): - super(BaseNetwork, self).__init__() - - def forward(self, x, y): - pass - - def print_network(self): - if isinstance(self, list): - self = self[0] - num_params = 0 - for param in self.parameters(): - num_params += param.numel() - print('Network [%s] was created. Total number of parameters: %.1f million. ' - 'To see the architecture, do print(network).' - % (type(self).__name__, num_params / 1000000)) - - def set_requires_grad(self, requires_grad=False): - """Set requies_grad=Fasle for all the networks to avoid unnecessary computations - Parameters: - requires_grad (bool) -- whether the networks require gradients or not - """ - for param in self.parameters(): - param.requires_grad = requires_grad - - def init_weights(self, init_type='xavier', gain=0.02): - def init_func(m): - classname = m.__class__.__name__ - if classname.find('BatchNorm2d') != -1: - if hasattr(m, 'weight') and m.weight is not None: - nn.init.normal_(m.weight.data, 1.0, gain) - if hasattr(m, 'bias') and m.bias is not None: - nn.init.constant_(m.bias.data, 0.0) - elif hasattr(m, 'weight') and (classname.find('Conv') != -1 or classname.find('Linear') != -1): - if init_type == 'normal': - nn.init.normal_(m.weight.data, 0.0, gain) - elif init_type == 'xavier': - nn.init.xavier_normal_(m.weight.data, gain=gain) - elif init_type == 'xavier_uniform': - nn.init.xavier_uniform_(m.weight.data, gain=1.0) - elif init_type == 'kaiming': - nn.init.kaiming_normal_(m.weight.data, a=0, mode='fan_in') - elif init_type == 'orthogonal': - nn.init.orthogonal_(m.weight.data, gain=gain) - elif init_type == 'none': # uses pytorch's default init method - m.reset_parameters() - else: - raise NotImplementedError('initialization method [%s] is not implemented' % init_type) - if hasattr(m, 'bias') and m.bias is not None: - nn.init.constant_(m.bias.data, 0.0) - - self.apply(init_func) - - # propagate to children - for m in self.children(): - if hasattr(m, 'init_weights'): - m.init_weights(init_type, gain) diff --git a/spaces/bradarrML/Diffusion_Space/utils.py b/spaces/bradarrML/Diffusion_Space/utils.py deleted file mode 100644 index ff1c065d186347ca51b47d010a697dbe1814695c..0000000000000000000000000000000000000000 --- a/spaces/bradarrML/Diffusion_Space/utils.py +++ /dev/null @@ -1,6 +0,0 @@ -def is_google_colab(): - try: - import google.colab - return True - except: - return False \ No newline at end of file diff --git a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/modules/__init__.py b/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/modules/__init__.py deleted file mode 100644 index 61418616ef18f0ecca56a007c43af4a731d98b9b..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AudioCreator_Music-Audio_Generation/audiocraft/modules/__init__.py +++ /dev/null @@ -1,22 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
-"""Modules used for building the models.""" - -# flake8: noqa -from .conv import ( - NormConv1d, - NormConv2d, - NormConvTranspose1d, - NormConvTranspose2d, - StreamableConv1d, - StreamableConvTranspose1d, - pad_for_conv1d, - pad1d, - unpad1d, -) -from .lstm import StreamableLSTM -from .seanet import SEANetEncoder, SEANetDecoder -from .transformer import StreamingTransformer \ No newline at end of file diff --git a/spaces/brainblow/beat_remixer/app.py b/spaces/brainblow/beat_remixer/app.py deleted file mode 100644 index ba07731e74f39b406f3ebe7ee29c76c9196889d3..0000000000000000000000000000000000000000 --- a/spaces/brainblow/beat_remixer/app.py +++ /dev/null @@ -1,123 +0,0 @@ -import gradio as gr, numpy as np -from gradio.components import Audio, Textbox, Checkbox, Image -import beat_manipulator as bm -import cv2 - -def BeatSwap(audiofile, pattern: str = 'test', scale:float = 1, shift:float = 0, caching:bool = True, variableBPM:bool = False): - print() - print(f'path = {audiofile}, pattern = "{pattern}", scale = {scale}, shift = {shift}, caching = {caching}, variable BPM = {variableBPM}') - if pattern == '' or pattern is None: pattern = 'test' - if caching is not False: caching == True - if variableBPM is not True: variableBPM == False - try: - scale=bm.utils._safer_eval(scale) - except: scale = 1 - try: - shift=bm.utils._safer_eval(shift) - except: shift = 0 - if scale <0: scale = -scale - if scale < 0.02: scale = 0.02 - print('Loading auidofile...') - if audiofile is not None: - try: - song=bm.song(audio=audiofile,log=False) - except Exception as e: - print(f'Failed to load audio, retrying: {e}') - song=bm.song(audio=audiofile, log=False) - else: - print(f'Audiofile is {audiofile}') - return - try: - print(f'Scale = {scale}, shift = {shift}, length = {len(song.audio[0])/song.sr}') - if len(song.audio[0]) > (song.sr*1800): - song.audio = np.array(song.audio, copy=False) - song.audio = song.audio[:,:song.sr*1800] - except Exception as e: print(f'Reducing audio size failed, why? {e}') - lib = 'madmom.BeatDetectionProcessor' if variableBPM is False else 'madmom.BeatTrackingProcessor' - song.path = '.'.join(song.path.split('.')[:-1])[:-8]+'.'+song.path.split('.')[-1] - print(f'path: {song.path}') - print('Generating beatmap...') - song.beatmap_generate(lib=lib, caching=caching) - song.beatmap_shift(shift) - song.beatmap_scale(scale) - print('Generating image...') - try: - song.image_generate() - image = bm.image.bw_to_colored(song.image) - y=min(len(image), len(image[0]), 2048) - y=max(y, 2048) - image = np.rot90(np.clip(cv2.resize(image, (y,y), interpolation=cv2.INTER_NEAREST), -1, 1)) - #print(image) - except Exception as e: - print(f'Image generation failed: {e}') - image = np.asarray([[0.5,-0.5],[-0.5,0.5]]) - print('Beatswapping...') - song.beatswap(pattern=pattern, scale=1, shift=0) - song.audio = (np.clip(np.asarray(song.audio), -1, 1) * 32766).astype(np.int16).T - #song.write_audio(output=bm.outputfilename('',song.filename, suffix=' (beatswap)')) - print('___ SUCCESS ___') - return ((song.sr, song.audio), image) - -audiofile=Audio(source='upload', type='filepath') -patternbox = Textbox(label="Pattern:", placeholder="1, 3, 2, 4!", value="1, 2>0.5, 3, 4>0.5, 5, 6>0.5, 3, 4>0.5, 7, 8", lines=1) -scalebox = Textbox(value=1, label="Beatmap scale. 
At 2, every two beat positions will be merged, at 0.5 - a beat position added between every two existing ones.", placeholder=1, lines=1) -shiftbox = Textbox(value=0, label="Beatmap shift, in beats (applies before scaling):", placeholder=0, lines=1) -cachebox = Checkbox(value=True, label="Enable caching generated beatmaps for faster loading. Saves a file with beat positions and loads it when you open same audio again.") -beatdetectionbox = Checkbox(value=False, label='Enable support for variable BPM, however this makes beat detection slightly less accurate') - -gr.Interface (fn=BeatSwap,inputs=[audiofile,patternbox,scalebox,shiftbox, cachebox, beatdetectionbox],outputs=[Audio(type='numpy'), Image(type='numpy')],theme="default", -title = "BrainBlow's Beat Remixer" -,description = """Remix music using AI-powered beat detection and advanced beat swapping. Make \"every other beat is missing\" remixes, or completely change beat of the song. - -Thanks to: -Github - https://github.com/stunlocked1/beat_manipulator. - - -### Basic usage -Upload your audio, enter the beat swapping pattern, change scale and shift if needed, and run it. - -### pattern syntax -patterns are sequences of **beats**, separated by **commas** or other separators. You can use spaces freely in patterns to make them look prettier. -- `1, 3, 2, 4` - swap 2nd and 3rd beat every four beats. Repeats every four beats because `4` is the biggest number in it. -- `1, 3, 4` - skip 2nd beat every four beats -- `1, 2, 3, 4!` - skip 4th beat every four beats. `!` skips the beat. - -**slicing:** -- `1>0.5` - plays first half of 1st beat -- `1<0.5` - plays last half of 1st beat -- `1 > 1/3, 2, 3, 4` - every four beats, plays first third of the first beat - you can use math expressions anywhere in your pattern. -- also instead of slicing beats you can use a smaller `scale` parameter to make more precise beat edits - -**merging beats:** -- `1; 2, 3, 4` - every four beats, play 1st and 2nd beats at the same time. - -**effects:** -- `1, 2r` - 2nd beat will be reversed -- `1, 2s0.5` - 2nd beat will be played at 0.5x speed -- `1, 2d10` - 2nd beat will have 8-bit effect (downsampled) - -You can do much more with the syntax - shuffle/randomize beats, use samples, mix two songs, etc. Syntax is described in detail at https://github.com/stunlocked1/beat_manipulator -### scale -`scale = 0.5` will insert a new beat position between every existing beat position in the beatmap. That allows you to make patterns on smaller intervals. - -`scale = 2`, on the other hand, will merge every two beat positions in the beatmap. Useful, for example, when beat map detection puts sees BPM as two times faster than it actually is, and puts beats in between every actual beat. -### shift -Shifts the beatmap, in beats. For example, if you want to remove 4th beat every four beats, you can do it by writing `1, 2, 3, 4!`. However sometimes it doesn't properly detect which beat is first, and for example remove 2nd beat every 4 beats instead. In that case, if you want 4th beat, use `shift = 2`. Also sometimes beats are detected right in between actual beats, so shift = 0.5 or -0.5 will fix it. -### creating images -You can create cool images based on beat positions. Each song produces its own unique image. This gradio app creates a 2048x2048 image from each song. 
-### presets -A bunch of example patterns: https://github.com/stunlocked1/beat_manipulator/blob/main/beat_manipulator/presets.yaml - -Those are supposed to be used on normalized beat maps, where kick + snare is two beats, so make sure to adjust beatmaps using `scale` and `shift`. - -### Changelog: -- play two beats at the same time by using `;` instead of `,` -- significantly reduced clicking -- shuffle and randomize beats -- gradient effect, similar to high pass -- add samples to beats -- use beats from other songs - -### My soundcloud https://soundcloud.com/soon -""" - ).launch(share=False) \ No newline at end of file diff --git a/spaces/brcprado/removeBG/README.md b/spaces/brcprado/removeBG/README.md deleted file mode 100644 index a82281da4458c9720149b4f3994fca8f5815d274..0000000000000000000000000000000000000000 --- a/spaces/brcprado/removeBG/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: RemoveBG -emoji: 🐢 -colorFrom: gray -colorTo: blue -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: bsd-2-clause ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/checkpoint/__init__.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/checkpoint/__init__.py deleted file mode 100644 index 99da0469ae7e169d8970e4b642fed3f870076860..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/checkpoint/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. -# File: - - -from . import catalog as _UNUSED # register the handler -from .detection_checkpoint import DetectionCheckpointer -from fvcore.common.checkpoint import Checkpointer, PeriodicCheckpointer - -__all__ = ["Checkpointer", "PeriodicCheckpointer", "DetectionCheckpointer"] diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/config/instantiate.py b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/config/instantiate.py deleted file mode 100644 index 05ee2c7d21c9bf3e56a0a8e98447d2587b4b8fed..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/config/instantiate.py +++ /dev/null @@ -1,88 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import collections.abc as abc -import dataclasses -import logging -from typing import Any - -from detectron2.utils.registry import _convert_target_to_string, locate - -__all__ = ["dump_dataclass", "instantiate"] - - -def dump_dataclass(obj: Any): - """ - Dump a dataclass recursively into a dict that can be later instantiated. - - Args: - obj: a dataclass object - - Returns: - dict - """ - assert dataclasses.is_dataclass(obj) and not isinstance( - obj, type - ), "dump_dataclass() requires an instance of a dataclass." - ret = {"_target_": _convert_target_to_string(type(obj))} - for f in dataclasses.fields(obj): - v = getattr(obj, f.name) - if dataclasses.is_dataclass(v): - v = dump_dataclass(v) - if isinstance(v, (list, tuple)): - v = [dump_dataclass(x) if dataclasses.is_dataclass(x) else x for x in v] - ret[f.name] = v - return ret - - -def instantiate(cfg): - """ - Recursively instantiate objects defined in dictionaries by - "_target_" and arguments. 
- - Args: - cfg: a dict-like object with "_target_" that defines the caller, and - other keys that define the arguments - - Returns: - object instantiated by cfg - """ - from omegaconf import ListConfig, DictConfig, OmegaConf - - if isinstance(cfg, ListConfig): - lst = [instantiate(x) for x in cfg] - return ListConfig(lst, flags={"allow_objects": True}) - if isinstance(cfg, list): - # Specialize for list, because many classes take - # list[objects] as arguments, such as ResNet, DatasetMapper - return [instantiate(x) for x in cfg] - - # If input is a DictConfig backed by dataclasses (i.e. omegaconf's structured config), - # instantiate it to the actual dataclass. - if isinstance(cfg, DictConfig) and dataclasses.is_dataclass(cfg._metadata.object_type): - return OmegaConf.to_object(cfg) - - if isinstance(cfg, abc.Mapping) and "_target_" in cfg: - # conceptually equivalent to hydra.utils.instantiate(cfg) with _convert_=all, - # but faster: https://github.com/facebookresearch/hydra/issues/1200 - cfg = {k: instantiate(v) for k, v in cfg.items()} - cls = cfg.pop("_target_") - cls = instantiate(cls) - - if isinstance(cls, str): - cls_name = cls - cls = locate(cls_name) - assert cls is not None, cls_name - else: - try: - cls_name = cls.__module__ + "." + cls.__qualname__ - except Exception: - # target could be anything, so the above could fail - cls_name = str(cls) - assert callable(cls), f"_target_ {cls} does not define a callable object" - try: - return cls(**cfg) - except TypeError: - logger = logging.getLogger(__name__) - logger.error(f"Error when instantiating {cls_name}!") - raise - return cfg # return as-is if don't know what to do diff --git a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/font.py b/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/font.py deleted file mode 100644 index 5ac530d7b949f50314a0d9cf5d744bedcace0571..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/pyrender/pyrender/font.py +++ /dev/null @@ -1,272 +0,0 @@ -"""Font texture loader and processor. - -Author: Matthew Matl -""" -import freetype -import numpy as np -import os - -import OpenGL -from OpenGL.GL import * - -from .constants import TextAlign, FLOAT_SZ -from .texture import Texture -from .sampler import Sampler - - -class FontCache(object): - """A cache for fonts. - """ - - def __init__(self, font_dir=None): - self._font_cache = {} - self.font_dir = font_dir - if self.font_dir is None: - base_dir, _ = os.path.split(os.path.realpath(__file__)) - self.font_dir = os.path.join(base_dir, 'fonts') - - def get_font(self, font_name, font_pt): - # If it's a file, load it directly, else, try to load from font dir. - if os.path.isfile(font_name): - font_filename = font_name - _, font_name = os.path.split(font_name) - font_name, _ = os.path.split(font_name) - else: - font_filename = os.path.join(self.font_dir, font_name) + '.ttf' - - cid = OpenGL.contextdata.getContext() - key = (cid, font_name, int(font_pt)) - - if key not in self._font_cache: - self._font_cache[key] = Font(font_filename, font_pt) - return self._font_cache[key] - - def clear(self): - for key in self._font_cache: - self._font_cache[key].delete() - self._font_cache = {} - - -class Character(object): - """A single character, with its texture and attributes. - """ - - def __init__(self, texture, size, bearing, advance): - self.texture = texture - self.size = size - self.bearing = bearing - self.advance = advance - - -class Font(object): - """A font object. - - Parameters - ---------- - font_file : str - The file to load the font from. 
- font_pt : int - The height of the font in pixels. - """ - - def __init__(self, font_file, font_pt=40): - self.font_file = font_file - self.font_pt = int(font_pt) - self._face = freetype.Face(font_file) - self._face.set_pixel_sizes(0, font_pt) - self._character_map = {} - - for i in range(0, 128): - - # Generate texture - face = self._face - face.load_char(chr(i)) - buf = face.glyph.bitmap.buffer - src = (np.array(buf) / 255.0).astype(np.float32) - src = src.reshape((face.glyph.bitmap.rows, - face.glyph.bitmap.width)) - tex = Texture( - sampler=Sampler( - magFilter=GL_LINEAR, - minFilter=GL_LINEAR, - wrapS=GL_CLAMP_TO_EDGE, - wrapT=GL_CLAMP_TO_EDGE - ), - source=src, - source_channels='R', - ) - character = Character( - texture=tex, - size=np.array([face.glyph.bitmap.width, - face.glyph.bitmap.rows]), - bearing=np.array([face.glyph.bitmap_left, - face.glyph.bitmap_top]), - advance=face.glyph.advance.x - ) - self._character_map[chr(i)] = character - - self._vbo = None - self._vao = None - - @property - def font_file(self): - """str : The file the font was loaded from. - """ - return self._font_file - - @font_file.setter - def font_file(self, value): - self._font_file = value - - @property - def font_pt(self): - """int : The height of the font in pixels. - """ - return self._font_pt - - @font_pt.setter - def font_pt(self, value): - self._font_pt = int(value) - - def _add_to_context(self): - - self._vao = glGenVertexArrays(1) - glBindVertexArray(self._vao) - self._vbo = glGenBuffers(1) - glBindBuffer(GL_ARRAY_BUFFER, self._vbo) - glBufferData(GL_ARRAY_BUFFER, FLOAT_SZ * 6 * 4, None, GL_DYNAMIC_DRAW) - glEnableVertexAttribArray(0) - glVertexAttribPointer( - 0, 4, GL_FLOAT, GL_FALSE, 4 * FLOAT_SZ, ctypes.c_void_p(0) - ) - glBindVertexArray(0) - - glPixelStorei(GL_UNPACK_ALIGNMENT, 1) - for c in self._character_map: - ch = self._character_map[c] - if not ch.texture._in_context(): - ch.texture._add_to_context() - - def _remove_from_context(self): - for c in self._character_map: - ch = self._character_map[c] - ch.texture.delete() - if self._vao is not None: - glDeleteVertexArrays(1, [self._vao]) - glDeleteBuffers(1, [self._vbo]) - self._vao = None - self._vbo = None - - def _in_context(self): - return self._vao is not None - - def _bind(self): - glBindVertexArray(self._vao) - - def _unbind(self): - glBindVertexArray(0) - - def delete(self): - self._unbind() - self._remove_from_context() - - def render_string(self, text, x, y, scale=1.0, - align=TextAlign.BOTTOM_LEFT): - """Render a string to the current view buffer. - - Note - ---- - Assumes correct shader program already bound w/ uniforms set. - - Parameters - ---------- - text : str - The text to render. - x : int - Horizontal pixel location of text. - y : int - Vertical pixel location of text. - scale : int - Scaling factor for text. - align : int - One of the TextAlign options which specifies where the ``x`` - and ``y`` parameters lie on the text. For example, - :attr:`.TextAlign.BOTTOM_LEFT` means that ``x`` and ``y`` indicate - the position of the bottom-left corner of the textbox. 
- """ - glActiveTexture(GL_TEXTURE0) - glEnable(GL_BLEND) - glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) - glDisable(GL_DEPTH_TEST) - glPolygonMode(GL_FRONT_AND_BACK, GL_FILL) - self._bind() - - # Determine width and height of text relative to x, y - width = 0.0 - height = 0.0 - for c in text: - ch = self._character_map[c] - height = max(height, ch.bearing[1] * scale) - width += (ch.advance >> 6) * scale - - # Determine offsets based on alignments - xoff = 0 - yoff = 0 - if align == TextAlign.BOTTOM_RIGHT: - xoff = -width - elif align == TextAlign.BOTTOM_CENTER: - xoff = -width / 2.0 - elif align == TextAlign.TOP_LEFT: - yoff = -height - elif align == TextAlign.TOP_RIGHT: - yoff = -height - xoff = -width - elif align == TextAlign.TOP_CENTER: - yoff = -height - xoff = -width / 2.0 - elif align == TextAlign.CENTER: - xoff = -width / 2.0 - yoff = -height / 2.0 - elif align == TextAlign.CENTER_LEFT: - yoff = -height / 2.0 - elif align == TextAlign.CENTER_RIGHT: - xoff = -width - yoff = -height / 2.0 - - x += xoff - y += yoff - - ch = None - for c in text: - ch = self._character_map[c] - xpos = x + ch.bearing[0] * scale - ypos = y - (ch.size[1] - ch.bearing[1]) * scale - w = ch.size[0] * scale - h = ch.size[1] * scale - - vertices = np.array([ - [xpos, ypos, 0.0, 0.0], - [xpos + w, ypos, 1.0, 0.0], - [xpos + w, ypos + h, 1.0, 1.0], - [xpos + w, ypos + h, 1.0, 1.0], - [xpos, ypos + h, 0.0, 1.0], - [xpos, ypos, 0.0, 0.0], - ], dtype=np.float32) - - ch.texture._bind() - - glBindBuffer(GL_ARRAY_BUFFER, self._vbo) - glBufferData( - GL_ARRAY_BUFFER, FLOAT_SZ * 6 * 4, vertices, GL_DYNAMIC_DRAW - ) - # TODO MAKE THIS MORE EFFICIENT, lgBufferSubData is broken - # glBufferSubData( - # GL_ARRAY_BUFFER, 0, 6 * 4 * FLOAT_SZ, - # np.ascontiguousarray(vertices.flatten) - # ) - glDrawArrays(GL_TRIANGLES, 0, 6) - x += (ch.advance >> 6) * scale - - self._unbind() - if ch: - ch.texture._unbind() diff --git a/spaces/caffeinum/VToonify/vtoonify/model/raft/core/update.py b/spaces/caffeinum/VToonify/vtoonify/model/raft/core/update.py deleted file mode 100644 index f940497f9b5eb1c12091574fe9a0223a1b196d50..0000000000000000000000000000000000000000 --- a/spaces/caffeinum/VToonify/vtoonify/model/raft/core/update.py +++ /dev/null @@ -1,139 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - - -class FlowHead(nn.Module): - def __init__(self, input_dim=128, hidden_dim=256): - super(FlowHead, self).__init__() - self.conv1 = nn.Conv2d(input_dim, hidden_dim, 3, padding=1) - self.conv2 = nn.Conv2d(hidden_dim, 2, 3, padding=1) - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - return self.conv2(self.relu(self.conv1(x))) - -class ConvGRU(nn.Module): - def __init__(self, hidden_dim=128, input_dim=192+128): - super(ConvGRU, self).__init__() - self.convz = nn.Conv2d(hidden_dim+input_dim, hidden_dim, 3, padding=1) - self.convr = nn.Conv2d(hidden_dim+input_dim, hidden_dim, 3, padding=1) - self.convq = nn.Conv2d(hidden_dim+input_dim, hidden_dim, 3, padding=1) - - def forward(self, h, x): - hx = torch.cat([h, x], dim=1) - - z = torch.sigmoid(self.convz(hx)) - r = torch.sigmoid(self.convr(hx)) - q = torch.tanh(self.convq(torch.cat([r*h, x], dim=1))) - - h = (1-z) * h + z * q - return h - -class SepConvGRU(nn.Module): - def __init__(self, hidden_dim=128, input_dim=192+128): - super(SepConvGRU, self).__init__() - self.convz1 = nn.Conv2d(hidden_dim+input_dim, hidden_dim, (1,5), padding=(0,2)) - self.convr1 = nn.Conv2d(hidden_dim+input_dim, hidden_dim, (1,5), padding=(0,2)) - self.convq1 = 
nn.Conv2d(hidden_dim+input_dim, hidden_dim, (1,5), padding=(0,2)) - - self.convz2 = nn.Conv2d(hidden_dim+input_dim, hidden_dim, (5,1), padding=(2,0)) - self.convr2 = nn.Conv2d(hidden_dim+input_dim, hidden_dim, (5,1), padding=(2,0)) - self.convq2 = nn.Conv2d(hidden_dim+input_dim, hidden_dim, (5,1), padding=(2,0)) - - - def forward(self, h, x): - # horizontal - hx = torch.cat([h, x], dim=1) - z = torch.sigmoid(self.convz1(hx)) - r = torch.sigmoid(self.convr1(hx)) - q = torch.tanh(self.convq1(torch.cat([r*h, x], dim=1))) - h = (1-z) * h + z * q - - # vertical - hx = torch.cat([h, x], dim=1) - z = torch.sigmoid(self.convz2(hx)) - r = torch.sigmoid(self.convr2(hx)) - q = torch.tanh(self.convq2(torch.cat([r*h, x], dim=1))) - h = (1-z) * h + z * q - - return h - -class SmallMotionEncoder(nn.Module): - def __init__(self, args): - super(SmallMotionEncoder, self).__init__() - cor_planes = args.corr_levels * (2*args.corr_radius + 1)**2 - self.convc1 = nn.Conv2d(cor_planes, 96, 1, padding=0) - self.convf1 = nn.Conv2d(2, 64, 7, padding=3) - self.convf2 = nn.Conv2d(64, 32, 3, padding=1) - self.conv = nn.Conv2d(128, 80, 3, padding=1) - - def forward(self, flow, corr): - cor = F.relu(self.convc1(corr)) - flo = F.relu(self.convf1(flow)) - flo = F.relu(self.convf2(flo)) - cor_flo = torch.cat([cor, flo], dim=1) - out = F.relu(self.conv(cor_flo)) - return torch.cat([out, flow], dim=1) - -class BasicMotionEncoder(nn.Module): - def __init__(self, args): - super(BasicMotionEncoder, self).__init__() - cor_planes = args.corr_levels * (2*args.corr_radius + 1)**2 - self.convc1 = nn.Conv2d(cor_planes, 256, 1, padding=0) - self.convc2 = nn.Conv2d(256, 192, 3, padding=1) - self.convf1 = nn.Conv2d(2, 128, 7, padding=3) - self.convf2 = nn.Conv2d(128, 64, 3, padding=1) - self.conv = nn.Conv2d(64+192, 128-2, 3, padding=1) - - def forward(self, flow, corr): - cor = F.relu(self.convc1(corr)) - cor = F.relu(self.convc2(cor)) - flo = F.relu(self.convf1(flow)) - flo = F.relu(self.convf2(flo)) - - cor_flo = torch.cat([cor, flo], dim=1) - out = F.relu(self.conv(cor_flo)) - return torch.cat([out, flow], dim=1) - -class SmallUpdateBlock(nn.Module): - def __init__(self, args, hidden_dim=96): - super(SmallUpdateBlock, self).__init__() - self.encoder = SmallMotionEncoder(args) - self.gru = ConvGRU(hidden_dim=hidden_dim, input_dim=82+64) - self.flow_head = FlowHead(hidden_dim, hidden_dim=128) - - def forward(self, net, inp, corr, flow): - motion_features = self.encoder(flow, corr) - inp = torch.cat([inp, motion_features], dim=1) - net = self.gru(net, inp) - delta_flow = self.flow_head(net) - - return net, None, delta_flow - -class BasicUpdateBlock(nn.Module): - def __init__(self, args, hidden_dim=128, input_dim=128): - super(BasicUpdateBlock, self).__init__() - self.args = args - self.encoder = BasicMotionEncoder(args) - self.gru = SepConvGRU(hidden_dim=hidden_dim, input_dim=128+hidden_dim) - self.flow_head = FlowHead(hidden_dim, hidden_dim=256) - - self.mask = nn.Sequential( - nn.Conv2d(128, 256, 3, padding=1), - nn.ReLU(inplace=True), - nn.Conv2d(256, 64*9, 1, padding=0)) - - def forward(self, net, inp, corr, flow, upsample=True): - motion_features = self.encoder(flow, corr) - inp = torch.cat([inp, motion_features], dim=1) - - net = self.gru(net, inp) - delta_flow = self.flow_head(net) - - # scale mask to balence gradients - mask = .25 * self.mask(net) - return net, mask, delta_flow - - - diff --git a/spaces/cakewalk/splat/main.js b/spaces/cakewalk/splat/main.js deleted file mode 100644 index 
3e0b0916ca9e04f30b9fc03ccb8bfda51567b26a..0000000000000000000000000000000000000000 --- a/spaces/cakewalk/splat/main.js +++ /dev/null @@ -1,1325 +0,0 @@ -let cameras = [ - { - id: 0, - img_name: "00001", - width: 1959, - height: 1090, - position: [ - -3.0089893469241797, -0.11086489695181866, -3.7527640949141428, - ], - rotation: [ - [0.876134201218856, 0.06925962026449776, 0.47706599800804744], - [-0.04747421839895102, 0.9972110940209488, -0.057586739349882114], - [-0.4797239414934443, 0.027805376500959853, 0.8769787916452908], - ], - fy: 1164.6601287484507, - fx: 1159.5880733038064, - }, - { - id: 1, - img_name: "00009", - width: 1959, - height: 1090, - position: [ - -2.5199776022057296, -0.09704735754873686, -3.6247725540304545, - ], - rotation: [ - [0.9982731285632193, -0.011928707708098955, -0.05751927260507243], - [0.0065061360949636325, 0.9955928229282383, -0.09355533724430458], - [0.058381769258182864, 0.09301955098900708, 0.9939511719154457], - ], - fy: 1164.6601287484507, - fx: 1159.5880733038064, - }, - { - id: 2, - img_name: "00017", - width: 1959, - height: 1090, - position: [ - -0.7737533667465242, -0.3364271945329695, -2.9358969417573753, - ], - rotation: [ - [0.9998813418672372, 0.013742375651625236, -0.0069605529394208224], - [-0.014268370388586709, 0.996512943252834, -0.08220929105659476], - [0.00580653013657589, 0.08229885200307129, 0.9965907801935302], - ], - fy: 1164.6601287484507, - fx: 1159.5880733038064, - }, - { - id: 3, - img_name: "00025", - width: 1959, - height: 1090, - position: [ - 1.2198221749590001, -0.2196687861401182, -2.3183162007028453, - ], - rotation: [ - [0.9208648867765482, 0.0012010625395201253, 0.389880004297208], - [-0.06298204172269357, 0.987319521752825, 0.14571693239364383], - [-0.3847611242348369, -0.1587410451475895, 0.9092635249821667], - ], - fy: 1164.6601287484507, - fx: 1159.5880733038064, - }, - { - id: 4, - img_name: "00033", - width: 1959, - height: 1090, - position: [ - 1.742387858893817, -0.13848225198886954, -2.0566370113193146, - ], - rotation: [ - [0.24669889292141334, -0.08370189346592856, -0.9654706879349405], - [0.11343747891376445, 0.9919082664242816, -0.05700815184573074], - [0.9624300466054861, -0.09545671285663988, 0.2541976029815521], - ], - fy: 1164.6601287484507, - fx: 1159.5880733038064, - }, - { - id: 5, - img_name: "00041", - width: 1959, - height: 1090, - position: [ - 3.6567309419223935, -0.16470990600750707, -1.3458085590422042, - ], - rotation: [ - [0.2341293058324528, -0.02968330457755884, -0.9717522161434825], - [0.10270823606832301, 0.99469554638321, -0.005638106875665722], - [0.9667649592295676, -0.09848690996657204, 0.2359360976431732], - ], - fy: 1164.6601287484507, - fx: 1159.5880733038064, - }, - { - id: 6, - img_name: "00049", - width: 1959, - height: 1090, - position: [ - 3.9013554243203497, -0.2597500978038105, -0.8106154188297828, - ], - rotation: [ - [0.6717235545638952, -0.015718162115524837, -0.7406351366386528], - [0.055627354673906296, 0.9980224478387622, 0.029270992841185218], - [0.7387104058127439, -0.060861588786650656, 0.6712695459756353], - ], - fy: 1164.6601287484507, - fx: 1159.5880733038064, - }, - { - id: 7, - img_name: "00057", - width: 1959, - height: 1090, - position: [4.742994605467533, -0.05591660945412069, 0.9500365976084458], - rotation: [ - [-0.17042655709210375, 0.01207080756938, -0.9852964448542146], - [0.1165090336695526, 0.9931575292530063, -0.00798543433078162], - [0.9784581921120181, -0.1161568667478904, -0.1706667764862097], - ], - fy: 1164.6601287484507, - fx: 
1159.5880733038064, - }, - { - id: 8, - img_name: "00065", - width: 1959, - height: 1090, - position: [4.34676307626522, 0.08168160516967145, 1.0876221470355405], - rotation: [ - [-0.003575447631888379, -0.044792503246552894, -0.9989899137764799], - [0.10770152645126597, 0.9931680875192705, -0.04491693593046672], - [0.9941768441149182, -0.10775333677534978, 0.0012732004866391048], - ], - fy: 1164.6601287484507, - fx: 1159.5880733038064, - }, - { - id: 9, - img_name: "00073", - width: 1959, - height: 1090, - position: [3.264984351114202, 0.078974937336732, 1.0117200284114904], - rotation: [ - [-0.026919994628162257, -0.1565891128261527, -0.9872968974090509], - [0.08444552208239385, 0.983768234577625, -0.1583319754069128], - [0.9960643893290491, -0.0876350978794554, -0.013259786205163005], - ], - fy: 1164.6601287484507, - fx: 1159.5880733038064, - }, -]; - -const camera = cameras[0]; - -function getProjectionMatrix(fx, fy, width, height) { - const znear = 0.2; - const zfar = 200; - return [ - [(2 * fx) / width, 0, 0, 0], - [0, -(2 * fy) / height, 0, 0], - [0, 0, zfar / (zfar - znear), 1], - [0, 0, -(zfar * znear) / (zfar - znear), 0], - ].flat(); -} - -function getViewMatrix(camera) { - const R = camera.rotation.flat(); - const t = camera.position; - const camToWorld = [ - [R[0], R[1], R[2], 0], - [R[3], R[4], R[5], 0], - [R[6], R[7], R[8], 0], - [ - -t[0] * R[0] - t[1] * R[3] - t[2] * R[6], - -t[0] * R[1] - t[1] * R[4] - t[2] * R[7], - -t[0] * R[2] - t[1] * R[5] - t[2] * R[8], - 1, - ], - ].flat(); - return camToWorld; -} - -function multiply4(a, b) { - return [ - b[0] * a[0] + b[1] * a[4] + b[2] * a[8] + b[3] * a[12], - b[0] * a[1] + b[1] * a[5] + b[2] * a[9] + b[3] * a[13], - b[0] * a[2] + b[1] * a[6] + b[2] * a[10] + b[3] * a[14], - b[0] * a[3] + b[1] * a[7] + b[2] * a[11] + b[3] * a[15], - b[4] * a[0] + b[5] * a[4] + b[6] * a[8] + b[7] * a[12], - b[4] * a[1] + b[5] * a[5] + b[6] * a[9] + b[7] * a[13], - b[4] * a[2] + b[5] * a[6] + b[6] * a[10] + b[7] * a[14], - b[4] * a[3] + b[5] * a[7] + b[6] * a[11] + b[7] * a[15], - b[8] * a[0] + b[9] * a[4] + b[10] * a[8] + b[11] * a[12], - b[8] * a[1] + b[9] * a[5] + b[10] * a[9] + b[11] * a[13], - b[8] * a[2] + b[9] * a[6] + b[10] * a[10] + b[11] * a[14], - b[8] * a[3] + b[9] * a[7] + b[10] * a[11] + b[11] * a[15], - b[12] * a[0] + b[13] * a[4] + b[14] * a[8] + b[15] * a[12], - b[12] * a[1] + b[13] * a[5] + b[14] * a[9] + b[15] * a[13], - b[12] * a[2] + b[13] * a[6] + b[14] * a[10] + b[15] * a[14], - b[12] * a[3] + b[13] * a[7] + b[14] * a[11] + b[15] * a[15], - ]; -} - -function invert4(a) { - let b00 = a[0] * a[5] - a[1] * a[4]; - let b01 = a[0] * a[6] - a[2] * a[4]; - let b02 = a[0] * a[7] - a[3] * a[4]; - let b03 = a[1] * a[6] - a[2] * a[5]; - let b04 = a[1] * a[7] - a[3] * a[5]; - let b05 = a[2] * a[7] - a[3] * a[6]; - let b06 = a[8] * a[13] - a[9] * a[12]; - let b07 = a[8] * a[14] - a[10] * a[12]; - let b08 = a[8] * a[15] - a[11] * a[12]; - let b09 = a[9] * a[14] - a[10] * a[13]; - let b10 = a[9] * a[15] - a[11] * a[13]; - let b11 = a[10] * a[15] - a[11] * a[14]; - let det = - b00 * b11 - b01 * b10 + b02 * b09 + b03 * b08 - b04 * b07 + b05 * b06; - if (!det) return null; - return [ - (a[5] * b11 - a[6] * b10 + a[7] * b09) / det, - (a[2] * b10 - a[1] * b11 - a[3] * b09) / det, - (a[13] * b05 - a[14] * b04 + a[15] * b03) / det, - (a[10] * b04 - a[9] * b05 - a[11] * b03) / det, - (a[6] * b08 - a[4] * b11 - a[7] * b07) / det, - (a[0] * b11 - a[2] * b08 + a[3] * b07) / det, - (a[14] * b02 - a[12] * b05 - a[15] * b01) / det, - (a[8] * b05 - a[10] 
* b02 + a[11] * b01) / det, - (a[4] * b10 - a[5] * b08 + a[7] * b06) / det, - (a[1] * b08 - a[0] * b10 - a[3] * b06) / det, - (a[12] * b04 - a[13] * b02 + a[15] * b00) / det, - (a[9] * b02 - a[8] * b04 - a[11] * b00) / det, - (a[5] * b07 - a[4] * b09 - a[6] * b06) / det, - (a[0] * b09 - a[1] * b07 + a[2] * b06) / det, - (a[13] * b01 - a[12] * b03 - a[14] * b00) / det, - (a[8] * b03 - a[9] * b01 + a[10] * b00) / det, - ]; -} - -function rotate4(a, rad, x, y, z) { - let len = Math.hypot(x, y, z); - x /= len; - y /= len; - z /= len; - let s = Math.sin(rad); - let c = Math.cos(rad); - let t = 1 - c; - let b00 = x * x * t + c; - let b01 = y * x * t + z * s; - let b02 = z * x * t - y * s; - let b10 = x * y * t - z * s; - let b11 = y * y * t + c; - let b12 = z * y * t + x * s; - let b20 = x * z * t + y * s; - let b21 = y * z * t - x * s; - let b22 = z * z * t + c; - return [ - a[0] * b00 + a[4] * b01 + a[8] * b02, - a[1] * b00 + a[5] * b01 + a[9] * b02, - a[2] * b00 + a[6] * b01 + a[10] * b02, - a[3] * b00 + a[7] * b01 + a[11] * b02, - a[0] * b10 + a[4] * b11 + a[8] * b12, - a[1] * b10 + a[5] * b11 + a[9] * b12, - a[2] * b10 + a[6] * b11 + a[10] * b12, - a[3] * b10 + a[7] * b11 + a[11] * b12, - a[0] * b20 + a[4] * b21 + a[8] * b22, - a[1] * b20 + a[5] * b21 + a[9] * b22, - a[2] * b20 + a[6] * b21 + a[10] * b22, - a[3] * b20 + a[7] * b21 + a[11] * b22, - ...a.slice(12, 16), - ]; -} - -function translate4(a, x, y, z) { - return [ - ...a.slice(0, 12), - a[0] * x + a[4] * y + a[8] * z + a[12], - a[1] * x + a[5] * y + a[9] * z + a[13], - a[2] * x + a[6] * y + a[10] * z + a[14], - a[3] * x + a[7] * y + a[11] * z + a[15], - ]; -} - -function createWorker(self) { - let buffer; - let vertexCount = 0; - let viewProj; - // 6*4 + 4 + 4 = 8*4 - // XYZ - Position (Float32) - // XYZ - Scale (Float32) - // RGBA - colors (uint8) - // IJKL - quaternion/rot (uint8) - const rowLength = 3 * 4 + 3 * 4 + 4 + 4; - let depthMix = new BigInt64Array(); - let lastProj = []; - - const runSort = (viewProj) => { - if (!buffer) return; - - - - const f_buffer = new Float32Array(buffer); - const u_buffer = new Uint8Array(buffer); - - const covA = new Float32Array(3 * vertexCount); - const covB = new Float32Array(3 * vertexCount); - - const center = new Float32Array(3 * vertexCount); - const color = new Float32Array(4 * vertexCount); - - if (depthMix.length !== vertexCount) { - depthMix = new BigInt64Array(vertexCount); - const indexMix = new Uint32Array(depthMix.buffer); - for (let j = 0; j < vertexCount; j++) { - indexMix[2 * j] = j; - } - } else { - let dot = - lastProj[2] * viewProj[2] + - lastProj[6] * viewProj[6] + - lastProj[10] * viewProj[10]; - if (Math.abs(dot - 1) < 0.01) { - return; - } - } - // console.time("sort"); - - const floatMix = new Float32Array(depthMix.buffer); - const indexMix = new Uint32Array(depthMix.buffer); - - for (let j = 0; j < vertexCount; j++) { - let i = indexMix[2 * j]; - floatMix[2 * j + 1] = - 10000 + - viewProj[2] * f_buffer[8 * i + 0] + - viewProj[6] * f_buffer[8 * i + 1] + - viewProj[10] * f_buffer[8 * i + 2]; - } - - lastProj = viewProj; - - depthMix.sort(); - - for (let j = 0; j < vertexCount; j++) { - const i = indexMix[2 * j]; - - - center[3 * j + 0] = f_buffer[8 * i + 0]; - center[3 * j + 1] = f_buffer[8 * i + 1]; - center[3 * j + 2] = f_buffer[8 * i + 2]; - - color[4 * j + 0] = u_buffer[32 * i + 24 + 0] / 255; - color[4 * j + 1] = u_buffer[32 * i + 24 + 1] / 255; - color[4 * j + 2] = u_buffer[32 * i + 24 + 2] / 255; - color[4 * j + 3] = u_buffer[32 * i + 24 + 3] / 255; - - let scale = 
[ f_buffer[8 * i + 3 + 0], f_buffer[8 * i + 3 + 1], f_buffer[8 * i + 3 + 2]]; - let rot = [(u_buffer[32 * i + 28 + 0] - 128) / 128, (u_buffer[32 * i + 28 + 1] - 128) / 128, (u_buffer[32 * i + 28 + 2] - 128) / 128, (u_buffer[32 * i + 28 + 3] - 128) / 128] - - const R = [ - 1.0 - 2.0 * (rot[2] * rot[2] + rot[3] * rot[3]), - 2.0 * (rot[1] * rot[2] + rot[0] * rot[3]), - 2.0 * (rot[1] * rot[3] - rot[0] * rot[2]), - - 2.0 * (rot[1] * rot[2] - rot[0] * rot[3]), - 1.0 - 2.0 * (rot[1] * rot[1] + rot[3] * rot[3]), - 2.0 * (rot[2] * rot[3] + rot[0] * rot[1]), - - 2.0 * (rot[1] * rot[3] + rot[0] * rot[2]), - 2.0 * (rot[2] * rot[3] - rot[0] * rot[1]), - 1.0 - 2.0 * (rot[1] * rot[1] + rot[2] * rot[2]), - ]; - - // Compute the matrix product of S and R (M = S * R) - const M = [ - scale[0] * R[0], - scale[0] * R[1], - scale[0] * R[2], - scale[1] * R[3], - scale[1] * R[4], - scale[1] * R[5], - scale[2] * R[6], - scale[2] * R[7], - scale[2] * R[8], - ]; - - - covA[3 * j + 0] = M[0] * M[0] + M[3] * M[3] + M[6] * M[6]; - covA[3 * j + 1] = M[0] * M[1] + M[3] * M[4] + M[6] * M[7]; - covA[3 * j + 2] = M[0] * M[2] + M[3] * M[5] + M[6] * M[8]; - covB[3 * j + 0] = M[1] * M[1] + M[4] * M[4] + M[7] * M[7]; - covB[3 * j + 1] = M[1] * M[2] + M[4] * M[5] + M[7] * M[8]; - covB[3 * j + 2] = M[2] * M[2] + M[5] * M[5] + M[8] * M[8]; - } - - self.postMessage({ covA, center, color, covB, viewProj }, [ - covA.buffer, - center.buffer, - color.buffer, - covB.buffer, - ]); - - // console.timeEnd("sort"); - }; - - function processPlyBuffer(inputBuffer) { - const ubuf = new Uint8Array(inputBuffer); - // 10KB ought to be enough for a header... - const header = new TextDecoder().decode(ubuf.slice(0, 1024 * 10)); - const header_end = "end_header\n"; - const header_end_index = header.indexOf(header_end); - if (header_end_index < 0) - throw new Error("Unable to read .ply file header"); - const vertexCount = parseInt(/element vertex (\d+)\n/.exec(header)[1]); - console.log("Vertex Count", vertexCount); - let row_offset = 0, - offsets = {}, - types = {}; - const TYPE_MAP = { - double: "getFloat64", - int: "getInt32", - uint: "getUint32", - float: "getFloat32", - short: "getInt16", - ushort: "getUint16", - uchar: "getUint8", - }; - for (let prop of header - .slice(0, header_end_index) - .split("\n") - .filter((k) => k.startsWith("property "))) { - const [p, type, name] = prop.split(" "); - const arrayType = TYPE_MAP[type] || "getInt8"; - types[name] = arrayType; - offsets[name] = row_offset; - row_offset += parseInt(arrayType.replace(/[^\d]/g, "")) / 8; - } - console.log("Bytes per row", row_offset, types, offsets); - - let dataView = new DataView( - inputBuffer, - header_end_index + header_end.length, - ); - let row = 0; - const attrs = new Proxy( - {}, - { - get(target, prop) { - if (!types[prop]) throw new Error(prop + " not found"); - return dataView[types[prop]]( - row * row_offset + offsets[prop], - true, - ); - }, - }, - ); - - console.time("calculate importance"); - let sizeList = new Float32Array(vertexCount); - let sizeIndex = new Uint32Array(vertexCount); - for (row = 0; row < vertexCount; row++) { - sizeIndex[row] = row; - if (!types["scale_0"]) continue; - const size = - Math.exp(attrs.scale_0) * - Math.exp(attrs.scale_1) * - Math.exp(attrs.scale_2); - const opacity = 1 / (1 + Math.exp(-attrs.opacity)); - sizeList[row] = size * opacity; - } - console.timeEnd("calculate importance"); - - console.time("sort"); - sizeIndex.sort((b, a) => sizeList[a] - sizeList[b]); - console.timeEnd("sort"); - - // 6*4 + 4 + 4 = 8*4 - // XYZ - 
Position (Float32) - // XYZ - Scale (Float32) - // RGBA - colors (uint8) - // IJKL - quaternion/rot (uint8) - const rowLength = 3 * 4 + 3 * 4 + 4 + 4; - const buffer = new ArrayBuffer(rowLength * vertexCount); - - console.time("build buffer"); - for (let j = 0; j < vertexCount; j++) { - row = sizeIndex[j]; - - const position = new Float32Array(buffer, j * rowLength, 3); - const scales = new Float32Array(buffer, j * rowLength + 4 * 3, 3); - const rgba = new Uint8ClampedArray( - buffer, - j * rowLength + 4 * 3 + 4 * 3, - 4, - ); - const rot = new Uint8ClampedArray( - buffer, - j * rowLength + 4 * 3 + 4 * 3 + 4, - 4, - ); - - if (types["scale_0"]) { - const qlen = Math.sqrt( - attrs.rot_0 ** 2 + - attrs.rot_1 ** 2 + - attrs.rot_2 ** 2 + - attrs.rot_3 ** 2, - ); - - rot[0] = (attrs.rot_0 / qlen) * 128 + 128; - rot[1] = (attrs.rot_1 / qlen) * 128 + 128; - rot[2] = (attrs.rot_2 / qlen) * 128 + 128; - rot[3] = (attrs.rot_3 / qlen) * 128 + 128; - - scales[0] = Math.exp(attrs.scale_0); - scales[1] = Math.exp(attrs.scale_1); - scales[2] = Math.exp(attrs.scale_2); - } else { - scales[0] = 0.01; - scales[1] = 0.01; - scales[2] = 0.01; - - rot[0] = 255; - rot[1] = 0; - rot[2] = 0; - rot[3] = 0; - } - - position[0] = attrs.x; - position[1] = attrs.y; - position[2] = attrs.z; - - if (types["f_dc_0"]) { - const SH_C0 = 0.28209479177387814; - rgba[0] = (0.5 + SH_C0 * attrs.f_dc_0) * 255; - rgba[1] = (0.5 + SH_C0 * attrs.f_dc_1) * 255; - rgba[2] = (0.5 + SH_C0 * attrs.f_dc_2) * 255; - } else { - rgba[0] = attrs.red; - rgba[1] = attrs.green; - rgba[2] = attrs.blue; - } - if (types["opacity"]) { - rgba[3] = (1 / (1 + Math.exp(-attrs.opacity))) * 255; - } else { - rgba[3] = 255; - } - } - console.timeEnd("build buffer"); - return buffer; - } - - const throttledSort = () => { - if (!sortRunning) { - sortRunning = true; - let lastView = viewProj; - runSort(lastView); - setTimeout(() => { - sortRunning = false; - if (lastView !== viewProj) { - throttledSort(); - } - }, 0); - } - }; - - let sortRunning; - self.onmessage = (e) => { - if (e.data.ply) { - vertexCount = 0; - runSort(viewProj); - buffer = processPlyBuffer(e.data.ply); - vertexCount = Math.floor(buffer.byteLength / rowLength); - postMessage({ buffer: buffer }); - } else if (e.data.buffer) { - buffer = e.data.buffer; - vertexCount = e.data.vertexCount; - } else if (e.data.vertexCount) { - vertexCount = e.data.vertexCount; - } else if (e.data.view) { - viewProj = e.data.view; - throttledSort(); - } - }; -} - -const vertexShaderSource = ` - precision mediump float; - attribute vec2 position; - - attribute vec4 color; - attribute vec3 center; - attribute vec3 covA; - attribute vec3 covB; - - uniform mat4 projection, view; - uniform vec2 focal; - uniform vec2 viewport; - - varying vec4 vColor; - varying vec2 vPosition; - - mat3 transpose(mat3 m) { - return mat3( - m[0][0], m[1][0], m[2][0], - m[0][1], m[1][1], m[2][1], - m[0][2], m[1][2], m[2][2] - ); - } - - void main () { - vec4 camspace = view * vec4(center, 1); - vec4 pos2d = projection * camspace; - - float bounds = 1.2 * pos2d.w; - if (pos2d.z < -pos2d.w || pos2d.x < -bounds || pos2d.x > bounds - || pos2d.y < -bounds || pos2d.y > bounds) { - gl_Position = vec4(0.0, 0.0, 2.0, 1.0); - return; - } - - mat3 Vrk = mat3( - covA.x, covA.y, covA.z, - covA.y, covB.x, covB.y, - covA.z, covB.y, covB.z - ); - - mat3 J = mat3( - focal.x / camspace.z, 0., -(focal.x * camspace.x) / (camspace.z * camspace.z), - 0., -focal.y / camspace.z, (focal.y * camspace.y) / (camspace.z * camspace.z), - 0., 0., 0. 
- ); - - mat3 W = transpose(mat3(view)); - mat3 T = W * J; - mat3 cov = transpose(T) * Vrk * T; - - vec2 vCenter = vec2(pos2d) / pos2d.w; - - float diagonal1 = cov[0][0] + 0.3; - float offDiagonal = cov[0][1]; - float diagonal2 = cov[1][1] + 0.3; - - float mid = 0.5 * (diagonal1 + diagonal2); - float radius = length(vec2((diagonal1 - diagonal2) / 2.0, offDiagonal)); - float lambda1 = mid + radius; - float lambda2 = max(mid - radius, 0.1); - vec2 diagonalVector = normalize(vec2(offDiagonal, lambda1 - diagonal1)); - vec2 v1 = min(sqrt(2.0 * lambda1), 1024.0) * diagonalVector; - vec2 v2 = min(sqrt(2.0 * lambda2), 1024.0) * vec2(diagonalVector.y, -diagonalVector.x); - - - vColor = color; - vPosition = position; - - gl_Position = vec4( - vCenter - + position.x * v1 / viewport * 2.0 - + position.y * v2 / viewport * 2.0, 0.0, 1.0); - - } -`; - -const fragmentShaderSource = ` -precision mediump float; - - varying vec4 vColor; - varying vec2 vPosition; - - void main () { - float A = -dot(vPosition, vPosition); - if (A < -4.0) discard; - float B = exp(A) * vColor.a; - gl_FragColor = vec4(B * vColor.rgb, B); - } -`; - -let defaultViewMatrix = [ - 0.47, 0.04, 0.88, 0, -0.11, 0.99, 0.02, 0, -0.88, -0.11, 0.47, 0, 0.07, - 0.03, 6.55, 1, -]; -let viewMatrix = defaultViewMatrix; - -async function main() { - let carousel = true; - const params = new URLSearchParams(location.search); - try { - viewMatrix = JSON.parse(decodeURIComponent(location.hash.slice(1))); - carousel = false; - } catch (err) {} - const url = new URL( - // "nike.splat", - // location.href, - params.get("url") || "train.splat", - "https://huggingface.co/cakewalk/splat-data/resolve/main/", - ); - const req = await fetch(url, { - mode: "cors", // no-cors, *cors, same-origin - credentials: "omit", // include, *same-origin, omit - }); - console.log(req); - if (req.status != 200) - throw new Error(req.status + " Unable to load " + req.url); - - const rowLength = 3 * 4 + 3 * 4 + 4 + 4; - const reader = req.body.getReader(); - let splatData = new Uint8Array(req.headers.get("content-length")); - - const downsample = splatData.length / rowLength > 500000 ? 
1 : 1 / devicePixelRatio; - // const downsample = 1 / devicePixelRatio; - // const downsample = 1; - console.log(splatData.length / rowLength, downsample); - - const worker = new Worker( - URL.createObjectURL( - new Blob(["(", createWorker.toString(), ")(self)"], { - type: "application/javascript", - }), - ), - ); - - const canvas = document.getElementById("canvas"); - canvas.width = innerWidth / downsample; - canvas.height = innerHeight / downsample; - - const fps = document.getElementById("fps"); - - let projectionMatrix = getProjectionMatrix( - camera.fx / downsample, - camera.fy / downsample, - canvas.width, - canvas.height, - ); - - const gl = canvas.getContext("webgl"); - const ext = gl.getExtension("ANGLE_instanced_arrays"); - - const vertexShader = gl.createShader(gl.VERTEX_SHADER); - gl.shaderSource(vertexShader, vertexShaderSource); - gl.compileShader(vertexShader); - if (!gl.getShaderParameter(vertexShader, gl.COMPILE_STATUS)) - console.error(gl.getShaderInfoLog(vertexShader)); - - const fragmentShader = gl.createShader(gl.FRAGMENT_SHADER); - gl.shaderSource(fragmentShader, fragmentShaderSource); - gl.compileShader(fragmentShader); - if (!gl.getShaderParameter(fragmentShader, gl.COMPILE_STATUS)) - console.error(gl.getShaderInfoLog(fragmentShader)); - - const program = gl.createProgram(); - gl.attachShader(program, vertexShader); - gl.attachShader(program, fragmentShader); - gl.linkProgram(program); - gl.useProgram(program); - - if (!gl.getProgramParameter(program, gl.LINK_STATUS)) - console.error(gl.getProgramInfoLog(program)); - - gl.disable(gl.DEPTH_TEST); // Disable depth testing - - // Enable blending - gl.enable(gl.BLEND); - - // Set blending function - gl.blendFuncSeparate( - gl.ONE_MINUS_DST_ALPHA, - gl.ONE, - gl.ONE_MINUS_DST_ALPHA, - gl.ONE, - ); - - // Set blending equation - gl.blendEquationSeparate(gl.FUNC_ADD, gl.FUNC_ADD); - - // projection - const u_projection = gl.getUniformLocation(program, "projection"); - gl.uniformMatrix4fv(u_projection, false, projectionMatrix); - - // viewport - const u_viewport = gl.getUniformLocation(program, "viewport"); - gl.uniform2fv(u_viewport, new Float32Array([canvas.width, canvas.height])); - - // focal - const u_focal = gl.getUniformLocation(program, "focal"); - gl.uniform2fv( - u_focal, - new Float32Array([camera.fx / downsample, camera.fy / downsample]), - ); - - // view - const u_view = gl.getUniformLocation(program, "view"); - gl.uniformMatrix4fv(u_view, false, viewMatrix); - - // positions - const triangleVertices = new Float32Array([-2, -2, 2, -2, 2, 2, -2, 2]); - const vertexBuffer = gl.createBuffer(); - gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer); - gl.bufferData(gl.ARRAY_BUFFER, triangleVertices, gl.STATIC_DRAW); - const a_position = gl.getAttribLocation(program, "position"); - gl.enableVertexAttribArray(a_position); - gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer); - gl.vertexAttribPointer(a_position, 2, gl.FLOAT, false, 0, 0); - - // center - const centerBuffer = gl.createBuffer(); - // gl.bindBuffer(gl.ARRAY_BUFFER, centerBuffer); - // gl.bufferData(gl.ARRAY_BUFFER, center, gl.STATIC_DRAW); - const a_center = gl.getAttribLocation(program, "center"); - gl.enableVertexAttribArray(a_center); - gl.bindBuffer(gl.ARRAY_BUFFER, centerBuffer); - gl.vertexAttribPointer(a_center, 3, gl.FLOAT, false, 0, 0); - ext.vertexAttribDivisorANGLE(a_center, 1); // Use the extension here - - // color - const colorBuffer = gl.createBuffer(); - // gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer); - // gl.bufferData(gl.ARRAY_BUFFER, color, 
gl.STATIC_DRAW); - const a_color = gl.getAttribLocation(program, "color"); - gl.enableVertexAttribArray(a_color); - gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer); - gl.vertexAttribPointer(a_color, 4, gl.FLOAT, false, 0, 0); - ext.vertexAttribDivisorANGLE(a_color, 1); // Use the extension here - - // cov - const covABuffer = gl.createBuffer(); - const a_covA = gl.getAttribLocation(program, "covA"); - gl.enableVertexAttribArray(a_covA); - gl.bindBuffer(gl.ARRAY_BUFFER, covABuffer); - gl.vertexAttribPointer(a_covA, 3, gl.FLOAT, false, 0, 0); - ext.vertexAttribDivisorANGLE(a_covA, 1); // Use the extension here - - const covBBuffer = gl.createBuffer(); - const a_covB = gl.getAttribLocation(program, "covB"); - gl.enableVertexAttribArray(a_covB); - gl.bindBuffer(gl.ARRAY_BUFFER, covBBuffer); - gl.vertexAttribPointer(a_covB, 3, gl.FLOAT, false, 0, 0); - ext.vertexAttribDivisorANGLE(a_covB, 1); // Use the extension here - - let lastProj = [] - let lastData - - worker.onmessage = (e) => { - if (e.data.buffer) { - splatData = new Uint8Array(e.data.buffer); - const blob = new Blob([splatData.buffer], { - type: "application/octet-stream", - }); - const link = document.createElement("a"); - link.download = "model.splat"; - link.href = URL.createObjectURL(blob); - document.body.appendChild(link); - link.click(); - } else { - let { covA, covB, center, color, viewProj } = e.data; - lastData = e.data; - - lastProj = viewProj - vertexCount = center.length / 3; - - gl.bindBuffer(gl.ARRAY_BUFFER, centerBuffer); - gl.bufferData(gl.ARRAY_BUFFER, center, gl.STATIC_DRAW); - - gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer); - gl.bufferData(gl.ARRAY_BUFFER, color, gl.STATIC_DRAW); - - - gl.bindBuffer(gl.ARRAY_BUFFER, covABuffer); - gl.bufferData(gl.ARRAY_BUFFER, covA, gl.STATIC_DRAW); - - gl.bindBuffer(gl.ARRAY_BUFFER, covBBuffer); - gl.bufferData(gl.ARRAY_BUFFER, covB, gl.STATIC_DRAW); - - } - }; - - let activeKeys = []; - - window.addEventListener("keydown", (e) => { - if (document.activeElement != document.body) return; - carousel = false; - if (!activeKeys.includes(e.key)) activeKeys.push(e.key); - if (/\d/.test(e.key)) { - viewMatrix = getViewMatrix(cameras[parseInt(e.key)]); - } - if (e.key == "v") { - location.hash = - "#" + - JSON.stringify( - viewMatrix.map((k) => Math.round(k * 100) / 100), - ); - } else if (e.key === "p") { - carousel = true; - } - }); - window.addEventListener("keyup", (e) => { - activeKeys = activeKeys.filter((k) => k !== e.key); - }); - window.addEventListener("blur", () => { - activeKeys = []; - }); - - window.addEventListener( - "wheel", - (e) => { - carousel = false; - e.preventDefault(); - const lineHeight = 10; - const scale = - e.deltaMode == 1 - ? lineHeight - : e.deltaMode == 2 - ? 
innerHeight - : 1; - let inv = invert4(viewMatrix); - if (e.shiftKey) { - inv = translate4( - inv, - (e.deltaX * scale) / innerWidth, - (e.deltaY * scale) / innerHeight, - 0, - ); - } else if (e.ctrlKey || e.metaKey) { - // inv = rotate4(inv, (e.deltaX * scale) / innerWidth, 0, 0, 1); - // inv = translate4(inv, 0, (e.deltaY * scale) / innerHeight, 0); - let preY = inv[13]; - inv = translate4( - inv, - 0, - 0, - (-10 * (e.deltaY * scale)) / innerHeight, - ); - inv[13] = preY; - } else { - let d = 4; - inv = translate4(inv, 0, 0, d); - inv = rotate4(inv, -(e.deltaX * scale) / innerWidth, 0, 1, 0); - inv = rotate4(inv, (e.deltaY * scale) / innerHeight, 1, 0, 0); - inv = translate4(inv, 0, 0, -d); - } - - viewMatrix = invert4(inv); - }, - { passive: false }, - ); - - let startX, startY, down; - canvas.addEventListener("mousedown", (e) => { - carousel = false; - e.preventDefault(); - startX = e.clientX; - startY = e.clientY; - down = e.ctrlKey || e.metaKey ? 2 : 1; - }); - canvas.addEventListener("contextmenu", (e) => { - carousel = false; - e.preventDefault(); - startX = e.clientX; - startY = e.clientY; - down = 2; - }); - - canvas.addEventListener("mousemove", (e) => { - e.preventDefault(); - if (down == 1) { - let inv = invert4(viewMatrix); - let dx = (5 * (e.clientX - startX)) / innerWidth; - let dy = (5 * (e.clientY - startY)) / innerHeight; - let d = 4; - - inv = translate4(inv, 0, 0, d); - inv = rotate4(inv, dx, 0, 1, 0); - inv = rotate4(inv, -dy, 1, 0, 0); - inv = translate4(inv, 0, 0, -d); - // let postAngle = Math.atan2(inv[0], inv[10]) - // inv = rotate4(inv, postAngle - preAngle, 0, 0, 1) - // console.log(postAngle) - viewMatrix = invert4(inv); - - startX = e.clientX; - startY = e.clientY; - } else if (down == 2) { - let inv = invert4(viewMatrix); - // inv = rotateY(inv, ); - let preY = inv[13]; - inv = translate4( - inv, - (-10 * (e.clientX - startX)) / innerWidth, - 0, - (10 * (e.clientY - startY)) / innerHeight, - ); - inv[13] = preY; - viewMatrix = invert4(inv); - - startX = e.clientX; - startY = e.clientY; - } - }); - canvas.addEventListener("mouseup", (e) => { - e.preventDefault(); - down = false; - startX = 0; - startY = 0; - }); - - let altX = 0, - altY = 0; - canvas.addEventListener( - "touchstart", - (e) => { - e.preventDefault(); - if (e.touches.length === 1) { - carousel = false; - startX = e.touches[0].clientX; - startY = e.touches[0].clientY; - down = 1; - } else if (e.touches.length === 2) { - // console.log('beep') - carousel = false; - startX = e.touches[0].clientX; - altX = e.touches[1].clientX; - startY = e.touches[0].clientY; - altY = e.touches[1].clientY; - down = 1; - } - }, - { passive: false }, - ); - canvas.addEventListener( - "touchmove", - (e) => { - e.preventDefault(); - if (e.touches.length === 1 && down) { - let inv = invert4(viewMatrix); - let dx = (4 * (e.touches[0].clientX - startX)) / innerWidth; - let dy = (4 * (e.touches[0].clientY - startY)) / innerHeight; - - let d = 4; - inv = translate4(inv, 0, 0, d); - // inv = translate4(inv, -x, -y, -z); - // inv = translate4(inv, x, y, z); - inv = rotate4(inv, dx, 0, 1, 0); - inv = rotate4(inv, -dy, 1, 0, 0); - inv = translate4(inv, 0, 0, -d); - - viewMatrix = invert4(inv); - - startX = e.touches[0].clientX; - startY = e.touches[0].clientY; - } else if (e.touches.length === 2) { - // alert('beep') - const dtheta = - Math.atan2(startY - altY, startX - altX) - - Math.atan2( - e.touches[0].clientY - e.touches[1].clientY, - e.touches[0].clientX - e.touches[1].clientX, - ); - const dscale = - Math.hypot(startX 
- altX, startY - altY) / - Math.hypot( - e.touches[0].clientX - e.touches[1].clientX, - e.touches[0].clientY - e.touches[1].clientY, - ); - const dx = - (e.touches[0].clientX + - e.touches[1].clientX - - (startX + altX)) / - 2; - const dy = - (e.touches[0].clientY + - e.touches[1].clientY - - (startY + altY)) / - 2; - let inv = invert4(viewMatrix); - // inv = translate4(inv, 0, 0, d); - inv = rotate4(inv, dtheta, 0, 0, 1); - - inv = translate4(inv, -dx / innerWidth, -dy / innerHeight, 0); - - let preY = inv[13]; - inv = translate4(inv, 0, 0, 3 * (1 - dscale)); - inv[13] = preY; - - viewMatrix = invert4(inv); - - startX = e.touches[0].clientX; - altX = e.touches[1].clientX; - startY = e.touches[0].clientY; - altY = e.touches[1].clientY; - } - }, - { passive: false }, - ); - canvas.addEventListener( - "touchend", - (e) => { - e.preventDefault(); - down = false; - startX = 0; - startY = 0; - }, - { passive: false }, - ); - - let jumpDelta = 0; - let vertexCount = 0; - - let lastFrame = 0; - let avgFps = 0; - let start = 0; - - const frame = (now) => { - let inv = invert4(viewMatrix); - - if (activeKeys.includes("ArrowUp")) { - if(activeKeys.includes("Shift")){ - inv = translate4(inv, 0, -0.03, 0); - }else{ - let preY = inv[13]; - inv = translate4(inv, 0, 0, 0.1); - inv[13] = preY; - } - } - if (activeKeys.includes("ArrowDown")) { - if(activeKeys.includes("Shift")){ - inv = translate4(inv, 0, 0.03, 0); - }else{ - let preY = inv[13]; - inv = translate4(inv, 0, 0, -0.1); - inv[13] = preY; - } - } - if (activeKeys.includes("ArrowLeft")) - inv = translate4(inv, -0.03, 0, 0); - // - if (activeKeys.includes("ArrowRight")) - inv = translate4(inv, 0.03, 0, 0); - // inv = rotate4(inv, 0.01, 0, 1, 0); - if (activeKeys.includes("a")) inv = rotate4(inv, -0.01, 0, 1, 0); - if (activeKeys.includes("d")) inv = rotate4(inv, 0.01, 0, 1, 0); - if (activeKeys.includes("q")) inv = rotate4(inv, 0.01, 0, 0, 1); - if (activeKeys.includes("e")) inv = rotate4(inv, -0.01, 0, 0, 1); - if (activeKeys.includes("w")) inv = rotate4(inv, 0.005, 1, 0, 0); - if (activeKeys.includes("s")) inv = rotate4(inv, -0.005, 1, 0, 0); - - if (["j", "k", "l", "i"].some((k) => activeKeys.includes(k))) { - let d = 4; - inv = translate4(inv, 0, 0, d); - inv = rotate4( - inv, - activeKeys.includes("j") - ? -0.05 - : activeKeys.includes("l") - ? 0.05 - : 0, - 0, - 1, - 0, - ); - inv = rotate4( - inv, - activeKeys.includes("i") - ? 0.05 - : activeKeys.includes("k") - ? 
-0.05 - : 0, - 1, - 0, - 0, - ); - inv = translate4(inv, 0, 0, -d); - } - - // inv[13] = preY; - viewMatrix = invert4(inv); - - if (carousel) { - let inv = invert4(defaultViewMatrix); - - const t = Math.sin((Date.now() - start) / 5000); - inv = translate4(inv, 2.5 * t, 0, 6 * (1 - Math.cos(t))); - inv = rotate4(inv, -0.6 * t, 0, 1, 0); - - viewMatrix = invert4(inv); - } - - if (activeKeys.includes(" ")) { - jumpDelta = Math.min(1, jumpDelta + 0.05); - } else { - jumpDelta = Math.max(0, jumpDelta - 0.05); - } - - let inv2 = invert4(viewMatrix); - inv2[13] -= jumpDelta; - inv2 = rotate4(inv2, -0.1 * jumpDelta, 1, 0, 0); - let actualViewMatrix = invert4(inv2); - - const viewProj = multiply4(projectionMatrix, actualViewMatrix); - worker.postMessage({ view: viewProj }); - - const currentFps = 1000 / (now - lastFrame) || 0; - avgFps = avgFps * 0.9 + currentFps * 0.1; - - if (vertexCount > 0) { - document.getElementById("spinner").style.display = "none"; - // console.time('render') - gl.uniformMatrix4fv(u_view, false, actualViewMatrix); - ext.drawArraysInstancedANGLE(gl.TRIANGLE_FAN, 0, 4, vertexCount); - // console.timeEnd('render') - } else { - gl.clear(gl.COLOR_BUFFER_BIT); - document.getElementById("spinner").style.display = ""; - start = Date.now() + 2000; - } - const progress = (100 * vertexCount) / (splatData.length / rowLength); - if (progress < 100) { - document.getElementById("progress").style.width = progress + "%"; - } else { - document.getElementById("progress").style.display = "none"; - } - fps.innerText = Math.round(avgFps) + " fps"; - lastFrame = now; - requestAnimationFrame(frame); - }; - - frame(); - - const selectFile = (file) => { - const fr = new FileReader(); - if (/\.json$/i.test(file.name)) { - fr.onload = () => { - cameras = JSON.parse(fr.result); - viewMatrix = getViewMatrix(cameras[0]); - projectionMatrix = getProjectionMatrix( - camera.fx / downsample, - camera.fy / downsample, - canvas.width, - canvas.height, - ); - gl.uniformMatrix4fv(u_projection, false, projectionMatrix); - - console.log("Loaded Cameras"); - }; - fr.readAsText(file); - } else { - stopLoading = true; - fr.onload = () => { - splatData = new Uint8Array(fr.result); - console.log("Loaded", Math.floor(splatData.length / rowLength)); - - if ( - splatData[0] == 112 && - splatData[1] == 108 && - splatData[2] == 121 && - splatData[3] == 10 - ) { - // ply file magic header means it should be handled differently - worker.postMessage({ ply: splatData.buffer }); - } else { - worker.postMessage({ - buffer: splatData.buffer, - vertexCount: Math.floor(splatData.length / rowLength), - }); - } - }; - fr.readAsArrayBuffer(file); - } - }; - - window.addEventListener("hashchange", (e) => { - try { - viewMatrix = JSON.parse(decodeURIComponent(location.hash.slice(1))); - carousel = false; - } catch (err) {} - }); - - const preventDefault = (e) => { - e.preventDefault(); - e.stopPropagation(); - }; - document.addEventListener("dragenter", preventDefault); - document.addEventListener("dragover", preventDefault); - document.addEventListener("dragleave", preventDefault); - document.addEventListener("drop", (e) => { - e.preventDefault(); - e.stopPropagation(); - selectFile(e.dataTransfer.files[0]); - }); - - let bytesRead = 0; - let lastVertexCount = -1; - let stopLoading = false; - - while (true) { - const { done, value } = await reader.read(); - if (done || stopLoading) break; - - splatData.set(value, bytesRead); - bytesRead += value.length; - - if (vertexCount > lastVertexCount) { - worker.postMessage({ - buffer: 
splatData.buffer, - vertexCount: Math.floor(bytesRead / rowLength), - }); - lastVertexCount = vertexCount; - } - } - if (!stopLoading) - worker.postMessage({ - buffer: splatData.buffer, - vertexCount: Math.floor(bytesRead / rowLength), - }); -} - -main().catch((err) => { - document.getElementById("spinner").style.display = "none"; - document.getElementById("message").innerText = err.toString(); -}); - - diff --git a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageGrab.py b/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageGrab.py deleted file mode 100644 index 927033c6073a28ae67c0e33ec53ec660c741b194..0000000000000000000000000000000000000000 --- a/spaces/camilosegura/traductor-multilenguaje/Lib/site-packages/PIL/ImageGrab.py +++ /dev/null @@ -1,169 +0,0 @@ -# -# The Python Imaging Library -# $Id$ -# -# screen grabber -# -# History: -# 2001-04-26 fl created -# 2001-09-17 fl use builtin driver, if present -# 2002-11-19 fl added grabclipboard support -# -# Copyright (c) 2001-2002 by Secret Labs AB -# Copyright (c) 2001-2002 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -import io -import os -import shutil -import subprocess -import sys -import tempfile - -from . import Image - - -def grab(bbox=None, include_layered_windows=False, all_screens=False, xdisplay=None): - if xdisplay is None: - if sys.platform == "darwin": - fh, filepath = tempfile.mkstemp(".png") - os.close(fh) - args = ["screencapture"] - if bbox: - left, top, right, bottom = bbox - args += ["-R", f"{left},{top},{right-left},{bottom-top}"] - subprocess.call(args + ["-x", filepath]) - im = Image.open(filepath) - im.load() - os.unlink(filepath) - if bbox: - im_resized = im.resize((right - left, bottom - top)) - im.close() - return im_resized - return im - elif sys.platform == "win32": - offset, size, data = Image.core.grabscreen_win32( - include_layered_windows, all_screens - ) - im = Image.frombytes( - "RGB", - size, - data, - # RGB, 32-bit line padding, origin lower left corner - "raw", - "BGR", - (size[0] * 3 + 3) & -4, - -1, - ) - if bbox: - x0, y0 = offset - left, top, right, bottom = bbox - im = im.crop((left - x0, top - y0, right - x0, bottom - y0)) - return im - try: - if not Image.core.HAVE_XCB: - msg = "Pillow was built without XCB support" - raise OSError(msg) - size, data = Image.core.grabscreen_x11(xdisplay) - except OSError: - if ( - xdisplay is None - and sys.platform not in ("darwin", "win32") - and shutil.which("gnome-screenshot") - ): - fh, filepath = tempfile.mkstemp(".png") - os.close(fh) - subprocess.call(["gnome-screenshot", "-f", filepath]) - im = Image.open(filepath) - im.load() - os.unlink(filepath) - if bbox: - im_cropped = im.crop(bbox) - im.close() - return im_cropped - return im - else: - raise - else: - im = Image.frombytes("RGB", size, data, "raw", "BGRX", size[0] * 4, 1) - if bbox: - im = im.crop(bbox) - return im - - -def grabclipboard(): - if sys.platform == "darwin": - fh, filepath = tempfile.mkstemp(".png") - os.close(fh) - commands = [ - 'set theFile to (open for access POSIX file "' - + filepath - + '" with write permission)', - "try", - " write (the clipboard as «class PNGf») to theFile", - "end try", - "close access theFile", - ] - script = ["osascript"] - for command in commands: - script += ["-e", command] - subprocess.call(script) - - im = None - if os.stat(filepath).st_size != 0: - im = Image.open(filepath) - im.load() - os.unlink(filepath) - return im - elif sys.platform == "win32": - fmt, data = 
Image.core.grabclipboard_win32() - if fmt == "file": # CF_HDROP - import struct - - o = struct.unpack_from("I", data)[0] - if data[16] != 0: - files = data[o:].decode("utf-16le").split("\0") - else: - files = data[o:].decode("mbcs").split("\0") - return files[: files.index("")] - if isinstance(data, bytes): - data = io.BytesIO(data) - if fmt == "png": - from . import PngImagePlugin - - return PngImagePlugin.PngImageFile(data) - elif fmt == "DIB": - from . import BmpImagePlugin - - return BmpImagePlugin.DibImageFile(data) - return None - else: - if shutil.which("wl-paste"): - output = subprocess.check_output(["wl-paste", "-l"]).decode() - mimetypes = output.splitlines() - if "image/png" in mimetypes: - mimetype = "image/png" - elif mimetypes: - mimetype = mimetypes[0] - else: - mimetype = None - - args = ["wl-paste"] - if mimetype: - args.extend(["-t", mimetype]) - elif shutil.which("xclip"): - args = ["xclip", "-selection", "clipboard", "-t", "image/png", "-o"] - else: - msg = "wl-paste or xclip is required for ImageGrab.grabclipboard() on Linux" - raise NotImplementedError(msg) - p = subprocess.run(args, stdout=subprocess.PIPE, stderr=subprocess.PIPE) - err = p.stderr - if err: - msg = f"{args[0]} error: {err.strip().decode()}" - raise ChildProcessError(msg) - data = io.BytesIO(p.stdout) - im = Image.open(data) - im.load() - return im diff --git a/spaces/candlend/vits-hoshimi/sovits/commons.py b/spaces/candlend/vits-hoshimi/sovits/commons.py deleted file mode 100644 index 074888006392e956ce204d8368362dbb2cd4e304..0000000000000000000000000000000000000000 --- a/spaces/candlend/vits-hoshimi/sovits/commons.py +++ /dev/null @@ -1,188 +0,0 @@ -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -def slice_pitch_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, idx_str:idx_end] - return ret - -def rand_slice_segments_with_pitch(x, pitch, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - ret_pitch = slice_pitch_segments(pitch, ids_str, segment_size) - return ret, ret_pitch, ids_str - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def rand_spec_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / - (num_timescales - 1)) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d(length, channels, min_timescale, max_timescale) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = 
sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/config/lazy.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/config/lazy.py deleted file mode 100644 index 3b80f3787ca156a617e2b35e56515d9dd9105060..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/detectron2/config/lazy.py +++ /dev/null @@ -1,421 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -import ast -import builtins -import collections.abc as abc -import importlib -import inspect -import logging -import os -import uuid -from contextlib import contextmanager -from copy import deepcopy -from dataclasses import is_dataclass -from typing import List, Tuple, Union -import cloudpickle -import yaml -from omegaconf import DictConfig, ListConfig, OmegaConf, SCMode - -from detectron2.utils.file_io import PathManager -from detectron2.utils.registry import _convert_target_to_string - -__all__ = ["LazyCall", "LazyConfig"] - - -class LazyCall: - """ - Wrap a callable so that when it's called, the call will not be executed, - but returns a dict that describes the call. - - LazyCall object has to be called with only keyword arguments. Positional - arguments are not yet supported. - - Examples: - :: - from detectron2.config import instantiate, LazyCall - - layer_cfg = LazyCall(nn.Conv2d)(in_channels=32, out_channels=32) - layer_cfg.out_channels = 64 # can edit it afterwards - layer = instantiate(layer_cfg) - """ - - def __init__(self, target): - if not (callable(target) or isinstance(target, (str, abc.Mapping))): - raise TypeError( - f"target of LazyCall must be a callable or defines a callable! Got {target}" - ) - self._target = target - - def __call__(self, **kwargs): - if is_dataclass(self._target): - # omegaconf object cannot hold dataclass type - # https://github.com/omry/omegaconf/issues/784 - target = _convert_target_to_string(self._target) - else: - target = self._target - kwargs["_target_"] = target - - return DictConfig(content=kwargs, flags={"allow_objects": True}) - - -def _visit_dict_config(cfg, func): - """ - Apply func recursively to all DictConfig in cfg. 
- """ - if isinstance(cfg, DictConfig): - func(cfg) - for v in cfg.values(): - _visit_dict_config(v, func) - elif isinstance(cfg, ListConfig): - for v in cfg: - _visit_dict_config(v, func) - - -def _validate_py_syntax(filename): - # see also https://github.com/open-mmlab/mmcv/blob/master/mmcv/utils/config.py - with PathManager.open(filename, "r") as f: - content = f.read() - try: - ast.parse(content) - except SyntaxError as e: - raise SyntaxError(f"Config file {filename} has syntax error!") from e - - -def _cast_to_config(obj): - # if given a dict, return DictConfig instead - if isinstance(obj, dict): - return DictConfig(obj, flags={"allow_objects": True}) - return obj - - -_CFG_PACKAGE_NAME = "detectron2._cfg_loader" -""" -A namespace to put all imported config into. -""" - - -def _random_package_name(filename): - # generate a random package name when loading config files - return _CFG_PACKAGE_NAME + str(uuid.uuid4())[:4] + "." + os.path.basename(filename) - - -@contextmanager -def _patch_import(): - """ - Enhance relative import statements in config files, so that they: - 1. locate files purely based on relative location, regardless of packages. - e.g. you can import file without having __init__ - 2. do not cache modules globally; modifications of module states has no side effect - 3. support other storage system through PathManager, so config files can be in the cloud - 4. imported dict are turned into omegaconf.DictConfig automatically - """ - old_import = builtins.__import__ - - def find_relative_file(original_file, relative_import_path, level): - # NOTE: "from . import x" is not handled. Because then it's unclear - # if such import should produce `x` as a python module or DictConfig. - # This can be discussed further if needed. - relative_import_err = """ -Relative import of directories is not allowed within config files. -Within a config file, relative import can only import other config files. -""".replace( - "\n", " " - ) - if not len(relative_import_path): - raise ImportError(relative_import_err) - - cur_file = os.path.dirname(original_file) - for _ in range(level - 1): - cur_file = os.path.dirname(cur_file) - cur_name = relative_import_path.lstrip(".") - for part in cur_name.split("."): - cur_file = os.path.join(cur_file, part) - if not cur_file.endswith(".py"): - cur_file += ".py" - if not PathManager.isfile(cur_file): - cur_file_no_suffix = cur_file[: -len(".py")] - if PathManager.isdir(cur_file_no_suffix): - raise ImportError(f"Cannot import from {cur_file_no_suffix}." + relative_import_err) - else: - raise ImportError( - f"Cannot import name {relative_import_path} from " - f"{original_file}: {cur_file} does not exist." 
- ) - return cur_file - - def new_import(name, globals=None, locals=None, fromlist=(), level=0): - if ( - # Only deal with relative imports inside config files - level != 0 - and globals is not None - and (globals.get("__package__", "") or "").startswith(_CFG_PACKAGE_NAME) - ): - cur_file = find_relative_file(globals["__file__"], name, level) - _validate_py_syntax(cur_file) - spec = importlib.machinery.ModuleSpec( - _random_package_name(cur_file), None, origin=cur_file - ) - module = importlib.util.module_from_spec(spec) - module.__file__ = cur_file - with PathManager.open(cur_file) as f: - content = f.read() - exec(compile(content, cur_file, "exec"), module.__dict__) - for name in fromlist: # turn imported dict into DictConfig automatically - val = _cast_to_config(module.__dict__[name]) - module.__dict__[name] = val - return module - return old_import(name, globals, locals, fromlist=fromlist, level=level) - - builtins.__import__ = new_import - yield new_import - builtins.__import__ = old_import - - -class LazyConfig: - """ - Provide methods to save, load, and overrides an omegaconf config object - which may contain definition of lazily-constructed objects. - """ - - @staticmethod - def load_rel(filename: str, keys: Union[None, str, Tuple[str, ...]] = None): - """ - Similar to :meth:`load()`, but load path relative to the caller's - source file. - - This has the same functionality as a relative import, except that this method - accepts filename as a string, so more characters are allowed in the filename. - """ - caller_frame = inspect.stack()[1] - caller_fname = caller_frame[0].f_code.co_filename - assert caller_fname != "", "load_rel Unable to find caller" - caller_dir = os.path.dirname(caller_fname) - filename = os.path.join(caller_dir, filename) - return LazyConfig.load(filename, keys) - - @staticmethod - def load(filename: str, keys: Union[None, str, Tuple[str, ...]] = None): - """ - Load a config file. - - Args: - filename: absolute path or relative path w.r.t. the current working directory - keys: keys to load and return. If not given, return all keys - (whose values are config objects) in a dict. - """ - has_keys = keys is not None - filename = filename.replace("/./", "/") # redundant - if os.path.splitext(filename)[1] not in [".py", ".yaml", ".yml"]: - raise ValueError(f"Config file {filename} has to be a python or yaml file.") - if filename.endswith(".py"): - _validate_py_syntax(filename) - - with _patch_import(): - # Record the filename - module_namespace = { - "__file__": filename, - "__package__": _random_package_name(filename), - } - with PathManager.open(filename) as f: - content = f.read() - # Compile first with filename to: - # 1. make filename appears in stacktrace - # 2. 
make load_rel able to find its parent's (possibly remote) location - exec(compile(content, filename, "exec"), module_namespace) - - ret = module_namespace - else: - with PathManager.open(filename) as f: - obj = yaml.unsafe_load(f) - ret = OmegaConf.create(obj, flags={"allow_objects": True}) - - if has_keys: - if isinstance(keys, str): - return _cast_to_config(ret[keys]) - else: - return tuple(_cast_to_config(ret[a]) for a in keys) - else: - if filename.endswith(".py"): - # when not specified, only load those that are config objects - ret = DictConfig( - { - name: _cast_to_config(value) - for name, value in ret.items() - if isinstance(value, (DictConfig, ListConfig, dict)) - and not name.startswith("_") - }, - flags={"allow_objects": True}, - ) - return ret - - @staticmethod - def save(cfg, filename: str): - """ - Save a config object to a yaml file. - Note that when the config dictionary contains complex objects (e.g. lambda), - it can't be saved to yaml. In that case we will print an error and - attempt to save to a pkl file instead. - - Args: - cfg: an omegaconf config object - filename: yaml file name to save the config file - """ - logger = logging.getLogger(__name__) - try: - cfg = deepcopy(cfg) - except Exception: - pass - else: - # if it's deep-copyable, then... - def _replace_type_by_name(x): - if "_target_" in x and callable(x._target_): - try: - x._target_ = _convert_target_to_string(x._target_) - except AttributeError: - pass - - # not necessary, but makes yaml looks nicer - _visit_dict_config(cfg, _replace_type_by_name) - - save_pkl = False - try: - dict = OmegaConf.to_container( - cfg, - # Do not resolve interpolation when saving, i.e. do not turn ${a} into - # actual values when saving. - resolve=False, - # Save structures (dataclasses) in a format that can be instantiated later. - # Without this option, the type information of the dataclass will be erased. - structured_config_mode=SCMode.INSTANTIATE, - ) - dumped = yaml.dump(dict, default_flow_style=None, allow_unicode=True, width=9999) - with PathManager.open(filename, "w") as f: - f.write(dumped) - - try: - _ = yaml.unsafe_load(dumped) # test that it is loadable - except Exception: - logger.warning( - "The config contains objects that cannot serialize to a valid yaml. " - f"{filename} is human-readable but cannot be loaded." - ) - save_pkl = True - except Exception: - logger.exception("Unable to serialize the config to yaml. Error:") - save_pkl = True - - if save_pkl: - new_filename = filename + ".pkl" - try: - # retry by pickle - with PathManager.open(new_filename, "wb") as f: - cloudpickle.dump(cfg, f) - logger.warning(f"Config is saved using cloudpickle at {new_filename}.") - except Exception: - pass - - @staticmethod - def apply_overrides(cfg, overrides: List[str]): - """ - In-place override contents of cfg. - - Args: - cfg: an omegaconf config object - overrides: list of strings in the format of "a=b" to override configs. - See https://hydra.cc/docs/next/advanced/override_grammar/basic/ - for syntax. - - Returns: - the cfg object - """ - - def safe_update(cfg, key, value): - parts = key.split(".") - for idx in range(1, len(parts)): - prefix = ".".join(parts[:idx]) - v = OmegaConf.select(cfg, prefix, default=None) - if v is None: - break - if not OmegaConf.is_config(v): - raise KeyError( - f"Trying to update key {key}, but {prefix} " - f"is not a config, but has type {type(v)}." 
- ) - OmegaConf.update(cfg, key, value, merge=True) - - from hydra.core.override_parser.overrides_parser import OverridesParser - - parser = OverridesParser.create() - overrides = parser.parse_overrides(overrides) - for o in overrides: - key = o.key_or_group - value = o.value() - if o.is_delete(): - # TODO support this - raise NotImplementedError("deletion is not yet a supported override") - safe_update(cfg, key, value) - return cfg - - @staticmethod - def to_py(cfg, prefix: str = "cfg."): - """ - Try to convert a config object into Python-like psuedo code. - - Note that perfect conversion is not always possible. So the returned - results are mainly meant to be human-readable, and not meant to be executed. - - Args: - cfg: an omegaconf config object - prefix: root name for the resulting code (default: "cfg.") - - - Returns: - str of formatted Python code - """ - import black - - cfg = OmegaConf.to_container(cfg, resolve=True) - - def _to_str(obj, prefix=None, inside_call=False): - if prefix is None: - prefix = [] - if isinstance(obj, abc.Mapping) and "_target_" in obj: - # Dict representing a function call - target = _convert_target_to_string(obj.pop("_target_")) - args = [] - for k, v in sorted(obj.items()): - args.append(f"{k}={_to_str(v, inside_call=True)}") - args = ", ".join(args) - call = f"{target}({args})" - return "".join(prefix) + call - elif isinstance(obj, abc.Mapping) and not inside_call: - # Dict that is not inside a call is a list of top-level config objects that we - # render as one object per line with dot separated prefixes - key_list = [] - for k, v in sorted(obj.items()): - if isinstance(v, abc.Mapping) and "_target_" not in v: - key_list.append(_to_str(v, prefix=prefix + [k + "."])) - else: - key = "".join(prefix) + k - key_list.append(f"{key}={_to_str(v)}") - return "\n".join(key_list) - elif isinstance(obj, abc.Mapping): - # Dict that is inside a call is rendered as a regular dict - return ( - "{" - + ",".join( - f"{repr(k)}: {_to_str(v, inside_call=inside_call)}" - for k, v in sorted(obj.items()) - ) - + "}" - ) - elif isinstance(obj, list): - return "[" + ",".join(_to_str(x, inside_call=inside_call) for x in obj) + "]" - else: - return repr(obj) - - py_str = _to_str(cfg, prefix=[prefix]) - try: - return black.format_str(py_str, mode=black.Mode()) - except black.InvalidInput: - return py_str diff --git a/spaces/caslabs/midi-autocompletion/musicautobot/multitask_transformer/model.py b/spaces/caslabs/midi-autocompletion/musicautobot/multitask_transformer/model.py deleted file mode 100644 index 6add0e4bf09db73a6b6f430090ce610757ed0c80..0000000000000000000000000000000000000000 --- a/spaces/caslabs/midi-autocompletion/musicautobot/multitask_transformer/model.py +++ /dev/null @@ -1,258 +0,0 @@ -from fastai.basics import * -from fastai.text.models.transformer import Activation, PositionalEncoding, feed_forward, init_transformer, _line_shift -from fastai.text.models.awd_lstm import RNNDropout -from ..utils.attention_mask import * - -def get_multitask_model(vocab_size:int, config:dict=None, drop_mult:float=1., pad_idx=None): - "Create a language model from `arch` and its `config`, maybe `pretrained`." 
- for k in config.keys(): - if k.endswith('_p'): config[k] *= drop_mult - n_hid = config['d_model'] - mem_len = config.pop('mem_len') - embed = TransformerEmbedding(vocab_size, n_hid, embed_p=config['embed_p'], mem_len=mem_len, pad_idx=pad_idx) - encoder = MTEncoder(embed, n_hid, n_layers=config['enc_layers'], mem_len=0, **config) # encoder doesn't need memory - decoder = MTEncoder(embed, n_hid, is_decoder=True, n_layers=config['dec_layers'], mem_len=mem_len, **config) - head = MTLinearDecoder(n_hid, vocab_size, tie_encoder=embed.embed, **config) - model = MultiTransformer(encoder, decoder, head, mem_len=mem_len) - return model.apply(init_transformer) - -class MultiTransformer(nn.Module): - "Multitask Transformer for training mask, next word, and sequence 2 sequence" - def __init__(self, encoder, decoder, head, mem_len): - super().__init__() - self.encoder = encoder - self.decoder = decoder - self.head = head - self.default_mem_len = mem_len - self.current_mem_len = None - - def forward(self, inp): - # data order: mask, next word, melody, chord - outputs = {} - msk, lm, c2m, m2c = [inp.get(key) for key in ['msk', 'lm', 'c2m', 'm2c']] - - if msk is not None: - outputs['msk'] = self.head(self.encoder(msk['x'], msk['pos'])) - if lm is not None: - outputs['lm'] = self.head(self.decoder(lm['x'], lm['pos'])) - - if c2m is not None: - self.reset() - c2m_enc = self.encoder(c2m['enc'], c2m['enc_pos']) - c2m_dec = self.decoder(c2m['dec'], c2m['dec_pos'], c2m_enc) - outputs['c2m'] = self.head(c2m_dec) - - if m2c is not None: - self.reset() - m2c_enc = self.encoder(m2c['enc'], m2c['enc_pos']) - m2c_dec = self.decoder(m2c['dec'], m2c['dec_pos'], m2c_enc) - outputs['m2c'] = self.head(m2c_dec) - - return outputs - - "A sequential module that passes the reset call to its children." - def reset(self): - for module in self.children(): - reset_children(module) - -def reset_children(mod): - if hasattr(mod, 'reset'): mod.reset() - for module in mod.children(): - reset_children(module) - - # COMPONENTS -class TransformerEmbedding(nn.Module): - "Embedding + positional encoding + dropout" - def __init__(self, vocab_size:int, emb_sz:int, embed_p:float=0., mem_len=512, beat_len=32, max_bar_len=1024, pad_idx=None): - super().__init__() - self.emb_sz = emb_sz - self.pad_idx = pad_idx - - self.embed = nn.Embedding(vocab_size, emb_sz, padding_idx=pad_idx) - self.pos_enc = PositionalEncoding(emb_sz) - self.beat_len, self.max_bar_len = beat_len, max_bar_len - self.beat_enc = nn.Embedding(beat_len, emb_sz, padding_idx=0) - self.bar_enc = nn.Embedding(max_bar_len, emb_sz, padding_idx=0) - - self.drop = nn.Dropout(embed_p) - self.mem_len = mem_len - - def forward(self, inp, pos): - beat_enc = self.beat_enc(pos % self.beat_len) - bar_pos = pos // self.beat_len % self.max_bar_len - bar_pos[bar_pos >= self.max_bar_len] = self.max_bar_len - 1 - bar_enc = self.bar_enc((bar_pos)) - emb = self.drop(self.embed(inp) + beat_enc + bar_enc) - return emb - - def relative_pos_enc(self, emb): -# return torch.arange(640-1, -1, -1).float().cuda() - seq_len = emb.shape[1] + self.mem_len - pos = torch.arange(seq_len-1, -1, -1, device=emb.device, dtype=emb.dtype) # backwards (txl pos encoding) - return self.pos_enc(pos) - -class MTLinearDecoder(nn.Module): - "To go on top of a RNNCore module and create a Language Model." 
- initrange=0.1 - - def __init__(self, n_hid:int, n_out:int, output_p:float, tie_encoder:nn.Module=None, out_bias:bool=True, **kwargs): - super().__init__() - self.decoder = nn.Linear(n_hid, n_out, bias=out_bias) - self.decoder.weight.data.uniform_(-self.initrange, self.initrange) - self.output_dp = RNNDropout(output_p) - if out_bias: self.decoder.bias.data.zero_() - if tie_encoder: self.decoder.weight = tie_encoder.weight - - def forward(self, input:Tuple[Tensor,Tensor])->Tuple[Tensor,Tensor,Tensor]: - output = self.output_dp(input) - decoded = self.decoder(output) - return decoded - - -# DECODER TRANSLATE BLOCK -class MTEncoder(nn.Module): - def __init__(self, embed:nn.Module, n_hid:int, n_layers:int, n_heads:int, d_model:int, d_head:int, d_inner:int, - resid_p:float=0., attn_p:float=0., ff_p:float=0., bias:bool=True, scale:bool=True, - act:Activation=Activation.ReLU, double_drop:bool=True, mem_len:int=512, is_decoder=False, - mask_steps=1, mask_p=0.3, **kwargs): - super().__init__() - self.embed = embed - self.u = nn.Parameter(torch.Tensor(n_heads, 1, d_head)) #Remove 1 for einsum implementation of attention - self.v = nn.Parameter(torch.Tensor(n_heads, 1, d_head)) #Remove 1 for einsum implementation of attention - self.n_layers,self.d_model = n_layers,d_model - self.layers = nn.ModuleList([MTEncoderBlock(n_heads, d_model, d_head, d_inner, resid_p=resid_p, attn_p=attn_p, - ff_p=ff_p, bias=bias, scale=scale, act=act, double_drop=double_drop, mem_len=mem_len, - ) for k in range(n_layers)]) - - self.mask_steps, self.mask_p = mask_steps, mask_p - self.is_decoder = is_decoder - - nn.init.normal_(self.u, 0., 0.02) - nn.init.normal_(self.v, 0., 0.02) - - def forward(self, x_lm, lm_pos, msk_emb=None): - bs,lm_len = x_lm.size() - - lm_emb = self.embed(x_lm, lm_pos) - if msk_emb is not None and msk_emb.shape[1] > lm_emb.shape[1]: - pos_enc = self.embed.relative_pos_enc(msk_emb) - else: - pos_enc = self.embed.relative_pos_enc(lm_emb) - - # Masks - if self.is_decoder: - lm_mask = rand_window_mask(lm_len, self.embed.mem_len, x_lm.device, - max_size=self.mask_steps, p=self.mask_p, is_eval=not self.training) - else: - lm_mask = None - - for i, layer in enumerate(self.layers): - lm_emb = layer(lm_emb, msk_emb, lm_mask=lm_mask, - r=pos_enc, g_u=self.u, g_v=self.v) - return lm_emb - -class MTEncoderBlock(nn.Module): - "Decoder block of a Transformer model." - #Can't use Sequential directly cause more than one input... 
- def __init__(self, n_heads:int, d_model:int, d_head:int, d_inner:int, resid_p:float=0., attn_p:float=0., ff_p:float=0., - bias:bool=True, scale:bool=True, double_drop:bool=True, mem_len:int=512, mha2_mem_len=0, **kwargs): - super().__init__() - attn_cls = MemMultiHeadRelativeAttentionKV - self.mha1 = attn_cls(n_heads, d_model, d_head, resid_p=resid_p, attn_p=attn_p, bias=bias, scale=scale, mem_len=mem_len, r_mask=False) - self.mha2 = attn_cls(n_heads, d_model, d_head, resid_p=resid_p, attn_p=attn_p, bias=bias, scale=scale, mem_len=mha2_mem_len, r_mask=True) - self.ff = feed_forward(d_model, d_inner, ff_p=ff_p, double_drop=double_drop) - - def forward(self, enc_lm:Tensor, enc_msk:Tensor, - r=None, g_u=None, g_v=None, - msk_mask:Tensor=None, lm_mask:Tensor=None): - - y_lm = self.mha1(enc_lm, enc_lm, enc_lm, r, g_u, g_v, mask=lm_mask) - if enc_msk is None: return y_lm - return self.ff(self.mha2(y_lm, enc_msk, enc_msk, r, g_u, g_v, mask=msk_mask)) - - - # Attention Layer - - -# Attn - -class MemMultiHeadRelativeAttentionKV(nn.Module): - "Attention Layer monster - relative positioning, keeps track of own memory, separate kv weights to support sequence2sequence decoding." - def __init__(self, n_heads:int, d_model:int, d_head:int=None, resid_p:float=0., attn_p:float=0., bias:bool=True, - scale:bool=True, mem_len:int=512, r_mask=True): - super().__init__() - d_head = ifnone(d_head, d_model//n_heads) - self.n_heads,self.d_head,self.scale = n_heads,d_head,scale - - assert(d_model == d_head * n_heads) - self.q_wgt = nn.Linear(d_model, n_heads * d_head, bias=bias) - self.k_wgt = nn.Linear(d_model, n_heads * d_head, bias=bias) - self.v_wgt = nn.Linear(d_model, n_heads * d_head, bias=bias) - - self.drop_att,self.drop_res = nn.Dropout(attn_p),nn.Dropout(resid_p) - self.ln = nn.LayerNorm(d_model) - self.r_attn = nn.Linear(d_model, n_heads * d_head, bias=bias) - self.r_mask = r_mask - - self.mem_len = mem_len - self.prev_k = None - self.prev_v = None - - def forward(self, q:Tensor, k:Tensor=None, v:Tensor=None, - r:Tensor=None, g_u:Tensor=None, g_v:Tensor=None, - mask:Tensor=None, **kwargs): - if k is None: k = q - if v is None: v = q - return self.ln(q + self.drop_res(self._apply_attention(q, k, v, r, g_u, g_v, mask=mask, **kwargs))) - - def mem_k(self, k): - if self.mem_len == 0: return k - if self.prev_k is None or (self.prev_k.shape[0] != k.shape[0]): # reset if wrong batch size - self.prev_k = k[:, -self.mem_len:] - return k - with torch.no_grad(): - k_ext = torch.cat([self.prev_k, k], dim=1) - self.prev_k = k_ext[:, -self.mem_len:] - return k_ext.detach() - - def mem_v(self, v): - if self.mem_len == 0: return v - if self.prev_v is None or (self.prev_v.shape[0] != v.shape[0]): # reset if wrong batch size - self.prev_v = v[:, -self.mem_len:] - return v - with torch.no_grad(): - v_ext = torch.cat([self.prev_v, v], dim=1) - self.prev_v = v_ext[:, -self.mem_len:] - return v_ext.detach() - - def reset(self): - self.prev_v = None - self.prev_k = None - - def _apply_attention(self, q:Tensor, k:Tensor, v:Tensor, - r:Tensor=None, g_u:Tensor=None, g_v:Tensor=None, - mask:Tensor=None, **kwargs): - #Notations from the paper: x input, r vector of relative distance between two elements, u et v learnable - #parameters of the model common between all layers, mask to avoid cheating and mem the previous hidden states. 
-# bs,x_len,seq_len = q.size(0),q.size(1),r.size(0) - k = self.mem_k(k) - v = self.mem_v(v) - bs,x_len,seq_len = q.size(0),q.size(1),k.size(1) - wq,wk,wv = self.q_wgt(q),self.k_wgt(k),self.v_wgt(v) - wq = wq[:,-x_len:] - wq,wk,wv = map(lambda x:x.view(bs, x.size(1), self.n_heads, self.d_head), (wq,wk,wv)) - wq,wk,wv = wq.permute(0, 2, 1, 3),wk.permute(0, 2, 3, 1),wv.permute(0, 2, 1, 3) - wkr = self.r_attn(r[-seq_len:]) - wkr = wkr.view(seq_len, self.n_heads, self.d_head) - wkr = wkr.permute(1,2,0) - #### compute attention score (AC is (a) + (c) and BS is (b) + (d) in the paper) - AC = torch.matmul(wq+g_u,wk) - BD = _line_shift(torch.matmul(wq+g_v, wkr), mask=self.r_mask) - if self.scale: attn_score = (AC + BD).mul_(1/(self.d_head ** 0.5)) - if mask is not None: - mask = mask[...,-seq_len:] - if hasattr(mask, 'bool'): mask = mask.bool() - attn_score = attn_score.float().masked_fill(mask, -float('inf')).type_as(attn_score) - attn_prob = self.drop_att(F.softmax(attn_score, dim=-1)) - attn_vec = torch.matmul(attn_prob, wv) - return attn_vec.permute(0, 2, 1, 3).contiguous().view(bs, x_len, -1) diff --git a/spaces/cenji1109285052/img-to-music/share_btn.py b/spaces/cenji1109285052/img-to-music/share_btn.py deleted file mode 100644 index 1a2ac6a6e74b114dbd54c2f24723a87180db51ef..0000000000000000000000000000000000000000 --- a/spaces/cenji1109285052/img-to-music/share_btn.py +++ /dev/null @@ -1,100 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `sd-perception-${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `sd-perception-${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - async function getOutputMusicFile(audioEL){ - const res = await fetch(audioEL.src); - const blob = await res.blob(); - const audioId = Date.now() % 200; - const fileName = `img-to-music-${{audioId}}.wav`; - const musicBlob = new File([blob], fileName, { type: 'audio/wav' }); - console.log(musicBlob); - return musicBlob; - } - - async function audioToBase64(audioFile) { - return new Promise((resolve, reject) => { - let reader = new FileReader(); - reader.readAsDataURL(audioFile); - reader.onload = () => resolve(reader.result); - reader.onerror = error => reject(error); - - }); - } - const gradioEl = document.querySelector('body > gradio-app'); - // const gradioEl = document.querySelector("gradio-app").shadowRoot; - const inputImgEl = gradioEl.querySelector('#input-img img'); - const outputMusic = gradioEl.querySelector('#music-output audio'); - const outputMusic_src = gradioEl.querySelector('#music-output audio').src; - const outputMusic_name = outputMusic_src.split('/').pop(); - let titleTxt = outputMusic_name; - //if(titleTxt.length > 100){ - // titleTxt = titleTxt.slice(0, 100) + ' ...'; - //} - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = 
gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - if(!outputMusic){ - return; - }; - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - const inputFile = await getInputImgFile(inputImgEl); - const urlInputImg = await uploadFile(inputFile); - const musicFile = await getOutputMusicFile(outputMusic); - const dataOutputMusic = await uploadFile(musicFile); - - const descriptionMd = `#### Input img: - - -#### Music: - - -`; - const params = new URLSearchParams({ - title: titleTxt, - description: descriptionMd, - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/fffiloni/img-to-music/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" \ No newline at end of file diff --git a/spaces/cfwef/gpt/crazy_functions/__init__.py b/spaces/cfwef/gpt/crazy_functions/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/nebullvm/README.md b/spaces/chendl/compositional_test/multimodal/YOLOX/demo/nebullvm/README.md deleted file mode 100644 index 24253505544c9ad7ec069a30e78e5a3d2d42dbbb..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/demo/nebullvm/README.md +++ /dev/null @@ -1,95 +0,0 @@ -# **Accelerate YOLOX inference with nebullvm in Python** - -This document shows how to accelerate YOLOX inference time with nebullvm. - -[nebullvm](https://github.com/nebuly-ai/nebullvm) is an open-source library designed to accelerate AI inference of deep learning models in a few lines of code. nebullvm leverages state-of-the-art model optimization techniques such as deep learning compilers (TensorRT, Openvino, ONNX Runtime, TVM, TF Lite, DeepSparse, etc.), various quantization and compression strategies to achieve the maximum physically possible acceleration on the user's hardware. - -## Benchmarks -Following are the results of the nebullvm optimization on YOLOX without loss of accuracy. -For each model-hardware pairing, response time was evaluated as the average over 100 predictions. The test was run on Nvidia Tesla T4 (g4dn.xlarge) and Intel XEON Scalable (m6i.24xlarge and c6i.12xlarge) on AWS. - -| Model | Hardware | Unoptimized (ms)| Nebullvm optimized (ms) | Speedup | -|---------|--------------|-----------------|-------------------------|---------| -| YOLOX-s | g4dn.xlarge | 13.6 | 9.0 | 1.5x | -| YOLOX-s | m6i.24xlarge | 32.7 | 8.8 | 3.7x | -| YOLOX-s | c6i.12xlarge | 34.4 | 12.4 | 2.8x | -| YOLOX-m | g4dn.xlarge | 24.2 | 22.4 | 1.1x | -| YOLOX-m | m6i.24xlarge | 55.1 | 36.0 | 2.3x | -| YOLOX-m | c6i.12xlarge | 62.5 | 26.9 | 2.6x | -| YOLOX-l | g4dn.xlarge | 84.4 | 80.5 | 1.5x | -| YOLOX-l | m6i.24xlarge | 88.0 | 33.7 | 2.6x | -| YOLOX-l | c6i.12xlarge | 102.8 | 54.2 | 1.9x | -| YOLOX-x | g4dn.xlarge | 87.3 | 34.0 | 2.6x | -| YOLOX-x | m6i.24xlarge | 134.5 | 56.6 | 2.4x | -| YOLOX-x | c6i.12xlarge | 162.0 | 95.4 | 1.7x | - -## Steps to accelerate YOLOX with nebullvm -1. Download a YOLOX model from the original [readme](https://github.com/Megvii-BaseDetection/YOLOX) -2. Optimize YOLOX with nebullvm -3. 
Perform inference and compare the latency of the optimized model with that of the original model
-
-[Here](nebullvm_optimization.py) you can find a demo in Python.
-
-
-First, let's install nebullvm. The simplest way is by using pip.
-```
-pip install nebullvm
-```
-Now, let's download one of the YOLOX models and optimize it with nebullvm.
-
-```python
-# Import YOLOX model
-from yolox.exp import get_exp
-from yolox.data.data_augment import ValTransform
-
-exp = get_exp(None, 'yolox-s')  # select model name
-model = exp.get_model()
-model.cuda()
-model.eval()
-
-device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-input_data = [((torch.randn(1, 3, 640, 640).to(device), ), 0) for i in range(100)]
-
-# Run nebullvm optimization without performance loss
-optimized_model = optimize_model(model, input_data=input_data, optimization_time="constrained")
-```
-Find [here](nebullvm_optimize.py) the complete script in Python with more details.
-
-In this example, we optimized YOLOX without any loss in accuracy. To further speed up the model by means of more aggressive optimization techniques, proceed as follows:
-- Set *optimization_time="unconstrained"*. With the unconstrained option, nebullvm will also test time-consuming techniques such as pruning and quantization-aware training (QAT).
-- Set the *metric_drop_ths* parameter to be greater than zero (by default, *metric_drop_ths=0*). This allows nebullvm to test optimization techniques that involve a trade-off in a certain metric. For example, to test maximum acceleration with at most a 3% loss of accuracy, set *metric_drop_ths=0.03* and *metric="accuracy"* (a sketch of this call is shown at the end of this section).
-For more information about the nebullvm API, see the [nebullvm documentation](https://github.com/nebuly-ai/nebullvm).
-
-Let's now compare the latency of the optimized model with that of the original model.
-Note that before measuring the latency of the optimized model, it is necessary to perform a few warm-up runs, as some optimizers fine-tune certain internal parameters during the first few inferences after optimization.
-
-```python
-# Check performance
-warmup_iters = 30
-num_iters = 100
-
-# Unoptimized model performance
-with torch.no_grad():
-    for i in range(warmup_iters):
-        o = model(img)
-
-    start = time.time()
-    for i in range(num_iters):
-        o = model(img)
-stop = time.time()
-print(f"Average inference time of unoptimized YOLOX: {(stop - start)/num_iters*1000} ms")
-
-# Optimized model performance
-with torch.no_grad():
-    for i in range(warmup_iters):
-        res = optimized_model(img)
-
-    start = time.time()
-    for i in range(num_iters):
-        res = optimized_model(img)
-stop = time.time()
-print(f"Average inference time of YOLOX optimized with nebullvm: {(stop - start)/num_iters*1000} ms")
-```
-Find [here](nebullvm_optimization.py) the complete script in Python with more details.
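
For reference, here is a minimal sketch of the unconstrained run described above. It reuses the `model` and `input_data` objects from the first snippet; the `optimize_model` import path is an assumption and may differ between nebullvm versions, so check the nebullvm documentation for the exact API.

```python
# Minimal sketch (assumptions: the import path and argument names match the
# installed nebullvm version; `model` and `input_data` are defined as above).
from nebullvm import optimize_model  # import path may vary across versions

# Allow time-consuming techniques (pruning, QAT) and accept up to a 3%
# accuracy drop in exchange for a larger speedup.
optimized_model = optimize_model(
    model,
    input_data=input_data,
    optimization_time="unconstrained",
    metric_drop_ths=0.03,
    metric="accuracy",
)
```

With these settings, nebullvm is free to try slower search strategies and is meant to keep an optimized candidate only if its accuracy stays within the 3% threshold of the original model.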
diff --git a/spaces/chendl/compositional_test/multimodal/YOLOX/docs/demo/openvino_py_readme.md b/spaces/chendl/compositional_test/multimodal/YOLOX/docs/demo/openvino_py_readme.md deleted file mode 100644 index 8adb770a576450bcc507861f98a36dd43bf00019..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/YOLOX/docs/demo/openvino_py_readme.md +++ /dev/null @@ -1 +0,0 @@ -../../demo/OpenVINO/python/README.md \ No newline at end of file diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/deebert/README.md b/spaces/chendl/compositional_test/transformers/examples/research_projects/deebert/README.md deleted file mode 100644 index 30c871e1a594fc7216b70711ea65d8667831fab4..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/deebert/README.md +++ /dev/null @@ -1,54 +0,0 @@ -# DeeBERT: Early Exiting for *BERT - -This is the code base for the paper [DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference](https://www.aclweb.org/anthology/2020.acl-main.204/), modified from its [original code base](https://github.com/castorini/deebert). - -The original code base also has information for downloading sample models that we have trained in advance. - -## Usage - -There are three scripts in the folder which can be run directly. - -In each script, there are several things to modify before running: - -* `PATH_TO_DATA`: path to the GLUE dataset. -* `--output_dir`: path for saving fine-tuned models. Default: `./saved_models`. -* `--plot_data_dir`: path for saving evaluation results. Default: `./results`. Results are printed to stdout and also saved to `npy` files in this directory to facilitate plotting figures and further analyses. -* `MODEL_TYPE`: bert or roberta -* `MODEL_SIZE`: base or large -* `DATASET`: SST-2, MRPC, RTE, QNLI, QQP, or MNLI - -#### train_deebert.sh - -This is for fine-tuning DeeBERT models. - -#### eval_deebert.sh - -This is for evaluating each exit layer for fine-tuned DeeBERT models. - -#### entropy_eval.sh - -This is for evaluating fine-tuned DeeBERT models, given a number of different early exit entropy thresholds. 
- - - -## Citation - -Please cite our paper if you find the resource useful: -``` -@inproceedings{xin-etal-2020-deebert, - title = "{D}ee{BERT}: Dynamic Early Exiting for Accelerating {BERT} Inference", - author = "Xin, Ji and - Tang, Raphael and - Lee, Jaejun and - Yu, Yaoliang and - Lin, Jimmy", - booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", - month = jul, - year = "2020", - address = "Online", - publisher = "Association for Computational Linguistics", - url = "https://www.aclweb.org/anthology/2020.acl-main.204", - pages = "2246--2251", -} -``` - diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/charset_normalizer/models.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/charset_normalizer/models.py deleted file mode 100644 index 7f8ca389050cd4bac7fd23d84e399a242d35d309..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/charset_normalizer/models.py +++ /dev/null @@ -1,337 +0,0 @@ -from encodings.aliases import aliases -from hashlib import sha256 -from json import dumps -from typing import Any, Dict, Iterator, List, Optional, Tuple, Union - -from .constant import TOO_BIG_SEQUENCE -from .utils import iana_name, is_multi_byte_encoding, unicode_range - - -class CharsetMatch: - def __init__( - self, - payload: bytes, - guessed_encoding: str, - mean_mess_ratio: float, - has_sig_or_bom: bool, - languages: "CoherenceMatches", - decoded_payload: Optional[str] = None, - ): - self._payload: bytes = payload - - self._encoding: str = guessed_encoding - self._mean_mess_ratio: float = mean_mess_ratio - self._languages: CoherenceMatches = languages - self._has_sig_or_bom: bool = has_sig_or_bom - self._unicode_ranges: Optional[List[str]] = None - - self._leaves: List[CharsetMatch] = [] - self._mean_coherence_ratio: float = 0.0 - - self._output_payload: Optional[bytes] = None - self._output_encoding: Optional[str] = None - - self._string: Optional[str] = decoded_payload - - def __eq__(self, other: object) -> bool: - if not isinstance(other, CharsetMatch): - raise TypeError( - "__eq__ cannot be invoked on {} and {}.".format( - str(other.__class__), str(self.__class__) - ) - ) - return self.encoding == other.encoding and self.fingerprint == other.fingerprint - - def __lt__(self, other: object) -> bool: - """ - Implemented to make sorted available upon CharsetMatches items. - """ - if not isinstance(other, CharsetMatch): - raise ValueError - - chaos_difference: float = abs(self.chaos - other.chaos) - coherence_difference: float = abs(self.coherence - other.coherence) - - # Below 1% difference --> Use Coherence - if chaos_difference < 0.01 and coherence_difference > 0.02: - # When having a tough decision, use the result that decoded as many multi-byte as possible. 
- if chaos_difference == 0.0 and self.coherence == other.coherence: - return self.multi_byte_usage > other.multi_byte_usage - return self.coherence > other.coherence - - return self.chaos < other.chaos - - @property - def multi_byte_usage(self) -> float: - return 1.0 - len(str(self)) / len(self.raw) - - def __str__(self) -> str: - # Lazy Str Loading - if self._string is None: - self._string = str(self._payload, self._encoding, "strict") - return self._string - - def __repr__(self) -> str: - return "".format(self.encoding, self.fingerprint) - - def add_submatch(self, other: "CharsetMatch") -> None: - if not isinstance(other, CharsetMatch) or other == self: - raise ValueError( - "Unable to add instance <{}> as a submatch of a CharsetMatch".format( - other.__class__ - ) - ) - - other._string = None # Unload RAM usage; dirty trick. - self._leaves.append(other) - - @property - def encoding(self) -> str: - return self._encoding - - @property - def encoding_aliases(self) -> List[str]: - """ - Encoding name are known by many name, using this could help when searching for IBM855 when it's listed as CP855. - """ - also_known_as: List[str] = [] - for u, p in aliases.items(): - if self.encoding == u: - also_known_as.append(p) - elif self.encoding == p: - also_known_as.append(u) - return also_known_as - - @property - def bom(self) -> bool: - return self._has_sig_or_bom - - @property - def byte_order_mark(self) -> bool: - return self._has_sig_or_bom - - @property - def languages(self) -> List[str]: - """ - Return the complete list of possible languages found in decoded sequence. - Usually not really useful. Returned list may be empty even if 'language' property return something != 'Unknown'. - """ - return [e[0] for e in self._languages] - - @property - def language(self) -> str: - """ - Most probable language found in decoded sequence. If none were detected or inferred, the property will return - "Unknown". - """ - if not self._languages: - # Trying to infer the language based on the given encoding - # Its either English or we should not pronounce ourselves in certain cases. - if "ascii" in self.could_be_from_charset: - return "English" - - # doing it there to avoid circular import - from charset_normalizer.cd import encoding_languages, mb_encoding_languages - - languages = ( - mb_encoding_languages(self.encoding) - if is_multi_byte_encoding(self.encoding) - else encoding_languages(self.encoding) - ) - - if len(languages) == 0 or "Latin Based" in languages: - return "Unknown" - - return languages[0] - - return self._languages[0][0] - - @property - def chaos(self) -> float: - return self._mean_mess_ratio - - @property - def coherence(self) -> float: - if not self._languages: - return 0.0 - return self._languages[0][1] - - @property - def percent_chaos(self) -> float: - return round(self.chaos * 100, ndigits=3) - - @property - def percent_coherence(self) -> float: - return round(self.coherence * 100, ndigits=3) - - @property - def raw(self) -> bytes: - """ - Original untouched bytes. 
- """ - return self._payload - - @property - def submatch(self) -> List["CharsetMatch"]: - return self._leaves - - @property - def has_submatch(self) -> bool: - return len(self._leaves) > 0 - - @property - def alphabets(self) -> List[str]: - if self._unicode_ranges is not None: - return self._unicode_ranges - # list detected ranges - detected_ranges: List[Optional[str]] = [ - unicode_range(char) for char in str(self) - ] - # filter and sort - self._unicode_ranges = sorted(list({r for r in detected_ranges if r})) - return self._unicode_ranges - - @property - def could_be_from_charset(self) -> List[str]: - """ - The complete list of encoding that output the exact SAME str result and therefore could be the originating - encoding. - This list does include the encoding available in property 'encoding'. - """ - return [self._encoding] + [m.encoding for m in self._leaves] - - def output(self, encoding: str = "utf_8") -> bytes: - """ - Method to get re-encoded bytes payload using given target encoding. Default to UTF-8. - Any errors will be simply ignored by the encoder NOT replaced. - """ - if self._output_encoding is None or self._output_encoding != encoding: - self._output_encoding = encoding - self._output_payload = str(self).encode(encoding, "replace") - - return self._output_payload # type: ignore - - @property - def fingerprint(self) -> str: - """ - Retrieve the unique SHA256 computed using the transformed (re-encoded) payload. Not the original one. - """ - return sha256(self.output()).hexdigest() - - -class CharsetMatches: - """ - Container with every CharsetMatch items ordered by default from most probable to the less one. - Act like a list(iterable) but does not implements all related methods. - """ - - def __init__(self, results: Optional[List[CharsetMatch]] = None): - self._results: List[CharsetMatch] = sorted(results) if results else [] - - def __iter__(self) -> Iterator[CharsetMatch]: - yield from self._results - - def __getitem__(self, item: Union[int, str]) -> CharsetMatch: - """ - Retrieve a single item either by its position or encoding name (alias may be used here). - Raise KeyError upon invalid index or encoding not present in results. - """ - if isinstance(item, int): - return self._results[item] - if isinstance(item, str): - item = iana_name(item, False) - for result in self._results: - if item in result.could_be_from_charset: - return result - raise KeyError - - def __len__(self) -> int: - return len(self._results) - - def __bool__(self) -> bool: - return len(self._results) > 0 - - def append(self, item: CharsetMatch) -> None: - """ - Insert a single match. Will be inserted accordingly to preserve sort. - Can be inserted as a submatch. - """ - if not isinstance(item, CharsetMatch): - raise ValueError( - "Cannot append instance '{}' to CharsetMatches".format( - str(item.__class__) - ) - ) - # We should disable the submatch factoring when the input file is too heavy (conserve RAM usage) - if len(item.raw) <= TOO_BIG_SEQUENCE: - for match in self._results: - if match.fingerprint == item.fingerprint and match.chaos == item.chaos: - match.add_submatch(item) - return - self._results.append(item) - self._results = sorted(self._results) - - def best(self) -> Optional["CharsetMatch"]: - """ - Simply return the first match. Strict equivalent to matches[0]. - """ - if not self._results: - return None - return self._results[0] - - def first(self) -> Optional["CharsetMatch"]: - """ - Redundant method, call the method best(). Kept for BC reasons. 
- """ - return self.best() - - -CoherenceMatch = Tuple[str, float] -CoherenceMatches = List[CoherenceMatch] - - -class CliDetectionResult: - def __init__( - self, - path: str, - encoding: Optional[str], - encoding_aliases: List[str], - alternative_encodings: List[str], - language: str, - alphabets: List[str], - has_sig_or_bom: bool, - chaos: float, - coherence: float, - unicode_path: Optional[str], - is_preferred: bool, - ): - self.path: str = path - self.unicode_path: Optional[str] = unicode_path - self.encoding: Optional[str] = encoding - self.encoding_aliases: List[str] = encoding_aliases - self.alternative_encodings: List[str] = alternative_encodings - self.language: str = language - self.alphabets: List[str] = alphabets - self.has_sig_or_bom: bool = has_sig_or_bom - self.chaos: float = chaos - self.coherence: float = coherence - self.is_preferred: bool = is_preferred - - @property - def __dict__(self) -> Dict[str, Any]: # type: ignore - return { - "path": self.path, - "encoding": self.encoding, - "encoding_aliases": self.encoding_aliases, - "alternative_encodings": self.alternative_encodings, - "language": self.language, - "alphabets": self.alphabets, - "has_sig_or_bom": self.has_sig_or_bom, - "chaos": self.chaos, - "coherence": self.coherence, - "unicode_path": self.unicode_path, - "is_preferred": self.is_preferred, - } - - def to_json(self) -> str: - return dumps(self.__dict__, ensure_ascii=True, indent=4) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/click/globals.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/click/globals.py deleted file mode 100644 index 480058f10dd6a8205d1bff0b94de7ae347a7629a..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/click/globals.py +++ /dev/null @@ -1,68 +0,0 @@ -import typing as t -from threading import local - -if t.TYPE_CHECKING: - import typing_extensions as te - from .core import Context - -_local = local() - - -@t.overload -def get_current_context(silent: "te.Literal[False]" = False) -> "Context": - ... - - -@t.overload -def get_current_context(silent: bool = ...) -> t.Optional["Context"]: - ... - - -def get_current_context(silent: bool = False) -> t.Optional["Context"]: - """Returns the current click context. This can be used as a way to - access the current context object from anywhere. This is a more implicit - alternative to the :func:`pass_context` decorator. This function is - primarily useful for helpers such as :func:`echo` which might be - interested in changing its behavior based on the current context. - - To push the current context, :meth:`Context.scope` can be used. - - .. versionadded:: 5.0 - - :param silent: if set to `True` the return value is `None` if no context - is available. The default behavior is to raise a - :exc:`RuntimeError`. - """ - try: - return t.cast("Context", _local.stack[-1]) - except (AttributeError, IndexError) as e: - if not silent: - raise RuntimeError("There is no active click context.") from e - - return None - - -def push_context(ctx: "Context") -> None: - """Pushes a new context to the current stack.""" - _local.__dict__.setdefault("stack", []).append(ctx) - - -def pop_context() -> None: - """Removes the top level from the stack.""" - _local.stack.pop() - - -def resolve_color_default(color: t.Optional[bool] = None) -> t.Optional[bool]: - """Internal helper to get the default value of the color flag. 
If a - value is passed it's returned unchanged, otherwise it's looked up from - the current context. - """ - if color is not None: - return color - - ctx = get_current_context(silent=True) - - if ctx is not None: - return ctx.color - - return None diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/filetype/types/base.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/filetype/types/base.py deleted file mode 100644 index 7c0c0d26d9344907eb6d5cba107dd927c8477d5d..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/filetype/types/base.py +++ /dev/null @@ -1,29 +0,0 @@ -# -*- coding: utf-8 -*- - - -class Type(object): - """ - Represents the file type object inherited by - specific file type matchers. - Provides convenient accessor and helper methods. - """ - def __init__(self, mime, extension): - self.__mime = mime - self.__extension = extension - - @property - def mime(self): - return self.__mime - - @property - def extension(self): - return self.__extension - - def is_extension(self, extension): - return self.__extension is extension - - def is_mime(self, mime): - return self.__mime is mime - - def match(self, buf): - raise NotImplementedError diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/empty_pb2.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/empty_pb2.py deleted file mode 100644 index cbecdfe26c2907f2e8ead79e396c24d3cd54d167..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/empty_pb2.py +++ /dev/null @@ -1,27 +0,0 @@ -# -*- coding: utf-8 -*- -# Generated by the protocol buffer compiler. DO NOT EDIT! 
-# source: google/protobuf/empty.proto -"""Generated protocol buffer code.""" -from google.protobuf import descriptor as _descriptor -from google.protobuf import descriptor_pool as _descriptor_pool -from google.protobuf import symbol_database as _symbol_database -from google.protobuf.internal import builder as _builder -# @@protoc_insertion_point(imports) - -_sym_db = _symbol_database.Default() - - - - -DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n\x1bgoogle/protobuf/empty.proto\x12\x0fgoogle.protobuf\"\x07\n\x05\x45mptyB}\n\x13\x63om.google.protobufB\nEmptyProtoP\x01Z.google.golang.org/protobuf/types/known/emptypb\xf8\x01\x01\xa2\x02\x03GPB\xaa\x02\x1eGoogle.Protobuf.WellKnownTypesb\x06proto3') - -_globals = globals() -_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals) -_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'google.protobuf.empty_pb2', _globals) -if _descriptor._USE_C_DESCRIPTORS == False: - - DESCRIPTOR._options = None - DESCRIPTOR._serialized_options = b'\n\023com.google.protobufB\nEmptyProtoP\001Z.google.golang.org/protobuf/types/known/emptypb\370\001\001\242\002\003GPB\252\002\036Google.Protobuf.WellKnownTypes' - _globals['_EMPTY']._serialized_start=48 - _globals['_EMPTY']._serialized_end=55 -# @@protoc_insertion_point(module_scope) diff --git a/spaces/chuyin/anime-ai-detect/README.md b/spaces/chuyin/anime-ai-detect/README.md deleted file mode 100644 index 952c183fd69ccb1664b4236b6132fc6d0358c7de..0000000000000000000000000000000000000000 --- a/spaces/chuyin/anime-ai-detect/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Anime Ai Detect -emoji: 🤖 -colorFrom: green -colorTo: purple -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: true -duplicated_from: saltacc/anime-ai-detect ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/cihyFjudo/fairness-paper-search/Subrang Digest January 2011 Free Downloadl Read Urdu Stories Novels and Poetry Online.md b/spaces/cihyFjudo/fairness-paper-search/Subrang Digest January 2011 Free Downloadl Read Urdu Stories Novels and Poetry Online.md deleted file mode 100644 index 4159bf19e31bebd8f35bc9941f813d96e9acb46a..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/Subrang Digest January 2011 Free Downloadl Read Urdu Stories Novels and Poetry Online.md +++ /dev/null @@ -1,6 +0,0 @@ -

            diff --git a/spaces/cleanmaster/akagi-sovits3/attentions.py b/spaces/cleanmaster/akagi-sovits3/attentions.py deleted file mode 100644 index 4e0b0c1fd48c962e21e1fbe60b23fc574927435c..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/akagi-sovits3/attentions.py +++ /dev/null @@ -1,303 +0,0 @@ -import copy -import math -import numpy as np -import torch -from torch import nn -from torch.nn import functional as F - -import commons -import modules -from modules import LayerNorm - - -class Encoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., window_size=4, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.window_size = window_size - - self.drop = nn.Dropout(p_dropout) - self.attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, window_size=window_size)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask): - attn_mask = x_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = self.attn_layers[i](x, x, attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class Decoder(nn.Module): - def __init__(self, hidden_channels, filter_channels, n_heads, n_layers, kernel_size=1, p_dropout=0., proximal_bias=False, proximal_init=True, **kwargs): - super().__init__() - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - - self.drop = nn.Dropout(p_dropout) - self.self_attn_layers = nn.ModuleList() - self.norm_layers_0 = nn.ModuleList() - self.encdec_attn_layers = nn.ModuleList() - self.norm_layers_1 = nn.ModuleList() - self.ffn_layers = nn.ModuleList() - self.norm_layers_2 = nn.ModuleList() - for i in range(self.n_layers): - self.self_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout, proximal_bias=proximal_bias, proximal_init=proximal_init)) - self.norm_layers_0.append(LayerNorm(hidden_channels)) - self.encdec_attn_layers.append(MultiHeadAttention(hidden_channels, hidden_channels, n_heads, p_dropout=p_dropout)) - self.norm_layers_1.append(LayerNorm(hidden_channels)) - self.ffn_layers.append(FFN(hidden_channels, hidden_channels, filter_channels, kernel_size, p_dropout=p_dropout, causal=True)) - self.norm_layers_2.append(LayerNorm(hidden_channels)) - - def forward(self, x, x_mask, h, h_mask): - """ - x: decoder input - h: encoder output - """ - self_attn_mask = commons.subsequent_mask(x_mask.size(2)).to(device=x.device, dtype=x.dtype) - encdec_attn_mask = h_mask.unsqueeze(2) * x_mask.unsqueeze(-1) - x = x * x_mask - for i in range(self.n_layers): - y = 
self.self_attn_layers[i](x, x, self_attn_mask) - y = self.drop(y) - x = self.norm_layers_0[i](x + y) - - y = self.encdec_attn_layers[i](x, h, encdec_attn_mask) - y = self.drop(y) - x = self.norm_layers_1[i](x + y) - - y = self.ffn_layers[i](x, x_mask) - y = self.drop(y) - x = self.norm_layers_2[i](x + y) - x = x * x_mask - return x - - -class MultiHeadAttention(nn.Module): - def __init__(self, channels, out_channels, n_heads, p_dropout=0., window_size=None, heads_share=True, block_length=None, proximal_bias=False, proximal_init=False): - super().__init__() - assert channels % n_heads == 0 - - self.channels = channels - self.out_channels = out_channels - self.n_heads = n_heads - self.p_dropout = p_dropout - self.window_size = window_size - self.heads_share = heads_share - self.block_length = block_length - self.proximal_bias = proximal_bias - self.proximal_init = proximal_init - self.attn = None - - self.k_channels = channels // n_heads - self.conv_q = nn.Conv1d(channels, channels, 1) - self.conv_k = nn.Conv1d(channels, channels, 1) - self.conv_v = nn.Conv1d(channels, channels, 1) - self.conv_o = nn.Conv1d(channels, out_channels, 1) - self.drop = nn.Dropout(p_dropout) - - if window_size is not None: - n_heads_rel = 1 if heads_share else n_heads - rel_stddev = self.k_channels**-0.5 - self.emb_rel_k = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - self.emb_rel_v = nn.Parameter(torch.randn(n_heads_rel, window_size * 2 + 1, self.k_channels) * rel_stddev) - - nn.init.xavier_uniform_(self.conv_q.weight) - nn.init.xavier_uniform_(self.conv_k.weight) - nn.init.xavier_uniform_(self.conv_v.weight) - if proximal_init: - with torch.no_grad(): - self.conv_k.weight.copy_(self.conv_q.weight) - self.conv_k.bias.copy_(self.conv_q.bias) - - def forward(self, x, c, attn_mask=None): - q = self.conv_q(x) - k = self.conv_k(c) - v = self.conv_v(c) - - x, self.attn = self.attention(q, k, v, mask=attn_mask) - - x = self.conv_o(x) - return x - - def attention(self, query, key, value, mask=None): - # reshape [b, d, t] -> [b, n_h, t, d_k] - b, d, t_s, t_t = (*key.size(), query.size(2)) - query = query.view(b, self.n_heads, self.k_channels, t_t).transpose(2, 3) - key = key.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - value = value.view(b, self.n_heads, self.k_channels, t_s).transpose(2, 3) - - scores = torch.matmul(query / math.sqrt(self.k_channels), key.transpose(-2, -1)) - if self.window_size is not None: - assert t_s == t_t, "Relative attention is only available for self-attention." - key_relative_embeddings = self._get_relative_embeddings(self.emb_rel_k, t_s) - rel_logits = self._matmul_with_relative_keys(query /math.sqrt(self.k_channels), key_relative_embeddings) - scores_local = self._relative_position_to_absolute_position(rel_logits) - scores = scores + scores_local - if self.proximal_bias: - assert t_s == t_t, "Proximal bias is only available for self-attention." - scores = scores + self._attention_bias_proximal(t_s).to(device=scores.device, dtype=scores.dtype) - if mask is not None: - scores = scores.masked_fill(mask == 0, -1e4) - if self.block_length is not None: - assert t_s == t_t, "Local attention is only available for self-attention." 
- block_mask = torch.ones_like(scores).triu(-self.block_length).tril(self.block_length) - scores = scores.masked_fill(block_mask == 0, -1e4) - p_attn = F.softmax(scores, dim=-1) # [b, n_h, t_t, t_s] - p_attn = self.drop(p_attn) - output = torch.matmul(p_attn, value) - if self.window_size is not None: - relative_weights = self._absolute_position_to_relative_position(p_attn) - value_relative_embeddings = self._get_relative_embeddings(self.emb_rel_v, t_s) - output = output + self._matmul_with_relative_values(relative_weights, value_relative_embeddings) - output = output.transpose(2, 3).contiguous().view(b, d, t_t) # [b, n_h, t_t, d_k] -> [b, d, t_t] - return output, p_attn - - def _matmul_with_relative_values(self, x, y): - """ - x: [b, h, l, m] - y: [h or 1, m, d] - ret: [b, h, l, d] - """ - ret = torch.matmul(x, y.unsqueeze(0)) - return ret - - def _matmul_with_relative_keys(self, x, y): - """ - x: [b, h, l, d] - y: [h or 1, m, d] - ret: [b, h, l, m] - """ - ret = torch.matmul(x, y.unsqueeze(0).transpose(-2, -1)) - return ret - - def _get_relative_embeddings(self, relative_embeddings, length): - max_relative_position = 2 * self.window_size + 1 - # Pad first before slice to avoid using cond ops. - pad_length = max(length - (self.window_size + 1), 0) - slice_start_position = max((self.window_size + 1) - length, 0) - slice_end_position = slice_start_position + 2 * length - 1 - if pad_length > 0: - padded_relative_embeddings = F.pad( - relative_embeddings, - commons.convert_pad_shape([[0, 0], [pad_length, pad_length], [0, 0]])) - else: - padded_relative_embeddings = relative_embeddings - used_relative_embeddings = padded_relative_embeddings[:,slice_start_position:slice_end_position] - return used_relative_embeddings - - def _relative_position_to_absolute_position(self, x): - """ - x: [b, h, l, 2*l-1] - ret: [b, h, l, l] - """ - batch, heads, length, _ = x.size() - # Concat columns of pad to shift from relative to absolute indexing. - x = F.pad(x, commons.convert_pad_shape([[0,0],[0,0],[0,0],[0,1]])) - - # Concat extra elements so to add up to shape (len+1, 2*len-1). - x_flat = x.view([batch, heads, length * 2 * length]) - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0,0],[0,0],[0,length-1]])) - - # Reshape and slice out the padded elements. - x_final = x_flat.view([batch, heads, length+1, 2*length-1])[:, :, :length, length-1:] - return x_final - - def _absolute_position_to_relative_position(self, x): - """ - x: [b, h, l, l] - ret: [b, h, l, 2*l-1] - """ - batch, heads, length, _ = x.size() - # padd along column - x = F.pad(x, commons.convert_pad_shape([[0, 0], [0, 0], [0, 0], [0, length-1]])) - x_flat = x.view([batch, heads, length**2 + length*(length -1)]) - # add 0's in the beginning that will skew the elements after reshape - x_flat = F.pad(x_flat, commons.convert_pad_shape([[0, 0], [0, 0], [length, 0]])) - x_final = x_flat.view([batch, heads, length, 2*length])[:,:,:,1:] - return x_final - - def _attention_bias_proximal(self, length): - """Bias for self-attention to encourage attention to close positions. - Args: - length: an integer scalar. 
- Returns: - a Tensor with shape [1, 1, length, length] - """ - r = torch.arange(length, dtype=torch.float32) - diff = torch.unsqueeze(r, 0) - torch.unsqueeze(r, 1) - return torch.unsqueeze(torch.unsqueeze(-torch.log1p(torch.abs(diff)), 0), 0) - - -class FFN(nn.Module): - def __init__(self, in_channels, out_channels, filter_channels, kernel_size, p_dropout=0., activation=None, causal=False): - super().__init__() - self.in_channels = in_channels - self.out_channels = out_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.activation = activation - self.causal = causal - - if causal: - self.padding = self._causal_padding - else: - self.padding = self._same_padding - - self.conv_1 = nn.Conv1d(in_channels, filter_channels, kernel_size) - self.conv_2 = nn.Conv1d(filter_channels, out_channels, kernel_size) - self.drop = nn.Dropout(p_dropout) - - def forward(self, x, x_mask): - x = self.conv_1(self.padding(x * x_mask)) - if self.activation == "gelu": - x = x * torch.sigmoid(1.702 * x) - else: - x = torch.relu(x) - x = self.drop(x) - x = self.conv_2(self.padding(x * x_mask)) - return x * x_mask - - def _causal_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = self.kernel_size - 1 - pad_r = 0 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x - - def _same_padding(self, x): - if self.kernel_size == 1: - return x - pad_l = (self.kernel_size - 1) // 2 - pad_r = self.kernel_size // 2 - padding = [[0, 0], [0, 0], [pad_l, pad_r]] - x = F.pad(x, commons.convert_pad_shape(padding)) - return x diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/ImageChops.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/ImageChops.py deleted file mode 100644 index 70120031797c2493c0ce878c13c3fd3d5554c354..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/PIL/ImageChops.py +++ /dev/null @@ -1,303 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# standard channel operations -# -# History: -# 1996-03-24 fl Created -# 1996-08-13 fl Added logical operations (for "1" images) -# 2000-10-12 fl Added offset method (from Image.py) -# -# Copyright (c) 1997-2000 by Secret Labs AB -# Copyright (c) 1996-2000 by Fredrik Lundh -# -# See the README file for information on usage and redistribution. -# - -from . import Image - - -def constant(image, value): - """Fill a channel with a given grey level. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return Image.new("L", image.size, value) - - -def duplicate(image): - """Copy a channel. Alias for :py:meth:`PIL.Image.Image.copy`. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return image.copy() - - -def invert(image): - """ - Invert an image (channel). :: - - out = MAX - image - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image.load() - return image._new(image.im.chop_invert()) - - -def lighter(image1, image2): - """ - Compares the two images, pixel by pixel, and returns a new image containing - the lighter values. :: - - out = max(image1, image2) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_lighter(image2.im)) - - -def darker(image1, image2): - """ - Compares the two images, pixel by pixel, and returns a new image containing - the darker values. 
:: - - out = min(image1, image2) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_darker(image2.im)) - - -def difference(image1, image2): - """ - Returns the absolute value of the pixel-by-pixel difference between the two - images. :: - - out = abs(image1 - image2) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_difference(image2.im)) - - -def multiply(image1, image2): - """ - Superimposes two images on top of each other. - - If you multiply an image with a solid black image, the result is black. If - you multiply with a solid white image, the image is unaffected. :: - - out = image1 * image2 / MAX - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_multiply(image2.im)) - - -def screen(image1, image2): - """ - Superimposes two inverted images on top of each other. :: - - out = MAX - ((MAX - image1) * (MAX - image2) / MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_screen(image2.im)) - - -def soft_light(image1, image2): - """ - Superimposes two images on top of each other using the Soft Light algorithm - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_soft_light(image2.im)) - - -def hard_light(image1, image2): - """ - Superimposes two images on top of each other using the Hard Light algorithm - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_hard_light(image2.im)) - - -def overlay(image1, image2): - """ - Superimposes two images on top of each other using the Overlay algorithm - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_overlay(image2.im)) - - -def add(image1, image2, scale=1.0, offset=0): - """ - Adds two images, dividing the result by scale and adding the - offset. If omitted, scale defaults to 1.0, and offset to 0.0. :: - - out = ((image1 + image2) / scale + offset) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_add(image2.im, scale, offset)) - - -def subtract(image1, image2, scale=1.0, offset=0): - """ - Subtracts two images, dividing the result by scale and adding the offset. - If omitted, scale defaults to 1.0, and offset to 0.0. :: - - out = ((image1 - image2) / scale + offset) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_subtract(image2.im, scale, offset)) - - -def add_modulo(image1, image2): - """Add two images, without clipping the result. :: - - out = ((image1 + image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_add_modulo(image2.im)) - - -def subtract_modulo(image1, image2): - """Subtract two images, without clipping the result. :: - - out = ((image1 - image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_subtract_modulo(image2.im)) - - -def logical_and(image1, image2): - """Logical AND between two images. - - Both of the images must have mode "1". 
If you would like to perform a - logical AND on an image with a mode other than "1", try - :py:meth:`~PIL.ImageChops.multiply` instead, using a black-and-white mask - as the second image. :: - - out = ((image1 and image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_and(image2.im)) - - -def logical_or(image1, image2): - """Logical OR between two images. - - Both of the images must have mode "1". :: - - out = ((image1 or image2) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_or(image2.im)) - - -def logical_xor(image1, image2): - """Logical XOR between two images. - - Both of the images must have mode "1". :: - - out = ((bool(image1) != bool(image2)) % MAX) - - :rtype: :py:class:`~PIL.Image.Image` - """ - - image1.load() - image2.load() - return image1._new(image1.im.chop_xor(image2.im)) - - -def blend(image1, image2, alpha): - """Blend images using constant transparency weight. Alias for - :py:func:`PIL.Image.blend`. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return Image.blend(image1, image2, alpha) - - -def composite(image1, image2, mask): - """Create composite using transparency mask. Alias for - :py:func:`PIL.Image.composite`. - - :rtype: :py:class:`~PIL.Image.Image` - """ - - return Image.composite(image1, image2, mask) - - -def offset(image, xoffset, yoffset=None): - """Returns a copy of the image where data has been offset by the given - distances. Data wraps around the edges. If ``yoffset`` is omitted, it - is assumed to be equal to ``xoffset``. - - :param image: Input image. - :param xoffset: The horizontal distance. - :param yoffset: The vertical distance. If omitted, both - distances are set to the same value. - :rtype: :py:class:`~PIL.Image.Image` - """ - - if yoffset is None: - yoffset = xoffset - image.load() - return image._new(image.im.offset(xoffset, yoffset)) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_backends/_asyncio.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_backends/_asyncio.py deleted file mode 100644 index bfdb4ea7e12761fa1440e484c83bcaa3de7844c9..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_backends/_asyncio.py +++ /dev/null @@ -1,2117 +0,0 @@ -from __future__ import annotations - -import array -import asyncio -import concurrent.futures -import math -import socket -import sys -from asyncio.base_events import _run_until_complete_cb # type: ignore[attr-defined] -from collections import OrderedDict, deque -from concurrent.futures import Future -from contextvars import Context, copy_context -from dataclasses import dataclass -from functools import partial, wraps -from inspect import ( - CORO_RUNNING, - CORO_SUSPENDED, - GEN_RUNNING, - GEN_SUSPENDED, - getcoroutinestate, - getgeneratorstate, -) -from io import IOBase -from os import PathLike -from queue import Queue -from socket import AddressFamily, SocketKind -from threading import Thread -from types import TracebackType -from typing import ( - IO, - Any, - AsyncGenerator, - Awaitable, - Callable, - Collection, - Coroutine, - Generator, - Iterable, - Mapping, - Optional, - Sequence, - Tuple, - TypeVar, - Union, - cast, -) -from weakref import WeakKeyDictionary - -import sniffio - -from .. 
import CapacityLimiterStatistics, EventStatistics, TaskInfo, abc -from .._core._compat import DeprecatedAsyncContextManager, DeprecatedAwaitable -from .._core._eventloop import claim_worker_thread, threadlocals -from .._core._exceptions import ( - BrokenResourceError, - BusyResourceError, - ClosedResourceError, - EndOfStream, - WouldBlock, -) -from .._core._exceptions import ExceptionGroup as BaseExceptionGroup -from .._core._sockets import GetAddrInfoReturnType, convert_ipv6_sockaddr -from .._core._synchronization import CapacityLimiter as BaseCapacityLimiter -from .._core._synchronization import Event as BaseEvent -from .._core._synchronization import ResourceGuard -from .._core._tasks import CancelScope as BaseCancelScope -from ..abc import IPSockAddrType, UDPPacketType -from ..lowlevel import RunVar - -if sys.version_info >= (3, 8): - - def get_coro(task: asyncio.Task) -> Generator | Awaitable[Any]: - return task.get_coro() - -else: - - def get_coro(task: asyncio.Task) -> Generator | Awaitable[Any]: - return task._coro - - -from asyncio import all_tasks, create_task, current_task, get_running_loop -from asyncio import run as native_run - - -def _get_task_callbacks(task: asyncio.Task) -> Iterable[Callable]: - return [cb for cb, context in task._callbacks] - - -T_Retval = TypeVar("T_Retval") -T_contra = TypeVar("T_contra", contravariant=True) - -# Check whether there is native support for task names in asyncio (3.8+) -_native_task_names = hasattr(asyncio.Task, "get_name") - - -_root_task: RunVar[asyncio.Task | None] = RunVar("_root_task") - - -def find_root_task() -> asyncio.Task: - root_task = _root_task.get(None) - if root_task is not None and not root_task.done(): - return root_task - - # Look for a task that has been started via run_until_complete() - for task in all_tasks(): - if task._callbacks and not task.done(): - for cb in _get_task_callbacks(task): - if ( - cb is _run_until_complete_cb - or getattr(cb, "__module__", None) == "uvloop.loop" - ): - _root_task.set(task) - return task - - # Look up the topmost task in the AnyIO task tree, if possible - task = cast(asyncio.Task, current_task()) - state = _task_states.get(task) - if state: - cancel_scope = state.cancel_scope - while cancel_scope and cancel_scope._parent_scope is not None: - cancel_scope = cancel_scope._parent_scope - - if cancel_scope is not None: - return cast(asyncio.Task, cancel_scope._host_task) - - return task - - -def get_callable_name(func: Callable) -> str: - module = getattr(func, "__module__", None) - qualname = getattr(func, "__qualname__", None) - return ".".join([x for x in (module, qualname) if x]) - - -# -# Event loop -# - -_run_vars = ( - WeakKeyDictionary() -) # type: WeakKeyDictionary[asyncio.AbstractEventLoop, Any] - -current_token = get_running_loop - - -def _task_started(task: asyncio.Task) -> bool: - """Return ``True`` if the task has been started and has not finished.""" - coro = cast(Coroutine[Any, Any, Any], get_coro(task)) - try: - return getcoroutinestate(coro) in (CORO_RUNNING, CORO_SUSPENDED) - except AttributeError: - try: - return getgeneratorstate(cast(Generator, coro)) in ( - GEN_RUNNING, - GEN_SUSPENDED, - ) - except AttributeError: - # task coro is async_genenerator_asend https://bugs.python.org/issue37771 - raise Exception(f"Cannot determine if task {task} has started or not") - - -def _maybe_set_event_loop_policy( - policy: asyncio.AbstractEventLoopPolicy | None, use_uvloop: bool -) -> None: - # On CPython, use uvloop when possible if no other policy has been given and if not - # 
explicitly disabled - if policy is None and use_uvloop and sys.implementation.name == "cpython": - try: - import uvloop - except ImportError: - pass - else: - # Test for missing shutdown_default_executor() (uvloop 0.14.0 and earlier) - if not hasattr( - asyncio.AbstractEventLoop, "shutdown_default_executor" - ) or hasattr(uvloop.loop.Loop, "shutdown_default_executor"): - policy = uvloop.EventLoopPolicy() - - if policy is not None: - asyncio.set_event_loop_policy(policy) - - -def run( - func: Callable[..., Awaitable[T_Retval]], - *args: object, - debug: bool = False, - use_uvloop: bool = False, - policy: asyncio.AbstractEventLoopPolicy | None = None, -) -> T_Retval: - @wraps(func) - async def wrapper() -> T_Retval: - task = cast(asyncio.Task, current_task()) - task_state = TaskState(None, get_callable_name(func), None) - _task_states[task] = task_state - if _native_task_names: - task.set_name(task_state.name) - - try: - return await func(*args) - finally: - del _task_states[task] - - _maybe_set_event_loop_policy(policy, use_uvloop) - return native_run(wrapper(), debug=debug) - - -# -# Miscellaneous -# - -sleep = asyncio.sleep - - -# -# Timeouts and cancellation -# - -CancelledError = asyncio.CancelledError - - -class CancelScope(BaseCancelScope): - def __new__( - cls, *, deadline: float = math.inf, shield: bool = False - ) -> CancelScope: - return object.__new__(cls) - - def __init__(self, deadline: float = math.inf, shield: bool = False): - self._deadline = deadline - self._shield = shield - self._parent_scope: CancelScope | None = None - self._cancel_called = False - self._active = False - self._timeout_handle: asyncio.TimerHandle | None = None - self._cancel_handle: asyncio.Handle | None = None - self._tasks: set[asyncio.Task] = set() - self._host_task: asyncio.Task | None = None - self._timeout_expired = False - self._cancel_calls: int = 0 - - def __enter__(self) -> CancelScope: - if self._active: - raise RuntimeError( - "Each CancelScope may only be used for a single 'with' block" - ) - - self._host_task = host_task = cast(asyncio.Task, current_task()) - self._tasks.add(host_task) - try: - task_state = _task_states[host_task] - except KeyError: - task_name = host_task.get_name() if _native_task_names else None - task_state = TaskState(None, task_name, self) - _task_states[host_task] = task_state - else: - self._parent_scope = task_state.cancel_scope - task_state.cancel_scope = self - - self._timeout() - self._active = True - - # Start cancelling the host task if the scope was cancelled before entering - if self._cancel_called: - self._deliver_cancellation() - - return self - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - if not self._active: - raise RuntimeError("This cancel scope is not active") - if current_task() is not self._host_task: - raise RuntimeError( - "Attempted to exit cancel scope in a different task than it was " - "entered in" - ) - - assert self._host_task is not None - host_task_state = _task_states.get(self._host_task) - if host_task_state is None or host_task_state.cancel_scope is not self: - raise RuntimeError( - "Attempted to exit a cancel scope that isn't the current tasks's " - "current cancel scope" - ) - - self._active = False - if self._timeout_handle: - self._timeout_handle.cancel() - self._timeout_handle = None - - self._tasks.remove(self._host_task) - - host_task_state.cancel_scope = self._parent_scope - - # Restart the cancellation effort in the 
farthest directly cancelled parent scope if this - # one was shielded - if self._shield: - self._deliver_cancellation_to_parent() - - if exc_val is not None: - exceptions = ( - exc_val.exceptions if isinstance(exc_val, ExceptionGroup) else [exc_val] - ) - if all(isinstance(exc, CancelledError) for exc in exceptions): - if self._timeout_expired: - return self._uncancel() - elif not self._cancel_called: - # Task was cancelled natively - return None - elif not self._parent_cancelled(): - # This scope was directly cancelled - return self._uncancel() - - return None - - def _uncancel(self) -> bool: - if sys.version_info < (3, 11) or self._host_task is None: - self._cancel_calls = 0 - return True - - # Uncancel all AnyIO cancellations - for i in range(self._cancel_calls): - self._host_task.uncancel() - - self._cancel_calls = 0 - return not self._host_task.cancelling() - - def _timeout(self) -> None: - if self._deadline != math.inf: - loop = get_running_loop() - if loop.time() >= self._deadline: - self._timeout_expired = True - self.cancel() - else: - self._timeout_handle = loop.call_at(self._deadline, self._timeout) - - def _deliver_cancellation(self) -> None: - """ - Deliver cancellation to directly contained tasks and nested cancel scopes. - - Schedule another run at the end if we still have tasks eligible for cancellation. - """ - should_retry = False - current = current_task() - for task in self._tasks: - if task._must_cancel: # type: ignore[attr-defined] - continue - - # The task is eligible for cancellation if it has started and is not in a cancel - # scope shielded from this one - cancel_scope = _task_states[task].cancel_scope - while cancel_scope is not self: - if cancel_scope is None or cancel_scope._shield: - break - else: - cancel_scope = cancel_scope._parent_scope - else: - should_retry = True - if task is not current and ( - task is self._host_task or _task_started(task) - ): - self._cancel_calls += 1 - task.cancel() - - # Schedule another callback if there are still tasks left - if should_retry: - self._cancel_handle = get_running_loop().call_soon( - self._deliver_cancellation - ) - else: - self._cancel_handle = None - - def _deliver_cancellation_to_parent(self) -> None: - """Start cancellation effort in the farthest directly cancelled parent scope""" - scope = self._parent_scope - scope_to_cancel: CancelScope | None = None - while scope is not None: - if scope._cancel_called and scope._cancel_handle is None: - scope_to_cancel = scope - - # No point in looking beyond any shielded scope - if scope._shield: - break - - scope = scope._parent_scope - - if scope_to_cancel is not None: - scope_to_cancel._deliver_cancellation() - - def _parent_cancelled(self) -> bool: - # Check whether any parent has been cancelled - cancel_scope = self._parent_scope - while cancel_scope is not None and not cancel_scope._shield: - if cancel_scope._cancel_called: - return True - else: - cancel_scope = cancel_scope._parent_scope - - return False - - def cancel(self) -> DeprecatedAwaitable: - if not self._cancel_called: - if self._timeout_handle: - self._timeout_handle.cancel() - self._timeout_handle = None - - self._cancel_called = True - if self._host_task is not None: - self._deliver_cancellation() - - return DeprecatedAwaitable(self.cancel) - - @property - def deadline(self) -> float: - return self._deadline - - @deadline.setter - def deadline(self, value: float) -> None: - self._deadline = float(value) - if self._timeout_handle is not None: - self._timeout_handle.cancel() - self._timeout_handle = None - 
- if self._active and not self._cancel_called: - self._timeout() - - @property - def cancel_called(self) -> bool: - return self._cancel_called - - @property - def shield(self) -> bool: - return self._shield - - @shield.setter - def shield(self, value: bool) -> None: - if self._shield != value: - self._shield = value - if not value: - self._deliver_cancellation_to_parent() - - -async def checkpoint() -> None: - await sleep(0) - - -async def checkpoint_if_cancelled() -> None: - task = current_task() - if task is None: - return - - try: - cancel_scope = _task_states[task].cancel_scope - except KeyError: - return - - while cancel_scope: - if cancel_scope.cancel_called: - await sleep(0) - elif cancel_scope.shield: - break - else: - cancel_scope = cancel_scope._parent_scope - - -async def cancel_shielded_checkpoint() -> None: - with CancelScope(shield=True): - await sleep(0) - - -def current_effective_deadline() -> float: - try: - cancel_scope = _task_states[current_task()].cancel_scope # type: ignore[index] - except KeyError: - return math.inf - - deadline = math.inf - while cancel_scope: - deadline = min(deadline, cancel_scope.deadline) - if cancel_scope._cancel_called: - deadline = -math.inf - break - elif cancel_scope.shield: - break - else: - cancel_scope = cancel_scope._parent_scope - - return deadline - - -def current_time() -> float: - return get_running_loop().time() - - -# -# Task states -# - - -class TaskState: - """ - Encapsulates auxiliary task information that cannot be added to the Task instance itself - because there are no guarantees about its implementation. - """ - - __slots__ = "parent_id", "name", "cancel_scope" - - def __init__( - self, - parent_id: int | None, - name: str | None, - cancel_scope: CancelScope | None, - ): - self.parent_id = parent_id - self.name = name - self.cancel_scope = cancel_scope - - -_task_states = WeakKeyDictionary() # type: WeakKeyDictionary[asyncio.Task, TaskState] - - -# -# Task groups -# - - -class ExceptionGroup(BaseExceptionGroup): - def __init__(self, exceptions: list[BaseException]): - super().__init__() - self.exceptions = exceptions - - -class _AsyncioTaskStatus(abc.TaskStatus): - def __init__(self, future: asyncio.Future, parent_id: int): - self._future = future - self._parent_id = parent_id - - def started(self, value: T_contra | None = None) -> None: - try: - self._future.set_result(value) - except asyncio.InvalidStateError: - raise RuntimeError( - "called 'started' twice on the same task status" - ) from None - - task = cast(asyncio.Task, current_task()) - _task_states[task].parent_id = self._parent_id - - -class TaskGroup(abc.TaskGroup): - def __init__(self) -> None: - self.cancel_scope: CancelScope = CancelScope() - self._active = False - self._exceptions: list[BaseException] = [] - - async def __aenter__(self) -> TaskGroup: - self.cancel_scope.__enter__() - self._active = True - return self - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - ignore_exception = self.cancel_scope.__exit__(exc_type, exc_val, exc_tb) - if exc_val is not None: - self.cancel_scope.cancel() - self._exceptions.append(exc_val) - - while self.cancel_scope._tasks: - try: - await asyncio.wait(self.cancel_scope._tasks) - except asyncio.CancelledError: - self.cancel_scope.cancel() - - self._active = False - if not self.cancel_scope._parent_cancelled(): - exceptions = self._filter_cancellation_errors(self._exceptions) - else: - exceptions = self._exceptions 
- - try: - if len(exceptions) > 1: - if all( - isinstance(e, CancelledError) and not e.args for e in exceptions - ): - # Tasks were cancelled natively, without a cancellation message - raise CancelledError - else: - raise ExceptionGroup(exceptions) - elif exceptions and exceptions[0] is not exc_val: - raise exceptions[0] - except BaseException as exc: - # Clear the context here, as it can only be done in-flight. - # If the context is not cleared, it can result in recursive tracebacks (see #145). - exc.__context__ = None - raise - - return ignore_exception - - @staticmethod - def _filter_cancellation_errors( - exceptions: Sequence[BaseException], - ) -> list[BaseException]: - filtered_exceptions: list[BaseException] = [] - for exc in exceptions: - if isinstance(exc, ExceptionGroup): - new_exceptions = TaskGroup._filter_cancellation_errors(exc.exceptions) - if len(new_exceptions) > 1: - filtered_exceptions.append(exc) - elif len(new_exceptions) == 1: - filtered_exceptions.append(new_exceptions[0]) - elif new_exceptions: - new_exc = ExceptionGroup(new_exceptions) - new_exc.__cause__ = exc.__cause__ - new_exc.__context__ = exc.__context__ - new_exc.__traceback__ = exc.__traceback__ - filtered_exceptions.append(new_exc) - elif not isinstance(exc, CancelledError) or exc.args: - filtered_exceptions.append(exc) - - return filtered_exceptions - - async def _run_wrapped_task( - self, coro: Coroutine, task_status_future: asyncio.Future | None - ) -> None: - # This is the code path for Python 3.7 on which asyncio freaks out if a task - # raises a BaseException. - __traceback_hide__ = __tracebackhide__ = True # noqa: F841 - task = cast(asyncio.Task, current_task()) - try: - await coro - except BaseException as exc: - if task_status_future is None or task_status_future.done(): - self._exceptions.append(exc) - self.cancel_scope.cancel() - else: - task_status_future.set_exception(exc) - else: - if task_status_future is not None and not task_status_future.done(): - task_status_future.set_exception( - RuntimeError("Child exited without calling task_status.started()") - ) - finally: - if task in self.cancel_scope._tasks: - self.cancel_scope._tasks.remove(task) - del _task_states[task] - - def _spawn( - self, - func: Callable[..., Awaitable[Any]], - args: tuple, - name: object, - task_status_future: asyncio.Future | None = None, - ) -> asyncio.Task: - def task_done(_task: asyncio.Task) -> None: - # This is the code path for Python 3.8+ - assert _task in self.cancel_scope._tasks - self.cancel_scope._tasks.remove(_task) - del _task_states[_task] - - try: - exc = _task.exception() - except CancelledError as e: - while isinstance(e.__context__, CancelledError): - e = e.__context__ - - exc = e - - if exc is not None: - if task_status_future is None or task_status_future.done(): - self._exceptions.append(exc) - self.cancel_scope.cancel() - else: - task_status_future.set_exception(exc) - elif task_status_future is not None and not task_status_future.done(): - task_status_future.set_exception( - RuntimeError("Child exited without calling task_status.started()") - ) - - if not self._active: - raise RuntimeError( - "This task group is not active; no new tasks can be started." 
- ) - - options: dict[str, Any] = {} - name = get_callable_name(func) if name is None else str(name) - if _native_task_names: - options["name"] = name - - kwargs = {} - if task_status_future: - parent_id = id(current_task()) - kwargs["task_status"] = _AsyncioTaskStatus( - task_status_future, id(self.cancel_scope._host_task) - ) - else: - parent_id = id(self.cancel_scope._host_task) - - coro = func(*args, **kwargs) - if not asyncio.iscoroutine(coro): - raise TypeError( - f"Expected an async function, but {func} appears to be synchronous" - ) - - foreign_coro = not hasattr(coro, "cr_frame") and not hasattr(coro, "gi_frame") - if foreign_coro or sys.version_info < (3, 8): - coro = self._run_wrapped_task(coro, task_status_future) - - task = create_task(coro, **options) - if not foreign_coro and sys.version_info >= (3, 8): - task.add_done_callback(task_done) - - # Make the spawned task inherit the task group's cancel scope - _task_states[task] = TaskState( - parent_id=parent_id, name=name, cancel_scope=self.cancel_scope - ) - self.cancel_scope._tasks.add(task) - return task - - def start_soon( - self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None - ) -> None: - self._spawn(func, args, name) - - async def start( - self, func: Callable[..., Awaitable[Any]], *args: object, name: object = None - ) -> None: - future: asyncio.Future = asyncio.Future() - task = self._spawn(func, args, name, future) - - # If the task raises an exception after sending a start value without a switch point - # between, the task group is cancelled and this method never proceeds to process the - # completed future. That's why we have to have a shielded cancel scope here. - with CancelScope(shield=True): - try: - return await future - except CancelledError: - task.cancel() - raise - - -# -# Threads -# - -_Retval_Queue_Type = Tuple[Optional[T_Retval], Optional[BaseException]] - - -class WorkerThread(Thread): - MAX_IDLE_TIME = 10 # seconds - - def __init__( - self, - root_task: asyncio.Task, - workers: set[WorkerThread], - idle_workers: deque[WorkerThread], - ): - super().__init__(name="AnyIO worker thread") - self.root_task = root_task - self.workers = workers - self.idle_workers = idle_workers - self.loop = root_task._loop - self.queue: Queue[ - tuple[Context, Callable, tuple, asyncio.Future] | None - ] = Queue(2) - self.idle_since = current_time() - self.stopping = False - - def _report_result( - self, future: asyncio.Future, result: Any, exc: BaseException | None - ) -> None: - self.idle_since = current_time() - if not self.stopping: - self.idle_workers.append(self) - - if not future.cancelled(): - if exc is not None: - if isinstance(exc, StopIteration): - new_exc = RuntimeError("coroutine raised StopIteration") - new_exc.__cause__ = exc - exc = new_exc - - future.set_exception(exc) - else: - future.set_result(result) - - def run(self) -> None: - with claim_worker_thread("asyncio"): - threadlocals.loop = self.loop - while True: - item = self.queue.get() - if item is None: - # Shutdown command received - return - - context, func, args, future = item - if not future.cancelled(): - result = None - exception: BaseException | None = None - try: - result = context.run(func, *args) - except BaseException as exc: - exception = exc - - if not self.loop.is_closed(): - self.loop.call_soon_threadsafe( - self._report_result, future, result, exception - ) - - self.queue.task_done() - - def stop(self, f: asyncio.Task | None = None) -> None: - self.stopping = True - self.queue.put_nowait(None) - 
self.workers.discard(self) - try: - self.idle_workers.remove(self) - except ValueError: - pass - - -_threadpool_idle_workers: RunVar[deque[WorkerThread]] = RunVar( - "_threadpool_idle_workers" -) -_threadpool_workers: RunVar[set[WorkerThread]] = RunVar("_threadpool_workers") - - -async def run_sync_in_worker_thread( - func: Callable[..., T_Retval], - *args: object, - cancellable: bool = False, - limiter: CapacityLimiter | None = None, -) -> T_Retval: - await checkpoint() - - # If this is the first run in this event loop thread, set up the necessary variables - try: - idle_workers = _threadpool_idle_workers.get() - workers = _threadpool_workers.get() - except LookupError: - idle_workers = deque() - workers = set() - _threadpool_idle_workers.set(idle_workers) - _threadpool_workers.set(workers) - - async with (limiter or current_default_thread_limiter()): - with CancelScope(shield=not cancellable): - future: asyncio.Future = asyncio.Future() - root_task = find_root_task() - if not idle_workers: - worker = WorkerThread(root_task, workers, idle_workers) - worker.start() - workers.add(worker) - root_task.add_done_callback(worker.stop) - else: - worker = idle_workers.pop() - - # Prune any other workers that have been idle for MAX_IDLE_TIME seconds or longer - now = current_time() - while idle_workers: - if now - idle_workers[0].idle_since < WorkerThread.MAX_IDLE_TIME: - break - - expired_worker = idle_workers.popleft() - expired_worker.root_task.remove_done_callback(expired_worker.stop) - expired_worker.stop() - - context = copy_context() - context.run(sniffio.current_async_library_cvar.set, None) - worker.queue.put_nowait((context, func, args, future)) - return await future - - -def run_sync_from_thread( - func: Callable[..., T_Retval], - *args: object, - loop: asyncio.AbstractEventLoop | None = None, -) -> T_Retval: - @wraps(func) - def wrapper() -> None: - try: - f.set_result(func(*args)) - except BaseException as exc: - f.set_exception(exc) - if not isinstance(exc, Exception): - raise - - f: concurrent.futures.Future[T_Retval] = Future() - loop = loop or threadlocals.loop - loop.call_soon_threadsafe(wrapper) - return f.result() - - -def run_async_from_thread( - func: Callable[..., Awaitable[T_Retval]], *args: object -) -> T_Retval: - f: concurrent.futures.Future[T_Retval] = asyncio.run_coroutine_threadsafe( - func(*args), threadlocals.loop - ) - return f.result() - - -class BlockingPortal(abc.BlockingPortal): - def __new__(cls) -> BlockingPortal: - return object.__new__(cls) - - def __init__(self) -> None: - super().__init__() - self._loop = get_running_loop() - - def _spawn_task_from_thread( - self, - func: Callable, - args: tuple, - kwargs: dict[str, Any], - name: object, - future: Future, - ) -> None: - run_sync_from_thread( - partial(self._task_group.start_soon, name=name), - self._call_func, - func, - args, - kwargs, - future, - loop=self._loop, - ) - - -# -# Subprocesses -# - - -@dataclass(eq=False) -class StreamReaderWrapper(abc.ByteReceiveStream): - _stream: asyncio.StreamReader - - async def receive(self, max_bytes: int = 65536) -> bytes: - data = await self._stream.read(max_bytes) - if data: - return data - else: - raise EndOfStream - - async def aclose(self) -> None: - self._stream.feed_eof() - - -@dataclass(eq=False) -class StreamWriterWrapper(abc.ByteSendStream): - _stream: asyncio.StreamWriter - - async def send(self, item: bytes) -> None: - self._stream.write(item) - await self._stream.drain() - - async def aclose(self) -> None: - self._stream.close() - - -@dataclass(eq=False) 
-class Process(abc.Process): - _process: asyncio.subprocess.Process - _stdin: StreamWriterWrapper | None - _stdout: StreamReaderWrapper | None - _stderr: StreamReaderWrapper | None - - async def aclose(self) -> None: - if self._stdin: - await self._stdin.aclose() - if self._stdout: - await self._stdout.aclose() - if self._stderr: - await self._stderr.aclose() - - await self.wait() - - async def wait(self) -> int: - return await self._process.wait() - - def terminate(self) -> None: - self._process.terminate() - - def kill(self) -> None: - self._process.kill() - - def send_signal(self, signal: int) -> None: - self._process.send_signal(signal) - - @property - def pid(self) -> int: - return self._process.pid - - @property - def returncode(self) -> int | None: - return self._process.returncode - - @property - def stdin(self) -> abc.ByteSendStream | None: - return self._stdin - - @property - def stdout(self) -> abc.ByteReceiveStream | None: - return self._stdout - - @property - def stderr(self) -> abc.ByteReceiveStream | None: - return self._stderr - - -async def open_process( - command: str | bytes | Sequence[str | bytes], - *, - shell: bool, - stdin: int | IO[Any] | None, - stdout: int | IO[Any] | None, - stderr: int | IO[Any] | None, - cwd: str | bytes | PathLike | None = None, - env: Mapping[str, str] | None = None, - start_new_session: bool = False, -) -> Process: - await checkpoint() - if shell: - process = await asyncio.create_subprocess_shell( - cast(Union[str, bytes], command), - stdin=stdin, - stdout=stdout, - stderr=stderr, - cwd=cwd, - env=env, - start_new_session=start_new_session, - ) - else: - process = await asyncio.create_subprocess_exec( - *command, - stdin=stdin, - stdout=stdout, - stderr=stderr, - cwd=cwd, - env=env, - start_new_session=start_new_session, - ) - - stdin_stream = StreamWriterWrapper(process.stdin) if process.stdin else None - stdout_stream = StreamReaderWrapper(process.stdout) if process.stdout else None - stderr_stream = StreamReaderWrapper(process.stderr) if process.stderr else None - return Process(process, stdin_stream, stdout_stream, stderr_stream) - - -def _forcibly_shutdown_process_pool_on_exit( - workers: set[Process], _task: object -) -> None: - """ - Forcibly shuts down worker processes belonging to this event loop.""" - child_watcher: asyncio.AbstractChildWatcher | None - try: - child_watcher = asyncio.get_event_loop_policy().get_child_watcher() - except NotImplementedError: - child_watcher = None - - # Close as much as possible (w/o async/await) to avoid warnings - for process in workers: - if process.returncode is None: - continue - - process._stdin._stream._transport.close() # type: ignore[union-attr] - process._stdout._stream._transport.close() # type: ignore[union-attr] - process._stderr._stream._transport.close() # type: ignore[union-attr] - process.kill() - if child_watcher: - child_watcher.remove_child_handler(process.pid) - - -async def _shutdown_process_pool_on_exit(workers: set[Process]) -> None: - """ - Shuts down worker processes belonging to this event loop. - - NOTE: this only works when the event loop was started using asyncio.run() or anyio.run(). 
- - """ - process: Process - try: - await sleep(math.inf) - except asyncio.CancelledError: - for process in workers: - if process.returncode is None: - process.kill() - - for process in workers: - await process.aclose() - - -def setup_process_pool_exit_at_shutdown(workers: set[Process]) -> None: - kwargs: dict[str, Any] = ( - {"name": "AnyIO process pool shutdown task"} if _native_task_names else {} - ) - create_task(_shutdown_process_pool_on_exit(workers), **kwargs) - find_root_task().add_done_callback( - partial(_forcibly_shutdown_process_pool_on_exit, workers) - ) - - -# -# Sockets and networking -# - - -class StreamProtocol(asyncio.Protocol): - read_queue: deque[bytes] - read_event: asyncio.Event - write_event: asyncio.Event - exception: Exception | None = None - - def connection_made(self, transport: asyncio.BaseTransport) -> None: - self.read_queue = deque() - self.read_event = asyncio.Event() - self.write_event = asyncio.Event() - self.write_event.set() - cast(asyncio.Transport, transport).set_write_buffer_limits(0) - - def connection_lost(self, exc: Exception | None) -> None: - if exc: - self.exception = BrokenResourceError() - self.exception.__cause__ = exc - - self.read_event.set() - self.write_event.set() - - def data_received(self, data: bytes) -> None: - self.read_queue.append(data) - self.read_event.set() - - def eof_received(self) -> bool | None: - self.read_event.set() - return True - - def pause_writing(self) -> None: - self.write_event = asyncio.Event() - - def resume_writing(self) -> None: - self.write_event.set() - - -class DatagramProtocol(asyncio.DatagramProtocol): - read_queue: deque[tuple[bytes, IPSockAddrType]] - read_event: asyncio.Event - write_event: asyncio.Event - exception: Exception | None = None - - def connection_made(self, transport: asyncio.BaseTransport) -> None: - self.read_queue = deque(maxlen=100) # arbitrary value - self.read_event = asyncio.Event() - self.write_event = asyncio.Event() - self.write_event.set() - - def connection_lost(self, exc: Exception | None) -> None: - self.read_event.set() - self.write_event.set() - - def datagram_received(self, data: bytes, addr: IPSockAddrType) -> None: - addr = convert_ipv6_sockaddr(addr) - self.read_queue.append((data, addr)) - self.read_event.set() - - def error_received(self, exc: Exception) -> None: - self.exception = exc - - def pause_writing(self) -> None: - self.write_event.clear() - - def resume_writing(self) -> None: - self.write_event.set() - - -class SocketStream(abc.SocketStream): - def __init__(self, transport: asyncio.Transport, protocol: StreamProtocol): - self._transport = transport - self._protocol = protocol - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - self._closed = False - - @property - def _raw_socket(self) -> socket.socket: - return self._transport.get_extra_info("socket") - - async def receive(self, max_bytes: int = 65536) -> bytes: - with self._receive_guard: - await checkpoint() - - if ( - not self._protocol.read_event.is_set() - and not self._transport.is_closing() - ): - self._transport.resume_reading() - await self._protocol.read_event.wait() - self._transport.pause_reading() - - try: - chunk = self._protocol.read_queue.popleft() - except IndexError: - if self._closed: - raise ClosedResourceError from None - elif self._protocol.exception: - raise self._protocol.exception - else: - raise EndOfStream from None - - if len(chunk) > max_bytes: - # Split the oversized chunk - chunk, leftover = chunk[:max_bytes], 
chunk[max_bytes:] - self._protocol.read_queue.appendleft(leftover) - - # If the read queue is empty, clear the flag so that the next call will block until - # data is available - if not self._protocol.read_queue: - self._protocol.read_event.clear() - - return chunk - - async def send(self, item: bytes) -> None: - with self._send_guard: - await checkpoint() - - if self._closed: - raise ClosedResourceError - elif self._protocol.exception is not None: - raise self._protocol.exception - - try: - self._transport.write(item) - except RuntimeError as exc: - if self._transport.is_closing(): - raise BrokenResourceError from exc - else: - raise - - await self._protocol.write_event.wait() - - async def send_eof(self) -> None: - try: - self._transport.write_eof() - except OSError: - pass - - async def aclose(self) -> None: - if not self._transport.is_closing(): - self._closed = True - try: - self._transport.write_eof() - except OSError: - pass - - self._transport.close() - await sleep(0) - self._transport.abort() - - -class UNIXSocketStream(abc.SocketStream): - _receive_future: asyncio.Future | None = None - _send_future: asyncio.Future | None = None - _closing = False - - def __init__(self, raw_socket: socket.socket): - self.__raw_socket = raw_socket - self._loop = get_running_loop() - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - - @property - def _raw_socket(self) -> socket.socket: - return self.__raw_socket - - def _wait_until_readable(self, loop: asyncio.AbstractEventLoop) -> asyncio.Future: - def callback(f: object) -> None: - del self._receive_future - loop.remove_reader(self.__raw_socket) - - f = self._receive_future = asyncio.Future() - self._loop.add_reader(self.__raw_socket, f.set_result, None) - f.add_done_callback(callback) - return f - - def _wait_until_writable(self, loop: asyncio.AbstractEventLoop) -> asyncio.Future: - def callback(f: object) -> None: - del self._send_future - loop.remove_writer(self.__raw_socket) - - f = self._send_future = asyncio.Future() - self._loop.add_writer(self.__raw_socket, f.set_result, None) - f.add_done_callback(callback) - return f - - async def send_eof(self) -> None: - with self._send_guard: - self._raw_socket.shutdown(socket.SHUT_WR) - - async def receive(self, max_bytes: int = 65536) -> bytes: - loop = get_running_loop() - await checkpoint() - with self._receive_guard: - while True: - try: - data = self.__raw_socket.recv(max_bytes) - except BlockingIOError: - await self._wait_until_readable(loop) - except OSError as exc: - if self._closing: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - else: - if not data: - raise EndOfStream - - return data - - async def send(self, item: bytes) -> None: - loop = get_running_loop() - await checkpoint() - with self._send_guard: - view = memoryview(item) - while view: - try: - bytes_sent = self.__raw_socket.send(view) - except BlockingIOError: - await self._wait_until_writable(loop) - except OSError as exc: - if self._closing: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - else: - view = view[bytes_sent:] - - async def receive_fds(self, msglen: int, maxfds: int) -> tuple[bytes, list[int]]: - if not isinstance(msglen, int) or msglen < 0: - raise ValueError("msglen must be a non-negative integer") - if not isinstance(maxfds, int) or maxfds < 1: - raise ValueError("maxfds must be a positive integer") - - loop = get_running_loop() - fds = array.array("i") - await checkpoint() - with 
self._receive_guard: - while True: - try: - message, ancdata, flags, addr = self.__raw_socket.recvmsg( - msglen, socket.CMSG_LEN(maxfds * fds.itemsize) - ) - except BlockingIOError: - await self._wait_until_readable(loop) - except OSError as exc: - if self._closing: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - else: - if not message and not ancdata: - raise EndOfStream - - break - - for cmsg_level, cmsg_type, cmsg_data in ancdata: - if cmsg_level != socket.SOL_SOCKET or cmsg_type != socket.SCM_RIGHTS: - raise RuntimeError( - f"Received unexpected ancillary data; message = {message!r}, " - f"cmsg_level = {cmsg_level}, cmsg_type = {cmsg_type}" - ) - - fds.frombytes(cmsg_data[: len(cmsg_data) - (len(cmsg_data) % fds.itemsize)]) - - return message, list(fds) - - async def send_fds(self, message: bytes, fds: Collection[int | IOBase]) -> None: - if not message: - raise ValueError("message must not be empty") - if not fds: - raise ValueError("fds must not be empty") - - loop = get_running_loop() - filenos: list[int] = [] - for fd in fds: - if isinstance(fd, int): - filenos.append(fd) - elif isinstance(fd, IOBase): - filenos.append(fd.fileno()) - - fdarray = array.array("i", filenos) - await checkpoint() - with self._send_guard: - while True: - try: - # The ignore can be removed after mypy picks up - # https://github.com/python/typeshed/pull/5545 - self.__raw_socket.sendmsg( - [message], [(socket.SOL_SOCKET, socket.SCM_RIGHTS, fdarray)] - ) - break - except BlockingIOError: - await self._wait_until_writable(loop) - except OSError as exc: - if self._closing: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - - async def aclose(self) -> None: - if not self._closing: - self._closing = True - if self.__raw_socket.fileno() != -1: - self.__raw_socket.close() - - if self._receive_future: - self._receive_future.set_result(None) - if self._send_future: - self._send_future.set_result(None) - - -class TCPSocketListener(abc.SocketListener): - _accept_scope: CancelScope | None = None - _closed = False - - def __init__(self, raw_socket: socket.socket): - self.__raw_socket = raw_socket - self._loop = cast(asyncio.BaseEventLoop, get_running_loop()) - self._accept_guard = ResourceGuard("accepting connections from") - - @property - def _raw_socket(self) -> socket.socket: - return self.__raw_socket - - async def accept(self) -> abc.SocketStream: - if self._closed: - raise ClosedResourceError - - with self._accept_guard: - await checkpoint() - with CancelScope() as self._accept_scope: - try: - client_sock, _addr = await self._loop.sock_accept(self._raw_socket) - except asyncio.CancelledError: - # Workaround for https://bugs.python.org/issue41317 - try: - self._loop.remove_reader(self._raw_socket) - except (ValueError, NotImplementedError): - pass - - if self._closed: - raise ClosedResourceError from None - - raise - finally: - self._accept_scope = None - - client_sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1) - transport, protocol = await self._loop.connect_accepted_socket( - StreamProtocol, client_sock - ) - return SocketStream(transport, protocol) - - async def aclose(self) -> None: - if self._closed: - return - - self._closed = True - if self._accept_scope: - # Workaround for https://bugs.python.org/issue41317 - try: - self._loop.remove_reader(self._raw_socket) - except (ValueError, NotImplementedError): - pass - - self._accept_scope.cancel() - await sleep(0) - - self._raw_socket.close() - - -class 
UNIXSocketListener(abc.SocketListener): - def __init__(self, raw_socket: socket.socket): - self.__raw_socket = raw_socket - self._loop = get_running_loop() - self._accept_guard = ResourceGuard("accepting connections from") - self._closed = False - - async def accept(self) -> abc.SocketStream: - await checkpoint() - with self._accept_guard: - while True: - try: - client_sock, _ = self.__raw_socket.accept() - client_sock.setblocking(False) - return UNIXSocketStream(client_sock) - except BlockingIOError: - f: asyncio.Future = asyncio.Future() - self._loop.add_reader(self.__raw_socket, f.set_result, None) - f.add_done_callback( - lambda _: self._loop.remove_reader(self.__raw_socket) - ) - await f - except OSError as exc: - if self._closed: - raise ClosedResourceError from None - else: - raise BrokenResourceError from exc - - async def aclose(self) -> None: - self._closed = True - self.__raw_socket.close() - - @property - def _raw_socket(self) -> socket.socket: - return self.__raw_socket - - -class UDPSocket(abc.UDPSocket): - def __init__( - self, transport: asyncio.DatagramTransport, protocol: DatagramProtocol - ): - self._transport = transport - self._protocol = protocol - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - self._closed = False - - @property - def _raw_socket(self) -> socket.socket: - return self._transport.get_extra_info("socket") - - async def aclose(self) -> None: - if not self._transport.is_closing(): - self._closed = True - self._transport.close() - - async def receive(self) -> tuple[bytes, IPSockAddrType]: - with self._receive_guard: - await checkpoint() - - # If the buffer is empty, ask for more data - if not self._protocol.read_queue and not self._transport.is_closing(): - self._protocol.read_event.clear() - await self._protocol.read_event.wait() - - try: - return self._protocol.read_queue.popleft() - except IndexError: - if self._closed: - raise ClosedResourceError from None - else: - raise BrokenResourceError from None - - async def send(self, item: UDPPacketType) -> None: - with self._send_guard: - await checkpoint() - await self._protocol.write_event.wait() - if self._closed: - raise ClosedResourceError - elif self._transport.is_closing(): - raise BrokenResourceError - else: - self._transport.sendto(*item) - - -class ConnectedUDPSocket(abc.ConnectedUDPSocket): - def __init__( - self, transport: asyncio.DatagramTransport, protocol: DatagramProtocol - ): - self._transport = transport - self._protocol = protocol - self._receive_guard = ResourceGuard("reading from") - self._send_guard = ResourceGuard("writing to") - self._closed = False - - @property - def _raw_socket(self) -> socket.socket: - return self._transport.get_extra_info("socket") - - async def aclose(self) -> None: - if not self._transport.is_closing(): - self._closed = True - self._transport.close() - - async def receive(self) -> bytes: - with self._receive_guard: - await checkpoint() - - # If the buffer is empty, ask for more data - if not self._protocol.read_queue and not self._transport.is_closing(): - self._protocol.read_event.clear() - await self._protocol.read_event.wait() - - try: - packet = self._protocol.read_queue.popleft() - except IndexError: - if self._closed: - raise ClosedResourceError from None - else: - raise BrokenResourceError from None - - return packet[0] - - async def send(self, item: bytes) -> None: - with self._send_guard: - await checkpoint() - await self._protocol.write_event.wait() - if self._closed: - raise ClosedResourceError - 
elif self._transport.is_closing(): - raise BrokenResourceError - else: - self._transport.sendto(item) - - -async def connect_tcp( - host: str, port: int, local_addr: tuple[str, int] | None = None -) -> SocketStream: - transport, protocol = cast( - Tuple[asyncio.Transport, StreamProtocol], - await get_running_loop().create_connection( - StreamProtocol, host, port, local_addr=local_addr - ), - ) - transport.pause_reading() - return SocketStream(transport, protocol) - - -async def connect_unix(path: str) -> UNIXSocketStream: - await checkpoint() - loop = get_running_loop() - raw_socket = socket.socket(socket.AF_UNIX) - raw_socket.setblocking(False) - while True: - try: - raw_socket.connect(path) - except BlockingIOError: - f: asyncio.Future = asyncio.Future() - loop.add_writer(raw_socket, f.set_result, None) - f.add_done_callback(lambda _: loop.remove_writer(raw_socket)) - await f - except BaseException: - raw_socket.close() - raise - else: - return UNIXSocketStream(raw_socket) - - -async def create_udp_socket( - family: socket.AddressFamily, - local_address: IPSockAddrType | None, - remote_address: IPSockAddrType | None, - reuse_port: bool, -) -> UDPSocket | ConnectedUDPSocket: - result = await get_running_loop().create_datagram_endpoint( - DatagramProtocol, - local_addr=local_address, - remote_addr=remote_address, - family=family, - reuse_port=reuse_port, - ) - transport = result[0] - protocol = result[1] - if protocol.exception: - transport.close() - raise protocol.exception - - if not remote_address: - return UDPSocket(transport, protocol) - else: - return ConnectedUDPSocket(transport, protocol) - - -async def getaddrinfo( - host: bytes | str, - port: str | int | None, - *, - family: int | AddressFamily = 0, - type: int | SocketKind = 0, - proto: int = 0, - flags: int = 0, -) -> GetAddrInfoReturnType: - # https://github.com/python/typeshed/pull/4304 - result = await get_running_loop().getaddrinfo( - host, port, family=family, type=type, proto=proto, flags=flags - ) - return cast(GetAddrInfoReturnType, result) - - -async def getnameinfo(sockaddr: IPSockAddrType, flags: int = 0) -> tuple[str, str]: - return await get_running_loop().getnameinfo(sockaddr, flags) - - -_read_events: RunVar[dict[Any, asyncio.Event]] = RunVar("read_events") -_write_events: RunVar[dict[Any, asyncio.Event]] = RunVar("write_events") - - -async def wait_socket_readable(sock: socket.socket) -> None: - await checkpoint() - try: - read_events = _read_events.get() - except LookupError: - read_events = {} - _read_events.set(read_events) - - if read_events.get(sock): - raise BusyResourceError("reading from") from None - - loop = get_running_loop() - event = read_events[sock] = asyncio.Event() - loop.add_reader(sock, event.set) - try: - await event.wait() - finally: - if read_events.pop(sock, None) is not None: - loop.remove_reader(sock) - readable = True - else: - readable = False - - if not readable: - raise ClosedResourceError - - -async def wait_socket_writable(sock: socket.socket) -> None: - await checkpoint() - try: - write_events = _write_events.get() - except LookupError: - write_events = {} - _write_events.set(write_events) - - if write_events.get(sock): - raise BusyResourceError("writing to") from None - - loop = get_running_loop() - event = write_events[sock] = asyncio.Event() - loop.add_writer(sock.fileno(), event.set) - try: - await event.wait() - finally: - if write_events.pop(sock, None) is not None: - loop.remove_writer(sock) - writable = True - else: - writable = False - - if not writable: - raise 
ClosedResourceError - - -# -# Synchronization -# - - -class Event(BaseEvent): - def __new__(cls) -> Event: - return object.__new__(cls) - - def __init__(self) -> None: - self._event = asyncio.Event() - - def set(self) -> DeprecatedAwaitable: - self._event.set() - return DeprecatedAwaitable(self.set) - - def is_set(self) -> bool: - return self._event.is_set() - - async def wait(self) -> None: - if await self._event.wait(): - await checkpoint() - - def statistics(self) -> EventStatistics: - return EventStatistics(len(self._event._waiters)) # type: ignore[attr-defined] - - -class CapacityLimiter(BaseCapacityLimiter): - _total_tokens: float = 0 - - def __new__(cls, total_tokens: float) -> CapacityLimiter: - return object.__new__(cls) - - def __init__(self, total_tokens: float): - self._borrowers: set[Any] = set() - self._wait_queue: OrderedDict[Any, asyncio.Event] = OrderedDict() - self.total_tokens = total_tokens - - async def __aenter__(self) -> None: - await self.acquire() - - async def __aexit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> None: - self.release() - - @property - def total_tokens(self) -> float: - return self._total_tokens - - @total_tokens.setter - def total_tokens(self, value: float) -> None: - if not isinstance(value, int) and not math.isinf(value): - raise TypeError("total_tokens must be an int or math.inf") - if value < 1: - raise ValueError("total_tokens must be >= 1") - - old_value = self._total_tokens - self._total_tokens = value - events = [] - for event in self._wait_queue.values(): - if value <= old_value: - break - - if not event.is_set(): - events.append(event) - old_value += 1 - - for event in events: - event.set() - - @property - def borrowed_tokens(self) -> int: - return len(self._borrowers) - - @property - def available_tokens(self) -> float: - return self._total_tokens - len(self._borrowers) - - def acquire_nowait(self) -> DeprecatedAwaitable: - self.acquire_on_behalf_of_nowait(current_task()) - return DeprecatedAwaitable(self.acquire_nowait) - - def acquire_on_behalf_of_nowait(self, borrower: object) -> DeprecatedAwaitable: - if borrower in self._borrowers: - raise RuntimeError( - "this borrower is already holding one of this CapacityLimiter's " - "tokens" - ) - - if self._wait_queue or len(self._borrowers) >= self._total_tokens: - raise WouldBlock - - self._borrowers.add(borrower) - return DeprecatedAwaitable(self.acquire_on_behalf_of_nowait) - - async def acquire(self) -> None: - return await self.acquire_on_behalf_of(current_task()) - - async def acquire_on_behalf_of(self, borrower: object) -> None: - await checkpoint_if_cancelled() - try: - self.acquire_on_behalf_of_nowait(borrower) - except WouldBlock: - event = asyncio.Event() - self._wait_queue[borrower] = event - try: - await event.wait() - except BaseException: - self._wait_queue.pop(borrower, None) - raise - - self._borrowers.add(borrower) - else: - try: - await cancel_shielded_checkpoint() - except BaseException: - self.release() - raise - - def release(self) -> None: - self.release_on_behalf_of(current_task()) - - def release_on_behalf_of(self, borrower: object) -> None: - try: - self._borrowers.remove(borrower) - except KeyError: - raise RuntimeError( - "this borrower isn't holding any of this CapacityLimiter's " "tokens" - ) from None - - # Notify the next task in line if this limiter has free capacity now - if self._wait_queue and len(self._borrowers) < self._total_tokens: - event = 
self._wait_queue.popitem(last=False)[1] - event.set() - - def statistics(self) -> CapacityLimiterStatistics: - return CapacityLimiterStatistics( - self.borrowed_tokens, - self.total_tokens, - tuple(self._borrowers), - len(self._wait_queue), - ) - - -_default_thread_limiter: RunVar[CapacityLimiter] = RunVar("_default_thread_limiter") - - -def current_default_thread_limiter() -> CapacityLimiter: - try: - return _default_thread_limiter.get() - except LookupError: - limiter = CapacityLimiter(40) - _default_thread_limiter.set(limiter) - return limiter - - -# -# Operating system signals -# - - -class _SignalReceiver(DeprecatedAsyncContextManager["_SignalReceiver"]): - def __init__(self, signals: tuple[int, ...]): - self._signals = signals - self._loop = get_running_loop() - self._signal_queue: deque[int] = deque() - self._future: asyncio.Future = asyncio.Future() - self._handled_signals: set[int] = set() - - def _deliver(self, signum: int) -> None: - self._signal_queue.append(signum) - if not self._future.done(): - self._future.set_result(None) - - def __enter__(self) -> _SignalReceiver: - for sig in set(self._signals): - self._loop.add_signal_handler(sig, self._deliver, sig) - self._handled_signals.add(sig) - - return self - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - for sig in self._handled_signals: - self._loop.remove_signal_handler(sig) - return None - - def __aiter__(self) -> _SignalReceiver: - return self - - async def __anext__(self) -> int: - await checkpoint() - if not self._signal_queue: - self._future = asyncio.Future() - await self._future - - return self._signal_queue.popleft() - - -def open_signal_receiver(*signals: int) -> _SignalReceiver: - return _SignalReceiver(signals) - - -# -# Testing and debugging -# - - -def _create_task_info(task: asyncio.Task) -> TaskInfo: - task_state = _task_states.get(task) - if task_state is None: - name = task.get_name() if _native_task_names else None - parent_id = None - else: - name = task_state.name - parent_id = task_state.parent_id - - return TaskInfo(id(task), parent_id, name, get_coro(task)) - - -def get_current_task() -> TaskInfo: - return _create_task_info(current_task()) # type: ignore[arg-type] - - -def get_running_tasks() -> list[TaskInfo]: - return [_create_task_info(task) for task in all_tasks() if not task.done()] - - -async def wait_all_tasks_blocked() -> None: - await checkpoint() - this_task = current_task() - while True: - for task in all_tasks(): - if task is this_task: - continue - - if task._fut_waiter is None or task._fut_waiter.done(): # type: ignore[attr-defined] - await sleep(0.1) - break - else: - return - - -class TestRunner(abc.TestRunner): - def __init__( - self, - debug: bool = False, - use_uvloop: bool = False, - policy: asyncio.AbstractEventLoopPolicy | None = None, - ): - self._exceptions: list[BaseException] = [] - _maybe_set_event_loop_policy(policy, use_uvloop) - self._loop = asyncio.new_event_loop() - self._loop.set_debug(debug) - self._loop.set_exception_handler(self._exception_handler) - asyncio.set_event_loop(self._loop) - - def _cancel_all_tasks(self) -> None: - to_cancel = all_tasks(self._loop) - if not to_cancel: - return - - for task in to_cancel: - task.cancel() - - self._loop.run_until_complete( - asyncio.gather(*to_cancel, return_exceptions=True) - ) - - for task in to_cancel: - if task.cancelled(): - continue - if task.exception() is not None: - raise cast(BaseException, task.exception()) - - 
def _exception_handler( - self, loop: asyncio.AbstractEventLoop, context: dict[str, Any] - ) -> None: - if isinstance(context.get("exception"), Exception): - self._exceptions.append(context["exception"]) - else: - loop.default_exception_handler(context) - - def _raise_async_exceptions(self) -> None: - # Re-raise any exceptions raised in asynchronous callbacks - if self._exceptions: - exceptions, self._exceptions = self._exceptions, [] - if len(exceptions) == 1: - raise exceptions[0] - elif exceptions: - raise ExceptionGroup(exceptions) - - def close(self) -> None: - try: - self._cancel_all_tasks() - self._loop.run_until_complete(self._loop.shutdown_asyncgens()) - finally: - asyncio.set_event_loop(None) - self._loop.close() - - def run_asyncgen_fixture( - self, - fixture_func: Callable[..., AsyncGenerator[T_Retval, Any]], - kwargs: dict[str, Any], - ) -> Iterable[T_Retval]: - async def fixture_runner() -> None: - agen = fixture_func(**kwargs) - try: - retval = await agen.asend(None) - self._raise_async_exceptions() - except BaseException as exc: - f.set_exception(exc) - return - else: - f.set_result(retval) - - await event.wait() - try: - await agen.asend(None) - except StopAsyncIteration: - pass - else: - await agen.aclose() - raise RuntimeError("Async generator fixture did not stop") - - f = self._loop.create_future() - event = asyncio.Event() - fixture_task = self._loop.create_task(fixture_runner()) - self._loop.run_until_complete(f) - yield f.result() - event.set() - self._loop.run_until_complete(fixture_task) - self._raise_async_exceptions() - - def run_fixture( - self, - fixture_func: Callable[..., Coroutine[Any, Any, T_Retval]], - kwargs: dict[str, Any], - ) -> T_Retval: - retval = self._loop.run_until_complete(fixture_func(**kwargs)) - self._raise_async_exceptions() - return retval - - def run_test( - self, test_func: Callable[..., Coroutine[Any, Any, Any]], kwargs: dict[str, Any] - ) -> None: - try: - self._loop.run_until_complete(test_func(**kwargs)) - except Exception as exc: - self._exceptions.append(exc) - - self._raise_async_exceptions() diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_tasks.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_tasks.py deleted file mode 100644 index e9d9c2bd67f105d9e728ffed5496b010051b1452..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/anyio/_core/_tasks.py +++ /dev/null @@ -1,180 +0,0 @@ -from __future__ import annotations - -import math -from types import TracebackType -from warnings import warn - -from ..abc._tasks import TaskGroup, TaskStatus -from ._compat import ( - DeprecatedAsyncContextManager, - DeprecatedAwaitable, - DeprecatedAwaitableFloat, -) -from ._eventloop import get_asynclib - - -class _IgnoredTaskStatus(TaskStatus[object]): - def started(self, value: object = None) -> None: - pass - - -TASK_STATUS_IGNORED = _IgnoredTaskStatus() - - -class CancelScope(DeprecatedAsyncContextManager["CancelScope"]): - """ - Wraps a unit of work that can be made separately cancellable. 
- - :param deadline: The time (clock value) when this scope is cancelled automatically - :param shield: ``True`` to shield the cancel scope from external cancellation - """ - - def __new__( - cls, *, deadline: float = math.inf, shield: bool = False - ) -> CancelScope: - return get_asynclib().CancelScope(shield=shield, deadline=deadline) - - def cancel(self) -> DeprecatedAwaitable: - """Cancel this scope immediately.""" - raise NotImplementedError - - @property - def deadline(self) -> float: - """ - The time (clock value) when this scope is cancelled automatically. - - Will be ``float('inf')`` if no timeout has been set. - - """ - raise NotImplementedError - - @deadline.setter - def deadline(self, value: float) -> None: - raise NotImplementedError - - @property - def cancel_called(self) -> bool: - """``True`` if :meth:`cancel` has been called.""" - raise NotImplementedError - - @property - def shield(self) -> bool: - """ - ``True`` if this scope is shielded from external cancellation. - - While a scope is shielded, it will not receive cancellations from outside. - - """ - raise NotImplementedError - - @shield.setter - def shield(self, value: bool) -> None: - raise NotImplementedError - - def __enter__(self) -> CancelScope: - raise NotImplementedError - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - raise NotImplementedError - - -def open_cancel_scope(*, shield: bool = False) -> CancelScope: - """ - Open a cancel scope. - - :param shield: ``True`` to shield the cancel scope from external cancellation - :return: a cancel scope - - .. deprecated:: 3.0 - Use :class:`~CancelScope` directly. - - """ - warn( - "open_cancel_scope() is deprecated -- use CancelScope() directly", - DeprecationWarning, - ) - return get_asynclib().CancelScope(shield=shield) - - -class FailAfterContextManager(DeprecatedAsyncContextManager[CancelScope]): - def __init__(self, cancel_scope: CancelScope): - self._cancel_scope = cancel_scope - - def __enter__(self) -> CancelScope: - return self._cancel_scope.__enter__() - - def __exit__( - self, - exc_type: type[BaseException] | None, - exc_val: BaseException | None, - exc_tb: TracebackType | None, - ) -> bool | None: - retval = self._cancel_scope.__exit__(exc_type, exc_val, exc_tb) - if self._cancel_scope.cancel_called: - raise TimeoutError - - return retval - - -def fail_after(delay: float | None, shield: bool = False) -> FailAfterContextManager: - """ - Create a context manager which raises a :class:`TimeoutError` if does not finish in time. - - :param delay: maximum allowed time (in seconds) before raising the exception, or ``None`` to - disable the timeout - :param shield: ``True`` to shield the cancel scope from external cancellation - :return: a context manager that yields a cancel scope - :rtype: :class:`~typing.ContextManager`\\[:class:`~anyio.CancelScope`\\] - - """ - deadline = ( - (get_asynclib().current_time() + delay) if delay is not None else math.inf - ) - cancel_scope = get_asynclib().CancelScope(deadline=deadline, shield=shield) - return FailAfterContextManager(cancel_scope) - - -def move_on_after(delay: float | None, shield: bool = False) -> CancelScope: - """ - Create a cancel scope with a deadline that expires after the given delay. 
- - :param delay: maximum allowed time (in seconds) before exiting the context block, or ``None`` - to disable the timeout - :param shield: ``True`` to shield the cancel scope from external cancellation - :return: a cancel scope - - """ - deadline = ( - (get_asynclib().current_time() + delay) if delay is not None else math.inf - ) - return get_asynclib().CancelScope(deadline=deadline, shield=shield) - - -def current_effective_deadline() -> DeprecatedAwaitableFloat: - """ - Return the nearest deadline among all the cancel scopes effective for the current task. - - :return: a clock value from the event loop's internal clock (or ``float('inf')`` if - there is no deadline in effect, or ``float('-inf')`` if the current scope has - been cancelled) - :rtype: float - - """ - return DeprecatedAwaitableFloat( - get_asynclib().current_effective_deadline(), current_effective_deadline - ) - - -def create_task_group() -> TaskGroup: - """ - Create a task group. - - :return: a task group - - """ - return get_asynclib().TaskGroup() diff --git a/spaces/colakin/video-generater/public/ffmpeg/fftools/ffmpeg_filter.c b/spaces/colakin/video-generater/public/ffmpeg/fftools/ffmpeg_filter.c deleted file mode 100644 index 161ea9c866c04ec0e78e4b0343f66a3c8b6efebf..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/fftools/ffmpeg_filter.c +++ /dev/null @@ -1,1550 +0,0 @@ -/* - * ffmpeg filter configuration - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -#include - -#include "ffmpeg.h" - -#include "libavfilter/avfilter.h" -#include "libavfilter/buffersink.h" -#include "libavfilter/buffersrc.h" - -#include "libavutil/avassert.h" -#include "libavutil/avstring.h" -#include "libavutil/bprint.h" -#include "libavutil/channel_layout.h" -#include "libavutil/display.h" -#include "libavutil/opt.h" -#include "libavutil/pixdesc.h" -#include "libavutil/pixfmt.h" -#include "libavutil/imgutils.h" -#include "libavutil/samplefmt.h" -#include "libavutil/timestamp.h" - -// FIXME: YUV420P etc. are actually supported with full color range, -// yet the latter information isn't available here. 
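For context on the anyio cancellation helpers removed in the hunk above (`fail_after`, `move_on_after`, `CancelScope`), here is a minimal usage sketch against the public `anyio` API; the delays and print statements are illustrative only, not taken from the deleted sources:

```python
import anyio

async def main() -> None:
    # fail_after() raises TimeoutError if the block does not finish in time.
    try:
        with anyio.fail_after(0.5):
            await anyio.sleep(10)
    except TimeoutError:
        print("fail_after: timed out")

    # move_on_after() just cancels the block and records it on the scope.
    with anyio.move_on_after(0.5) as scope:
        await anyio.sleep(10)
    if scope.cancel_called:
        print("move_on_after: deadline expired, moving on")

anyio.run(main)
```

Both helpers wrap the `CancelScope` deadline machinery shown in the deleted `_tasks.py`; the difference is only whether expiry surfaces as a `TimeoutError` or as a silently cancelled scope.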
-static const enum AVPixelFormat *get_compliance_normal_pix_fmts(const AVCodec *codec, const enum AVPixelFormat default_formats[]) -{ - static const enum AVPixelFormat mjpeg_formats[] = - { AV_PIX_FMT_YUVJ420P, AV_PIX_FMT_YUVJ422P, AV_PIX_FMT_YUVJ444P, - AV_PIX_FMT_NONE }; - - if (!strcmp(codec->name, "mjpeg")) { - return mjpeg_formats; - } else { - return default_formats; - } -} - -static enum AVPixelFormat -choose_pixel_fmt(const AVCodec *codec, enum AVPixelFormat target, - int strict_std_compliance) -{ - if (codec && codec->pix_fmts) { - const enum AVPixelFormat *p = codec->pix_fmts; - const AVPixFmtDescriptor *desc = av_pix_fmt_desc_get(target); - //FIXME: This should check for AV_PIX_FMT_FLAG_ALPHA after PAL8 pixel format without alpha is implemented - int has_alpha = desc ? desc->nb_components % 2 == 0 : 0; - enum AVPixelFormat best= AV_PIX_FMT_NONE; - - if (strict_std_compliance > FF_COMPLIANCE_UNOFFICIAL) { - p = get_compliance_normal_pix_fmts(codec, p); - } - for (; *p != AV_PIX_FMT_NONE; p++) { - best = av_find_best_pix_fmt_of_2(best, *p, target, has_alpha, NULL); - if (*p == target) - break; - } - if (*p == AV_PIX_FMT_NONE) { - if (target != AV_PIX_FMT_NONE) - av_log(NULL, AV_LOG_WARNING, - "Incompatible pixel format '%s' for codec '%s', auto-selecting format '%s'\n", - av_get_pix_fmt_name(target), - codec->name, - av_get_pix_fmt_name(best)); - return best; - } - } - return target; -} - -/* May return NULL (no pixel format found), a static string or a string - * backed by the bprint. Nothing has been written to the AVBPrint in case - * NULL is returned. The AVBPrint provided should be clean. */ -static const char *choose_pix_fmts(OutputFilter *ofilter, AVBPrint *bprint) -{ - OutputStream *ost = ofilter->ost; - AVCodecContext *enc = ost->enc_ctx; - const AVDictionaryEntry *strict_dict = av_dict_get(ost->encoder_opts, "strict", NULL, 0); - if (strict_dict) - // used by choose_pixel_fmt() and below - av_opt_set(ost->enc_ctx, "strict", strict_dict->value, 0); - - if (ost->keep_pix_fmt) { - avfilter_graph_set_auto_convert(ofilter->graph->graph, - AVFILTER_AUTO_CONVERT_NONE); - if (ost->enc_ctx->pix_fmt == AV_PIX_FMT_NONE) - return NULL; - return av_get_pix_fmt_name(ost->enc_ctx->pix_fmt); - } - if (ost->enc_ctx->pix_fmt != AV_PIX_FMT_NONE) { - return av_get_pix_fmt_name(choose_pixel_fmt(enc->codec, enc->pix_fmt, - ost->enc_ctx->strict_std_compliance)); - } else if (enc->codec->pix_fmts) { - const enum AVPixelFormat *p; - - p = enc->codec->pix_fmts; - if (ost->enc_ctx->strict_std_compliance > FF_COMPLIANCE_UNOFFICIAL) { - p = get_compliance_normal_pix_fmts(enc->codec, p); - } - - for (; *p != AV_PIX_FMT_NONE; p++) { - const char *name = av_get_pix_fmt_name(*p); - av_bprintf(bprint, "%s%c", name, p[1] == AV_PIX_FMT_NONE ? '\0' : '|'); - } - if (!av_bprint_is_complete(bprint)) - report_and_exit(AVERROR(ENOMEM)); - return bprint->str; - } else - return NULL; -} - -/* Define a function for appending a list of allowed formats - * to an AVBPrint. If nonempty, the list will have a header. 
*/ -#define DEF_CHOOSE_FORMAT(name, type, var, supported_list, none, printf_format, get_name) \ -static void choose_ ## name (OutputFilter *ofilter, AVBPrint *bprint) \ -{ \ - if (ofilter->var == none && !ofilter->supported_list) \ - return; \ - av_bprintf(bprint, #name "="); \ - if (ofilter->var != none) { \ - av_bprintf(bprint, printf_format, get_name(ofilter->var)); \ - } else { \ - const type *p; \ - \ - for (p = ofilter->supported_list; *p != none; p++) { \ - av_bprintf(bprint, printf_format "|", get_name(*p)); \ - } \ - if (bprint->len > 0) \ - bprint->str[--bprint->len] = '\0'; \ - } \ - av_bprint_chars(bprint, ':', 1); \ -} - -//DEF_CHOOSE_FORMAT(pix_fmts, enum AVPixelFormat, format, formats, AV_PIX_FMT_NONE, -// GET_PIX_FMT_NAME) - -DEF_CHOOSE_FORMAT(sample_fmts, enum AVSampleFormat, format, formats, - AV_SAMPLE_FMT_NONE, "%s", av_get_sample_fmt_name) - -DEF_CHOOSE_FORMAT(sample_rates, int, sample_rate, sample_rates, 0, - "%d", ) - -static void choose_channel_layouts(OutputFilter *ofilter, AVBPrint *bprint) -{ - if (av_channel_layout_check(&ofilter->ch_layout)) { - av_bprintf(bprint, "channel_layouts="); - av_channel_layout_describe_bprint(&ofilter->ch_layout, bprint); - } else if (ofilter->ch_layouts) { - const AVChannelLayout *p; - - av_bprintf(bprint, "channel_layouts="); - for (p = ofilter->ch_layouts; p->nb_channels; p++) { - av_channel_layout_describe_bprint(p, bprint); - av_bprintf(bprint, "|"); - } - if (bprint->len > 0) - bprint->str[--bprint->len] = '\0'; - } else - return; - av_bprint_chars(bprint, ':', 1); -} - -static OutputFilter *ofilter_alloc(FilterGraph *fg) -{ - OutputFilter *ofilter; - - ofilter = ALLOC_ARRAY_ELEM(fg->outputs, fg->nb_outputs); - ofilter->graph = fg; - ofilter->format = -1; - ofilter->last_pts = AV_NOPTS_VALUE; - - return ofilter; -} - -void fg_free(FilterGraph **pfg) -{ - FilterGraph *fg = *pfg; - - if (!fg) - return; - - avfilter_graph_free(&fg->graph); - for (int j = 0; j < fg->nb_inputs; j++) { - InputFilter *ifilter = fg->inputs[j]; - struct InputStream *ist = ifilter->ist; - - if (ifilter->frame_queue) { - AVFrame *frame; - while (av_fifo_read(ifilter->frame_queue, &frame, 1) >= 0) - av_frame_free(&frame); - av_fifo_freep2(&ifilter->frame_queue); - } - av_freep(&ifilter->displaymatrix); - if (ist->sub2video.sub_queue) { - AVSubtitle sub; - while (av_fifo_read(ist->sub2video.sub_queue, &sub, 1) >= 0) - avsubtitle_free(&sub); - av_fifo_freep2(&ist->sub2video.sub_queue); - } - av_buffer_unref(&ifilter->hw_frames_ctx); - av_freep(&ifilter->name); - av_freep(&fg->inputs[j]); - } - av_freep(&fg->inputs); - for (int j = 0; j < fg->nb_outputs; j++) { - OutputFilter *ofilter = fg->outputs[j]; - - avfilter_inout_free(&ofilter->out_tmp); - av_freep(&ofilter->name); - av_channel_layout_uninit(&ofilter->ch_layout); - av_freep(&fg->outputs[j]); - } - av_freep(&fg->outputs); - av_freep(&fg->graph_desc); - - av_freep(pfg); -} - -FilterGraph *fg_create(char *graph_desc) -{ - FilterGraph *fg; - - fg = ALLOC_ARRAY_ELEM(filtergraphs, nb_filtergraphs); - fg->index = nb_filtergraphs - 1; - fg->graph_desc = graph_desc; - - return fg; -} - -int init_simple_filtergraph(InputStream *ist, OutputStream *ost) -{ - FilterGraph *fg; - OutputFilter *ofilter; - InputFilter *ifilter; - - fg = fg_create(NULL); - if (!fg) - report_and_exit(AVERROR(ENOMEM)); - - ofilter = ofilter_alloc(fg); - ofilter->ost = ost; - - ost->filter = ofilter; - - ifilter = ALLOC_ARRAY_ELEM(fg->inputs, fg->nb_inputs); - ifilter->ist = ist; - ifilter->graph = fg; - ifilter->format = -1; - - 
ifilter->frame_queue = av_fifo_alloc2(8, sizeof(AVFrame*), AV_FIFO_FLAG_AUTO_GROW); - if (!ifilter->frame_queue) - report_and_exit(AVERROR(ENOMEM)); - - ist_filter_add(ist, ifilter, 1); - - return 0; -} - -static char *describe_filter_link(FilterGraph *fg, AVFilterInOut *inout, int in) -{ - AVFilterContext *ctx = inout->filter_ctx; - AVFilterPad *pads = in ? ctx->input_pads : ctx->output_pads; - int nb_pads = in ? ctx->nb_inputs : ctx->nb_outputs; - char *res; - - if (nb_pads > 1) - res = av_strdup(ctx->filter->name); - else - res = av_asprintf("%s:%s", ctx->filter->name, - avfilter_pad_get_name(pads, inout->pad_idx)); - if (!res) - report_and_exit(AVERROR(ENOMEM)); - return res; -} - -static void init_input_filter(FilterGraph *fg, AVFilterInOut *in) -{ - InputStream *ist = NULL; - enum AVMediaType type = avfilter_pad_get_type(in->filter_ctx->input_pads, in->pad_idx); - InputFilter *ifilter; - int i; - - // TODO: support other filter types - if (type != AVMEDIA_TYPE_VIDEO && type != AVMEDIA_TYPE_AUDIO) { - av_log(NULL, AV_LOG_FATAL, "Only video and audio filters supported " - "currently.\n"); - exit_program(1); - } - - if (in->name) { - AVFormatContext *s; - AVStream *st = NULL; - char *p; - int file_idx = strtol(in->name, &p, 0); - - if (file_idx < 0 || file_idx >= nb_input_files) { - av_log(NULL, AV_LOG_FATAL, "Invalid file index %d in filtergraph description %s.\n", - file_idx, fg->graph_desc); - exit_program(1); - } - s = input_files[file_idx]->ctx; - - for (i = 0; i < s->nb_streams; i++) { - enum AVMediaType stream_type = s->streams[i]->codecpar->codec_type; - if (stream_type != type && - !(stream_type == AVMEDIA_TYPE_SUBTITLE && - type == AVMEDIA_TYPE_VIDEO /* sub2video hack */)) - continue; - if (check_stream_specifier(s, s->streams[i], *p == ':' ? 
p + 1 : p) == 1) { - st = s->streams[i]; - break; - } - } - if (!st) { - av_log(NULL, AV_LOG_FATAL, "Stream specifier '%s' in filtergraph description %s " - "matches no streams.\n", p, fg->graph_desc); - exit_program(1); - } - ist = input_files[file_idx]->streams[st->index]; - if (ist->user_set_discard == AVDISCARD_ALL) { - av_log(NULL, AV_LOG_FATAL, "Stream specifier '%s' in filtergraph description %s " - "matches a disabled input stream.\n", p, fg->graph_desc); - exit_program(1); - } - } else { - /* find the first unused stream of corresponding type */ - for (ist = ist_iter(NULL); ist; ist = ist_iter(ist)) { - if (ist->user_set_discard == AVDISCARD_ALL) - continue; - if (ist->dec_ctx->codec_type == type && ist->discard) - break; - } - if (!ist) { - av_log(NULL, AV_LOG_FATAL, "Cannot find a matching stream for " - "unlabeled input pad %d on filter %s\n", in->pad_idx, - in->filter_ctx->name); - exit_program(1); - } - } - av_assert0(ist); - - ifilter = ALLOC_ARRAY_ELEM(fg->inputs, fg->nb_inputs); - ifilter->ist = ist; - ifilter->graph = fg; - ifilter->format = -1; - ifilter->type = ist->st->codecpar->codec_type; - ifilter->name = describe_filter_link(fg, in, 1); - - ifilter->frame_queue = av_fifo_alloc2(8, sizeof(AVFrame*), AV_FIFO_FLAG_AUTO_GROW); - if (!ifilter->frame_queue) - report_and_exit(AVERROR(ENOMEM)); - - ist_filter_add(ist, ifilter, 0); -} - -static int read_binary(const char *path, uint8_t **data, int *len) -{ - AVIOContext *io = NULL; - int64_t fsize; - int ret; - - *data = NULL; - *len = 0; - - ret = avio_open2(&io, path, AVIO_FLAG_READ, &int_cb, NULL); - if (ret < 0) { - av_log(NULL, AV_LOG_ERROR, "Cannot open file '%s': %s\n", - path, av_err2str(ret)); - return ret; - } - - fsize = avio_size(io); - if (fsize < 0 || fsize > INT_MAX) { - av_log(NULL, AV_LOG_ERROR, "Cannot obtain size of file %s\n", path); - ret = AVERROR(EIO); - goto fail; - } - - *data = av_malloc(fsize); - if (!*data) { - ret = AVERROR(ENOMEM); - goto fail; - } - - ret = avio_read(io, *data, fsize); - if (ret != fsize) { - av_log(NULL, AV_LOG_ERROR, "Error reading file %s\n", path); - ret = ret < 0 ? 
ret : AVERROR(EIO); - goto fail; - } - - *len = fsize; - - ret = 0; -fail: - avio_close(io); - if (ret < 0) { - av_freep(data); - *len = 0; - } - return ret; -} - -static int filter_opt_apply(AVFilterContext *f, const char *key, const char *val) -{ - const AVOption *o = NULL; - int ret; - - ret = av_opt_set(f, key, val, AV_OPT_SEARCH_CHILDREN); - if (ret >= 0) - return 0; - - if (ret == AVERROR_OPTION_NOT_FOUND && key[0] == '/') - o = av_opt_find(f, key + 1, NULL, 0, AV_OPT_SEARCH_CHILDREN); - if (!o) - goto err_apply; - - // key is a valid option name prefixed with '/' - // interpret value as a path from which to load the actual option value - key++; - - if (o->type == AV_OPT_TYPE_BINARY) { - uint8_t *data; - int len; - - ret = read_binary(val, &data, &len); - if (ret < 0) - goto err_load; - - ret = av_opt_set_bin(f, key, data, len, AV_OPT_SEARCH_CHILDREN); - av_freep(&data); - } else { - char *data = file_read(val); - if (!data) { - ret = AVERROR(EIO); - goto err_load; - } - - ret = av_opt_set(f, key, data, AV_OPT_SEARCH_CHILDREN); - av_freep(&data); - } - if (ret < 0) - goto err_apply; - - return 0; - -err_apply: - av_log(NULL, AV_LOG_ERROR, - "Error applying option '%s' to filter '%s': %s\n", - key, f->filter->name, av_err2str(ret)); - return ret; -err_load: - av_log(NULL, AV_LOG_ERROR, - "Error loading value for option '%s' from file '%s'\n", - key, val); - return ret; -} - -static int graph_opts_apply(AVFilterGraphSegment *seg) -{ - for (size_t i = 0; i < seg->nb_chains; i++) { - AVFilterChain *ch = seg->chains[i]; - - for (size_t j = 0; j < ch->nb_filters; j++) { - AVFilterParams *p = ch->filters[j]; - const AVDictionaryEntry *e = NULL; - - av_assert0(p->filter); - - while ((e = av_dict_iterate(p->opts, e))) { - int ret = filter_opt_apply(p->filter, e->key, e->value); - if (ret < 0) - return ret; - } - - av_dict_free(&p->opts); - } - } - - return 0; -} - -static int graph_parse(AVFilterGraph *graph, const char *desc, - AVFilterInOut **inputs, AVFilterInOut **outputs, - AVBufferRef *hw_device) -{ - AVFilterGraphSegment *seg; - int ret; - - *inputs = NULL; - *outputs = NULL; - - ret = avfilter_graph_segment_parse(graph, desc, 0, &seg); - if (ret < 0) - return ret; - - ret = avfilter_graph_segment_create_filters(seg, 0); - if (ret < 0) - goto fail; - - if (hw_device) { - for (int i = 0; i < graph->nb_filters; i++) { - AVFilterContext *f = graph->filters[i]; - - if (!(f->filter->flags & AVFILTER_FLAG_HWDEVICE)) - continue; - f->hw_device_ctx = av_buffer_ref(hw_device); - if (!f->hw_device_ctx) { - ret = AVERROR(ENOMEM); - goto fail; - } - } - } - - ret = graph_opts_apply(seg); - if (ret < 0) - goto fail; - - ret = avfilter_graph_segment_apply(seg, 0, inputs, outputs); - -fail: - avfilter_graph_segment_free(&seg); - return ret; -} - -int init_complex_filtergraph(FilterGraph *fg) -{ - AVFilterInOut *inputs, *outputs, *cur; - AVFilterGraph *graph; - int ret = 0; - - /* this graph is only used for determining the kinds of inputs - * and outputs we have, and is discarded on exit from this function */ - graph = avfilter_graph_alloc(); - if (!graph) - return AVERROR(ENOMEM); - graph->nb_threads = 1; - - ret = graph_parse(graph, fg->graph_desc, &inputs, &outputs, NULL); - if (ret < 0) - goto fail; - - for (cur = inputs; cur; cur = cur->next) - init_input_filter(fg, cur); - - for (cur = outputs; cur;) { - OutputFilter *const ofilter = ofilter_alloc(fg); - - ofilter->out_tmp = cur; - ofilter->type = avfilter_pad_get_type(cur->filter_ctx->output_pads, - cur->pad_idx); - ofilter->name = 
describe_filter_link(fg, cur, 0); - cur = cur->next; - ofilter->out_tmp->next = NULL; - } - -fail: - avfilter_inout_free(&inputs); - avfilter_graph_free(&graph); - return ret; -} - -static int insert_trim(int64_t start_time, int64_t duration, - AVFilterContext **last_filter, int *pad_idx, - const char *filter_name) -{ - AVFilterGraph *graph = (*last_filter)->graph; - AVFilterContext *ctx; - const AVFilter *trim; - enum AVMediaType type = avfilter_pad_get_type((*last_filter)->output_pads, *pad_idx); - const char *name = (type == AVMEDIA_TYPE_VIDEO) ? "trim" : "atrim"; - int ret = 0; - - if (duration == INT64_MAX && start_time == AV_NOPTS_VALUE) - return 0; - - trim = avfilter_get_by_name(name); - if (!trim) { - av_log(NULL, AV_LOG_ERROR, "%s filter not present, cannot limit " - "recording time.\n", name); - return AVERROR_FILTER_NOT_FOUND; - } - - ctx = avfilter_graph_alloc_filter(graph, trim, filter_name); - if (!ctx) - return AVERROR(ENOMEM); - - if (duration != INT64_MAX) { - ret = av_opt_set_int(ctx, "durationi", duration, - AV_OPT_SEARCH_CHILDREN); - } - if (ret >= 0 && start_time != AV_NOPTS_VALUE) { - ret = av_opt_set_int(ctx, "starti", start_time, - AV_OPT_SEARCH_CHILDREN); - } - if (ret < 0) { - av_log(ctx, AV_LOG_ERROR, "Error configuring the %s filter", name); - return ret; - } - - ret = avfilter_init_str(ctx, NULL); - if (ret < 0) - return ret; - - ret = avfilter_link(*last_filter, *pad_idx, ctx, 0); - if (ret < 0) - return ret; - - *last_filter = ctx; - *pad_idx = 0; - return 0; -} - -static int insert_filter(AVFilterContext **last_filter, int *pad_idx, - const char *filter_name, const char *args) -{ - AVFilterGraph *graph = (*last_filter)->graph; - AVFilterContext *ctx; - int ret; - - ret = avfilter_graph_create_filter(&ctx, - avfilter_get_by_name(filter_name), - filter_name, args, NULL, graph); - if (ret < 0) - return ret; - - ret = avfilter_link(*last_filter, *pad_idx, ctx, 0); - if (ret < 0) - return ret; - - *last_filter = ctx; - *pad_idx = 0; - return 0; -} - -static int configure_output_video_filter(FilterGraph *fg, OutputFilter *ofilter, AVFilterInOut *out) -{ - OutputStream *ost = ofilter->ost; - OutputFile *of = output_files[ost->file_index]; - AVFilterContext *last_filter = out->filter_ctx; - AVBPrint bprint; - int pad_idx = out->pad_idx; - int ret; - const char *pix_fmts; - char name[255]; - - snprintf(name, sizeof(name), "out_%d_%d", ost->file_index, ost->index); - ret = avfilter_graph_create_filter(&ofilter->filter, - avfilter_get_by_name("buffersink"), - name, NULL, NULL, fg->graph); - - if (ret < 0) - return ret; - - if ((ofilter->width || ofilter->height) && ofilter->ost->autoscale) { - char args[255]; - AVFilterContext *filter; - const AVDictionaryEntry *e = NULL; - - snprintf(args, sizeof(args), "%d:%d", - ofilter->width, ofilter->height); - - while ((e = av_dict_iterate(ost->sws_dict, e))) { - av_strlcatf(args, sizeof(args), ":%s=%s", e->key, e->value); - } - - snprintf(name, sizeof(name), "scaler_out_%d_%d", - ost->file_index, ost->index); - if ((ret = avfilter_graph_create_filter(&filter, avfilter_get_by_name("scale"), - name, args, NULL, fg->graph)) < 0) - return ret; - if ((ret = avfilter_link(last_filter, pad_idx, filter, 0)) < 0) - return ret; - - last_filter = filter; - pad_idx = 0; - } - - av_bprint_init(&bprint, 0, AV_BPRINT_SIZE_UNLIMITED); - if ((pix_fmts = choose_pix_fmts(ofilter, &bprint))) { - AVFilterContext *filter; - - ret = avfilter_graph_create_filter(&filter, - avfilter_get_by_name("format"), - "format", pix_fmts, NULL, fg->graph); - 
av_bprint_finalize(&bprint, NULL); - if (ret < 0) - return ret; - if ((ret = avfilter_link(last_filter, pad_idx, filter, 0)) < 0) - return ret; - - last_filter = filter; - pad_idx = 0; - } - - if (ost->frame_rate.num && 0) { - AVFilterContext *fps; - char args[255]; - - snprintf(args, sizeof(args), "fps=%d/%d", ost->frame_rate.num, - ost->frame_rate.den); - snprintf(name, sizeof(name), "fps_out_%d_%d", - ost->file_index, ost->index); - ret = avfilter_graph_create_filter(&fps, avfilter_get_by_name("fps"), - name, args, NULL, fg->graph); - if (ret < 0) - return ret; - - ret = avfilter_link(last_filter, pad_idx, fps, 0); - if (ret < 0) - return ret; - last_filter = fps; - pad_idx = 0; - } - - snprintf(name, sizeof(name), "trim_out_%d_%d", - ost->file_index, ost->index); - ret = insert_trim(of->start_time, of->recording_time, - &last_filter, &pad_idx, name); - if (ret < 0) - return ret; - - - if ((ret = avfilter_link(last_filter, pad_idx, ofilter->filter, 0)) < 0) - return ret; - - return 0; -} - -static int configure_output_audio_filter(FilterGraph *fg, OutputFilter *ofilter, AVFilterInOut *out) -{ - OutputStream *ost = ofilter->ost; - OutputFile *of = output_files[ost->file_index]; - AVFilterContext *last_filter = out->filter_ctx; - int pad_idx = out->pad_idx; - AVBPrint args; - char name[255]; - int ret; - - snprintf(name, sizeof(name), "out_%d_%d", ost->file_index, ost->index); - ret = avfilter_graph_create_filter(&ofilter->filter, - avfilter_get_by_name("abuffersink"), - name, NULL, NULL, fg->graph); - if (ret < 0) - return ret; - if ((ret = av_opt_set_int(ofilter->filter, "all_channel_counts", 1, AV_OPT_SEARCH_CHILDREN)) < 0) - return ret; - -#define AUTO_INSERT_FILTER(opt_name, filter_name, arg) do { \ - AVFilterContext *filt_ctx; \ - \ - av_log(NULL, AV_LOG_INFO, opt_name " is forwarded to lavfi " \ - "similarly to -af " filter_name "=%s.\n", arg); \ - \ - ret = avfilter_graph_create_filter(&filt_ctx, \ - avfilter_get_by_name(filter_name), \ - filter_name, arg, NULL, fg->graph); \ - if (ret < 0) \ - goto fail; \ - \ - ret = avfilter_link(last_filter, pad_idx, filt_ctx, 0); \ - if (ret < 0) \ - goto fail; \ - \ - last_filter = filt_ctx; \ - pad_idx = 0; \ -} while (0) - av_bprint_init(&args, 0, AV_BPRINT_SIZE_UNLIMITED); -#if FFMPEG_OPT_MAP_CHANNEL - if (ost->audio_channels_mapped) { - AVChannelLayout mapped_layout = { 0 }; - int i; - av_channel_layout_default(&mapped_layout, ost->audio_channels_mapped); - av_channel_layout_describe_bprint(&mapped_layout, &args); - for (i = 0; i < ost->audio_channels_mapped; i++) - if (ost->audio_channels_map[i] != -1) - av_bprintf(&args, "|c%d=c%d", i, ost->audio_channels_map[i]); - - AUTO_INSERT_FILTER("-map_channel", "pan", args.str); - av_bprint_clear(&args); - } -#endif - - choose_sample_fmts(ofilter, &args); - choose_sample_rates(ofilter, &args); - choose_channel_layouts(ofilter, &args); - if (!av_bprint_is_complete(&args)) { - ret = AVERROR(ENOMEM); - goto fail; - } - if (args.len) { - AVFilterContext *format; - - snprintf(name, sizeof(name), "format_out_%d_%d", - ost->file_index, ost->index); - ret = avfilter_graph_create_filter(&format, - avfilter_get_by_name("aformat"), - name, args.str, NULL, fg->graph); - if (ret < 0) - goto fail; - - ret = avfilter_link(last_filter, pad_idx, format, 0); - if (ret < 0) - goto fail; - - last_filter = format; - pad_idx = 0; - } - - if (ost->apad && of->shortest) { - int i; - - for (i = 0; i < of->nb_streams; i++) - if (of->streams[i]->st->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) - break; - - if (i < 
of->nb_streams) { - AUTO_INSERT_FILTER("-apad", "apad", ost->apad); - } - } - - snprintf(name, sizeof(name), "trim for output stream %d:%d", - ost->file_index, ost->index); - ret = insert_trim(of->start_time, of->recording_time, - &last_filter, &pad_idx, name); - if (ret < 0) - goto fail; - - if ((ret = avfilter_link(last_filter, pad_idx, ofilter->filter, 0)) < 0) - goto fail; -fail: - av_bprint_finalize(&args, NULL); - - return ret; -} - -static int configure_output_filter(FilterGraph *fg, OutputFilter *ofilter, - AVFilterInOut *out) -{ - if (!ofilter->ost) { - av_log(NULL, AV_LOG_FATAL, "Filter %s has an unconnected output\n", ofilter->name); - exit_program(1); - } - - switch (avfilter_pad_get_type(out->filter_ctx->output_pads, out->pad_idx)) { - case AVMEDIA_TYPE_VIDEO: return configure_output_video_filter(fg, ofilter, out); - case AVMEDIA_TYPE_AUDIO: return configure_output_audio_filter(fg, ofilter, out); - default: av_assert0(0); return 0; - } -} - -void check_filter_outputs(void) -{ - int i; - for (i = 0; i < nb_filtergraphs; i++) { - int n; - for (n = 0; n < filtergraphs[i]->nb_outputs; n++) { - OutputFilter *output = filtergraphs[i]->outputs[n]; - if (!output->ost) { - av_log(NULL, AV_LOG_FATAL, "Filter %s has an unconnected output\n", output->name); - exit_program(1); - } - } - } -} - -static int sub2video_prepare(InputStream *ist, InputFilter *ifilter) -{ - AVFormatContext *avf = input_files[ist->file_index]->ctx; - int i, w, h; - - /* Compute the size of the canvas for the subtitles stream. - If the subtitles codecpar has set a size, use it. Otherwise use the - maximum dimensions of the video streams in the same file. */ - w = ifilter->width; - h = ifilter->height; - if (!(w && h)) { - for (i = 0; i < avf->nb_streams; i++) { - if (avf->streams[i]->codecpar->codec_type == AVMEDIA_TYPE_VIDEO) { - w = FFMAX(w, avf->streams[i]->codecpar->width); - h = FFMAX(h, avf->streams[i]->codecpar->height); - } - } - if (!(w && h)) { - w = FFMAX(w, 720); - h = FFMAX(h, 576); - } - av_log(avf, AV_LOG_INFO, "sub2video: using %dx%d canvas\n", w, h); - } - ist->sub2video.w = ifilter->width = w; - ist->sub2video.h = ifilter->height = h; - - ifilter->width = ist->dec_ctx->width ? ist->dec_ctx->width : ist->sub2video.w; - ifilter->height = ist->dec_ctx->height ? ist->dec_ctx->height : ist->sub2video.h; - - /* rectangles are AV_PIX_FMT_PAL8, but we have no guarantee that the - palettes for all rectangles are identical or compatible */ - ifilter->format = AV_PIX_FMT_RGB32; - - ist->sub2video.frame = av_frame_alloc(); - if (!ist->sub2video.frame) - return AVERROR(ENOMEM); - ist->sub2video.last_pts = INT64_MIN; - ist->sub2video.end_pts = INT64_MIN; - - /* sub2video structure has been (re-)initialized. - Mark it as such so that the system will be - initialized with the first received heartbeat. */ - ist->sub2video.initialize = 1; - - return 0; -} - -static int configure_input_video_filter(FilterGraph *fg, InputFilter *ifilter, - AVFilterInOut *in) -{ - AVFilterContext *last_filter; - const AVFilter *buffer_filt = avfilter_get_by_name("buffer"); - const AVPixFmtDescriptor *desc; - InputStream *ist = ifilter->ist; - InputFile *f = input_files[ist->file_index]; - AVRational tb = ist->framerate.num ? 
av_inv_q(ist->framerate) : - ist->st->time_base; - AVRational fr = ist->framerate; - AVRational sar; - AVBPrint args; - char name[255]; - int ret, pad_idx = 0; - int64_t tsoffset = 0; - AVBufferSrcParameters *par = av_buffersrc_parameters_alloc(); - - if (!par) - return AVERROR(ENOMEM); - memset(par, 0, sizeof(*par)); - par->format = AV_PIX_FMT_NONE; - - if (ist->dec_ctx->codec_type == AVMEDIA_TYPE_AUDIO) { - av_log(NULL, AV_LOG_ERROR, "Cannot connect video filter to audio input\n"); - ret = AVERROR(EINVAL); - goto fail; - } - - if (!fr.num) - fr = ist->framerate_guessed; - - if (ist->dec_ctx->codec_type == AVMEDIA_TYPE_SUBTITLE) { - ret = sub2video_prepare(ist, ifilter); - if (ret < 0) - goto fail; - } - - sar = ifilter->sample_aspect_ratio; - if(!sar.den) - sar = (AVRational){0,1}; - av_bprint_init(&args, 0, AV_BPRINT_SIZE_AUTOMATIC); - av_bprintf(&args, - "video_size=%dx%d:pix_fmt=%d:time_base=%d/%d:" - "pixel_aspect=%d/%d", - ifilter->width, ifilter->height, ifilter->format, - tb.num, tb.den, sar.num, sar.den); - if (fr.num && fr.den) - av_bprintf(&args, ":frame_rate=%d/%d", fr.num, fr.den); - snprintf(name, sizeof(name), "graph %d input from stream %d:%d", fg->index, - ist->file_index, ist->st->index); - - - if ((ret = avfilter_graph_create_filter(&ifilter->filter, buffer_filt, name, - args.str, NULL, fg->graph)) < 0) - goto fail; - par->hw_frames_ctx = ifilter->hw_frames_ctx; - ret = av_buffersrc_parameters_set(ifilter->filter, par); - if (ret < 0) - goto fail; - av_freep(&par); - last_filter = ifilter->filter; - - desc = av_pix_fmt_desc_get(ifilter->format); - av_assert0(desc); - - // TODO: insert hwaccel enabled filters like transpose_vaapi into the graph - if (ist->autorotate && !(desc->flags & AV_PIX_FMT_FLAG_HWACCEL)) { - int32_t *displaymatrix = ifilter->displaymatrix; - double theta; - - if (!displaymatrix) - displaymatrix = (int32_t *)av_stream_get_side_data(ist->st, AV_PKT_DATA_DISPLAYMATRIX, NULL); - theta = get_rotation(displaymatrix); - - if (fabs(theta - 90) < 1.0) { - ret = insert_filter(&last_filter, &pad_idx, "transpose", - displaymatrix[3] > 0 ? "cclock_flip" : "clock"); - } else if (fabs(theta - 180) < 1.0) { - if (displaymatrix[0] < 0) { - ret = insert_filter(&last_filter, &pad_idx, "hflip", NULL); - if (ret < 0) - return ret; - } - if (displaymatrix[4] < 0) { - ret = insert_filter(&last_filter, &pad_idx, "vflip", NULL); - } - } else if (fabs(theta - 270) < 1.0) { - ret = insert_filter(&last_filter, &pad_idx, "transpose", - displaymatrix[3] < 0 ? "clock_flip" : "cclock"); - } else if (fabs(theta) > 1.0) { - char rotate_buf[64]; - snprintf(rotate_buf, sizeof(rotate_buf), "%f*PI/180", theta); - ret = insert_filter(&last_filter, &pad_idx, "rotate", rotate_buf); - } else if (fabs(theta) < 1.0) { - if (displaymatrix && displaymatrix[4] < 0) { - ret = insert_filter(&last_filter, &pad_idx, "vflip", NULL); - } - } - if (ret < 0) - return ret; - } - - snprintf(name, sizeof(name), "trim_in_%d_%d", - ist->file_index, ist->st->index); - if (copy_ts) { - tsoffset = f->start_time == AV_NOPTS_VALUE ? 0 : f->start_time; - if (!start_at_zero && f->ctx->start_time != AV_NOPTS_VALUE) - tsoffset += f->ctx->start_time; - } - ret = insert_trim(((f->start_time == AV_NOPTS_VALUE) || !f->accurate_seek) ? 
- AV_NOPTS_VALUE : tsoffset, f->recording_time, - &last_filter, &pad_idx, name); - if (ret < 0) - return ret; - - if ((ret = avfilter_link(last_filter, 0, in->filter_ctx, in->pad_idx)) < 0) - return ret; - return 0; -fail: - av_freep(&par); - - return ret; -} - -static int configure_input_audio_filter(FilterGraph *fg, InputFilter *ifilter, - AVFilterInOut *in) -{ - AVFilterContext *last_filter; - const AVFilter *abuffer_filt = avfilter_get_by_name("abuffer"); - InputStream *ist = ifilter->ist; - InputFile *f = input_files[ist->file_index]; - AVBPrint args; - char name[255]; - int ret, pad_idx = 0; - int64_t tsoffset = 0; - - if (ist->dec_ctx->codec_type != AVMEDIA_TYPE_AUDIO) { - av_log(NULL, AV_LOG_ERROR, "Cannot connect audio filter to non audio input\n"); - return AVERROR(EINVAL); - } - - av_bprint_init(&args, 0, AV_BPRINT_SIZE_AUTOMATIC); - av_bprintf(&args, "time_base=%d/%d:sample_rate=%d:sample_fmt=%s", - 1, ifilter->sample_rate, - ifilter->sample_rate, - av_get_sample_fmt_name(ifilter->format)); - if (av_channel_layout_check(&ifilter->ch_layout) && - ifilter->ch_layout.order != AV_CHANNEL_ORDER_UNSPEC) { - av_bprintf(&args, ":channel_layout="); - av_channel_layout_describe_bprint(&ifilter->ch_layout, &args); - } else - av_bprintf(&args, ":channels=%d", ifilter->ch_layout.nb_channels); - snprintf(name, sizeof(name), "graph_%d_in_%d_%d", fg->index, - ist->file_index, ist->st->index); - - if ((ret = avfilter_graph_create_filter(&ifilter->filter, abuffer_filt, - name, args.str, NULL, - fg->graph)) < 0) - return ret; - last_filter = ifilter->filter; - - snprintf(name, sizeof(name), "trim for input stream %d:%d", - ist->file_index, ist->st->index); - if (copy_ts) { - tsoffset = f->start_time == AV_NOPTS_VALUE ? 0 : f->start_time; - if (!start_at_zero && f->ctx->start_time != AV_NOPTS_VALUE) - tsoffset += f->ctx->start_time; - } - ret = insert_trim(((f->start_time == AV_NOPTS_VALUE) || !f->accurate_seek) ? 
- AV_NOPTS_VALUE : tsoffset, f->recording_time, - &last_filter, &pad_idx, name); - if (ret < 0) - return ret; - - if ((ret = avfilter_link(last_filter, 0, in->filter_ctx, in->pad_idx)) < 0) - return ret; - - return 0; -} - -static int configure_input_filter(FilterGraph *fg, InputFilter *ifilter, - AVFilterInOut *in) -{ - if (!ifilter->ist->dec) { - av_log(NULL, AV_LOG_ERROR, - "No decoder for stream #%d:%d, filtering impossible\n", - ifilter->ist->file_index, ifilter->ist->st->index); - return AVERROR_DECODER_NOT_FOUND; - } - switch (avfilter_pad_get_type(in->filter_ctx->input_pads, in->pad_idx)) { - case AVMEDIA_TYPE_VIDEO: return configure_input_video_filter(fg, ifilter, in); - case AVMEDIA_TYPE_AUDIO: return configure_input_audio_filter(fg, ifilter, in); - default: av_assert0(0); return 0; - } -} - -static void cleanup_filtergraph(FilterGraph *fg) -{ - int i; - for (i = 0; i < fg->nb_outputs; i++) - fg->outputs[i]->filter = (AVFilterContext *)NULL; - for (i = 0; i < fg->nb_inputs; i++) - fg->inputs[i]->filter = (AVFilterContext *)NULL; - avfilter_graph_free(&fg->graph); -} - -static int filter_is_buffersrc(const AVFilterContext *f) -{ - return f->nb_inputs == 0 && - (!strcmp(f->filter->name, "buffer") || - !strcmp(f->filter->name, "abuffer")); -} - -static int graph_is_meta(AVFilterGraph *graph) -{ - for (unsigned i = 0; i < graph->nb_filters; i++) { - const AVFilterContext *f = graph->filters[i]; - - /* in addition to filters flagged as meta, also - * disregard sinks and buffersources (but not other sources, - * since they introduce data we are not aware of) - */ - if (!((f->filter->flags & AVFILTER_FLAG_METADATA_ONLY) || - f->nb_outputs == 0 || - filter_is_buffersrc(f))) - return 0; - } - return 1; -} - -int configure_filtergraph(FilterGraph *fg) -{ - AVBufferRef *hw_device; - AVFilterInOut *inputs, *outputs, *cur; - int ret, i, simple = filtergraph_is_simple(fg); - const char *graph_desc = simple ? fg->outputs[0]->ost->avfilter : - fg->graph_desc; - - cleanup_filtergraph(fg); - if (!(fg->graph = avfilter_graph_alloc())) - return AVERROR(ENOMEM); - - if (simple) { - OutputStream *ost = fg->outputs[0]->ost; - - if (filter_nbthreads) { - ret = av_opt_set(fg->graph, "threads", filter_nbthreads, 0); - if (ret < 0) - goto fail; - } else { - const AVDictionaryEntry *e = NULL; - e = av_dict_get(ost->encoder_opts, "threads", NULL, 0); - if (e) - av_opt_set(fg->graph, "threads", e->value, 0); - } - - if (av_dict_count(ost->sws_dict)) { - ret = av_dict_get_string(ost->sws_dict, - &fg->graph->scale_sws_opts, - '=', ':'); - if (ret < 0) - goto fail; - } - - if (av_dict_count(ost->swr_opts)) { - char *args; - ret = av_dict_get_string(ost->swr_opts, &args, '=', ':'); - if (ret < 0) - goto fail; - av_opt_set(fg->graph, "aresample_swr_opts", args, 0); - av_free(args); - } - } else { - fg->graph->nb_threads = filter_complex_nbthreads; - } - - hw_device = hw_device_for_filter(); - - if ((ret = graph_parse(fg->graph, graph_desc, &inputs, &outputs, hw_device)) < 0) - goto fail; - - if (simple && (!inputs || inputs->next || !outputs || outputs->next)) { - const char *num_inputs; - const char *num_outputs; - if (!outputs) { - num_outputs = "0"; - } else if (outputs->next) { - num_outputs = ">1"; - } else { - num_outputs = "1"; - } - if (!inputs) { - num_inputs = "0"; - } else if (inputs->next) { - num_inputs = ">1"; - } else { - num_inputs = "1"; - } - av_log(NULL, AV_LOG_ERROR, "Simple filtergraph '%s' was expected " - "to have exactly 1 input and 1 output." 
- " However, it had %s input(s) and %s output(s)." - " Please adjust, or use a complex filtergraph (-filter_complex) instead.\n", - graph_desc, num_inputs, num_outputs); - ret = AVERROR(EINVAL); - goto fail; - } - - for (cur = inputs, i = 0; cur; cur = cur->next, i++) - if ((ret = configure_input_filter(fg, fg->inputs[i], cur)) < 0) { - avfilter_inout_free(&inputs); - avfilter_inout_free(&outputs); - goto fail; - } - avfilter_inout_free(&inputs); - - for (cur = outputs, i = 0; cur; cur = cur->next, i++) - configure_output_filter(fg, fg->outputs[i], cur); - avfilter_inout_free(&outputs); - - if (!auto_conversion_filters) - avfilter_graph_set_auto_convert(fg->graph, AVFILTER_AUTO_CONVERT_NONE); - if ((ret = avfilter_graph_config(fg->graph, NULL)) < 0) - goto fail; - - fg->is_meta = graph_is_meta(fg->graph); - - /* limit the lists of allowed formats to the ones selected, to - * make sure they stay the same if the filtergraph is reconfigured later */ - for (i = 0; i < fg->nb_outputs; i++) { - OutputFilter *ofilter = fg->outputs[i]; - AVFilterContext *sink = ofilter->filter; - - ofilter->format = av_buffersink_get_format(sink); - - ofilter->width = av_buffersink_get_w(sink); - ofilter->height = av_buffersink_get_h(sink); - - ofilter->sample_rate = av_buffersink_get_sample_rate(sink); - av_channel_layout_uninit(&ofilter->ch_layout); - ret = av_buffersink_get_ch_layout(sink, &ofilter->ch_layout); - if (ret < 0) - goto fail; - } - - for (i = 0; i < fg->nb_inputs; i++) { - AVFrame *tmp; - while (av_fifo_read(fg->inputs[i]->frame_queue, &tmp, 1) >= 0) { - ret = av_buffersrc_add_frame(fg->inputs[i]->filter, tmp); - av_frame_free(&tmp); - if (ret < 0) - goto fail; - } - } - - /* send the EOFs for the finished inputs */ - for (i = 0; i < fg->nb_inputs; i++) { - if (fg->inputs[i]->eof) { - ret = av_buffersrc_add_frame(fg->inputs[i]->filter, NULL); - if (ret < 0) - goto fail; - } - } - - /* process queued up subtitle packets */ - for (i = 0; i < fg->nb_inputs; i++) { - InputStream *ist = fg->inputs[i]->ist; - if (ist->sub2video.sub_queue && ist->sub2video.frame) { - AVSubtitle tmp; - while (av_fifo_read(ist->sub2video.sub_queue, &tmp, 1) >= 0) { - sub2video_update(ist, INT64_MIN, &tmp); - avsubtitle_free(&tmp); - } - } - } - - return 0; - -fail: - cleanup_filtergraph(fg); - return ret; -} - -int ifilter_parameters_from_frame(InputFilter *ifilter, const AVFrame *frame) -{ - AVFrameSideData *sd; - int ret; - - av_buffer_unref(&ifilter->hw_frames_ctx); - - ifilter->format = frame->format; - - ifilter->width = frame->width; - ifilter->height = frame->height; - ifilter->sample_aspect_ratio = frame->sample_aspect_ratio; - - ifilter->sample_rate = frame->sample_rate; - ret = av_channel_layout_copy(&ifilter->ch_layout, &frame->ch_layout); - if (ret < 0) - return ret; - - av_freep(&ifilter->displaymatrix); - sd = av_frame_get_side_data(frame, AV_FRAME_DATA_DISPLAYMATRIX); - if (sd) - ifilter->displaymatrix = av_memdup(sd->data, sizeof(int32_t) * 9); - - if (frame->hw_frames_ctx) { - ifilter->hw_frames_ctx = av_buffer_ref(frame->hw_frames_ctx); - if (!ifilter->hw_frames_ctx) - return AVERROR(ENOMEM); - } - - return 0; -} - -int filtergraph_is_simple(FilterGraph *fg) -{ - return !fg->graph_desc; -} - -int reap_filters(int flush) -{ - AVFrame *filtered_frame = NULL; - - /* Reap all buffers present in the buffer sinks */ - for (OutputStream *ost = ost_iter(NULL); ost; ost = ost_iter(ost)) { - AVFilterContext *filter; - int ret = 0; - - if (!ost->filter || !ost->filter->graph->graph) - continue; - filter = 
ost->filter->filter; - - filtered_frame = ost->filtered_frame; - - while (1) { - ret = av_buffersink_get_frame_flags(filter, filtered_frame, - AV_BUFFERSINK_FLAG_NO_REQUEST); - if (ret < 0) { - if (ret != AVERROR(EAGAIN) && ret != AVERROR_EOF) { - av_log(NULL, AV_LOG_WARNING, - "Error in av_buffersink_get_frame_flags(): %s\n", av_err2str(ret)); - } else if (flush && ret == AVERROR_EOF) { - if (av_buffersink_get_type(filter) == AVMEDIA_TYPE_VIDEO) - enc_frame(ost, NULL); - } - break; - } - if (ost->finished) { - av_frame_unref(filtered_frame); - continue; - } - - if (filtered_frame->pts != AV_NOPTS_VALUE) { - AVRational tb = av_buffersink_get_time_base(filter); - ost->filter->last_pts = av_rescale_q(filtered_frame->pts, tb, - AV_TIME_BASE_Q); - filtered_frame->time_base = tb; - - if (debug_ts) - av_log(NULL, AV_LOG_INFO, "filter_raw -> pts:%s pts_time:%s time_base:%d/%d\n", - av_ts2str(filtered_frame->pts), - av_ts2timestr(filtered_frame->pts, &tb), - tb.num, tb.den); - } - - enc_frame(ost, filtered_frame); - av_frame_unref(filtered_frame); - } - } - - return 0; -} - -int ifilter_send_eof(InputFilter *ifilter, int64_t pts) -{ - int ret; - - ifilter->eof = 1; - - if (ifilter->filter) { - ret = av_buffersrc_close(ifilter->filter, pts, AV_BUFFERSRC_FLAG_PUSH); - if (ret < 0) - return ret; - } else { - // the filtergraph was never configured - if (ifilter->format < 0) { - ret = ifilter_parameters_from_codecpar(ifilter, ifilter->ist->par); - if (ret < 0) - return ret; - } - if (ifilter->format < 0 && (ifilter->type == AVMEDIA_TYPE_AUDIO || ifilter->type == AVMEDIA_TYPE_VIDEO)) { - av_log(NULL, AV_LOG_ERROR, "Cannot determine format of input stream %d:%d after EOF\n", ifilter->ist->file_index, ifilter->ist->st->index); - return AVERROR_INVALIDDATA; - } - } - - return 0; -} - -int ifilter_send_frame(InputFilter *ifilter, AVFrame *frame, int keep_reference) -{ - FilterGraph *fg = ifilter->graph; - AVFrameSideData *sd; - int need_reinit, ret; - int buffersrc_flags = AV_BUFFERSRC_FLAG_PUSH; - - if (keep_reference) - buffersrc_flags |= AV_BUFFERSRC_FLAG_KEEP_REF; - - /* determine if the parameters for this input changed */ - need_reinit = ifilter->format != frame->format; - - switch (ifilter->ist->par->codec_type) { - case AVMEDIA_TYPE_AUDIO: - need_reinit |= ifilter->sample_rate != frame->sample_rate || - av_channel_layout_compare(&ifilter->ch_layout, &frame->ch_layout); - break; - case AVMEDIA_TYPE_VIDEO: - need_reinit |= ifilter->width != frame->width || - ifilter->height != frame->height; - break; - } - - if (!ifilter->ist->reinit_filters && fg->graph) - need_reinit = 0; - - if (!!ifilter->hw_frames_ctx != !!frame->hw_frames_ctx || - (ifilter->hw_frames_ctx && ifilter->hw_frames_ctx->data != frame->hw_frames_ctx->data)) - need_reinit = 1; - - if (sd = av_frame_get_side_data(frame, AV_FRAME_DATA_DISPLAYMATRIX)) { - if (!ifilter->displaymatrix || memcmp(sd->data, ifilter->displaymatrix, sizeof(int32_t) * 9)) - need_reinit = 1; - } else if (ifilter->displaymatrix) - need_reinit = 1; - - if (need_reinit) { - ret = ifilter_parameters_from_frame(ifilter, frame); - if (ret < 0) - return ret; - } - - /* (re)init the graph if possible, otherwise buffer the frame and return */ - if (need_reinit || !fg->graph) { - if (!ifilter_has_all_input_formats(fg)) { - AVFrame *tmp = av_frame_clone(frame); - if (!tmp) - return AVERROR(ENOMEM); - - ret = av_fifo_write(ifilter->frame_queue, &tmp, 1); - if (ret < 0) - av_frame_free(&tmp); - - return ret; - } - - ret = reap_filters(1); - if (ret < 0 && ret != AVERROR_EOF) { 
- av_log(NULL, AV_LOG_ERROR, "Error while filtering: %s\n", av_err2str(ret)); - return ret; - } - - ret = configure_filtergraph(fg); - if (ret < 0) { - av_log(NULL, AV_LOG_ERROR, "Error reinitializing filters!\n"); - return ret; - } - } - - ret = av_buffersrc_add_frame_flags(ifilter->filter, frame, buffersrc_flags); - if (ret < 0) { - if (ret != AVERROR_EOF) - av_log(NULL, AV_LOG_ERROR, "Error while filtering: %s\n", av_err2str(ret)); - return ret; - } - - return 0; -} - -int fg_transcode_step(FilterGraph *graph, InputStream **best_ist) -{ - int i, ret; - int nb_requests, nb_requests_max = 0; - InputFilter *ifilter; - InputStream *ist; - - *best_ist = NULL; - ret = avfilter_graph_request_oldest(graph->graph); - if (ret >= 0) - return reap_filters(0); - - if (ret == AVERROR_EOF) { - ret = reap_filters(1); - for (i = 0; i < graph->nb_outputs; i++) - close_output_stream(graph->outputs[i]->ost); - return ret; - } - if (ret != AVERROR(EAGAIN)) - return ret; - - for (i = 0; i < graph->nb_inputs; i++) { - ifilter = graph->inputs[i]; - ist = ifilter->ist; - if (input_files[ist->file_index]->eagain || - input_files[ist->file_index]->eof_reached) - continue; - nb_requests = av_buffersrc_get_nb_failed_requests(ifilter->filter); - if (nb_requests > nb_requests_max) { - nb_requests_max = nb_requests; - *best_ist = ist; - } - } - - if (!*best_ist) - for (i = 0; i < graph->nb_outputs; i++) - graph->outputs[i]->ost->unavailable = 1; - - return 0; -} diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g723_1enc.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g723_1enc.c deleted file mode 100644 index be80153130e1c7835645d94bda06357934416bec..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/g723_1enc.c +++ /dev/null @@ -1,1255 +0,0 @@ -/* - * G.723.1 compatible encoder - * Copyright (c) Mohamed Naufal - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU - * Lesser General Public License for more details. 
- * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * G.723.1 compatible encoder - */ - -#include -#include - -#include "libavutil/channel_layout.h" -#include "libavutil/common.h" -#include "libavutil/mem.h" -#include "libavutil/opt.h" - -#include "avcodec.h" -#include "celp_math.h" -#include "codec_internal.h" -#include "encode.h" -#include "g723_1.h" - -#define BITSTREAM_WRITER_LE -#include "put_bits.h" - -/** - * Hamming window coefficients scaled by 2^15 - */ -static const int16_t hamming_window[LPC_FRAME] = { - 2621, 2631, 2659, 2705, 2770, 2853, 2955, 3074, 3212, 3367, - 3541, 3731, 3939, 4164, 4405, 4663, 4937, 5226, 5531, 5851, - 6186, 6534, 6897, 7273, 7661, 8062, 8475, 8899, 9334, 9780, - 10235, 10699, 11172, 11653, 12141, 12636, 13138, 13645, 14157, 14673, - 15193, 15716, 16242, 16769, 17298, 17827, 18356, 18884, 19411, 19935, - 20457, 20975, 21489, 21999, 22503, 23002, 23494, 23978, 24455, 24924, - 25384, 25834, 26274, 26704, 27122, 27529, 27924, 28306, 28675, 29031, - 29373, 29700, 30012, 30310, 30592, 30857, 31107, 31340, 31557, 31756, - 31938, 32102, 32249, 32377, 32488, 32580, 32654, 32710, 32747, 32766, - 32766, 32747, 32710, 32654, 32580, 32488, 32377, 32249, 32102, 31938, - 31756, 31557, 31340, 31107, 30857, 30592, 30310, 30012, 29700, 29373, - 29031, 28675, 28306, 27924, 27529, 27122, 26704, 26274, 25834, 25384, - 24924, 24455, 23978, 23494, 23002, 22503, 21999, 21489, 20975, 20457, - 19935, 19411, 18884, 18356, 17827, 17298, 16769, 16242, 15716, 15193, - 14673, 14157, 13645, 13138, 12636, 12141, 11653, 11172, 10699, 10235, - 9780, 9334, 8899, 8475, 8062, 7661, 7273, 6897, 6534, 6186, - 5851, 5531, 5226, 4937, 4663, 4405, 4164, 3939, 3731, 3541, - 3367, 3212, 3074, 2955, 2853, 2770, 2705, 2659, 2631, 2621 -}; - -/** - * Binomial window coefficients scaled by 2^15 - */ -static const int16_t binomial_window[LPC_ORDER] = { - 32749, 32695, 32604, 32477, 32315, 32118, 31887, 31622, 31324, 30995 -}; - -/** - * 0.994^i scaled by 2^15 - */ -static const int16_t bandwidth_expand[LPC_ORDER] = { - 32571, 32376, 32182, 31989, 31797, 31606, 31416, 31228, 31040, 30854 -}; - -/** - * 0.5^i scaled by 2^15 - */ -static const int16_t percept_flt_tbl[2][LPC_ORDER] = { - /* Zero part */ - {29491, 26542, 23888, 21499, 19349, 17414, 15673, 14106, 12695, 11425}, - /* Pole part */ - {16384, 8192, 4096, 2048, 1024, 512, 256, 128, 64, 32} -}; - -static av_cold int g723_1_encode_init(AVCodecContext *avctx) -{ - G723_1_Context *s = avctx->priv_data; - G723_1_ChannelContext *p = &s->ch[0]; - - if (avctx->sample_rate != 8000) { - av_log(avctx, AV_LOG_ERROR, "Only 8000Hz sample rate supported\n"); - return AVERROR(EINVAL); - } - - if (avctx->bit_rate == 6300) { - p->cur_rate = RATE_6300; - } else if (avctx->bit_rate == 5300) { - av_log(avctx, AV_LOG_ERROR, "Use bitrate 6300 instead of 5300.\n"); - avpriv_report_missing_feature(avctx, "Bitrate 5300"); - return AVERROR_PATCHWELCOME; - } else { - av_log(avctx, AV_LOG_ERROR, "Bitrate not supported, use 6300\n"); - return AVERROR(EINVAL); - } - avctx->frame_size = 240; - memcpy(p->prev_lsp, dc_lsp, LPC_ORDER * sizeof(int16_t)); - - return 0; -} - -/** - * Remove DC component from the input signal. 
- * - * @param buf input signal - * @param fir zero memory - * @param iir pole memory - */ -static void highpass_filter(int16_t *buf, int16_t *fir, int *iir) -{ - int i; - for (i = 0; i < FRAME_LEN; i++) { - *iir = (buf[i] - *fir) * (1 << 15) + MULL2(*iir, 0x7f00); - *fir = buf[i]; - buf[i] = av_clipl_int32((int64_t)*iir + (1 << 15)) >> 16; - } -} - -/** - * Estimate autocorrelation of the input vector. - * - * @param buf input buffer - * @param autocorr autocorrelation coefficients vector - */ -static void comp_autocorr(int16_t *buf, int16_t *autocorr) -{ - int i, scale, temp; - int16_t vector[LPC_FRAME]; - - ff_g723_1_scale_vector(vector, buf, LPC_FRAME); - - /* Apply the Hamming window */ - for (i = 0; i < LPC_FRAME; i++) - vector[i] = (vector[i] * hamming_window[i] + (1 << 14)) >> 15; - - /* Compute the first autocorrelation coefficient */ - temp = ff_dot_product(vector, vector, LPC_FRAME); - - /* Apply a white noise correlation factor of (1025/1024) */ - temp += temp >> 10; - - /* Normalize */ - scale = ff_g723_1_normalize_bits(temp, 31); - autocorr[0] = av_clipl_int32((int64_t) (temp << scale) + - (1 << 15)) >> 16; - - /* Compute the remaining coefficients */ - if (!autocorr[0]) { - memset(autocorr + 1, 0, LPC_ORDER * sizeof(int16_t)); - } else { - for (i = 1; i <= LPC_ORDER; i++) { - temp = ff_dot_product(vector, vector + i, LPC_FRAME - i); - temp = MULL2(temp * (1 << scale), binomial_window[i - 1]); - autocorr[i] = av_clipl_int32((int64_t) temp + (1 << 15)) >> 16; - } - } -} - -/** - * Use Levinson-Durbin recursion to compute LPC coefficients from - * autocorrelation values. - * - * @param lpc LPC coefficients vector - * @param autocorr autocorrelation coefficients vector - * @param error prediction error - */ -static void levinson_durbin(int16_t *lpc, int16_t *autocorr, int16_t error) -{ - int16_t vector[LPC_ORDER]; - int16_t partial_corr; - int i, j, temp; - - memset(lpc, 0, LPC_ORDER * sizeof(int16_t)); - - for (i = 0; i < LPC_ORDER; i++) { - /* Compute the partial correlation coefficient */ - temp = 0; - for (j = 0; j < i; j++) - temp -= lpc[j] * autocorr[i - j - 1]; - temp = (autocorr[i] * (1 << 13) + temp) * (1 << 3); - - if (FFABS(temp) >= (error << 16)) - break; - - partial_corr = temp / (error << 1); - - lpc[i] = (partial_corr + (1 << 1)) >> 2; - - /* Update the prediction error */ - temp = MULL2(temp, partial_corr); - error = av_clipl_int32((int64_t) (error << 16) - temp + - (1 << 15)) >> 16; - - memcpy(vector, lpc, i * sizeof(int16_t)); - for (j = 0; j < i; j++) { - temp = partial_corr * vector[i - j - 1] * 2; - lpc[j] = av_clipl_int32((int64_t) (lpc[j] * (1 << 16)) - temp + - (1 << 15)) >> 16; - } - } -} - -/** - * Calculate LPC coefficients for the current frame. 
- * - * @param buf current frame - * @param prev_data 2 trailing subframes of the previous frame - * @param lpc LPC coefficients vector - */ -static void comp_lpc_coeff(int16_t *buf, int16_t *lpc) -{ - int16_t autocorr[(LPC_ORDER + 1) * SUBFRAMES]; - int16_t *autocorr_ptr = autocorr; - int16_t *lpc_ptr = lpc; - int i, j; - - for (i = 0, j = 0; j < SUBFRAMES; i += SUBFRAME_LEN, j++) { - comp_autocorr(buf + i, autocorr_ptr); - levinson_durbin(lpc_ptr, autocorr_ptr + 1, autocorr_ptr[0]); - - lpc_ptr += LPC_ORDER; - autocorr_ptr += LPC_ORDER + 1; - } -} - -static void lpc2lsp(int16_t *lpc, int16_t *prev_lsp, int16_t *lsp) -{ - int f[LPC_ORDER + 2]; ///< coefficients of the sum and difference - ///< polynomials (F1, F2) ordered as - ///< f1[0], f2[0], ...., f1[5], f2[5] - - int max, shift, cur_val, prev_val, count, p; - int i, j; - int64_t temp; - - /* Initialize f1[0] and f2[0] to 1 in Q25 */ - for (i = 0; i < LPC_ORDER; i++) - lsp[i] = (lpc[i] * bandwidth_expand[i] + (1 << 14)) >> 15; - - /* Apply bandwidth expansion on the LPC coefficients */ - f[0] = f[1] = 1 << 25; - - /* Compute the remaining coefficients */ - for (i = 0; i < LPC_ORDER / 2; i++) { - /* f1 */ - f[2 * i + 2] = -f[2 * i] - (lsp[i] + lsp[LPC_ORDER - 1 - i]) * (1 << 12); - /* f2 */ - f[2 * i + 3] = f[2 * i + 1] - (lsp[i] - lsp[LPC_ORDER - 1 - i]) * (1 << 12); - } - - /* Divide f1[5] and f2[5] by 2 for use in polynomial evaluation */ - f[LPC_ORDER] >>= 1; - f[LPC_ORDER + 1] >>= 1; - - /* Normalize and shorten */ - max = FFABS(f[0]); - for (i = 1; i < LPC_ORDER + 2; i++) - max = FFMAX(max, FFABS(f[i])); - - shift = ff_g723_1_normalize_bits(max, 31); - - for (i = 0; i < LPC_ORDER + 2; i++) - f[i] = av_clipl_int32((int64_t) (f[i] * (1 << shift)) + (1 << 15)) >> 16; - - /** - * Evaluate F1 and F2 at uniform intervals of pi/256 along the - * unit circle and check for zero crossings. - */ - p = 0; - temp = 0; - for (i = 0; i <= LPC_ORDER / 2; i++) - temp += f[2 * i] * G723_1_COS_TAB_FIRST_ELEMENT; - prev_val = av_clipl_int32(temp << 1); - count = 0; - for (i = 1; i < COS_TBL_SIZE / 2; i++) { - /* Evaluate */ - temp = 0; - for (j = 0; j <= LPC_ORDER / 2; j++) - temp += f[LPC_ORDER - 2 * j + p] * ff_g723_1_cos_tab[i * j % COS_TBL_SIZE]; - cur_val = av_clipl_int32(temp * 2); - - /* Check for sign change, indicating a zero crossing */ - if ((cur_val ^ prev_val) < 0) { - int abs_cur = FFABS(cur_val); - int abs_prev = FFABS(prev_val); - int sum = abs_cur + abs_prev; - - shift = ff_g723_1_normalize_bits(sum, 31); - sum <<= shift; - abs_prev = abs_prev << shift >> 8; - lsp[count++] = ((i - 1) << 7) + (abs_prev >> 1) / (sum >> 16); - - if (count == LPC_ORDER) - break; - - /* Switch between sum and difference polynomials */ - p ^= 1; - - /* Evaluate */ - temp = 0; - for (j = 0; j <= LPC_ORDER / 2; j++) - temp += f[LPC_ORDER - 2 * j + p] * - ff_g723_1_cos_tab[i * j % COS_TBL_SIZE]; - cur_val = av_clipl_int32(temp * 2); - } - prev_val = cur_val; - } - - if (count != LPC_ORDER) - memcpy(lsp, prev_lsp, LPC_ORDER * sizeof(int16_t)); -} - -/** - * Quantize the current LSP subvector. 
- * - * @param num band number - * @param offset offset of the current subvector in an LPC_ORDER vector - * @param size size of the current subvector - */ -#define get_index(num, offset, size) \ -{ \ - int error, max = -1; \ - int16_t temp[4]; \ - int i, j; \ - \ - for (i = 0; i < LSP_CB_SIZE; i++) { \ - for (j = 0; j < size; j++){ \ - temp[j] = (weight[j + (offset)] * ff_g723_1_lsp_band##num[i][j] + \ - (1 << 14)) >> 15; \ - } \ - error = ff_g723_1_dot_product(lsp + (offset), temp, size) * 2; \ - error -= ff_g723_1_dot_product(ff_g723_1_lsp_band##num[i], temp, size); \ - if (error > max) { \ - max = error; \ - lsp_index[num] = i; \ - } \ - } \ -} - -/** - * Vector quantize the LSP frequencies. - * - * @param lsp the current lsp vector - * @param prev_lsp the previous lsp vector - */ -static void lsp_quantize(uint8_t *lsp_index, int16_t *lsp, int16_t *prev_lsp) -{ - int16_t weight[LPC_ORDER]; - int16_t min, max; - int shift, i; - - /* Calculate the VQ weighting vector */ - weight[0] = (1 << 20) / (lsp[1] - lsp[0]); - weight[LPC_ORDER - 1] = (1 << 20) / - (lsp[LPC_ORDER - 1] - lsp[LPC_ORDER - 2]); - - for (i = 1; i < LPC_ORDER - 1; i++) { - min = FFMIN(lsp[i] - lsp[i - 1], lsp[i + 1] - lsp[i]); - if (min > 0x20) - weight[i] = (1 << 20) / min; - else - weight[i] = INT16_MAX; - } - - /* Normalize */ - max = 0; - for (i = 0; i < LPC_ORDER; i++) - max = FFMAX(weight[i], max); - - shift = ff_g723_1_normalize_bits(max, 15); - for (i = 0; i < LPC_ORDER; i++) { - weight[i] <<= shift; - } - - /* Compute the VQ target vector */ - for (i = 0; i < LPC_ORDER; i++) { - lsp[i] -= dc_lsp[i] + - (((prev_lsp[i] - dc_lsp[i]) * 12288 + (1 << 14)) >> 15); - } - - get_index(0, 0, 3); - get_index(1, 3, 3); - get_index(2, 6, 4); -} - -/** - * Perform IIR filtering. - * - * @param fir_coef FIR coefficients - * @param iir_coef IIR coefficients - * @param src source vector - * @param dest destination vector - */ -static void iir_filter(int16_t *fir_coef, int16_t *iir_coef, - int16_t *src, int16_t *dest) -{ - int m, n; - - for (m = 0; m < SUBFRAME_LEN; m++) { - int64_t filter = 0; - for (n = 1; n <= LPC_ORDER; n++) { - filter -= fir_coef[n - 1] * src[m - n] - - iir_coef[n - 1] * dest[m - n]; - } - - dest[m] = av_clipl_int32(src[m] * (1 << 16) + filter * (1 << 3) + - (1 << 15)) >> 16; - } -} - -/** - * Apply the formant perceptual weighting filter. - * - * @param flt_coef filter coefficients - * @param unq_lpc unquantized lpc vector - */ -static void perceptual_filter(G723_1_ChannelContext *p, int16_t *flt_coef, - int16_t *unq_lpc, int16_t *buf) -{ - int16_t vector[FRAME_LEN + LPC_ORDER]; - int i, j, k, l = 0; - - memcpy(buf, p->iir_mem, sizeof(int16_t) * LPC_ORDER); - memcpy(vector, p->fir_mem, sizeof(int16_t) * LPC_ORDER); - memcpy(vector + LPC_ORDER, buf + LPC_ORDER, sizeof(int16_t) * FRAME_LEN); - - for (i = LPC_ORDER, j = 0; j < SUBFRAMES; i += SUBFRAME_LEN, j++) { - for (k = 0; k < LPC_ORDER; k++) { - flt_coef[k + 2 * l] = (unq_lpc[k + l] * percept_flt_tbl[0][k] + - (1 << 14)) >> 15; - flt_coef[k + 2 * l + LPC_ORDER] = (unq_lpc[k + l] * - percept_flt_tbl[1][k] + - (1 << 14)) >> 15; - } - iir_filter(flt_coef + 2 * l, flt_coef + 2 * l + LPC_ORDER, - vector + i, buf + i); - l += LPC_ORDER; - } - memcpy(p->iir_mem, buf + FRAME_LEN, sizeof(int16_t) * LPC_ORDER); - memcpy(p->fir_mem, vector + FRAME_LEN, sizeof(int16_t) * LPC_ORDER); -} - -/** - * Estimate the open loop pitch period. 
- * - * @param buf perceptually weighted speech - * @param start estimation is carried out from this position - */ -static int estimate_pitch(int16_t *buf, int start) -{ - int max_exp = 32; - int max_ccr = 0x4000; - int max_eng = 0x7fff; - int index = PITCH_MIN; - int offset = start - PITCH_MIN + 1; - - int ccr, eng, orig_eng, ccr_eng, exp; - int diff, temp; - - int i; - - orig_eng = ff_dot_product(buf + offset, buf + offset, HALF_FRAME_LEN); - - for (i = PITCH_MIN; i <= PITCH_MAX - 3; i++) { - offset--; - - /* Update energy and compute correlation */ - orig_eng += buf[offset] * buf[offset] - - buf[offset + HALF_FRAME_LEN] * buf[offset + HALF_FRAME_LEN]; - ccr = ff_dot_product(buf + start, buf + offset, HALF_FRAME_LEN); - if (ccr <= 0) - continue; - - /* Split into mantissa and exponent to maintain precision */ - exp = ff_g723_1_normalize_bits(ccr, 31); - ccr = av_clipl_int32((int64_t) (ccr << exp) + (1 << 15)) >> 16; - exp <<= 1; - ccr *= ccr; - temp = ff_g723_1_normalize_bits(ccr, 31); - ccr = ccr << temp >> 16; - exp += temp; - - temp = ff_g723_1_normalize_bits(orig_eng, 31); - eng = av_clipl_int32((int64_t) (orig_eng << temp) + (1 << 15)) >> 16; - exp -= temp; - - if (ccr >= eng) { - exp--; - ccr >>= 1; - } - if (exp > max_exp) - continue; - - if (exp + 1 < max_exp) - goto update; - - /* Equalize exponents before comparison */ - if (exp + 1 == max_exp) - temp = max_ccr >> 1; - else - temp = max_ccr; - ccr_eng = ccr * max_eng; - diff = ccr_eng - eng * temp; - if (diff > 0 && (i - index < PITCH_MIN || diff > ccr_eng >> 2)) { -update: - index = i; - max_exp = exp; - max_ccr = ccr; - max_eng = eng; - } - } - return index; -} - -/** - * Compute harmonic noise filter parameters. - * - * @param buf perceptually weighted speech - * @param pitch_lag open loop pitch period - * @param hf harmonic filter parameters - */ -static void comp_harmonic_coeff(int16_t *buf, int16_t pitch_lag, HFParam *hf) -{ - int ccr, eng, max_ccr, max_eng; - int exp, max, diff; - int energy[15]; - int i, j; - - for (i = 0, j = pitch_lag - 3; j <= pitch_lag + 3; i++, j++) { - /* Compute residual energy */ - energy[i << 1] = ff_dot_product(buf - j, buf - j, SUBFRAME_LEN); - /* Compute correlation */ - energy[(i << 1) + 1] = ff_dot_product(buf, buf - j, SUBFRAME_LEN); - } - - /* Compute target energy */ - energy[14] = ff_dot_product(buf, buf, SUBFRAME_LEN); - - /* Normalize */ - max = 0; - for (i = 0; i < 15; i++) - max = FFMAX(max, FFABS(energy[i])); - - exp = ff_g723_1_normalize_bits(max, 31); - for (i = 0; i < 15; i++) { - energy[i] = av_clipl_int32((int64_t)(energy[i] * (1 << exp)) + - (1 << 15)) >> 16; - } - - hf->index = -1; - hf->gain = 0; - max_ccr = 1; - max_eng = 0x7fff; - - for (i = 0; i <= 6; i++) { - eng = energy[i << 1]; - ccr = energy[(i << 1) + 1]; - - if (ccr <= 0) - continue; - - ccr = (ccr * ccr + (1 << 14)) >> 15; - diff = ccr * max_eng - eng * max_ccr; - if (diff > 0) { - max_ccr = ccr; - max_eng = eng; - hf->index = i; - } - } - - if (hf->index == -1) { - hf->index = pitch_lag; - return; - } - - eng = energy[14] * max_eng; - eng = (eng >> 2) + (eng >> 3); - ccr = energy[(hf->index << 1) + 1] * energy[(hf->index << 1) + 1]; - if (eng < ccr) { - eng = energy[(hf->index << 1) + 1]; - - if (eng >= max_eng) - hf->gain = 0x2800; - else - hf->gain = ((eng << 15) / max_eng * 0x2800 + (1 << 14)) >> 15; - } - hf->index += pitch_lag - 3; -} - -/** - * Apply the harmonic noise shaping filter. 
- * - * @param hf filter parameters - */ -static void harmonic_filter(HFParam *hf, const int16_t *src, int16_t *dest) -{ - int i; - - for (i = 0; i < SUBFRAME_LEN; i++) { - int64_t temp = hf->gain * src[i - hf->index] * 2; - dest[i] = av_clipl_int32(src[i] * (1 << 16) - temp + (1 << 15)) >> 16; - } -} - -static void harmonic_noise_sub(HFParam *hf, const int16_t *src, int16_t *dest) -{ - int i; - for (i = 0; i < SUBFRAME_LEN; i++) { - int64_t temp = hf->gain * src[i - hf->index] * 2; - dest[i] = av_clipl_int32((dest[i] - src[i]) * (1 << 16) + temp + - (1 << 15)) >> 16; - } -} - -/** - * Combined synthesis and formant perceptual weighting filer. - * - * @param qnt_lpc quantized lpc coefficients - * @param perf_lpc perceptual filter coefficients - * @param perf_fir perceptual filter fir memory - * @param perf_iir perceptual filter iir memory - * @param scale the filter output will be scaled by 2^scale - */ -static void synth_percept_filter(int16_t *qnt_lpc, int16_t *perf_lpc, - int16_t *perf_fir, int16_t *perf_iir, - const int16_t *src, int16_t *dest, int scale) -{ - int i, j; - int16_t buf_16[SUBFRAME_LEN + LPC_ORDER]; - int64_t buf[SUBFRAME_LEN]; - - int16_t *bptr_16 = buf_16 + LPC_ORDER; - - memcpy(buf_16, perf_fir, sizeof(int16_t) * LPC_ORDER); - memcpy(dest - LPC_ORDER, perf_iir, sizeof(int16_t) * LPC_ORDER); - - for (i = 0; i < SUBFRAME_LEN; i++) { - int64_t temp = 0; - for (j = 1; j <= LPC_ORDER; j++) - temp -= qnt_lpc[j - 1] * bptr_16[i - j]; - - buf[i] = src[i] * (1 << 15) + temp * (1 << 3); - bptr_16[i] = av_clipl_int32(buf[i] + (1 << 15)) >> 16; - } - - for (i = 0; i < SUBFRAME_LEN; i++) { - int64_t fir = 0, iir = 0; - for (j = 1; j <= LPC_ORDER; j++) { - fir -= perf_lpc[j - 1] * bptr_16[i - j]; - iir += perf_lpc[j + LPC_ORDER - 1] * dest[i - j]; - } - dest[i] = av_clipl_int32((buf[i] + fir * (1 << 3)) * (1 << scale) + iir * (1 << 3) + - (1 << 15)) >> 16; - } - memcpy(perf_fir, buf_16 + SUBFRAME_LEN, sizeof(int16_t) * LPC_ORDER); - memcpy(perf_iir, dest + SUBFRAME_LEN - LPC_ORDER, - sizeof(int16_t) * LPC_ORDER); -} - -/** - * Compute the adaptive codebook contribution. 
- * - * @param buf input signal - * @param index the current subframe index - */ -static void acb_search(G723_1_ChannelContext *p, int16_t *residual, - int16_t *impulse_resp, const int16_t *buf, - int index) -{ - int16_t flt_buf[PITCH_ORDER][SUBFRAME_LEN]; - - const int16_t *cb_tbl = ff_g723_1_adaptive_cb_gain85; - - int ccr_buf[PITCH_ORDER * SUBFRAMES << 2]; - - int pitch_lag = p->pitch_lag[index >> 1]; - int acb_lag = 1; - int acb_gain = 0; - int odd_frame = index & 1; - int iter = 3 + odd_frame; - int count = 0; - int tbl_size = 85; - - int i, j, k, l, max; - int64_t temp; - - if (!odd_frame) { - if (pitch_lag == PITCH_MIN) - pitch_lag++; - else - pitch_lag = FFMIN(pitch_lag, PITCH_MAX - 5); - } - - for (i = 0; i < iter; i++) { - ff_g723_1_get_residual(residual, p->prev_excitation, pitch_lag + i - 1); - - for (j = 0; j < SUBFRAME_LEN; j++) { - temp = 0; - for (k = 0; k <= j; k++) - temp += residual[PITCH_ORDER - 1 + k] * impulse_resp[j - k]; - flt_buf[PITCH_ORDER - 1][j] = av_clipl_int32(temp * 2 + (1 << 15)) >> 16; - } - - for (j = PITCH_ORDER - 2; j >= 0; j--) { - flt_buf[j][0] = (residual[j] + (1 << 1)) >> 2; - for (k = 1; k < SUBFRAME_LEN; k++) { - temp = flt_buf[j + 1][k - 1] * (1 << 15) + - residual[j] * impulse_resp[k]; - flt_buf[j][k] = av_clipl_int32(temp * 2 + (1 << 15)) >> 16; - } - } - - /* Compute crosscorrelation with the signal */ - for (j = 0; j < PITCH_ORDER; j++) { - temp = ff_dot_product(buf, flt_buf[j], SUBFRAME_LEN); - ccr_buf[count++] = av_clipl_int32(temp * 2); - } - - /* Compute energies */ - for (j = 0; j < PITCH_ORDER; j++) { - ccr_buf[count++] = ff_g723_1_dot_product(flt_buf[j], flt_buf[j], - SUBFRAME_LEN); - } - - for (j = 1; j < PITCH_ORDER; j++) { - for (k = 0; k < j; k++) { - temp = ff_dot_product(flt_buf[j], flt_buf[k], SUBFRAME_LEN); - ccr_buf[count++] = av_clipl_int32(temp * (1 << 2)); - } - } - } - - /* Normalize and shorten */ - max = 0; - for (i = 0; i < 20 * iter; i++) - max = FFMAX(max, FFABS(ccr_buf[i])); - - temp = ff_g723_1_normalize_bits(max, 31); - - for (i = 0; i < 20 * iter; i++) - ccr_buf[i] = av_clipl_int32((int64_t) (ccr_buf[i] * (1 << temp)) + - (1 << 15)) >> 16; - - max = 0; - for (i = 0; i < iter; i++) { - /* Select quantization table */ - if (!odd_frame && pitch_lag + i - 1 >= SUBFRAME_LEN - 2 || - odd_frame && pitch_lag >= SUBFRAME_LEN - 2) { - cb_tbl = ff_g723_1_adaptive_cb_gain170; - tbl_size = 170; - } - - for (j = 0, k = 0; j < tbl_size; j++, k += 20) { - temp = 0; - for (l = 0; l < 20; l++) - temp += ccr_buf[20 * i + l] * cb_tbl[k + l]; - temp = av_clipl_int32(temp); - - if (temp > max) { - max = temp; - acb_gain = j; - acb_lag = i; - } - } - } - - if (!odd_frame) { - pitch_lag += acb_lag - 1; - acb_lag = 1; - } - - p->pitch_lag[index >> 1] = pitch_lag; - p->subframe[index].ad_cb_lag = acb_lag; - p->subframe[index].ad_cb_gain = acb_gain; -} - -/** - * Subtract the adaptive codebook contribution from the input - * to obtain the residual. - * - * @param buf target vector - */ -static void sub_acb_contrib(const int16_t *residual, const int16_t *impulse_resp, - int16_t *buf) -{ - int i, j; - /* Subtract adaptive CB contribution to obtain the residual */ - for (i = 0; i < SUBFRAME_LEN; i++) { - int64_t temp = buf[i] * (1 << 14); - for (j = 0; j <= i; j++) - temp -= residual[j] * impulse_resp[i - j]; - - buf[i] = av_clipl_int32(temp * (1 << 2) + (1 << 15)) >> 16; - } -} - -/** - * Quantize the residual signal using the fixed codebook (MP-MLQ). 
- * - * @param optim optimized fixed codebook parameters - * @param buf excitation vector - */ -static void get_fcb_param(FCBParam *optim, int16_t *impulse_resp, - int16_t *buf, int pulse_cnt, int pitch_lag) -{ - FCBParam param; - int16_t impulse_r[SUBFRAME_LEN]; - int16_t temp_corr[SUBFRAME_LEN]; - int16_t impulse_corr[SUBFRAME_LEN]; - - int ccr1[SUBFRAME_LEN]; - int ccr2[SUBFRAME_LEN]; - int amp, err, max, max_amp_index, min, scale, i, j, k, l; - - int64_t temp; - - /* Update impulse response */ - memcpy(impulse_r, impulse_resp, sizeof(int16_t) * SUBFRAME_LEN); - param.dirac_train = 0; - if (pitch_lag < SUBFRAME_LEN - 2) { - param.dirac_train = 1; - ff_g723_1_gen_dirac_train(impulse_r, pitch_lag); - } - - for (i = 0; i < SUBFRAME_LEN; i++) - temp_corr[i] = impulse_r[i] >> 1; - - /* Compute impulse response autocorrelation */ - temp = ff_g723_1_dot_product(temp_corr, temp_corr, SUBFRAME_LEN); - - scale = ff_g723_1_normalize_bits(temp, 31); - impulse_corr[0] = av_clipl_int32((temp << scale) + (1 << 15)) >> 16; - - for (i = 1; i < SUBFRAME_LEN; i++) { - temp = ff_g723_1_dot_product(temp_corr + i, temp_corr, - SUBFRAME_LEN - i); - impulse_corr[i] = av_clipl_int32(temp * (1 << scale) + (1 << 15)) >> 16; - } - - /* Compute crosscorrelation of impulse response with residual signal */ - scale -= 4; - for (i = 0; i < SUBFRAME_LEN; i++) { - temp = ff_g723_1_dot_product(buf + i, impulse_r, SUBFRAME_LEN - i); - if (scale < 0) - ccr1[i] = temp >> -scale; - else - ccr1[i] = av_clipl_int32(temp * (1 << scale)); - } - - /* Search loop */ - for (i = 0; i < GRID_SIZE; i++) { - /* Maximize the crosscorrelation */ - max = 0; - for (j = i; j < SUBFRAME_LEN; j += GRID_SIZE) { - temp = FFABS(ccr1[j]); - if (temp >= max) { - max = temp; - param.pulse_pos[0] = j; - } - } - - /* Quantize the gain (max crosscorrelation/impulse_corr[0]) */ - amp = max; - min = 1 << 30; - max_amp_index = GAIN_LEVELS - 2; - for (j = max_amp_index; j >= 2; j--) { - temp = av_clipl_int32((int64_t) ff_g723_1_fixed_cb_gain[j] * - impulse_corr[0] << 1); - temp = FFABS(temp - amp); - if (temp < min) { - min = temp; - max_amp_index = j; - } - } - - max_amp_index--; - /* Select additional gain values */ - for (j = 1; j < 5; j++) { - for (k = i; k < SUBFRAME_LEN; k += GRID_SIZE) { - temp_corr[k] = 0; - ccr2[k] = ccr1[k]; - } - param.amp_index = max_amp_index + j - 2; - amp = ff_g723_1_fixed_cb_gain[param.amp_index]; - - param.pulse_sign[0] = (ccr2[param.pulse_pos[0]] < 0) ? -amp : amp; - temp_corr[param.pulse_pos[0]] = 1; - - for (k = 1; k < pulse_cnt; k++) { - max = INT_MIN; - for (l = i; l < SUBFRAME_LEN; l += GRID_SIZE) { - if (temp_corr[l]) - continue; - temp = impulse_corr[FFABS(l - param.pulse_pos[k - 1])]; - temp = av_clipl_int32((int64_t) temp * - param.pulse_sign[k - 1] * 2); - ccr2[l] -= temp; - temp = FFABS(ccr2[l]); - if (temp > max) { - max = temp; - param.pulse_pos[k] = l; - } - } - - param.pulse_sign[k] = (ccr2[param.pulse_pos[k]] < 0) ? 
- -amp : amp; - temp_corr[param.pulse_pos[k]] = 1; - } - - /* Create the error vector */ - memset(temp_corr, 0, sizeof(int16_t) * SUBFRAME_LEN); - - for (k = 0; k < pulse_cnt; k++) - temp_corr[param.pulse_pos[k]] = param.pulse_sign[k]; - - for (k = SUBFRAME_LEN - 1; k >= 0; k--) { - temp = 0; - for (l = 0; l <= k; l++) { - int prod = av_clipl_int32((int64_t) temp_corr[l] * - impulse_r[k - l] * 2); - temp = av_clipl_int32(temp + prod); - } - temp_corr[k] = temp >> 14; - } - - /* Compute square of error */ - err = 0; - for (k = 0; k < SUBFRAME_LEN; k++) { - int64_t prod; - prod = av_clipl_int32((int64_t) buf[k] * temp_corr[k] * 2); - err = av_clipl_int32(err - prod); - prod = av_clipl_int32((int64_t) temp_corr[k] * temp_corr[k]); - err = av_clipl_int32(err + prod); - } - - /* Minimize */ - if (err < optim->min_err) { - optim->min_err = err; - optim->grid_index = i; - optim->amp_index = param.amp_index; - optim->dirac_train = param.dirac_train; - - for (k = 0; k < pulse_cnt; k++) { - optim->pulse_sign[k] = param.pulse_sign[k]; - optim->pulse_pos[k] = param.pulse_pos[k]; - } - } - } - } -} - -/** - * Encode the pulse position and gain of the current subframe. - * - * @param optim optimized fixed CB parameters - * @param buf excitation vector - */ -static void pack_fcb_param(G723_1_Subframe *subfrm, FCBParam *optim, - int16_t *buf, int pulse_cnt) -{ - int i, j; - - j = PULSE_MAX - pulse_cnt; - - subfrm->pulse_sign = 0; - subfrm->pulse_pos = 0; - - for (i = 0; i < SUBFRAME_LEN >> 1; i++) { - int val = buf[optim->grid_index + (i << 1)]; - if (!val) { - subfrm->pulse_pos += ff_g723_1_combinatorial_table[j][i]; - } else { - subfrm->pulse_sign <<= 1; - if (val < 0) - subfrm->pulse_sign++; - j++; - - if (j == PULSE_MAX) - break; - } - } - subfrm->amp_index = optim->amp_index; - subfrm->grid_index = optim->grid_index; - subfrm->dirac_train = optim->dirac_train; -} - -/** - * Compute the fixed codebook excitation. - * - * @param buf target vector - * @param impulse_resp impulse response of the combined filter - */ -static void fcb_search(G723_1_ChannelContext *p, int16_t *impulse_resp, - int16_t *buf, int index) -{ - FCBParam optim; - int pulse_cnt = pulses[index]; - int i; - - optim.min_err = 1 << 30; - get_fcb_param(&optim, impulse_resp, buf, pulse_cnt, SUBFRAME_LEN); - - if (p->pitch_lag[index >> 1] < SUBFRAME_LEN - 2) { - get_fcb_param(&optim, impulse_resp, buf, pulse_cnt, - p->pitch_lag[index >> 1]); - } - - /* Reconstruct the excitation */ - memset(buf, 0, sizeof(int16_t) * SUBFRAME_LEN); - for (i = 0; i < pulse_cnt; i++) - buf[optim.pulse_pos[i]] = optim.pulse_sign[i]; - - pack_fcb_param(&p->subframe[index], &optim, buf, pulse_cnt); - - if (optim.dirac_train) - ff_g723_1_gen_dirac_train(buf, p->pitch_lag[index >> 1]); -} - -/** - * Pack the frame parameters into output bitstream. 
- * - * @param frame output buffer - * @param size size of the buffer - */ -static void pack_bitstream(G723_1_ChannelContext *p, AVPacket *avpkt, int info_bits) -{ - PutBitContext pb; - int i, temp; - - init_put_bits(&pb, avpkt->data, avpkt->size); - - put_bits(&pb, 2, info_bits); - - put_bits(&pb, 8, p->lsp_index[2]); - put_bits(&pb, 8, p->lsp_index[1]); - put_bits(&pb, 8, p->lsp_index[0]); - - put_bits(&pb, 7, p->pitch_lag[0] - PITCH_MIN); - put_bits(&pb, 2, p->subframe[1].ad_cb_lag); - put_bits(&pb, 7, p->pitch_lag[1] - PITCH_MIN); - put_bits(&pb, 2, p->subframe[3].ad_cb_lag); - - /* Write 12 bit combined gain */ - for (i = 0; i < SUBFRAMES; i++) { - temp = p->subframe[i].ad_cb_gain * GAIN_LEVELS + - p->subframe[i].amp_index; - if (p->cur_rate == RATE_6300) - temp += p->subframe[i].dirac_train << 11; - put_bits(&pb, 12, temp); - } - - put_bits(&pb, 1, p->subframe[0].grid_index); - put_bits(&pb, 1, p->subframe[1].grid_index); - put_bits(&pb, 1, p->subframe[2].grid_index); - put_bits(&pb, 1, p->subframe[3].grid_index); - - if (p->cur_rate == RATE_6300) { - put_bits(&pb, 1, 0); /* reserved bit */ - - /* Write 13 bit combined position index */ - temp = (p->subframe[0].pulse_pos >> 16) * 810 + - (p->subframe[1].pulse_pos >> 14) * 90 + - (p->subframe[2].pulse_pos >> 16) * 9 + - (p->subframe[3].pulse_pos >> 14); - put_bits(&pb, 13, temp); - - put_bits(&pb, 16, p->subframe[0].pulse_pos & 0xffff); - put_bits(&pb, 14, p->subframe[1].pulse_pos & 0x3fff); - put_bits(&pb, 16, p->subframe[2].pulse_pos & 0xffff); - put_bits(&pb, 14, p->subframe[3].pulse_pos & 0x3fff); - - put_bits(&pb, 6, p->subframe[0].pulse_sign); - put_bits(&pb, 5, p->subframe[1].pulse_sign); - put_bits(&pb, 6, p->subframe[2].pulse_sign); - put_bits(&pb, 5, p->subframe[3].pulse_sign); - } - - flush_put_bits(&pb); -} - -static int g723_1_encode_frame(AVCodecContext *avctx, AVPacket *avpkt, - const AVFrame *frame, int *got_packet_ptr) -{ - G723_1_Context *s = avctx->priv_data; - G723_1_ChannelContext *p = &s->ch[0]; - int16_t unq_lpc[LPC_ORDER * SUBFRAMES]; - int16_t qnt_lpc[LPC_ORDER * SUBFRAMES]; - int16_t cur_lsp[LPC_ORDER]; - int16_t weighted_lpc[LPC_ORDER * SUBFRAMES << 1]; - int16_t vector[FRAME_LEN + PITCH_MAX]; - int offset, ret, i, j, info_bits = 0; - int16_t *in, *start; - HFParam hf[4]; - - /* duplicate input */ - start = in = av_memdup(frame->data[0], frame->nb_samples * sizeof(int16_t)); - if (!in) - return AVERROR(ENOMEM); - - highpass_filter(in, &p->hpf_fir_mem, &p->hpf_iir_mem); - - memcpy(vector, p->prev_data, HALF_FRAME_LEN * sizeof(int16_t)); - memcpy(vector + HALF_FRAME_LEN, in, FRAME_LEN * sizeof(int16_t)); - - comp_lpc_coeff(vector, unq_lpc); - lpc2lsp(&unq_lpc[LPC_ORDER * 3], p->prev_lsp, cur_lsp); - lsp_quantize(p->lsp_index, cur_lsp, p->prev_lsp); - - /* Update memory */ - memcpy(vector + LPC_ORDER, p->prev_data + SUBFRAME_LEN, - sizeof(int16_t) * SUBFRAME_LEN); - memcpy(vector + LPC_ORDER + SUBFRAME_LEN, in, - sizeof(int16_t) * (HALF_FRAME_LEN + SUBFRAME_LEN)); - memcpy(p->prev_data, in + HALF_FRAME_LEN, - sizeof(int16_t) * HALF_FRAME_LEN); - memcpy(in, vector + LPC_ORDER, sizeof(int16_t) * FRAME_LEN); - - perceptual_filter(p, weighted_lpc, unq_lpc, vector); - - memcpy(in, vector + LPC_ORDER, sizeof(int16_t) * FRAME_LEN); - memcpy(vector, p->prev_weight_sig, sizeof(int16_t) * PITCH_MAX); - memcpy(vector + PITCH_MAX, in, sizeof(int16_t) * FRAME_LEN); - - ff_g723_1_scale_vector(vector, vector, FRAME_LEN + PITCH_MAX); - - p->pitch_lag[0] = estimate_pitch(vector, PITCH_MAX); - p->pitch_lag[1] = 
estimate_pitch(vector, PITCH_MAX + HALF_FRAME_LEN); - - for (i = PITCH_MAX, j = 0; j < SUBFRAMES; i += SUBFRAME_LEN, j++) - comp_harmonic_coeff(vector + i, p->pitch_lag[j >> 1], hf + j); - - memcpy(vector, p->prev_weight_sig, sizeof(int16_t) * PITCH_MAX); - memcpy(vector + PITCH_MAX, in, sizeof(int16_t) * FRAME_LEN); - memcpy(p->prev_weight_sig, vector + FRAME_LEN, sizeof(int16_t) * PITCH_MAX); - - for (i = 0, j = 0; j < SUBFRAMES; i += SUBFRAME_LEN, j++) - harmonic_filter(hf + j, vector + PITCH_MAX + i, in + i); - - ff_g723_1_inverse_quant(cur_lsp, p->prev_lsp, p->lsp_index, 0); - ff_g723_1_lsp_interpolate(qnt_lpc, cur_lsp, p->prev_lsp); - - memcpy(p->prev_lsp, cur_lsp, sizeof(int16_t) * LPC_ORDER); - - offset = 0; - for (i = 0; i < SUBFRAMES; i++) { - int16_t impulse_resp[SUBFRAME_LEN]; - int16_t residual[SUBFRAME_LEN + PITCH_ORDER - 1]; - int16_t flt_in[SUBFRAME_LEN]; - int16_t zero[LPC_ORDER], fir[LPC_ORDER], iir[LPC_ORDER]; - - /** - * Compute the combined impulse response of the synthesis filter, - * formant perceptual weighting filter and harmonic noise shaping filter - */ - memset(zero, 0, sizeof(int16_t) * LPC_ORDER); - memset(vector, 0, sizeof(int16_t) * PITCH_MAX); - memset(flt_in, 0, sizeof(int16_t) * SUBFRAME_LEN); - - flt_in[0] = 1 << 13; /* Unit impulse */ - synth_percept_filter(qnt_lpc + offset, weighted_lpc + (offset << 1), - zero, zero, flt_in, vector + PITCH_MAX, 1); - harmonic_filter(hf + i, vector + PITCH_MAX, impulse_resp); - - /* Compute the combined zero input response */ - flt_in[0] = 0; - memcpy(fir, p->perf_fir_mem, sizeof(int16_t) * LPC_ORDER); - memcpy(iir, p->perf_iir_mem, sizeof(int16_t) * LPC_ORDER); - - synth_percept_filter(qnt_lpc + offset, weighted_lpc + (offset << 1), - fir, iir, flt_in, vector + PITCH_MAX, 0); - memcpy(vector, p->harmonic_mem, sizeof(int16_t) * PITCH_MAX); - harmonic_noise_sub(hf + i, vector + PITCH_MAX, in); - - acb_search(p, residual, impulse_resp, in, i); - ff_g723_1_gen_acb_excitation(residual, p->prev_excitation, - p->pitch_lag[i >> 1], &p->subframe[i], - p->cur_rate); - sub_acb_contrib(residual, impulse_resp, in); - - fcb_search(p, impulse_resp, in, i); - - /* Reconstruct the excitation */ - ff_g723_1_gen_acb_excitation(impulse_resp, p->prev_excitation, - p->pitch_lag[i >> 1], &p->subframe[i], - RATE_6300); - - memmove(p->prev_excitation, p->prev_excitation + SUBFRAME_LEN, - sizeof(int16_t) * (PITCH_MAX - SUBFRAME_LEN)); - for (j = 0; j < SUBFRAME_LEN; j++) - in[j] = av_clip_int16(in[j] * 2 + impulse_resp[j]); - memcpy(p->prev_excitation + PITCH_MAX - SUBFRAME_LEN, in, - sizeof(int16_t) * SUBFRAME_LEN); - - /* Update filter memories */ - synth_percept_filter(qnt_lpc + offset, weighted_lpc + (offset << 1), - p->perf_fir_mem, p->perf_iir_mem, - in, vector + PITCH_MAX, 0); - memmove(p->harmonic_mem, p->harmonic_mem + SUBFRAME_LEN, - sizeof(int16_t) * (PITCH_MAX - SUBFRAME_LEN)); - memcpy(p->harmonic_mem + PITCH_MAX - SUBFRAME_LEN, vector + PITCH_MAX, - sizeof(int16_t) * SUBFRAME_LEN); - - in += SUBFRAME_LEN; - offset += LPC_ORDER; - } - - av_free(start); - - ret = ff_get_encode_buffer(avctx, avpkt, frame_size[info_bits], 0); - if (ret < 0) - return ret; - - *got_packet_ptr = 1; - pack_bitstream(p, avpkt, info_bits); - return 0; -} - -static const FFCodecDefault defaults[] = { - { "b", "6300" }, - { NULL }, -}; - -const FFCodec ff_g723_1_encoder = { - .p.name = "g723_1", - CODEC_LONG_NAME("G.723.1"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_G723_1, - .p.capabilities = AV_CODEC_CAP_DR1 | 
AV_CODEC_CAP_ENCODER_REORDERED_OPAQUE, - .priv_data_size = sizeof(G723_1_Context), - .init = g723_1_encode_init, - FF_CODEC_ENCODE_CB(g723_1_encode_frame), - .defaults = defaults, - .p.sample_fmts = (const enum AVSampleFormat[]) { - AV_SAMPLE_FMT_S16, AV_SAMPLE_FMT_NONE - }, - .p.ch_layouts = (const AVChannelLayout[]){ - AV_CHANNEL_LAYOUT_MONO, { 0 } - }, -}; diff --git a/spaces/congsaPfin/Manga-OCR/app.py b/spaces/congsaPfin/Manga-OCR/app.py deleted file mode 100644 index de00755086ecc26f8d74b91f9297ffebd14be115..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/app.py +++ /dev/null @@ -1,44 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import functools -import pathlib -import urllib.request - -import gradio as gr -import PIL.Image -from manga_ocr import MangaOcr - -TITLE = 'Manga OCR' -DESCRIPTION = 'This is an unofficial demo for https://github.com/kha-white/manga-ocr.' - - -def download_sample_images() -> list[pathlib.Path]: - image_dir = pathlib.Path('images') - if not image_dir.exists(): - image_dir.mkdir() - for index in range(12): - url = f'https://raw.githubusercontent.com/kha-white/manga-ocr/master/assets/examples/{index:02d}.jpg' - urllib.request.urlretrieve(url, image_dir / f'{index:02d}.jpg') - return sorted(image_dir.rglob('*.jpg')) - - -def run(image: PIL.Image.Image, mocr: MangaOcr) -> str: - return mocr(image) - - -mocr = MangaOcr() -func = functools.partial(run, mocr=mocr) - -image_paths = download_sample_images() -examples = [[path.as_posix()] for path in image_paths] - -gr.Interface( - fn=func, - inputs=gr.Image(label='Input', type='pil'), - outputs=gr.Text(label='Output'), - examples=examples, - title=TITLE, - description=DESCRIPTION, -).launch(show_api=False) diff --git a/spaces/congsaPfin/Manga-OCR/logs/Brawlhalla PC Gratis Download and Play the Free-to-Play Platform Fighter with Historys Greatest Warriors.md b/spaces/congsaPfin/Manga-OCR/logs/Brawlhalla PC Gratis Download and Play the Free-to-Play Platform Fighter with Historys Greatest Warriors.md deleted file mode 100644 index 4154ab4a0b055fdc0b6c0fcc1487e4134cfdd19c..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Brawlhalla PC Gratis Download and Play the Free-to-Play Platform Fighter with Historys Greatest Warriors.md +++ /dev/null @@ -1,108 +0,0 @@ - -

            Download Brawlhalla PC Gratis: A Free-to-Play Platform Fighting Game

            -

            If you are looking for a fun and exciting game that you can play with your friends or online, you should check out Brawlhalla. Brawlhalla is a free-to-play platform fighting game that supports up to 8 players online or local. You can choose from over 50 legends, each with their own unique weapons, abilities, and playstyles. You can also customize your character with skins, colors, taunts, and more. Brawlhalla is available on multiple platforms, including PC, PS5, PS4, Xbox Series X|S, Xbox One, Nintendo Switch, iOS, and Android. And the best part is, you can download Brawlhalla PC gratis and enjoy the game without spending a dime. In this article, we will tell you what Brawlhalla is, how to download it for free on your PC, why you should play it, and some tips and tricks to help you improve your skills.

            -

            What is Brawlhalla?

            -

            A brief introduction to the game and its features

            -

            Brawlhalla is a 2D platform fighting game that was developed by Blue Mammoth Games and published by Ubisoft. It was released in 2017 and has since become one of the most popular games in its genre. Brawlhalla is inspired by games like Super Smash Bros., but it has its own unique features and mechanics. The game takes place in an eternal battle arena where the greatest warriors in history brawl to prove who's the best. You can play as legends from different cultures and eras, such as Vikings, pirates, ninjas, samurai, aliens, robots, and more. You can also play as characters from other franchises, such as Lara Croft from Tomb Raider, Finn and Jake from Adventure Time, Shovel Knight from Shovel Knight, Rayman from Rayman, and more.

            -

            download brawlhalla pc gratis


            Download Zip ►►►►► https://urlca.com/2uOfc0



            -

            Brawlhalla has various game modes that you can enjoy solo or with others. You can play casual free-for-alls, ranked matches, custom games with your friends, or join tournaments and events. You can also play single-player and co-op modes, such as training mode, brawl of the week, missions, and more. Brawlhalla also supports cross-play and cross-progression across all platforms, so you can play with anyone on any device and keep your progress wherever you go.

            -

            How to download Brawlhalla PC gratis?

            -

            The steps to download and install the game from different platforms

            -

            Downloading Brawlhalla PC gratis is very easy and simple. You just need to follow these steps:

            -
              -
            • If you want to download Brawlhalla from Steam, you need to have a Steam account and the Steam client installed on your PC. You can create a Steam account for free at https://store.steampowered.com/join/ and download the Steam client at https://store.steampowered.com/about/. Once you have Steam on your PC, open it and search for Brawlhalla in the store. Click on the "Play Game" button and follow the instructions to install the game.
            • -
            • If you want to download Brawlhalla from Epic Games Store, you need to have an Epic Games account and the Epic Games Launcher installed on your PC. You can create an Epic Games account for free at https://www.epicgames.com/id/register and download the Epic Games Launcher at https://www.epicgames.com/store/en-US/download. Once you have Epic Games Launcher on your PC, open it and search for Brawlhalla in the store. Click on the "Get" button and follow the instructions to install the game.
            • -
            • If you want to download Brawlhalla from Ubisoft Connect, you need to have a Ubisoft account and the Ubisoft Connect client installed on your PC. You can create a Ubisoft account for free at https://account.ubisoft.com/en-US/login and download the Ubisoft Connect client at https://ubisoftconnect.com/en-US/download/. Once you have Ubisoft Connect on your PC, open it and search for Brawlhalla in the store. Click on the "Play Now" button and follow the instructions to install the game.
            • -
            -

            After you have installed Brawlhalla on your PC, you can launch it from the platform of your choice and start playing. You can also link your accounts from different platforms to access your progress and items across all devices.

            -


            -

            Why should you play Brawlhalla PC gratis?

            -

            The benefits and advantages of playing the game for free

            -

            Brawlhalla is a game that you can play for free without any limitations or restrictions. You don't need to pay anything to enjoy the full game experience. Here are some of the benefits and advantages of playing Brawlhalla PC gratis:

            -
              -
            • You can access all the game modes, features, and updates without any cost. You can play online or offline, solo or with others, casually or competitively, and join tournaments and events.
            • -
• You can unlock all the legends in the game by playing and earning gold, which is the in-game currency, and you can also spend gold on colors and other unlocks. Premium cosmetics are sold for mammoth coins as well, but you never need to spend real money to access any of the actual gameplay.
            • -
            • You can try out different legends and find your favorite one. Every week, there is a rotation of 8 free legends that you can play with. You can also test any legend in the training mode before buying them with gold.
            • -
            • You can have fun and challenge yourself with the game's diverse and dynamic gameplay. You can learn new skills, combos, techniques, and strategies with each legend and weapon. You can also adapt to different stages, items, and opponents.
            • -
            • You can join a friendly and active community of players from around the world. You can chat, team up, compete, and make friends with other players. You can also watch streams, videos, guides, and tips from other players and content creators.
            • -
            -

            Tips and tricks for playing Brawlhalla PC gratis

            -

            Some useful advice and strategies for beginners and advanced players

            -

            If you want to improve your skills and performance in Brawlhalla PC gratis, here are some tips and tricks that you should know:

            -
              -
            • Practice makes perfect. The best way to get better at the game is to practice regularly and learn from your mistakes. You can use the training mode to practice your moves, combos, timings, and dodges. You can also watch replays of your matches to analyze your strengths and weaknesses.
            • -
            • Know your legend. Each legend has their own stats, weapons, signatures, and playstyle. You should know how to use your legend's abilities effectively and efficiently. You should also know how to counter your opponent's legend and exploit their weaknesses.
            • -
            • Know your weapon. Each weapon has its own range, speed, damage, recovery, and hitboxes. You should know how to use your weapon's attacks in different situations and angles. You should also know how to switch between your weapons depending on the stage and the opponent.
            • -
            • Know your stage. Each stage has its own size, shape, platforms, edges, walls, and hazards. You should know how to use the stage's features to your advantage and avoid its disadvantages. You should also know how to control the stage's space and pressure your opponent.
            • -
            • Know your items. Each item has its own function, effect, duration, and cooldown. You should know how to use the items wisely and strategically. You should also know how to avoid or counter the items that your opponent uses.
            • -
            -

            Conclusion

            -

Brawlhalla is a free-to-play platform fighting game that you can download on your PC from various platforms. It is a fun and exciting game that you can play with anyone on any device. It has a lot of features, modes, legends, items, and updates that you can enjoy without spending any money. It also has a lot of tips and tricks that you can learn to improve your skills and performance. If you are looking for a game that will keep you entertained for hours, you should download Brawlhalla PC gratis and join the brawl.
            -

            FAQs

            -

            Here are some of the frequently asked questions about Brawlhalla PC gratis:

            -
              -
            1. How do I play Brawlhalla PC gratis with my friends?
            2. -

              You can play Brawlhalla PC gratis with your friends by creating or joining a custom game room. You can invite your friends to your room by sending them the room number or the invite link. You can also join your friends' rooms by entering their room number or clicking on their invite link. You can then choose the game mode, settings, and legends that you want to play with.

              -
            3. How do I link my Brawlhalla accounts from different platforms?
            4. -

              You can link your Brawlhalla accounts from different platforms by following these steps:

              -
                -
              • Go to https://www.brawlhalla.com/account/ and log in with your Ubisoft account.
              • -
              • Click on the "Link Accounts" button and choose the platform that you want to link.
              • -
              • Follow the instructions to authorize and confirm the linking process.
              • -
              • Repeat the steps for any other platform that you want to link.
              • -
              -

              Once you have linked your accounts, you can access your progress and items across all platforms.

              -
            5. How do I get more gold in Brawlhalla PC gratis?
            6. -

              You can get more gold in Brawlhalla PC gratis by playing and completing matches, missions, and events. You can also get more gold by logging in daily, leveling up your account and legends, and watching streams and videos from Brawlhalla partners.

              -
            7. How do I get more skins, colors, taunts, and other items in Brawlhalla PC gratis?
            8. -

              You can get more skins, colors, taunts, and other items in Brawlhalla PC gratis by buying them with gold or mammoth coins, which are the premium currency of the game. You can also get more items by participating in seasonal events, such as Halloween, Christmas, Valentine's Day, etc. You can also get more items by redeeming codes that are given away by Brawlhalla developers and content creators.

              -
            9. How do I contact Brawlhalla support if I have any issues or questions?
            10. -

              You can contact Brawlhalla support by visiting https://www.brawlhalla.com/support/ and filling out the form with your details and inquiry. You can also contact Brawlhalla support by sending an email to support@brawlhalla.com. You can also visit the official Brawlhalla website, forums, social media pages, and Discord server for more information and help.

              -

            -
            -
            \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Crusader Kings 2 A Game of Thrones Mod - Whats New in the Latest Version.md b/spaces/congsaPfin/Manga-OCR/logs/Crusader Kings 2 A Game of Thrones Mod - Whats New in the Latest Version.md deleted file mode 100644 index 64639b33a22baad8fd2fc74133d8cfbfb161a88d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Crusader Kings 2 A Game of Thrones Mod - Whats New in the Latest Version.md +++ /dev/null @@ -1,186 +0,0 @@ - -

            Crusader Kings 2: A Game of Thrones Mod Review

            -

            If you are a fan of both Crusader Kings 2 and A Song of Ice and Fire, then you are in for a treat. Crusader Kings 2: A Game of Thrones mod is a full-conversion mod that transforms the medieval world of Crusader Kings 2 into the fantasy realm of Westeros and beyond. You can play as any character from the books or show, from kings and queens to lords and ladies, from knights and maesters to wildlings and white walkers. You can relive the events from the series or create your own alternate history. You can fight for the Iron Throne or forge your own kingdom. You can make alliances or enemies with other players or NPCs. You can scheme, plot, duel, marry, assassinate, conquer, or die. The choice is yours.

            -

But how do you get started with this amazing mod? What features make it stand out from the vanilla game and other mods? What tips and tricks will help you succeed in this brutal and unforgiving world? And is it worth playing and recommending? In this article, I will answer all these questions and more. Let's begin.

            -

            crusader kings 2 game of thrones mod download


            Download File ===> https://urlca.com/2uOauI



            -

            How to download and install Crusader Kings 2: A Game of Thrones mod

            -

            The first step to enjoy this mod is to download and install it. This is not very difficult, but it requires some attention to detail. Here are the steps you need to follow:

            -
              -
            1. Make sure you have Crusader Kings 2 installed on your computer. You can buy it from Steam or other platforms. The latest version of the game is 3.3.5.
            2. -
            3. Download the latest version of Crusader Kings 2: A Game of Thrones mod from its official website, ModDB page, or Steam workshop page. The latest version of the mod is 2.2.
            4. -
            5. Extract the downloaded file using a program like WinRAR or 7-Zip. You should get two folders: A Game of Thrones and A Game of Thrones Music.
            6. -
7. Copy these two folders into your Crusader Kings 2 mod folder. The default location is C:\Users\YourName\Documents\Paradox Interactive\Crusader Kings II\mod. (If you prefer to script this step, see the sketch after these instructions.)
            8. -
            9. Launch Crusader Kings 2 from Steam or other platforms. In the launcher window, check the boxes next to A Game of Thrones and A Game of Thrones Music under Mods.
            10. -
            11. Click Play and enjoy!
            12. -
            -

            Note: If you have any other mods installed or enabled, they may conflict with Crusader Kings 2: A Game of Thrones mod. It is recommended to disable them before playing this mod.
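If you would rather script the folder copy from step 7 than do it by hand, a small Python sketch like the one below can handle it. This is only a convenience sketch, not part of the mod: the extraction path is a made-up placeholder, and the mod directory is simply the default location quoted above, so adjust both for your own setup.

```python
# Minimal sketch (paths are assumptions): copy the two extracted AGOT folders into
# the default Crusader Kings 2 mod directory. Edit the paths to match your machine.
import shutil
from pathlib import Path

extracted_dir = Path(r"C:\Downloads\AGOT")  # hypothetical folder where you extracted the archive
mod_dir = Path.home() / "Documents" / "Paradox Interactive" / "Crusader Kings II" / "mod"

mod_dir.mkdir(parents=True, exist_ok=True)
for folder in ("A Game of Thrones", "A Game of Thrones Music"):
    src = extracted_dir / folder
    dst = mod_dir / folder
    # dirs_exist_ok=True (Python 3.8+) lets you re-run this safely when updating the mod
    shutil.copytree(src, dst, dirs_exist_ok=True)
    print(f"Copied {src} -> {dst}")
```

After running it, the two entries should appear under Mods in the launcher, as described in step 9.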

            -

            Main features of Crusader Kings 2: A Game of Thrones mod

            -

            Crusader Kings 2: A Game of Thrones mod is not just a simple reskin of the vanilla game. It adds many new features and mechanics that enhance the gameplay and immersion of the mod. Here are some of the main features of the mod:

            -

            Different scenarios and bookmarks to choose from

            -

            One of the most appealing aspects of this mod is that it allows you to play in different historical periods and scenarios from the books and show. You can choose from various bookmarks that correspond to major events and turning points in the story, such as:

            -
              -
            • The Bleeding Years: The aftermath of Aegon's Conquest, when the Seven Kingdoms were still in turmoil and rebellion.
            • -
            • The Conquest: The beginning of Aegon's Conquest, when he landed on Westeros with his dragons and armies.
            • -
            • The Century of Blood: The chaotic period after the Doom of Valyria, when the Free Cities fought each other for supremacy.
            • -
            • The Dance of the Dragons: The civil war between the Targaryen factions over the succession to the Iron Throne.
            • -
            • The Blackfyre Rebellion: The first of several rebellions led by the bastard sons of Aegon IV against the legitimate Targaryen line.
            • -
            • The War of the Ninepenny Kings: The last attempt by a Blackfyre pretender to claim the Iron Throne, supported by several foreign powers.
            • -
            • The Crowned Stag: The end of Robert's Rebellion, when he overthrew the Mad King and became the new king.
            • -
            • The Clash of Kings: The start of the War of the Five Kings, when several claimants rose up against Joffrey Baratheon.
            • -
            • A Feast for Crows: The aftermath of the War of the Five Kings, when new threats and challenges emerged in Westeros and beyond.
            • -
            • A Dream of Spring: The latest bookmark, based on the events from the show's final season and some speculation from the books.
            • -
            -

            Each bookmark has its own set of characters, factions, events, and challenges. You can play as any character you want, from major players like Daenerys Targaryen or Jon Snow, to minor lords like Edmure Tully or Asha Greyjoy, to custom characters you create yourself. You can also create your own scenarios using the Ruler Designer or Custom Game options.

            -

            New mechanics and events to fit the lore and flavor of A Song of Ice and Fire

            -

            Another feature that makes this mod stand out is that it adds many new mechanics and events that are specific to the lore and flavor of A Song of Ice and Fire. These include:

            -


            -
              -
            • A new map that covers not only Westeros, but also Essos, Sothoryos, and parts of Ulthos. The map has more than 600 provinces and 80 regions, each with its own culture, religion, terrain, climate, and resources.
            • -
            • A new system of traits that reflect the personality, skills, appearance, and background of each character. Some traits are inherited, some are acquired, some are hidden, and some are unique. Traits can affect your stats, relationships, events, decisions, and actions.
            • -
            • A new system of cultures and religions that influence your gameplay and roleplay. There are more than 100 cultures and 50 religions in the mod, each with its own customs, laws, traditions, holy sites, festivals, holy orders, heresies, and special mechanics. For example, followers of the Old Gods can use weirwood trees to communicate with their ancestors or spy on their enemies; followers of R'hllor can perform blood magic or sacrifice prisoners to gain favors from their god; followers of Drowned God can drown their enemies or themselves to prove their faith; etc.
            • -
            • A new system of laws and succession that determine how your realm is governed and who inherits your titles. There are different types of laws for different regions, such as feudal laws for Westeros, nomadic laws for Dothraki, merchant republic laws for Braavos, etc. There are also different types of succession for different titles, such as primogeniture for most kingdoms , elective for the Iron Throne, seniority for the Night's Watch, etc. You can also change your laws and succession through decisions or events, such as adopting feudalism, abolishing slavery, granting women equal rights, etc.
            • -
            • A new system of events and decisions that add more depth and flavor to your gameplay and roleplay. There are hundreds of events and decisions in the mod, some of them based on the books and show, some of them original or inspired by other sources. These events and decisions can affect your character, your realm, your relations, your wars, your plots, and more. For example, you can hold a tourney, a feast, a trial by combat, a coronation, a wedding, a funeral, etc.; you can join a faction, a society, a secret cult, a rebellion, etc.; you can claim a dragon egg, hatch a dragon, tame a dragon, ride a dragon, etc.; you can explore the ruins of Valyria, the lands beyond the Wall, the mysterious islands of the Jade Sea, etc.; you can encounter characters and creatures from the lore, such as direwolves, giants, wargs, faceless men, white walkers, etc.
            • -
            -

            A new dueling engine, personal interaction system, and pets

            -

            Another feature that makes this mod unique is that it adds a new dueling engine, a personal interaction system, and pets. These features allow you to have more control and fun with your character and others. Here are some details about these features:

            -
              -
            • A new dueling engine that allows you to challenge or be challenged by other characters to a duel. You can duel for honor, for glory, for revenge, for love, or for fun. You can choose your weapon, your armor, your tactics, and your attitude. You can also choose to fight honorably or dishonorably. The outcome of the duel depends on your skills, traits, equipment, luck, and choices. You can win or lose the duel; you can wound or kill your opponent; you can gain or lose prestige and piety; you can earn or lose respect and reputation; you can make or break alliances and friendships; you can trigger or avoid wars and plots.
            • -
            • A personal interaction system that allows you to interact with other characters in more ways than before. You can send them gifts, insults, threats, requests, invitations, etc. You can also use special interactions that are specific to your culture or religion. For example, you can invite someone to a feast, a hunt, a tourney, a wedding, etc.; you can seduce someone, flirt with someone, propose to someone, etc.; you can ask someone for a favor, a loan, a claim, etc.; you can offer someone your protection, your friendship, your loyalty, etc.; you can demand someone's submission, tribute, hostage, etc.; you can blackmail someone, expose someone, slander someone, etc. These interactions can have various effects on your and their stats, relations, events, and actions.
            • -
            • A pet system that allows you to have a pet as your companion and friend. You can acquire a pet through events or decisions, such as finding a stray animal, buying an exotic creature, inheriting a family pet, etc. You can choose from various types of pets, such as dogs, cats, birds, horses, dragons, direwolves, lions, bears, etc. Each pet has its own name, traits, appearance, and personality. You can interact with your pet in various ways, such as playing with it, feeding it, training it, grooming it, etc. Your pet can also interact with other characters and pets in various ways, such as liking them, hating them, attacking them, befriending them, etc. Your pet can affect your stats, relations, events, and actions. Your pet can also have its own events and decisions, such as getting sick, getting lost, getting pregnant, etc.
            • -
            -

            Tips and tricks for playing Crusader Kings 2: A Game of Thrones mod

            -

            Crusader Kings 2: A Game of Thrones mod is not an easy mod to play. It is full of challenges, dangers, and surprises. You need to be careful, cunning, and adaptable to survive and thrive in this world. Here are some tips and tricks to help you along the way:

            -

            How to survive as a ruler in a hostile and unpredictable world

            -

            As a ruler, you have many responsibilities and duties, but also many enemies and threats. You need to balance your power, your prestige, your piety, and your popularity. You need to keep your realm stable, your vassals loyal, your subjects happy, and your rivals at bay. You need to be aware of the political, military, economic, and social situation of your realm and the world. You need to be prepared for any eventuality, such as wars, rebellions, plots, intrigues, assassinations, etc. Here are some tips to help you survive as a ruler:

            -
              -
            • Choose your character wisely. Depending on the scenario and bookmark you choose, you may have different options for your starting character. Some characters are more powerful, more wealthy, more influential, or more popular than others. Some characters have more allies, more enemies, more claims, or more challenges than others. Some characters have more interesting or unique stories or events than others. Choose a character that suits your playstyle, your goals, and your preferences.
            • -
            • Choose your focus wisely. As a ruler, you can choose a focus that gives you bonuses to certain stats or actions. For example, you can choose a martial focus that gives you bonuses to combat or warfare; you can choose a stewardship focus that gives you bonuses to income or management; you can choose a diplomacy focus that gives you bonuses to relations or alliances; etc. Choose a focus that matches your needs, your strengths, or your weaknesses.
            • -
            • Choose your ambitions and plots wisely. As a ruler , you can choose an ambition that gives you a goal to pursue or a plot that gives you a scheme to execute. For example, you can choose an ambition to become king, to improve your stats, to have a child, etc.; you can choose a plot to kill someone, to fabricate a claim, to kidnap someone, etc. Choose an ambition or a plot that is realistic, achievable, and beneficial for you.
            • -
            • Choose your allies and enemies wisely. As a ruler, you can make or break alliances and rivalries with other characters. For example, you can marry someone, swear fealty to someone, form a pact with someone, etc.; you can declare war on someone, rebel against someone, plot against someone, etc. Choose your allies and enemies based on your interests, your values, and your situation. Be careful not to make too many enemies or too few allies.
            • -
            • Choose your actions and reactions wisely. As a ruler, you can take various actions or react to various events that affect your gameplay and roleplay. For example, you can grant or revoke titles, raise or lower taxes, change laws or succession, etc.; you can accept or decline invitations, requests, offers, etc.; you can respond to events with different options that have different consequences. Choose your actions and reactions based on your goals, your personality, and your circumstances. Be careful not to act rashly or recklessly.
            • -
            -

            How to use your council, vassals, allies, and enemies to your advantage

            -

            As a ruler, you are not alone in this world. You have various characters that can help you or hinder you in your endeavors. You have a council of advisors that can assist you in various tasks; you have vassals that owe you allegiance and service; you have allies that can support you in times of need; and you have enemies that can oppose you in times of trouble. You need to use these characters to your advantage and avoid their disadvantages. Here are some tips to help you do that:

            -
              -
            • Use your council wisely. Your council consists of six positions: Hand of the King (or equivalent), Master of Laws, Master of Coin, Master of Arms, Master of Whispers, and Maester (or equivalent). Each position has a specific function and skill associated with it. For example, the Hand of the King helps you with general administration and diplomacy; the Master of Laws helps you with justice and intrigue; the Master of Coin helps you with economy and stewardship; etc. You can assign any character in your realm to any position on your council, as long as they meet the requirements. You can also fire or replace them at any time. Choose your councilors based on their skills, their traits, their loyalty, and their opinion of you. You can also use your councilors to perform various jobs, such as improving relations, fabricating claims, collecting taxes, training troops, spying on enemies, etc. You can also consult your council on important decisions, such as declaring war, changing laws, granting titles, etc. Your councilors can also influence your events and actions, for better or worse.
            • -
            • Use your vassals wisely. Your vassals are the lords and ladies that hold lands and titles under you. They provide you with troops, taxes, and support. They also have their own interests, ambitions, and opinions. You need to keep your vassals happy and loyal, or else they may rebel against you or join your enemies. You can improve your relations with your vassals by granting them titles, honors, favors, gifts, etc. You can also reduce their power and influence by revoking their titles, transferring their vassals, imprisoning them, executing them, etc. You can also use your vassals to expand your realm, by pressing their claims, marrying them off, or supporting them in wars.
            • -
            • Use your allies wisely. Your allies are the characters that have a formal or informal alliance with you. They can be your relatives, your friends, your vassals, your liege, or your fellow faction members. They can help you in wars, plots, events, and actions. They can also ask you for help in return. You can form alliances with other characters by marrying them or their relatives, by joining their factions or societies, by swearing fealty to them or offering them protection, etc. You can also break alliances with other characters by divorcing them or their relatives, by leaving their factions or societies, by rebelling against them or refusing their protection, etc. You can also use your allies to weaken your enemies, by declaring war on them, by plotting against them, by supporting their rivals, etc.
            • -
            • Use your enemies wisely. Your enemies are the characters that have a formal or informal rivalry with you. They can be your rivals, your traitors, your rebels, your competitors, or your adversaries. They can harm you in wars, plots, events, and actions. They can also benefit you in some ways. You can use your enemies to improve your prestige, your piety, your reputation, and your skills. You can also use your enemies to unite your realm, your vassals, your allies, and your friends. You can also use your enemies to justify your actions, such as declaring war, changing laws, revoking titles, etc.
            • -
            -

            Review of Crusader Kings 2: A Game of Thrones mod

            -

            Now that we have covered some of the features and tips of Crusader Kings 2: A Game of Thrones mod, let's review it and see how it compares to the vanilla game and other mods. Here are some of the pros and cons of the mod:

            -

            Pros

            -
              -
            • The mod is faithful and immersive to the source material. It captures the essence and atmosphere of A Song of Ice and Fire very well. It has many references and details from the books and show that fans will appreciate and enjoy.
            • -
            • The mod is rich and diverse in content and gameplay. It has many new features and mechanics that add more depth and flavor to the game. It has many different scenarios and bookmarks that offer different challenges and opportunities. It has many events and decisions that make the game more dynamic and unpredictable.
            • -
            • The mod is fun and engaging to play. It has a lot of replay value and variety. It allows you to create your own stories and histories with different characters and factions. It allows you to roleplay as your favorite or custom character with different traits and actions. It allows you to interact with other characters and pets in various ways.
            • -
            -

            Cons

            -
              -
            • The mod is complex and difficult to play. It has a steep learning curve and requires a lot of attention and strategy. It has many rules and mechanics that can be confusing or overwhelming for new or casual players. It has many events and decisions that can be frustrating or unfair for unlucky or unprepared players.
            • -
            • The mod is buggy and unstable to play. It has many errors and glitches that can affect the performance and quality of the game. It may crash, freeze, lag, or corrupt your save files. It may also conflict with other mods or updates of the game. It requires constant maintenance and updates from the developers and the community.
            • -
            • The mod is incomplete and still under active development. Many features and pieces of content are missing or unfinished. It may not reflect the latest or final version of the books or show, and it may contain some inconsistencies or inaccuracies with the lore or canon.
            • -
            -

            Comparison with the vanilla game and other mods

            -

            Crusader Kings 2: A Game of Thrones mod is not the only mod for Crusader Kings 2. There are many other mods that offer different settings, themes, features, and gameplay. Some of them are historical, some of them are fantasy, some of them are original, and some of them are based on other media. Here are some of the most popular and notable mods for Crusader Kings 2:

            -
| Mod | Description |
| --- | --- |
| CK2+ | A mod that enhances and expands the vanilla game with more content, features, mechanics, and balance. It aims to make the game more historical, realistic, challenging, and fun. |
| HIP (Historical Immersion Project) | A mod that overhauls and improves the vanilla game with more historical accuracy, immersion, diversity, and detail. It includes several submods that focus on different aspects of the game, such as map, culture, religion, law, etc. |
| Elder Kings | A mod that converts the game into the fantasy world of The Elder Scrolls series. It allows you to play as various races, factions, and characters from the lore, and experience the events and scenarios that shaped the world of Tamriel and beyond. |
| After the End Fan Fork | A mod that sets the game in a post-apocalyptic North America after an unknown cataclysm. It allows you to play as various cultures, religions, and societies that emerged from the ashes of civilization. |
| Gods! | A mod that adds a new layer of gameplay and roleplay by allowing you to play as a god or a demigod in a mythical world. You can create your own pantheon, influence mortals, perform miracles, fight other gods, etc. |
-

How does Crusader Kings 2: A Game of Thrones mod compare to these mods? Well, it depends on your preferences and expectations. If you are looking for a more historical or realistic mod, then you may prefer CK2+ or HIP. If you are looking for a more fantasy or original mod, then you may prefer Elder Kings or After the End Fan Fork. If you are looking for a more divine or supernatural mod, then you may prefer Gods!. If you are looking for a mod that is faithful and immersive to A Song of Ice and Fire, then you may prefer Crusader Kings 2: A Game of Thrones mod.

-

Personal opinion and rating of the mod

-

Finally, let me share my personal opinion and rating of Crusader Kings 2: A Game of Thrones mod. I have played this mod for many hours and enjoyed it very much. I think it is one of the best mods for Crusader Kings 2 and one of the best adaptations of A Song of Ice and Fire. I love how it captures the essence and atmosphere of the books and show. I love how it adds many new features and mechanics that enhance the gameplay and immersion of the mod. I love how it allows me to create my own stories and histories with different characters and factions. I love how it is fun and engaging to play. I think it is a mod that every fan of Crusader Kings 2 and A Song of Ice and Fire should try. However, I also recognize that the mod is not perfect. It has some flaws and drawbacks that can affect the enjoyment and satisfaction of some players. I think the mod is complex and difficult to play. It requires a lot of attention and strategy. It has many rules and mechanics that can be confusing or overwhelming for new or casual players. It has many events and decisions that can be frustrating or unfair for unlucky or unprepared players. I think the mod is buggy and unstable to play. It has many errors and glitches that can affect the performance and quality of the game. It may crash, freeze, lag, or corrupt your save files. It may also conflict with other mods or updates of the game. It requires constant maintenance and updates from the developers and the community. I think the mod is incomplete and unfinished to play. It is still in development and has many features and content that are missing or incomplete. It may not reflect the latest or final version of the books or show. It may also have some inconsistencies or inaccuracies with the lore or canon. Therefore, based on my personal opinion and experience, I would rate Crusader Kings 2: A Game of Thrones mod 8 out of 10. I think it is a great mod that deserves praise and recognition, but also has room for improvement and development.

Conclusion

-

In conclusion, Crusader Kings 2: A Game of Thrones mod is a full-conversion mod that transforms the medieval world of Crusader Kings 2 into the fantasy realm of Westeros and beyond. It allows you to play as various characters and factions from the books and show, and experience the events and scenarios that shaped the world of A Song of Ice and Fire. It adds many new features and mechanics that enhance the gameplay and immersion of the mod. It also has some flaws and drawbacks that can affect the enjoyment and satisfaction of some players. It is a mod that every fan of Crusader Kings 2 and A Song of Ice and Fire should try, but also be aware of its limitations and challenges.

-

If you are interested in Crusader Kings 2: A Game of Thrones mod, you can download it from its official website, ModDB page, or Steam workshop page. You can also visit its forum or subreddit for more information, discussion, feedback, support, etc.

-

Thank you for reading this article. I hope you found it helpful and enjoyable. Have fun playing Crusader Kings 2: A Game of Thrones mod!

-

FAQs

-

Here are some frequently asked questions (FAQs) about Crusader Kings 2: A Game of Thrones mod:

-
    -
  1. Q: What are the system requirements for Crusader Kings 2: A Game of Thrones mod?
    -A: The system requirements for Crusader Kings 2: A Game of Thrones mod are the same as for Crusader Kings 2, plus some additional disk space for the mod files. Here are the minimum system requirements for Crusader Kings 2:
    -- OS: Windows 7
    -- Processor: Intel® Pentium® IV 2.4 GHz or AMD 3500+
    -- Memory: 4 GB RAM
    -- Hard Disk Space: 2 GB
    -- Video Card: NVIDIA® GeForce 8800 or ATI Radeon® X1900, 512mb graphics memory required.
    -- DirectX®: 9.0c
    -- Sound: Direct X-compatible sound card
    -- Additional: 3-button mouse and keyboard
  2. -
  3. Q: How do I update Crusader Kings 2: A Game of Thrones mod?
    -A: To update Crusader Kings 2: A Game of Thrones mod, you need to download the latest version of the mod from its official website, ModDB page, or Steam workshop page. Then, you need to extract the downloaded file using a program like WinRAR or 7-Zip. You should get two folders: A Game of Thrones and A Game of Thrones Music. You need to copy these two folders into your Crusader Kings 2 mod folder, replacing the old ones. The default location is C:\Users\YourName\Documents\Paradox Interactive\Crusader Kings II\mod. Then, you need to launch Crusader Kings 2 from Steam or other platforms. In the launcher window, check the boxes next to A Game of Thrones and A Game of Thrones Music under Mods. Click Play and enjoy the updated mod. A minimal script illustrating this folder-copy step is sketched after this FAQ list.
  4. -
  5. Q: How do I uninstall Crusader Kings 2: A Game of Thrones mod?
    -A: To uninstall Crusader Kings 2: A Game of Thrones mod, you need to delete the two folders: A Game of Thrones and A Game of Thrones Music from your Crusader Kings 2 mod folder. The default location is C:\Users\YourName\Documents\Paradox Interactive\Crusader Kings II\mod. Then, you need to launch Crusader Kings 2 from Steam or other platforms. In the launcher window, uncheck the boxes next to A Game of Thrones and A Game of Thrones Music under Mods. Click Play and enjoy the vanilla game or other mods.
  6. -
  7. Q: How do I play multiplayer with Crusader Kings 2: A Game of Thrones mod?
    -A: To play multiplayer with Crusader Kings 2: A Game of Thrones mod, you need to have the same version of the game and the mod as your friends. You also need to disable any other mods that may conflict with Crusader Kings 2: A Game of Thrones mod. Then, you need to launch Crusader Kings 2 from Steam or other platforms. In the launcher window, check the boxes next to A Game of Thrones and A Game of Thrones Music under Mods. Click Play and then click Multiplayer. You can either host a game or join a game hosted by your friends. You can also use third-party programs like Hamachi or Evolve to create a virtual LAN network and play with your friends.
  8. -
  9. Q: How do I report bugs or give feedback for Crusader Kings 2: A Game of Thrones mod?
    -A: To report bugs or give feedback for Crusader Kings 2: A Game of Thrones mod, you can visit its forum or subreddit. There, you can post your issues, suggestions, questions, comments, etc. You can also contact the developers and the community directly through their Discord server. Please be respectful and constructive when reporting bugs or giving feedback.
  10. -
  11. Q: How do I support Crusader Kings 2: A Game of Thrones mod?
    -A: To support Crusader Kings 2: A Game of Thrones mod, you can do several things. You can spread the word about the mod and recommend it to your friends and other players. You can rate and review the mod on its website, ModDB page, or Steam workshop page. You can donate money to the developers and the community through their Patreon page. You can also contribute to the development and improvement of the mod by providing bug reports, feedback, suggestions, ideas, etc.
  12. -
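
If you want to script the folder-copy step described in the update and uninstall answers above, here is a minimal Python sketch. It is only an illustration, not an official installer: the source path is a placeholder for wherever you extracted the archive, and the destination is the default mod folder quoted in the FAQ, so adjust both for your own machine.

```python
# Illustrative sketch of the manual copy step from the update FAQ above.
# Paths are placeholders: point `extracted` at wherever you unpacked the mod.
import shutil
from pathlib import Path

extracted = Path(r"C:\Downloads\AGOT_extracted")  # hypothetical extraction folder

# Default CK2 mod directory mentioned in the FAQ (C:\Users\YourName\Documents\...).
mod_dir = Path.home() / "Documents" / "Paradox Interactive" / "Crusader Kings II" / "mod"

for folder in ("A Game of Thrones", "A Game of Thrones Music"):
    source = extracted / folder
    target = mod_dir / folder
    if target.exists():
        shutil.rmtree(target)        # remove the old version before replacing it
    shutil.copytree(source, target)
    print(f"Copied {folder} -> {target}")
```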

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download MP3 Music from MP3 Juice Com in 2021 with High Quality.md b/spaces/congsaPfin/Manga-OCR/logs/Download MP3 Music from MP3 Juice Com in 2021 with High Quality.md deleted file mode 100644 index c5692496bbb2e9968136c549a205d72e6f3d986d..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download MP3 Music from MP3 Juice Com in 2021 with High Quality.md +++ /dev/null @@ -1,106 +0,0 @@ - -

MP3 Juice Com 2021: A Free and Fast Way to Download MP3 Music

-

Do you love listening to music online but hate paying for subscriptions or dealing with ads? Do you want to download your favorite songs in high quality and listen to them offline anytime and anywhere? If you answered yes to these questions, then you might want to check out MP3 Juice Com, a free and fast website that lets you download MP3 music from various sources. In this article, we will tell you everything you need to know about MP3 Juice Com, including what it is, how it works, what are its benefits, how to use it, and whether it is safe and legal. We will also give you some alternatives to MP3 Juice Com in case you want to try something different. So, let's get started!

-

mp3 juice com 2021 music download mp3


Download ○○○ https://urlca.com/2uOfSi



-

What is MP3 Juice Com?

-

MP3 Juice Com is a website that allows you to download MP3 music from YouTube, SoundCloud, Mixcloud, Audiomack, Jamendo, and other platforms. You can either search for the song by its name, artist, album, or genre, or paste the URL of the song from any of these sources. MP3 Juice Com will then convert the video or audio file into an MP3 file that you can download or play online. You can also choose the quality and format of the file according to your preference.

-

How does MP3 Juice Com work?

-

MP3 Juice Com works by using a technology called web scraping, which means that it extracts data from other websites and displays it on its own website. When you search for a song or paste a URL on MP3 Juice Com, it will scrape the relevant information from the source website and show you the results. You can then click on the download or play button to get the MP3 file. MP3 Juice Com does not host any of the files on its own server, so it does not violate any copyright laws.
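
To make the idea of web scraping more concrete, here is a tiny generic Python sketch. It is purely illustrative and is not MP3 Juice Com's actual code: the URL, the query parameter, and the CSS selector are made-up placeholders, and it relies on the third-party requests and beautifulsoup4 packages.

```python
# Generic web-scraping sketch (illustrative only; not MP3 Juice Com's real code).
# The URL, query parameter, and CSS selector below are hypothetical placeholders.
import requests
from bs4 import BeautifulSoup

def find_tracks(query: str):
    # Fetch a search-results page from a hypothetical source site.
    response = requests.get("https://example.com/search",
                            params={"q": query}, timeout=10)
    response.raise_for_status()

    # Parse the returned HTML and pull out the parts we care about.
    soup = BeautifulSoup(response.text, "html.parser")
    results = []
    for item in soup.select(".track"):            # assumed CSS class
        link = item.find("a")
        results.append({
            "title": item.get_text(strip=True),
            "url": link["href"] if link else None,
        })
    return results

if __name__ == "__main__":
    for track in find_tracks("my favorite song"):
        print(track["title"], "->", track["url"])
```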

-

What are the benefits of using MP3 Juice Com?

-

There are many benefits of using MP3 Juice Com to download MP3 music, such as:

• It is free to use and does not require any paid subscription.
• It is fast, letting you download or play songs directly from the website.
• It supports many sources, such as YouTube, SoundCloud, Mixcloud, Audiomack, and Jamendo.
• It lets you choose the quality and format of the file.
• It lets you save songs and listen to them offline anytime and anywhere.

How to use MP3 Juice Com to download MP3 music?

-

Using MP3 Juice Com to download MP3 music is very simple and straightforward. Here are the steps you need to follow:

-


-

Step 1: Visit the website

-

The first step is to visit the MP3 Juice Com website in your web browser.

Step 2: Search for the song or paste the URL

-

The next step is to search for the song you want to download or paste the URL of the song from any of the supported platforms. You can use the search bar at the top of the website to type in the name, artist, album, or genre of the song. You can also use the filters to narrow down your search results by source, duration, or popularity. Alternatively, you can copy and paste the URL of the song from YouTube, SoundCloud, Mixcloud, Audiomack, Jamendo, or any other platform that MP3 Juice Com supports. You can find the URL by clicking on the share button on the source website and copying the link.

-

Step 3: Choose the quality and format

-

Once you have found the song you want to download, you can choose the quality and format of the file. MP3 Juice Com offers different options for quality and format, such as 128 kbps, 192 kbps, 256 kbps, 320 kbps, MP4, M4A, WEBM, and more. You can click on the drop-down menu next to the download button to see the available options. You can also preview the song by clicking on the play button before downloading it.

-

Step 4: Download or play the song

-

The final step is to download or play the song. You can click on the download button to save the file to your device or click on the play button to stream it online. You can also share the song with your friends by clicking on the share button and choosing your preferred platform. You can enjoy your MP3 music offline anytime and anywhere!

-

Is MP3 Juice Com safe and legal?

-

One of the common questions that people have about MP3 Juice Com is whether it is safe and legal to use. The answer is not so simple, as it depends on several factors. Here are some of the safety and legal issues that you should be aware of when using MP3 Juice Com:

-

Safety issues

-

MP3 Juice Com is generally safe to use, as it does not ask for any personal information or access to your device. It also does not host any of the files on its own server, so it does not contain any viruses or malware. However, you should still be careful when using MP3 Juice Com, as some of the source websites may have pop-up ads or redirects that may lead you to malicious or inappropriate content. You should also avoid clicking on any suspicious links or downloading any unknown files from these websites. You should also use a reliable antivirus software and a VPN service to protect your device and your privacy.

-

Legal issues

-

MP3 Juice Com is not illegal in itself, as it does not violate any copyright laws by scraping data from other websites and displaying it on its own website. However, downloading MP3 music from MP3 Juice Com may be illegal in some countries or regions, depending on the laws and regulations of the original content owners and the fair use doctrine. You should always respect the intellectual property rights of the artists and creators and only download MP3 music for personal and non-commercial use. You should also check the terms and conditions of the source websites and the laws of your country or region before using MP3 Juice Com. You should also be aware of the potential risks of downloading MP3 music from MP3 Juice Com, such as infringing on the rights of the content owners, facing legal actions or penalties, or losing access to the source websites.

-

What are some alternatives to MP3 Juice Com?

-

If you are looking for some alternatives to MP3 Juice Com, you may want to try these websites that also allow you to download MP3 music from various sources:

-

YouTube to MP3 Converter

-

YouTube to MP3 Converter is a website that lets you download MP3 music from YouTube videos. You can either search for the video by its name or paste the URL of the video. You can then choose the quality and format of the file and download it to your device. You can also use this website to download videos from YouTube in different formats, such as MP4, AVI, MKV, and more.

-

SoundCloud Downloader

-

SoundCloud Downloader is a website that lets you download MP3 music from SoundCloud tracks. You can either search for the track by its name or paste the URL of the track. You can then choose the quality and format of the file and download it to your device. You can also use this website to download playlists and albums from SoundCloud.

-

Audiomack Downloader

-

Audiomack Downloader is a website that lets you download MP3 music from Audiomack songs. You can either search for the song by its name or paste the URL of the song. You can then choose the quality and format of the file and download it to your device. You can also use this website to download albums and playlists from Audiomack.

-

Conclusion

-

MP3 Juice Com is a free and fast website that lets you download MP3 music from various sources, such as YouTube, SoundCloud, Mixcloud, Audiomack, Jamendo, and more. You can either search for the song by its name, artist, album, or genre, or paste the URL of the song from any of these platforms. You can also choose the quality and format of the file according to your preference. However, you should also be careful about the safety and legal issues that may arise when using MP3 Juice Com, such as pop-up ads, redirects, viruses, malware, intellectual property rights, legal actions, penalties, and access restrictions. You should always respect the rights of the content owners and only download MP3 music for personal and non-commercial use. You should also check the terms and conditions of the source websites and the laws of your country or region before using MP3 Juice Com. If you are looking for some alternatives to MP3 Juice Com, you may want to try YouTube to MP3 Converter, SoundCloud Downloader, or Audiomack Downloader.

-

FAQs

-

Here are some frequently asked questions about MP3 Juice Com:

-

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Ringtone iPhone 12 WhatsApp The Ultimate Guide.md b/spaces/congsaPfin/Manga-OCR/logs/Download Ringtone iPhone 12 WhatsApp The Ultimate Guide.md deleted file mode 100644 index 2c6f9d967a582fb5970dee05c11965872a075a66..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Ringtone iPhone 12 WhatsApp The Ultimate Guide.md +++ /dev/null @@ -1,97 +0,0 @@ - -

How to Download Ringtone for iPhone 12 WhatsApp

-

WhatsApp is one of the most popular instant messaging apps in the world. It allows you to send text and voice messages, make voice and video calls, and share images, documents, user locations, and other content with your contacts. If you use WhatsApp frequently, you might want to customize your ringtone so that you can easily identify when you receive a notification or a call from WhatsApp.

-

download ringtone iphone 12 whatsapp


Download >>> https://urlca.com/2uOafm



-

If you have an iPhone 12, you can enjoy some of the best features of WhatsApp, such as end-to-end encryption, self-destructing messages and images, video and voice calls in HD quality, fun and lively stickers, and more. You can also download any ringtone that you like from WhatsApp and set it as your ringtone for WhatsApp notifications and calls.

-

In this article, we will show you how to download ringtone for iPhone 12 WhatsApp in three easy steps. You will need the following tools:

• Your iPhone 12 with WhatsApp installed
• The Files app, to save the audio file from WhatsApp
• The GarageBand app, to convert the audio into a ringtone
• The Settings app, to set the ringtone for WhatsApp

Step 1: Save the WhatsApp Audio on Your iPhone

-

The first step is to save the audio file that you want to use as a ringtone on your iPhone. You cannot set an audio file as a ringtone directly from WhatsApp. You need to save it to your iPhone's storage first. Here's how:

-
    -
  1. Launch WhatsApp app and open the chat where you've received or sent the audio file that you want to use as a ringtone.
  2. -
  3. Tap and hold on to the audio file until you see additional options.
  4. -
  5. Tap on the Forward option.
  6. -
  7. The audio file will now be selected. Tap on the Share icon at the bottom left corner of the screen.
  8. -
  9. From the list of apps that appear, tap on the Files app.
  10. -
  11. Choose a location where you want to save the audio file, such as On My iPhone or iCloud Drive.
  12. -
  13. Tap on Save to confirm your action.
  14. -
-

You have now saved the audio file on your iPhone. You can find it in the Files app in the location that you chose.

-

Step 2: Convert the Audio File to a Ringtone Format

-

The next step is to convert the audio file to a ringtone format. You cannot set any audio file as a ringtone on your iPhone. You need to convert it to a .m4r format, which is the standard format for iPhone ringtones. You can use the GarageBand app to do this. Here's how:

-
    -
  1. Launch GarageBand app and tap on the + icon at the top right corner of the screen.
  2. -
  3. Tap on New Song and choose Audio Recorder as the instrument.
  4. -
  5. Tap on the Tracks View icon at the top left corner of the screen. It looks like three horizontal lines with circles.
  6. -
  7. Tap on the Loop Browser icon at the top right corner of the screen. It looks like a loop.
  8. -
  9. Tap on Browse Items from the Files app.
  10. -
  11. Navigate to the location where you saved the audio file in Step 1 and tap on it to import it to GarageBand.
  12. -
  13. You will see the audio file as a blue waveform on the track. You can use the tools at the top of the screen to edit it, such as trimming, splitting, looping, or adding effects.
  14. -
  15. When you are happy with your editing, tap on My Songs at the top left corner of the screen to save your project.
  16. -
  17. Tap and hold on your project until you see additional options.
  18. -
  19. Tap on Share and choose Ringtone as the option.
  20. -
  21. Enter a name for your ringtone and tap on Export.
  22. -
-

You have now converted the audio file to a ringtone format. You can find it in the Settings app under Sounds & Haptics > Ringtone.
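
If you prefer a command-line route, one common alternative to GarageBand is converting the clip with ffmpeg, a free tool that you would install separately. This is only a rough sketch rather than a method from the steps above: the file names are placeholders, and the resulting .m4r file would still need to be copied onto the iPhone (for example through Finder or iTunes) instead of appearing on the device automatically.

```python
# Rough command-line alternative to the GarageBand steps (sketch only).
# Assumes ffmpeg is installed; both file names are placeholders.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-i", "whatsapp_audio.mp3",  # the audio saved from WhatsApp in Step 1
        "-t", "40",                  # keep at most 40 seconds (the ringtone length limit)
        "-c:a", "aac",               # encode the audio as AAC, which .m4r files use
        "-f", "ipod",                # MP4/iPod container, suitable for .m4r
        "ringtone.m4r",
    ],
    check=True,
)
```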

-

Step 3: Set the Ringtone for WhatsApp on Your iPhone

-

The final step is to set the ringtone for WhatsApp on your iPhone. You can choose different ringtones for WhatsApp notifications and calls. Here's how:

-
    -
  1. Launch Settings app and tap on Sounds & Haptics.
  2. -
  3. Scroll down to find WhatsApp under Notification Sounds and tap on it.
  4. -
  5. You will see a list of ringtones that you can choose from. Tap on the one that you created in Step 2. You will hear a preview of how it sounds.
  6. -
  7. If you want to use the same ringtone for WhatsApp calls, tap on Ringtone under Sounds and Vibration Patterns and choose your ringtone from there.
  8. -
-

You have now set your ringtone for WhatsApp on your iPhone. You can test it by sending yourself a message or calling yourself from another device using WhatsApp.

-


-

Conclusion

-

In this article, we have shown you how to download ringtone for iPhone 12 WhatsApp in three easy steps. You can use any audio file that you receive or record on WhatsApp as a ringtone, as long as it is not longer than 40 seconds. You can also use different ringtones for different contacts on WhatsApp by assigning them custom tones. You can also use the same ringtone for WhatsApp and other apps on your iPhone by choosing it from the list of ringtones in the Settings app. If you want to delete a ringtone that you created using GarageBand app, you can do so by opening GarageBand app, tapping on My Songs, tapping on Select, tapping on Delete, and confirming your action. If you want to download more ringtones for your iPhone 12, you can do so from iTunes Store app by tapping on More, tapping on Tones, browsing or searching for ringtones, and buying them with your Apple ID.

-

We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave them in the comments section below. We would love to hear from you!

-

FAQs

-

-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Liger Akdi Pakdi Full Song Download in High Quality Audio.md b/spaces/congsaPfin/Manga-OCR/logs/Liger Akdi Pakdi Full Song Download in High Quality Audio.md deleted file mode 100644 index 171dbf8465ea111e36eef9f3a766e0f3fbe80a14..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Liger Akdi Pakdi Full Song Download in High Quality Audio.md +++ /dev/null @@ -1,126 +0,0 @@ -
-

Akdi Pakdi Full Song Download: A Massy Dance Number from Liger

-

If you are looking for a catchy and energetic dance song to groove to, then you might want to check out Akdi Pakdi, the first song from the upcoming movie Liger. This song has been creating a lot of buzz among the fans of Vijay Deverakonda and Ananya Panday, who are starring in this movie as the lead pair. Akdi Pakdi is a Hindi song that has been sung by Dev Negi, Pawni Pandey, and Lijo George, and composed by Lijo George-Dj Chetas. The song has a catchy hookline, upbeat music, and impressive dance moves by Vijay and Ananya. In this article, we will tell you everything you need to know about Akdi Pakdi, including its details, meaning, video, movie, and how to download it online.

-

akdi pakdi full song download


DOWNLOADhttps://urlca.com/2uOfuv



-

Akdi Pakdi: The Song Details

-

Akdi Pakdi is the first song from the movie Liger, which is a sports drama film directed by Puri Jagannadh and co-produced by Karan Johar. The song was released on July 11, 2022, by Sony Music India on their official YouTube channel. The song has received over 45 million views and 988 thousand likes so far. Here are some more details about the song:

-

The Singers and Composers of Akdi Pakdi

-

The song has been sung by three talented singers, namely Dev Negi, Pawni Pandey, and Lijo George. Dev Negi is a popular playback singer who has sung many hit songs like Badri Ki Dulhania, Sweety Tera Drama, Chalti Hai Kya 9 Se 12, etc. Pawni Pandey is also a well-known singer who has lent her voice to songs like Laila Main Laila, Balam Pichkari, Dil Ka Telephone, etc. Lijo George is not only a singer but also a music composer who has composed music for many movies like Loveyatri, Mitron, Malang, etc. He has also collaborated with Dj Chetas, who is one of the most famous DJs in India. Together, they have created some amazing remixes and original songs like Kamariya, Lamborghini, Coca Cola Tu, etc.

-

The Lyrics and Meaning of Akdi Pakdi

-

The lyrics of the song have been written by Mohsin Shaikh and Azeem Dayani. Mohsin Shaikh is a lyricist who has penned songs like Makhna, Dilbar Dilbar, Garmi, etc. Azeem Dayani is also a lyricist who has written songs like Bekhayali, Tujhe Kitna Chahne Lage, Tera Ban Jaunga, etc. He is also a music supervisor who has worked on many movies like Kabir Singh, Good Newwz, Kesari, etc.

-

The song is a fun and flirty track that revolves around Vijay's character trying to woo Ananya's character. The title of the song em>Akdi Pakdi means one-two in Hindi, and it is a common phrase used in children's games like hopscotch or tag. However, in the context of the song, it also implies a playful chase between the lovers, who tease and flirt with each other. The song has some catchy lines like Akdi pakdi akdi pakdi, tu bhaage main pakdu (One-two one-two, you run and I catch you), Tere nakhre hain anmol, main toh khareedu (Your tantrums are priceless, I want to buy them), Tu hai meri lollipop, main toh chakdu (You are my lollipop, I want to lick you), etc.

-

-

The Music Video and Choreography of Akdi Pakdi

-

The music video of the song has been directed by Puri Jagannadh, who is also the director of the movie Liger. The video features Vijay Deverakonda and Ananya Panday in a colorful and vibrant setting, where they dance and romance with each other. The video also showcases some glimpses of the movie, where Vijay plays a mixed martial arts fighter and Ananya plays his love interest.

-

The choreography of the song has been done by Raju Sundaram, who is a renowned choreographer and actor in the South Indian film industry. He has won three National Film Awards for Best Choreography for his work in movies like Minsara Kanavu, Janatha Garage, and Rowdy Rathore. He has also choreographed songs for many Bollywood movies like Chennai Express, Kick, Ra.One, etc. He has given some energetic and stylish dance moves to Vijay and Ananya, who have performed them with grace and confidence.

-

Akdi Pakdi: The Movie Details

-

Akdi Pakdi is not just a song, but also a part of a much-awaited movie called Liger. Liger is a sports drama film that marks the debut of Vijay Deverakonda in Bollywood. The movie is also a pan-Indian project that will be released in five languages: Hindi, Telugu, Tamil, Kannada, and Malayalam. Here are some more details about the movie:

-

Liger: The Plot and Genre of the Movie

-

Liger is a movie that revolves around the life of a mixed martial arts fighter named Ranveer, played by Vijay Deverakonda. Ranveer is a street fighter who has a rare blood group that makes him aggressive and powerful. He is also known as Liger, which is a hybrid of a lion and a tiger. He falls in love with Mona, played by Ananya Panday, who is a bubbly and cheerful girl who supports him in his dreams. However, Ranveer's life takes a turn when he gets involved in the dark world of underground fighting, where he has to face many challenges and enemies.

-

Liger is a movie that combines action, romance, drama, and comedy. It is a movie that showcases the journey of a fighter who wants to achieve his goals and prove his worth. It is also a movie that explores the themes of love, friendship, family, and identity.

-

Liger: The Cast and Crew of the Movie

-

Liger has an ensemble cast that includes some of the most talented actors from different film industries. Apart from Vijay Deverakonda and Ananya Panday, the movie also stars Ramya Krishnan, Ronit Roy, Vishu Reddy, Ali, Makarand Deshpande, Abdul Quadir Amin, etc. The movie also features some special appearances by Jackie Shroff, Suniel Shetty, Chunky Pandey, etc.

-

Liger has been directed by Puri Jagannadh, who is one of the most successful directors in Telugu cinema. He has directed movies like Pokiri, Businessman, Temper, iSmart Shankar, etc. He has also co-produced the movie with Karan Johar, who is one of the most influential producers in Bollywood. He has produced movies like Kuch Kuch Hota Hai, Kabhi Khushi Kabhie Gham, Dostana, Yeh Jawaani Hai Deewani, etc. The movie has been written by Puri Jagannadh and Charmme Kaur, who is also one of the co-producers of the movie. The music of the movie has been composed by Lijo George-Dj Chetas, who have also composed the song Akdi Pakdi. The cinematography of the movie has been done by Vishnu Sarma, who has worked on movies like Mardaani 2, Bala, etc. The editing of the movie has been done by Junaid Siddiqui, who has worked on movies like Dabangg 3, Race 3, etc.

-

Liger: The Release Date and Languages of the Movie

-

Liger is a movie that has been in the making for a long time. The movie was announced in January 2020, and the shooting began in February 2020. However, due to the COVID-19 pandemic, the shooting was halted and resumed several times. The movie was initially scheduled to release on September 9, 2021, but it was postponed due to the second wave of the pandemic. The makers have now announced that the movie will release on January 26, 2022, on the occasion of Republic Day.

-

Liger is a movie that will be released in five languages: Hindi, Telugu, Tamil, Kannada, and Malayalam. The movie will also have dubbed versions in other languages like English, French, German, etc. The movie will be distributed by Zee Studios and Dharma Productions in India and by AA Films and Phars Film internationally.

-

Akdi Pakdi: How to Download the Full Song Online

-

If you are a fan of Akdi Pakdi and want to download the full song online, then you might be wondering how to do it. There are many ways to download songs online, but not all of them are legal and safe. In this section, we will tell you how to download Akdi Pakdi online in a legal and safe manner. We will also tell you the benefits and drawbacks of downloading songs online, and the alternatives and options you have.

-

The Legal and Safe Ways to Download Akdi Pakdi

-

The best way to download Akdi Pakdi online is to use a legal and safe platform that has the rights to stream or download the song. Some of these platforms are:

• YouTube Music
• Spotify
• JioSaavn
• Gaana

These are some of the legal and safe ways to download Akdi Pakdi online. However, there are some benefits and drawbacks of downloading songs online that you should be aware of.

-

The Benefits and Drawbacks of Downloading Akdi Pakdi

-

Downloading Akdi Pakdi online has some benefits and drawbacks that you should consider before doing it. Some of these are:

- - - - - - - - - - - - - - - - - -
BenefitsDrawbacks
You can listen to the song anytime and anywhere without internet connection.You need to pay for a subscription fee or buy the song from an online store, which can be expensive or inconvenient.
You can enjoy the high-quality sound and video of the song without buffering or interruptions.You need to have enough storage space on your device to download the song, which can take up a lot of memory.
You can share the song with your friends and family through Bluetooth, USB, or other methods.You might face legal issues or penalties if you download the song from an illegal or pirated source, which can harm your device or data.
-

These are some of the benefits and drawbacks of downloading Akdi Pakdi online. You should weigh them carefully before deciding to download the song online.

-

The Alternatives and Options to Download Akdi Pakdi

-

If you do not want to download Akdi Pakdi online, or if you are not satisfied with the legal and safe ways to download it, then you have some alternatives and options to enjoy the song. Some of these are:

• Watch the song on YouTube
• Listen to it on FM radio
• Buy the CD or DVD of the song
• Watch the movie Liger in theatres

These are some of the alternatives and options to download Akdi Pakdi online. You can choose any of them according to your preference and convenience.

-

Conclusion

-

Akdi Pakdi is a massy dance number from the movie Liger that has been loved by many people. The song has been sung by Dev Negi, Pawni Pandey, and Lijo George, and composed by Lijo George-Dj Chetas. The song has been written by Mohsin Shaikh and Azeem Dayani. The song has a catchy hookline, upbeat music, and impressive dance moves by Vijay Deverakonda and Ananya Panday. The song has been directed by Puri Jagannadh and choreographed by Raju Sundaram.

-

Liger is a sports drama film that marks the debut of Vijay Deverakonda in Bollywood. The movie is also a pan-Indian project that will be released in five languages: Hindi, Telugu, Tamil, Kannada, and Malayalam. The movie is directed by Puri Jagannadh and co-produced by Karan Johar. The movie stars Vijay Deverakonda as a mixed martial arts fighter and Ananya Panday as his love interest. The movie also features Ramya Krishnan, Ronit Roy, Vishu Reddy, Ali, Makarand Deshpande, etc. The movie will release on January 26 , 2022, on the occasion of Republic Day.

-

If you want to download Akdi Pakdi online, you can use a legal and safe platform like YouTube Music, Spotify, JioSaavn, or Gaana. You can also watch the song on YouTube, listen to it on FM radio, buy the CD or DVD of the song, or watch the movie Liger in theatres. However, you should be aware of the benefits and drawbacks of downloading songs online, and choose the best option for you.

-

We hope you enjoyed this article and learned something new about Akdi Pakdi and Liger. If you have any questions or feedback, please feel free to leave a comment below. And don't forget to share this article with your friends and family who might be interested in this topic. Thank you for reading!

-

FAQs

-

Here are some frequently asked questions and answers related to Akdi Pakdi:

1. What is the meaning of Liger?
Liger is a hybrid of a lion and a tiger, which is also the nickname of Vijay Deverakonda's character in the movie. It symbolizes his strength, power, and aggression as a fighter.

2. Who is the director of Liger?
Liger is directed by Puri Jagannadh, who is one of the most successful directors in Telugu cinema. He has directed movies like Pokiri, Businessman, Temper, iSmart Shankar, etc.

3. Who is the producer of Liger?
Liger is co-produced by Karan Johar, who is one of the most influential producers in Bollywood. He has produced movies like Kuch Kuch Hota Hai, Kabhi Khushi Kabhie Gham, Dostana, Yeh Jawaani Hai Deewani, etc.

4. Who are the singers of Akdi Pakdi?
Akdi Pakdi is sung by Dev Negi, Pawni Pandey, and Lijo George. Dev Negi is a popular playback singer who has sung many hit songs like Badri Ki Dulhania, Sweety Tera Drama, Chalti Hai Kya 9 Se 12, etc. Pawni Pandey is also a well-known singer who has lent her voice to songs like Laila Main Laila, Balam Pichkari, Dil Ka Telephone, etc. Lijo George is not only a singer but also a music composer who has composed music for many movies like Loveyatri, Mitron, Malang, etc.

5. Who are the composers of Akdi Pakdi?
Akdi Pakdi is composed by Lijo George-Dj Chetas. They are a duo of music composers and DJs who have created some amazing remixes and original songs like Kamariya, Lamborghini, Coca Cola Tu, etc.

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/MGlobal Live MOD APK Watch and Interact with Streamers from All Over the World without Restrictions!.md b/spaces/congsaPfin/Manga-OCR/logs/MGlobal Live MOD APK Watch and Interact with Streamers from All Over the World without Restrictions!.md deleted file mode 100644 index bf497d10869691d170527beab8fa2aa72b639d2e..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/MGlobal Live MOD APK Watch and Interact with Streamers from All Over the World without Restrictions!.md +++ /dev/null @@ -1,83 +0,0 @@ - -

Download MGlobal Live: A Guide to Enjoy Live Streaming on Your Phone

-

Live streaming is one of the most popular forms of entertainment nowadays. You can watch various kinds of shows, from music, dance, comedy, gaming, to beauty, cooking, and more. You can also interact with the hosts and other viewers in real-time, making it more fun and engaging.

-

download mglobal live


Download ===> https://urlca.com/2uOdE8



-

But how do you find the best live streaming app for your phone? There are so many options out there, but not all of them are worth your time and data. Some may have low-quality videos, boring content, or annoying ads. Some may even have hidden fees or malware that can harm your device.

-

That's why you need to download MGlobal Live, a live streaming app that offers you high-quality videos, diverse content, and exciting features. In this article, we will tell you everything you need to know about MGlobal Live, how to download it on your Android or iOS device, why you should download it, and some tips and tricks to use it.

-

What is MGlobal Live?

-

MGlobal Live is a live streaming app that allows you to watch and create live videos on your phone. It is developed by MGlobal Team, a company that specializes in online video platforms. MGlobal Live is a modified version of the original Mlive app, which has the same features and interface, but with a different name.

-

MGlobal Live has millions of users from all over the world who enjoy watching and making live shows. You can find various categories of content on MGlobal Live, such as music, dance, comedy, gaming, beauty, cooking, education, and more. You can also choose your preferred language from English, Indonesian, Thai, Vietnamese, Chinese, Arabic, and others.

-

download mglobal live mod apk
-download mglobal live apk latest version
-download mglobal live for android
-download mglobal live streaming app
-download mglobal live unlock all room
-download mglobal live mod apk 2023
-download mglobal live apk free
-download mglobal live for pc
-download mglobal live mod apk unlimited coins
-download mglobal live app for ios
-download mglobal live mod apk terbaru
-download mglobal live apk no banned
-download mglobal live for laptop
-download mglobal live mod apk jalan tikus[^1^]
-download mglobal live apk versi lama
-download mglobal live mod apk no password
-download mglobal live apk pure
-download mglobal live for windows 10
-download mglobal live mod apk tanpa coin
-download mglobal live app for iphone
-download mglobal live mod apk 2022
-download mglobal live apk full unlocked
-download mglobal live for mac
-download mglobal live mod apk no root
-download mglobal live apk mirror
-download mglobal live mod apk gratis
-download mglobal live apk update
-download mglobal live for chromebook
-download mglobal live mod apk anti banned
-download mglobal live apk rexdl
-download mglobal live mod apk vip
-download mglobal live apk original
-download mglobal live for desktop
-download mglobal live mod apk no ads
-download mglobal live apk uptodown
-download mglobal live mod apk bebas gembok
-download mglobal live apk old version
-download mglobal live for linux
-download mglobal live mod apk work 100%
-download mglobal live apk revdl

-

Features of MGlobal Live

-

MGlobal Live has many features that make it stand out from other live streaming apps. Here are some of them:

- High-quality live video that you can watch and create directly from your phone.
- Diverse content categories such as music, dance, comedy, gaming, beauty, cooking, and education.
- Real-time interaction with hosts and other viewers through chat messages, emojis, and stickers.
- Virtual gifts that you can send to support your favorite hosts.
- Fan clubs and follow features that give you exclusive benefits and updates from the hosts you like.
- Support for multiple languages, including English, Indonesian, Thai, Vietnamese, Chinese, and Arabic.

How to Download MGlobal Live on Android and iOS

-

MGlobal Live is available for both Android and iOS devices. However, you may not find it on the official app stores due to some restrictions or bans, so you need to download the installer from a third-party source and install it on your phone manually. Once you are in the app, you will find hosts with different talents and personalities. You can follow and support your favorite hosts who make your day better with their live shows, and you can also get exclusive benefits and updates from them.
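If you would rather push the downloaded APK onto an Android phone from a computer instead of opening it on the phone itself, one common route is sideloading over USB with adb. The snippet below is only a rough sketch of that step: it assumes adb (from the Android platform tools) is installed, USB debugging is enabled on the phone, and the APK has already been downloaded; the file name is a placeholder, not the app's real installer name.

```python
import subprocess
from pathlib import Path

# Placeholder path -- point this at the APK you actually downloaded.
apk_path = Path("mglobal_live.apk")

if not apk_path.exists():
    raise SystemExit(f"APK not found: {apk_path}")

# Show connected devices first so you can confirm the phone is visible to adb.
subprocess.run(["adb", "devices"], check=True)

# Install the APK on the connected device; -r reinstalls over an existing copy.
result = subprocess.run(
    ["adb", "install", "-r", str(apk_path)],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```

On iOS there is no equivalent sideloading step without extra tooling, so iPhone users normally rely on whatever install method the third-party source provides.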

-

To follow and support your favorite hosts, you can go to their live room and tap on the follow button at the bottom right corner. You will see a list of options that you can choose from, such as join fan club, send message, send gift, or report. You can also tap on their profile picture to see more information about them, such as their name, age, location, bio, level, fans, and gifts.

-

Be Respectful and Friendly in the Chat Room

-

MGlobal Live is a social platform that allows you to chat with other users who share your interests and passions. You can chat with them in the chat room, send them messages, emojis, stickers, and gifts. You can also make new friends and join fan clubs.

-

However, you should also be respectful and friendly in the chat room. You should not spam, curse, harass, or bully other users or hosts. You should also not share any personal or sensitive information, such as your phone number, address, bank account, or password. You should also not promote any illegal or harmful activities, such as gambling, drugs, or violence.

-

If you encounter any user or host who violates these rules, you can report them to the MGlobal Live team. You can also block them from contacting you or entering your live room.

-

Conclusion

-

MGlobal Live is a live streaming app that offers you high-quality videos, diverse content, and exciting features. You can watch and create live shows on your phone with ease and fun. You can also interact with other users and send gifts to your favorite hosts.

-

To download MGlobal Live on your Android or iOS device, you need to follow the steps above. You can also use the tips and tricks above to make the most out of MGlobal Live. MGlobal Live is a great app for live streaming lovers who want to enjoy live shows on their phone.

-

FAQs

-

197e85843d
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Swing Shoot and Save the World with Spider Stickman Rope Hero Mod APK (Unlimited Money).md b/spaces/congsaPfin/Manga-OCR/logs/Swing Shoot and Save the World with Spider Stickman Rope Hero Mod APK (Unlimited Money).md deleted file mode 100644 index bc42acfb0bd20f4e32caf4f1c9683d44e4be4642..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Swing Shoot and Save the World with Spider Stickman Rope Hero Mod APK (Unlimited Money).md +++ /dev/null @@ -1,69 +0,0 @@ -
-

Download Spider Stickman Rope Hero Mod APK Unlimited Money

Do you love action games with superheroes and villains? Do you want to swing around the city like Spider-Man and fight crime with your rope skills? If yes, then you should try Spider Stickman Rope Hero, a fun and exciting game where you can become a stickman hero with amazing abilities. But wait, there's more! You can also download the mod apk version of this game and enjoy unlimited money, unlocked items, no ads, and more. In this article, we will tell you everything you need to know about Spider Stickman Rope Hero and how to download and install the mod apk on your device. Let's get started!

What is Spider Stickman Rope Hero?

Spider Stickman Rope Hero is an adventure game developed by Dino Games Studio. It is available for Android devices on Google Play Store. In this game, you can create your own stickman character and customize it with different costumes, masks, weapons, and accessories. You can also use your rope to swing around the city, climb buildings, jump from rooftops, and explore different locations. You can also fight against various enemies, such as gangsters, zombies, robots, aliens, and even dinosaurs. You can use your fists, guns, grenades, swords, hammers, axes, and more to defeat them. You can also drive different vehicles, such as cars, bikes, helicopters, tanks, etc., and perform stunts and crashes. The game has a lot of missions to complete and rewards to collect. You can also play in free mode and do whatever you want in the open world.

-

download spider stickman rope hero mod apk unlimited money


Download Zip ––– https://urlca.com/2uOb4U



The game has stunning 3D graphics and realistic physics that make it look like a real stickman movie. The game also has awesome sound effects and music that enhance the atmosphere and mood of the game. The game has simple and intuitive controls that make it easy to play for anyone. You can use the virtual joystick to move your character and the buttons to perform actions.

Why download the mod apk?

Spider Stickman Rope Hero is a free game that you can download from Google Play Store. However, it also has some limitations and drawbacks that may affect your gaming experience. For example:

If you want to enjoy Spider Stickman Rope Hero without any restrictions or hassles, then you should download the mod apk version of this game. The mod apk is a modified version of the original game that has some extra features and benefits that are not available in the original version. For example:

-

Conclusion

-

Spider Stickman Rope Hero is a game that will make you feel like a superhero with amazing rope skills. You can swing around the city, fight against enemies, drive vehicles, and complete missions. You can also download the mod apk version of this game and enjoy unlimited money, unlocked items, no ads, and more. If you are looking for a game that is fun, exciting, and addictive, then you should try Spider Stickman Rope Hero. Download it now and become the stickman hero you always wanted to be!

-

FAQs

-

What is the latest version of Spider Stickman Rope Hero mod apk?

-

The latest version of Spider Stickman Rope Hero mod apk is 1.0.5. It was updated on June 15, 2023. It has some bug fixes and improvements.

-

Is Spider Stickman Rope Hero mod apk safe to use?

-

Yes, Spider Stickman Rope Hero mod apk is safe to use. It does not contain any viruses or malware that may harm your device or data. However, you should always download it from a trusted source and scan it before installing it.

-

Can I play Spider Stickman Rope Hero mod apk offline?

-

Yes, you can play Spider Stickman Rope Hero mod apk offline. You do not need an internet connection to play the game. However, some features or functions may not work properly without an internet connection.

-

How can I update Spider Stickman Rope Hero mod apk?

-

You can update Spider Stickman Rope Hero mod apk by downloading the latest version from our website and installing it over the existing one. You do not need to uninstall the previous version.

-

download spider stickman rope hero mod apk latest version
-download spider stickman rope hero mod apk free shopping
-download spider stickman rope hero mod apk unlimited gems
-download spider stickman rope hero mod apk no ads
-download spider stickman rope hero mod apk android 1
-download spider stickman rope hero mod apk revdl
-download spider stickman rope hero mod apk happymod
-download spider stickman rope hero mod apk rexdl
-download spider stickman rope hero mod apk offline
-download spider stickman rope hero mod apk online
-download spider stickman rope hero hack apk unlimited money
-download spider stickman rope hero cheat apk unlimited money
-download spider stickman rope hero cracked apk unlimited money
-download spider stickman rope hero premium apk unlimited money
-download spider stickman rope hero pro apk unlimited money
-download spider stickman rope hero full apk unlimited money
-download spider stickman rope hero unlocked apk unlimited money
-download spider stickman rope hero mega mod apk unlimited money
-download spider stickman rope hero god mode apk unlimited money
-download spider stickman rope hero vip mod apk unlimited money
-how to download spider stickman rope hero mod apk unlimited money
-where to download spider stickman rope hero mod apk unlimited money
-best site to download spider stickman rope hero mod apk unlimited money
-safe site to download spider stickman rope hero mod apk unlimited money
-trusted site to download spider stickman rope hero mod apk unlimited money
-easy way to download spider stickman rope hero mod apk unlimited money
-fast way to download spider stickman rope hero mod apk unlimited money
-free way to download spider stickman rope hero mod apk unlimited money
-legal way to download spider stickman rope hero mod apk unlimited money
-working way to download spider stickman rope hero mod apk unlimited money
-download and install spider stickman rope hero mod apk unlimited money
-download and play spider stickman rope hero mod apk unlimited money
-download and enjoy spider stickman rope hero mod apk unlimited money
-download and update spider stickman rope hero mod apk unlimited money
-download and review spider stickman rope hero mod apk unlimited money
-benefits of downloading spider stickman rope hero mod apk unlimited money
-features of downloading spider stickman rope hero mod apk unlimited money
-advantages of downloading spider stickman rope hero mod apk unlimited money
-disadvantages of downloading spider stickman rope hero mod apk unlimited money
-risks of downloading spider stickman rope hero mod apk unlimited money
-tips for downloading spider stickman rope hero mod apk unlimited money
-tricks for downloading spider stickman rope hero mod apk unlimited money
-hacks for downloading spider stickman rope hero mod apk unlimited money
-guides for downloading spider stickman rope hero mod apk unlimited money
-tutorials for downloading spider stickman rope hero mod apk unlimited money
-videos for downloading spider stickman rope hero mod apk unlimited money
-blogs for downloading spider stickman rope hero mod apk unlimited money
-forums for downloading spider stickman rope hero mod apk unlimited money
-reviews for downloading spider stickman rope hero mod apk unlimited money

-

How can I contact the developer of Spider Stickman Rope Hero?

-

You can contact the developer of Spider Stickman Rope Hero by sending an email to dinogamesstudio@gmail.com. You can also visit their Facebook page or YouTube channel for more information.

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Top 10 Sites to Download Video Games for Free in 2023.md b/spaces/congsaPfin/Manga-OCR/logs/Top 10 Sites to Download Video Games for Free in 2023.md deleted file mode 100644 index d35e18b1928c3e6b16de2b9300e70cca41cd8157..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Top 10 Sites to Download Video Games for Free in 2023.md +++ /dev/null @@ -1,116 +0,0 @@ -
-
| Heading | Subheadings |
| --- | --- |
| H2: Benefits of Downloading Video Games for Free | - Access a variety of games
- Enjoy offline gaming |
| H2: Risks of Downloading Video Games for Free | - Legal issues and penalties
- Malware and viruses
- Poor quality and performance |
| H2: Tips for Downloading Video Games for Free Safely and Legally | - Use trusted and reputable sites
- Check the game ratings and reviews
- Scan the files before installing
- Avoid pirated and cracked games |
| H2: Best Sites to Download Video Games for Free | - Ocean of Games
- Steam
- ThePcGames.Net
- OvaGames
- GameJolt |
| H1: Conclusion | Summary of the main points and recommendations |

Table 2: Article with HTML formatting

How to Download Video Games for Free

-

If you are a gaming enthusiast, you might be wondering how to download video games for free. After all, buying new games can be expensive and time-consuming, especially if you have a limited budget and a busy schedule. Fortunately, there are many ways to enjoy your favorite games without spending a dime. In this article, we will explore the benefits and risks of downloading video games for free, as well as some tips and tricks to do it safely and legally. We will also share some of the best sites to download PC games for free in 2023.

-

download video games for free


Download Ziphttps://urlca.com/2uO7y4



-

Benefits of Downloading Video Games for Free

-

Downloading video games for free has many advantages, such as:

- You can save money, since you don't have to pay for every new title you want to try.
- You can access a wide variety of games across different genres and categories.
- You can enjoy offline gaming without needing a constant internet connection.

Risks of Downloading Video Games for Free

-

However, downloading video games for free also has some drawbacks, such as:

- You may face legal issues and penalties if you download copyrighted games from unauthorized sources.
- You may expose your device to malware and viruses hidden in untrusted downloads.
- You may get poor quality and performance, since unofficial copies can be incomplete, outdated, or unstable.

Tips for Downloading Video Games for Free Safely and Legally

-

To avoid the risks of downloading video games for free, you should follow these tips:

- Use trusted and reputable sites that are known for safe downloads.
- Check the game ratings and reviews before downloading anything.
- Scan the files before installing them, and compare checksums when the site publishes them (see the sketch below).
- Avoid pirated and cracked games, and stick to titles that are legally free.
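To make the scanning tip a bit more concrete, here is a minimal Python sketch that computes a downloaded file's SHA-256 checksum so you can compare it against the checksum published on the download page (when the site provides one). The file name and expected hash below are placeholders you would replace with your own values.

```python
import hashlib


def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the SHA-256 hex digest of a file, reading it in 1 MiB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Placeholder values -- replace with your downloaded file and the checksum
# published by the download site.
downloaded_file = "game_setup.zip"
expected_sha256 = "paste-the-published-checksum-here"

actual = sha256_of_file(downloaded_file)
if actual == expected_sha256:
    print("Checksum matches: the file arrived intact.")
else:
    print(f"Checksum mismatch! Expected {expected_sha256}, got {actual}")
```

Keep in mind that a matching checksum only shows the file was not corrupted or swapped in transit; it says nothing about whether the file itself is safe, so you should still scan it with an antivirus tool before installing.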

Best Sites to Download Video Games for Free

-

Here are some of the best sites to download PC games for free in 2023:

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Site | Description |
| --- | --- |
| Ocean of Games | This site offers a huge collection of games from various genres and categories. You can download them easily and quickly, without any registration or payment. You can also find the latest releases and updates, as well as game guides and tutorials. |
| Steam | This site is one of the most popular and trusted platforms for PC gaming. It has thousands of games, both free and paid, from indie developers to big publishers. You can also enjoy features such as cloud saving, achievements, multiplayer, chat, and more. |
| ThePcGames.Net | This site provides high-quality and full-version PC games for free. You can download them directly or through torrent links, without any ads or surveys. You can also request games that are not available on the site. |
| OvaGames | This site specializes in free PC games with single links or parts. You can download them from various file hosting services, such as Google Drive, Mega, or MediaFire. You can also find DLCs, patches, cracks, and repacks. |
| GameJolt | This site is a community-driven platform for indie games. You can download and play thousands of games for free, as well as rate, review, and comment on them. You can also create your own games and share them with others. |
-

Conclusion

-

Downloading video games for free is a great way to enjoy your gaming hobby without breaking the bank. However, you should also be aware of the risks and challenges involved, such as legal issues, malware, and poor quality. To download video games for free safely and legally, you should use trusted and reputable sites, check the game ratings and reviews, scan the files before installing, and avoid pirated and cracked games. Some of the best sites to download PC games for free in 2023 are Ocean of Games, Steam, ThePcGames.Net, OvaGames, and GameJolt.

-

download free pc games full version
-download free games for windows 10
-download free games for android
-download free games from steam
-download free games for xbox one
-download free games for ps4
-download free games for nintendo switch
-download free games for mac
-download free games for ios
-download free games for linux
-download free games offline
-download free games online
-download free games without wifi
-download free games without ads
-download free games without registration
-download free games with controller support
-download free games with multiplayer
-download free games with high graphics
-download free games with low system requirements
-download free games with no virus
-download free games from microsoft store
-download free games from ea
-download free games from epic games store
-download free games from gog.com
-download free games from origin
-download free games from uplay
-download free games from itch.io
-download free games from gamejolt.com
-download free games from humble bundle
-download free games from ocean of games
-download free action games for pc
-download free adventure games for pc
-download free racing games for pc
-download free shooting games for pc
-download free strategy games for pc
-download free simulation games for pc
-download free sports games for pc
-download free puzzle games for pc
-download free horror games for pc
-download free rpg games for pc
-how to download video games for free on pc
-how to download video games for free on android
-how to download video games for free on ps4
-how to download video games for free on xbox one
-how to download video games for free on nintendo switch

-

FAQs

-

401be4b1e0
-
-
\ No newline at end of file diff --git a/spaces/contluForse/HuggingGPT/assets/AnyDVD HD V7.4.8.0 Final-BRD Utorrent.md b/spaces/contluForse/HuggingGPT/assets/AnyDVD HD V7.4.8.0 Final-BRD Utorrent.md deleted file mode 100644 index c1094a419c885b638f32f416bd0eee79c599f960..0000000000000000000000000000000000000000 --- a/spaces/contluForse/HuggingGPT/assets/AnyDVD HD V7.4.8.0 Final-BRD Utorrent.md +++ /dev/null @@ -1,5 +0,0 @@ - -

AnyDVD HD v7.4.8.0 Final-BRD utorrent
font psl kanda modern extra.rar
bijbel in gewone taal ebook 18
EZ Green Screen Photoshop keygen
kitab hakikat insan pdf free downloadgolkes
Oxford English for Careers Nursing 2 pdf.rar
genetica medica jorde pdf download
menucool slider license crack 12
Frozen 2 movie full version free download
CommView for WiFi 5.2.484 Including WEP Hack

-

AnyDVD HD v7.4.8.0 Final-BRD utorrent


Download Filehttps://ssurll.com/2uzyqR



aaccfb2cb3
-
-
\ No newline at end of file diff --git a/spaces/cooelf/Multimodal-CoT/timm/models/nasnet.py b/spaces/cooelf/Multimodal-CoT/timm/models/nasnet.py deleted file mode 100644 index 2afe82c3f374dd4790bc940289c8e3794497fbbc..0000000000000000000000000000000000000000 --- a/spaces/cooelf/Multimodal-CoT/timm/models/nasnet.py +++ /dev/null @@ -1,567 +0,0 @@ -""" NasNet-A (Large) - nasnetalarge implementation grabbed from Cadene's pretrained models - https://github.com/Cadene/pretrained-models.pytorch -""" -from functools import partial - -import torch -import torch.nn as nn -import torch.nn.functional as F - -from .helpers import build_model_with_cfg -from .layers import ConvBnAct, create_conv2d, create_pool2d, create_classifier -from .registry import register_model - -__all__ = ['NASNetALarge'] - -default_cfgs = { - 'nasnetalarge': { - 'url': 'http://data.lip6.fr/cadene/pretrainedmodels/nasnetalarge-a1897284.pth', - 'input_size': (3, 331, 331), - 'pool_size': (11, 11), - 'crop_pct': 0.911, - 'interpolation': 'bicubic', - 'mean': (0.5, 0.5, 0.5), - 'std': (0.5, 0.5, 0.5), - 'num_classes': 1000, - 'first_conv': 'conv0.conv', - 'classifier': 'last_linear', - 'label_offset': 1, # 1001 classes in pretrained weights - }, -} - - -class ActConvBn(nn.Module): - - def __init__(self, in_channels, out_channels, kernel_size, stride=1, padding=''): - super(ActConvBn, self).__init__() - self.act = nn.ReLU() - self.conv = create_conv2d( - in_channels, out_channels, kernel_size=kernel_size, stride=stride, padding=padding) - self.bn = nn.BatchNorm2d(out_channels, eps=0.001, momentum=0.1) - - def forward(self, x): - x = self.act(x) - x = self.conv(x) - x = self.bn(x) - return x - - -class SeparableConv2d(nn.Module): - - def __init__(self, in_channels, out_channels, kernel_size, stride, padding=''): - super(SeparableConv2d, self).__init__() - self.depthwise_conv2d = create_conv2d( - in_channels, in_channels, kernel_size=kernel_size, - stride=stride, padding=padding, groups=in_channels) - self.pointwise_conv2d = create_conv2d( - in_channels, out_channels, kernel_size=1, padding=0) - - def forward(self, x): - x = self.depthwise_conv2d(x) - x = self.pointwise_conv2d(x) - return x - - -class BranchSeparables(nn.Module): - - def __init__(self, in_channels, out_channels, kernel_size, stride=1, pad_type='', stem_cell=False): - super(BranchSeparables, self).__init__() - middle_channels = out_channels if stem_cell else in_channels - self.act_1 = nn.ReLU() - self.separable_1 = SeparableConv2d( - in_channels, middle_channels, kernel_size, stride=stride, padding=pad_type) - self.bn_sep_1 = nn.BatchNorm2d(middle_channels, eps=0.001, momentum=0.1) - self.act_2 = nn.ReLU(inplace=True) - self.separable_2 = SeparableConv2d( - middle_channels, out_channels, kernel_size, stride=1, padding=pad_type) - self.bn_sep_2 = nn.BatchNorm2d(out_channels, eps=0.001, momentum=0.1) - - def forward(self, x): - x = self.act_1(x) - x = self.separable_1(x) - x = self.bn_sep_1(x) - x = self.act_2(x) - x = self.separable_2(x) - x = self.bn_sep_2(x) - return x - - -class CellStem0(nn.Module): - def __init__(self, stem_size, num_channels=42, pad_type=''): - super(CellStem0, self).__init__() - self.num_channels = num_channels - self.stem_size = stem_size - self.conv_1x1 = ActConvBn(self.stem_size, self.num_channels, 1, stride=1) - - self.comb_iter_0_left = BranchSeparables(self.num_channels, self.num_channels, 5, 2, pad_type) - self.comb_iter_0_right = BranchSeparables(self.stem_size, self.num_channels, 7, 2, pad_type, stem_cell=True) - - self.comb_iter_1_left 
= create_pool2d('max', 3, 2, padding=pad_type) - self.comb_iter_1_right = BranchSeparables(self.stem_size, self.num_channels, 7, 2, pad_type, stem_cell=True) - - self.comb_iter_2_left = create_pool2d('avg', 3, 2, count_include_pad=False, padding=pad_type) - self.comb_iter_2_right = BranchSeparables(self.stem_size, self.num_channels, 5, 2, pad_type, stem_cell=True) - - self.comb_iter_3_right = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type) - - self.comb_iter_4_left = BranchSeparables(self.num_channels, self.num_channels, 3, 1, pad_type) - self.comb_iter_4_right = create_pool2d('max', 3, 2, padding=pad_type) - - def forward(self, x): - x1 = self.conv_1x1(x) - - x_comb_iter_0_left = self.comb_iter_0_left(x1) - x_comb_iter_0_right = self.comb_iter_0_right(x) - x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right - - x_comb_iter_1_left = self.comb_iter_1_left(x1) - x_comb_iter_1_right = self.comb_iter_1_right(x) - x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right - - x_comb_iter_2_left = self.comb_iter_2_left(x1) - x_comb_iter_2_right = self.comb_iter_2_right(x) - x_comb_iter_2 = x_comb_iter_2_left + x_comb_iter_2_right - - x_comb_iter_3_right = self.comb_iter_3_right(x_comb_iter_0) - x_comb_iter_3 = x_comb_iter_3_right + x_comb_iter_1 - - x_comb_iter_4_left = self.comb_iter_4_left(x_comb_iter_0) - x_comb_iter_4_right = self.comb_iter_4_right(x1) - x_comb_iter_4 = x_comb_iter_4_left + x_comb_iter_4_right - - x_out = torch.cat([x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1) - return x_out - - -class CellStem1(nn.Module): - - def __init__(self, stem_size, num_channels, pad_type=''): - super(CellStem1, self).__init__() - self.num_channels = num_channels - self.stem_size = stem_size - self.conv_1x1 = ActConvBn(2 * self.num_channels, self.num_channels, 1, stride=1) - - self.act = nn.ReLU() - self.path_1 = nn.Sequential() - self.path_1.add_module('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False)) - self.path_1.add_module('conv', nn.Conv2d(self.stem_size, self.num_channels // 2, 1, stride=1, bias=False)) - - self.path_2 = nn.Sequential() - self.path_2.add_module('pad', nn.ZeroPad2d((-1, 1, -1, 1))) - self.path_2.add_module('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False)) - self.path_2.add_module('conv', nn.Conv2d(self.stem_size, self.num_channels // 2, 1, stride=1, bias=False)) - - self.final_path_bn = nn.BatchNorm2d(self.num_channels, eps=0.001, momentum=0.1) - - self.comb_iter_0_left = BranchSeparables(self.num_channels, self.num_channels, 5, 2, pad_type) - self.comb_iter_0_right = BranchSeparables(self.num_channels, self.num_channels, 7, 2, pad_type) - - self.comb_iter_1_left = create_pool2d('max', 3, 2, padding=pad_type) - self.comb_iter_1_right = BranchSeparables(self.num_channels, self.num_channels, 7, 2, pad_type) - - self.comb_iter_2_left = create_pool2d('avg', 3, 2, count_include_pad=False, padding=pad_type) - self.comb_iter_2_right = BranchSeparables(self.num_channels, self.num_channels, 5, 2, pad_type) - - self.comb_iter_3_right = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type) - - self.comb_iter_4_left = BranchSeparables(self.num_channels, self.num_channels, 3, 1, pad_type) - self.comb_iter_4_right = create_pool2d('max', 3, 2, padding=pad_type) - - def forward(self, x_conv0, x_stem_0): - x_left = self.conv_1x1(x_stem_0) - - x_relu = self.act(x_conv0) - # path 1 - x_path1 = self.path_1(x_relu) - # path 2 - x_path2 = self.path_2(x_relu) - # final path - x_right = 
self.final_path_bn(torch.cat([x_path1, x_path2], 1)) - - x_comb_iter_0_left = self.comb_iter_0_left(x_left) - x_comb_iter_0_right = self.comb_iter_0_right(x_right) - x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right - - x_comb_iter_1_left = self.comb_iter_1_left(x_left) - x_comb_iter_1_right = self.comb_iter_1_right(x_right) - x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right - - x_comb_iter_2_left = self.comb_iter_2_left(x_left) - x_comb_iter_2_right = self.comb_iter_2_right(x_right) - x_comb_iter_2 = x_comb_iter_2_left + x_comb_iter_2_right - - x_comb_iter_3_right = self.comb_iter_3_right(x_comb_iter_0) - x_comb_iter_3 = x_comb_iter_3_right + x_comb_iter_1 - - x_comb_iter_4_left = self.comb_iter_4_left(x_comb_iter_0) - x_comb_iter_4_right = self.comb_iter_4_right(x_left) - x_comb_iter_4 = x_comb_iter_4_left + x_comb_iter_4_right - - x_out = torch.cat([x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1) - return x_out - - -class FirstCell(nn.Module): - - def __init__(self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type=''): - super(FirstCell, self).__init__() - self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, 1, stride=1) - - self.act = nn.ReLU() - self.path_1 = nn.Sequential() - self.path_1.add_module('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False)) - self.path_1.add_module('conv', nn.Conv2d(in_chs_left, out_chs_left, 1, stride=1, bias=False)) - - self.path_2 = nn.Sequential() - self.path_2.add_module('pad', nn.ZeroPad2d((-1, 1, -1, 1))) - self.path_2.add_module('avgpool', nn.AvgPool2d(1, stride=2, count_include_pad=False)) - self.path_2.add_module('conv', nn.Conv2d(in_chs_left, out_chs_left, 1, stride=1, bias=False)) - - self.final_path_bn = nn.BatchNorm2d(out_chs_left * 2, eps=0.001, momentum=0.1) - - self.comb_iter_0_left = BranchSeparables(out_chs_right, out_chs_right, 5, 1, pad_type) - self.comb_iter_0_right = BranchSeparables(out_chs_right, out_chs_right, 3, 1, pad_type) - - self.comb_iter_1_left = BranchSeparables(out_chs_right, out_chs_right, 5, 1, pad_type) - self.comb_iter_1_right = BranchSeparables(out_chs_right, out_chs_right, 3, 1, pad_type) - - self.comb_iter_2_left = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type) - - self.comb_iter_3_left = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type) - self.comb_iter_3_right = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type) - - self.comb_iter_4_left = BranchSeparables(out_chs_right, out_chs_right, 3, 1, pad_type) - - def forward(self, x, x_prev): - x_relu = self.act(x_prev) - x_path1 = self.path_1(x_relu) - x_path2 = self.path_2(x_relu) - x_left = self.final_path_bn(torch.cat([x_path1, x_path2], 1)) - x_right = self.conv_1x1(x) - - x_comb_iter_0_left = self.comb_iter_0_left(x_right) - x_comb_iter_0_right = self.comb_iter_0_right(x_left) - x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right - - x_comb_iter_1_left = self.comb_iter_1_left(x_left) - x_comb_iter_1_right = self.comb_iter_1_right(x_left) - x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right - - x_comb_iter_2_left = self.comb_iter_2_left(x_right) - x_comb_iter_2 = x_comb_iter_2_left + x_left - - x_comb_iter_3_left = self.comb_iter_3_left(x_left) - x_comb_iter_3_right = self.comb_iter_3_right(x_left) - x_comb_iter_3 = x_comb_iter_3_left + x_comb_iter_3_right - - x_comb_iter_4_left = self.comb_iter_4_left(x_right) - x_comb_iter_4 = x_comb_iter_4_left + x_right - - x_out = torch.cat([x_left, x_comb_iter_0, x_comb_iter_1, x_comb_iter_2, 
x_comb_iter_3, x_comb_iter_4], 1) - return x_out - - -class NormalCell(nn.Module): - - def __init__(self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type=''): - super(NormalCell, self).__init__() - self.conv_prev_1x1 = ActConvBn(in_chs_left, out_chs_left, 1, stride=1, padding=pad_type) - self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, 1, stride=1, padding=pad_type) - - self.comb_iter_0_left = BranchSeparables(out_chs_right, out_chs_right, 5, 1, pad_type) - self.comb_iter_0_right = BranchSeparables(out_chs_left, out_chs_left, 3, 1, pad_type) - - self.comb_iter_1_left = BranchSeparables(out_chs_left, out_chs_left, 5, 1, pad_type) - self.comb_iter_1_right = BranchSeparables(out_chs_left, out_chs_left, 3, 1, pad_type) - - self.comb_iter_2_left = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type) - - self.comb_iter_3_left = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type) - self.comb_iter_3_right = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type) - - self.comb_iter_4_left = BranchSeparables(out_chs_right, out_chs_right, 3, 1, pad_type) - - def forward(self, x, x_prev): - x_left = self.conv_prev_1x1(x_prev) - x_right = self.conv_1x1(x) - - x_comb_iter_0_left = self.comb_iter_0_left(x_right) - x_comb_iter_0_right = self.comb_iter_0_right(x_left) - x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right - - x_comb_iter_1_left = self.comb_iter_1_left(x_left) - x_comb_iter_1_right = self.comb_iter_1_right(x_left) - x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right - - x_comb_iter_2_left = self.comb_iter_2_left(x_right) - x_comb_iter_2 = x_comb_iter_2_left + x_left - - x_comb_iter_3_left = self.comb_iter_3_left(x_left) - x_comb_iter_3_right = self.comb_iter_3_right(x_left) - x_comb_iter_3 = x_comb_iter_3_left + x_comb_iter_3_right - - x_comb_iter_4_left = self.comb_iter_4_left(x_right) - x_comb_iter_4 = x_comb_iter_4_left + x_right - - x_out = torch.cat([x_left, x_comb_iter_0, x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1) - return x_out - - -class ReductionCell0(nn.Module): - - def __init__(self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type=''): - super(ReductionCell0, self).__init__() - self.conv_prev_1x1 = ActConvBn(in_chs_left, out_chs_left, 1, stride=1, padding=pad_type) - self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, 1, stride=1, padding=pad_type) - - self.comb_iter_0_left = BranchSeparables(out_chs_right, out_chs_right, 5, 2, pad_type) - self.comb_iter_0_right = BranchSeparables(out_chs_right, out_chs_right, 7, 2, pad_type) - - self.comb_iter_1_left = create_pool2d('max', 3, 2, padding=pad_type) - self.comb_iter_1_right = BranchSeparables(out_chs_right, out_chs_right, 7, 2, pad_type) - - self.comb_iter_2_left = create_pool2d('avg', 3, 2, count_include_pad=False, padding=pad_type) - self.comb_iter_2_right = BranchSeparables(out_chs_right, out_chs_right, 5, 2, pad_type) - - self.comb_iter_3_right = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type) - - self.comb_iter_4_left = BranchSeparables(out_chs_right, out_chs_right, 3, 1, pad_type) - self.comb_iter_4_right = create_pool2d('max', 3, 2, padding=pad_type) - - def forward(self, x, x_prev): - x_left = self.conv_prev_1x1(x_prev) - x_right = self.conv_1x1(x) - - x_comb_iter_0_left = self.comb_iter_0_left(x_right) - x_comb_iter_0_right = self.comb_iter_0_right(x_left) - x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right - - x_comb_iter_1_left = self.comb_iter_1_left(x_right) - 
x_comb_iter_1_right = self.comb_iter_1_right(x_left) - x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right - - x_comb_iter_2_left = self.comb_iter_2_left(x_right) - x_comb_iter_2_right = self.comb_iter_2_right(x_left) - x_comb_iter_2 = x_comb_iter_2_left + x_comb_iter_2_right - - x_comb_iter_3_right = self.comb_iter_3_right(x_comb_iter_0) - x_comb_iter_3 = x_comb_iter_3_right + x_comb_iter_1 - - x_comb_iter_4_left = self.comb_iter_4_left(x_comb_iter_0) - x_comb_iter_4_right = self.comb_iter_4_right(x_right) - x_comb_iter_4 = x_comb_iter_4_left + x_comb_iter_4_right - - x_out = torch.cat([x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1) - return x_out - - -class ReductionCell1(nn.Module): - - def __init__(self, in_chs_left, out_chs_left, in_chs_right, out_chs_right, pad_type=''): - super(ReductionCell1, self).__init__() - self.conv_prev_1x1 = ActConvBn(in_chs_left, out_chs_left, 1, stride=1, padding=pad_type) - self.conv_1x1 = ActConvBn(in_chs_right, out_chs_right, 1, stride=1, padding=pad_type) - - self.comb_iter_0_left = BranchSeparables(out_chs_right, out_chs_right, 5, 2, pad_type) - self.comb_iter_0_right = BranchSeparables(out_chs_right, out_chs_right, 7, 2, pad_type) - - self.comb_iter_1_left = create_pool2d('max', 3, 2, padding=pad_type) - self.comb_iter_1_right = BranchSeparables(out_chs_right, out_chs_right, 7, 2, pad_type) - - self.comb_iter_2_left = create_pool2d('avg', 3, 2, count_include_pad=False, padding=pad_type) - self.comb_iter_2_right = BranchSeparables(out_chs_right, out_chs_right, 5, 2, pad_type) - - self.comb_iter_3_right = create_pool2d('avg', 3, 1, count_include_pad=False, padding=pad_type) - - self.comb_iter_4_left = BranchSeparables(out_chs_right, out_chs_right, 3, 1, pad_type) - self.comb_iter_4_right = create_pool2d('max', 3, 2, padding=pad_type) - - def forward(self, x, x_prev): - x_left = self.conv_prev_1x1(x_prev) - x_right = self.conv_1x1(x) - - x_comb_iter_0_left = self.comb_iter_0_left(x_right) - x_comb_iter_0_right = self.comb_iter_0_right(x_left) - x_comb_iter_0 = x_comb_iter_0_left + x_comb_iter_0_right - - x_comb_iter_1_left = self.comb_iter_1_left(x_right) - x_comb_iter_1_right = self.comb_iter_1_right(x_left) - x_comb_iter_1 = x_comb_iter_1_left + x_comb_iter_1_right - - x_comb_iter_2_left = self.comb_iter_2_left(x_right) - x_comb_iter_2_right = self.comb_iter_2_right(x_left) - x_comb_iter_2 = x_comb_iter_2_left + x_comb_iter_2_right - - x_comb_iter_3_right = self.comb_iter_3_right(x_comb_iter_0) - x_comb_iter_3 = x_comb_iter_3_right + x_comb_iter_1 - - x_comb_iter_4_left = self.comb_iter_4_left(x_comb_iter_0) - x_comb_iter_4_right = self.comb_iter_4_right(x_right) - x_comb_iter_4 = x_comb_iter_4_left + x_comb_iter_4_right - - x_out = torch.cat([x_comb_iter_1, x_comb_iter_2, x_comb_iter_3, x_comb_iter_4], 1) - return x_out - - -class NASNetALarge(nn.Module): - """NASNetALarge (6 @ 4032) """ - - def __init__(self, num_classes=1000, in_chans=3, stem_size=96, channel_multiplier=2, - num_features=4032, output_stride=32, drop_rate=0., global_pool='avg', pad_type='same'): - super(NASNetALarge, self).__init__() - self.num_classes = num_classes - self.stem_size = stem_size - self.num_features = num_features - self.channel_multiplier = channel_multiplier - self.drop_rate = drop_rate - assert output_stride == 32 - - channels = self.num_features // 24 - # 24 is default value for the architecture - - self.conv0 = ConvBnAct( - in_channels=in_chans, out_channels=self.stem_size, kernel_size=3, padding=0, stride=2, - 
norm_layer=partial(nn.BatchNorm2d, eps=0.001, momentum=0.1), apply_act=False) - - self.cell_stem_0 = CellStem0( - self.stem_size, num_channels=channels // (channel_multiplier ** 2), pad_type=pad_type) - self.cell_stem_1 = CellStem1( - self.stem_size, num_channels=channels // channel_multiplier, pad_type=pad_type) - - self.cell_0 = FirstCell( - in_chs_left=channels, out_chs_left=channels // 2, - in_chs_right=2 * channels, out_chs_right=channels, pad_type=pad_type) - self.cell_1 = NormalCell( - in_chs_left=2 * channels, out_chs_left=channels, - in_chs_right=6 * channels, out_chs_right=channels, pad_type=pad_type) - self.cell_2 = NormalCell( - in_chs_left=6 * channels, out_chs_left=channels, - in_chs_right=6 * channels, out_chs_right=channels, pad_type=pad_type) - self.cell_3 = NormalCell( - in_chs_left=6 * channels, out_chs_left=channels, - in_chs_right=6 * channels, out_chs_right=channels, pad_type=pad_type) - self.cell_4 = NormalCell( - in_chs_left=6 * channels, out_chs_left=channels, - in_chs_right=6 * channels, out_chs_right=channels, pad_type=pad_type) - self.cell_5 = NormalCell( - in_chs_left=6 * channels, out_chs_left=channels, - in_chs_right=6 * channels, out_chs_right=channels, pad_type=pad_type) - - self.reduction_cell_0 = ReductionCell0( - in_chs_left=6 * channels, out_chs_left=2 * channels, - in_chs_right=6 * channels, out_chs_right=2 * channels, pad_type=pad_type) - self.cell_6 = FirstCell( - in_chs_left=6 * channels, out_chs_left=channels, - in_chs_right=8 * channels, out_chs_right=2 * channels, pad_type=pad_type) - self.cell_7 = NormalCell( - in_chs_left=8 * channels, out_chs_left=2 * channels, - in_chs_right=12 * channels, out_chs_right=2 * channels, pad_type=pad_type) - self.cell_8 = NormalCell( - in_chs_left=12 * channels, out_chs_left=2 * channels, - in_chs_right=12 * channels, out_chs_right=2 * channels, pad_type=pad_type) - self.cell_9 = NormalCell( - in_chs_left=12 * channels, out_chs_left=2 * channels, - in_chs_right=12 * channels, out_chs_right=2 * channels, pad_type=pad_type) - self.cell_10 = NormalCell( - in_chs_left=12 * channels, out_chs_left=2 * channels, - in_chs_right=12 * channels, out_chs_right=2 * channels, pad_type=pad_type) - self.cell_11 = NormalCell( - in_chs_left=12 * channels, out_chs_left=2 * channels, - in_chs_right=12 * channels, out_chs_right=2 * channels, pad_type=pad_type) - - self.reduction_cell_1 = ReductionCell1( - in_chs_left=12 * channels, out_chs_left=4 * channels, - in_chs_right=12 * channels, out_chs_right=4 * channels, pad_type=pad_type) - self.cell_12 = FirstCell( - in_chs_left=12 * channels, out_chs_left=2 * channels, - in_chs_right=16 * channels, out_chs_right=4 * channels, pad_type=pad_type) - self.cell_13 = NormalCell( - in_chs_left=16 * channels, out_chs_left=4 * channels, - in_chs_right=24 * channels, out_chs_right=4 * channels, pad_type=pad_type) - self.cell_14 = NormalCell( - in_chs_left=24 * channels, out_chs_left=4 * channels, - in_chs_right=24 * channels, out_chs_right=4 * channels, pad_type=pad_type) - self.cell_15 = NormalCell( - in_chs_left=24 * channels, out_chs_left=4 * channels, - in_chs_right=24 * channels, out_chs_right=4 * channels, pad_type=pad_type) - self.cell_16 = NormalCell( - in_chs_left=24 * channels, out_chs_left=4 * channels, - in_chs_right=24 * channels, out_chs_right=4 * channels, pad_type=pad_type) - self.cell_17 = NormalCell( - in_chs_left=24 * channels, out_chs_left=4 * channels, - in_chs_right=24 * channels, out_chs_right=4 * channels, pad_type=pad_type) - self.act = nn.ReLU(inplace=True) - 
self.feature_info = [ - dict(num_chs=96, reduction=2, module='conv0'), - dict(num_chs=168, reduction=4, module='cell_stem_1.conv_1x1.act'), - dict(num_chs=1008, reduction=8, module='reduction_cell_0.conv_1x1.act'), - dict(num_chs=2016, reduction=16, module='reduction_cell_1.conv_1x1.act'), - dict(num_chs=4032, reduction=32, module='act'), - ] - - self.global_pool, self.last_linear = create_classifier( - self.num_features, self.num_classes, pool_type=global_pool) - - def get_classifier(self): - return self.last_linear - - def reset_classifier(self, num_classes, global_pool='avg'): - self.num_classes = num_classes - self.global_pool, self.last_linear = create_classifier( - self.num_features, self.num_classes, pool_type=global_pool) - - def forward_features(self, x): - x_conv0 = self.conv0(x) - - x_stem_0 = self.cell_stem_0(x_conv0) - x_stem_1 = self.cell_stem_1(x_conv0, x_stem_0) - - x_cell_0 = self.cell_0(x_stem_1, x_stem_0) - x_cell_1 = self.cell_1(x_cell_0, x_stem_1) - x_cell_2 = self.cell_2(x_cell_1, x_cell_0) - x_cell_3 = self.cell_3(x_cell_2, x_cell_1) - x_cell_4 = self.cell_4(x_cell_3, x_cell_2) - x_cell_5 = self.cell_5(x_cell_4, x_cell_3) - - x_reduction_cell_0 = self.reduction_cell_0(x_cell_5, x_cell_4) - x_cell_6 = self.cell_6(x_reduction_cell_0, x_cell_4) - x_cell_7 = self.cell_7(x_cell_6, x_reduction_cell_0) - x_cell_8 = self.cell_8(x_cell_7, x_cell_6) - x_cell_9 = self.cell_9(x_cell_8, x_cell_7) - x_cell_10 = self.cell_10(x_cell_9, x_cell_8) - x_cell_11 = self.cell_11(x_cell_10, x_cell_9) - - x_reduction_cell_1 = self.reduction_cell_1(x_cell_11, x_cell_10) - x_cell_12 = self.cell_12(x_reduction_cell_1, x_cell_10) - x_cell_13 = self.cell_13(x_cell_12, x_reduction_cell_1) - x_cell_14 = self.cell_14(x_cell_13, x_cell_12) - x_cell_15 = self.cell_15(x_cell_14, x_cell_13) - x_cell_16 = self.cell_16(x_cell_15, x_cell_14) - x_cell_17 = self.cell_17(x_cell_16, x_cell_15) - x = self.act(x_cell_17) - return x - - def forward(self, x): - x = self.forward_features(x) - x = self.global_pool(x) - if self.drop_rate > 0: - x = F.dropout(x, self.drop_rate, training=self.training) - x = self.last_linear(x) - return x - - -def _create_nasnet(variant, pretrained=False, **kwargs): - return build_model_with_cfg( - NASNetALarge, variant, pretrained, - default_cfg=default_cfgs[variant], - feature_cfg=dict(feature_cls='hook', no_rewrite=True), # not possible to re-write this model - **kwargs) - - -@register_model -def nasnetalarge(pretrained=False, **kwargs): - """NASNet-A large model architecture. 
- """ - model_kwargs = dict(pad_type='same', **kwargs) - return _create_nasnet('nasnetalarge', pretrained, **model_kwargs) diff --git a/spaces/coraKong/WorldSimulation/plugins/CharacterCreationPlugin.py b/spaces/coraKong/WorldSimulation/plugins/CharacterCreationPlugin.py deleted file mode 100644 index 280a9a293942dbea39caaaa3a781c0955122fbe0..0000000000000000000000000000000000000000 --- a/spaces/coraKong/WorldSimulation/plugins/CharacterCreationPlugin.py +++ /dev/null @@ -1,22 +0,0 @@ -import random -from Character import Character -from utils import get_random_name - -class CharacterCreationPlugin: - def __init__(self, special_constitution_ratio=None, spiritual_roots_ratio=None): - self.special_constitution_ratio = special_constitution_ratio - self.spiritual_roots_ratio = spiritual_roots_ratio - - def create_character(self): - # 根据 special_constitution_ratio 随机确定这个角色拥有哪些特殊体质 - special_constitution = [1 if random.random() < ratio else 0 for ratio in self.special_constitution_ratio] - - # 根据 spiritual_roots_ratio 随机确定这个角色拥有哪些灵根 - spiritual_roots = [1 if random.random() < ratio else 0 for ratio in self.spiritual_roots_ratio] - - character = Character(get_random_name(), random.choice(["男", "女"]), special_constitution, spiritual_roots) - return character - - def set_parameters(self, special_constitution_ratio, spiritual_roots_ratio): - self.special_constitution_ratio = special_constitution_ratio - self.spiritual_roots_ratio = spiritual_roots_ratio \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/backbone/backbone.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/backbone/backbone.py deleted file mode 100644 index 04f3c3c009d972bcab46eaeab33a8bfcc05b726c..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/detectron2/modeling/backbone/backbone.py +++ /dev/null @@ -1,74 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from abc import ABCMeta, abstractmethod -from typing import Dict -import torch.nn as nn - -from annotator.oneformer.detectron2.layers import ShapeSpec - -__all__ = ["Backbone"] - - -class Backbone(nn.Module, metaclass=ABCMeta): - """ - Abstract base class for network backbones. - """ - - def __init__(self): - """ - The `__init__` method of any subclass can specify its own set of arguments. - """ - super().__init__() - - @abstractmethod - def forward(self): - """ - Subclasses must override this method, but adhere to the same return type. - - Returns: - dict[str->Tensor]: mapping from feature name (e.g., "res2") to tensor - """ - pass - - @property - def size_divisibility(self) -> int: - """ - Some backbones require the input height and width to be divisible by a - specific integer. This is typically true for encoder / decoder type networks - with lateral connection (e.g., FPN) for which feature maps need to match - dimension in the "bottom up" and "top down" paths. Set to 0 if no specific - input size divisibility is required. - """ - return 0 - - @property - def padding_constraints(self) -> Dict[str, int]: - """ - This property is a generalization of size_divisibility. Some backbones and training - recipes require specific padding constraints, such as enforcing divisibility by a specific - integer (e.g., FPN) or padding to a square (e.g., ViTDet with large-scale jitter - in :paper:vitdet). 
`padding_constraints` contains these optional items like: - { - "size_divisibility": int, - "square_size": int, - # Future options are possible - } - `size_divisibility` will read from here if presented and `square_size` indicates the - square padding size if `square_size` > 0. - - TODO: use type of Dict[str, int] to avoid torchscipt issues. The type of padding_constraints - could be generalized as TypedDict (Python 3.8+) to support more types in the future. - """ - return {} - - def output_shape(self): - """ - Returns: - dict[str->ShapeSpec] - """ - # this is a backward-compatible default - return { - name: ShapeSpec( - channels=self._out_feature_channels[name], stride=self._out_feature_strides[name] - ) - for name in self._out_features - } diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/functions/__init__.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/functions/__init__.py deleted file mode 100644 index 2b06b5ac538b63bdb9a6c82e4635b95bb5491d5b..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/oneformer/oneformer/modeling/pixel_decoder/ops/functions/__init__.py +++ /dev/null @@ -1,13 +0,0 @@ -# ------------------------------------------------------------------------------------------------ -# Deformable DETR -# Copyright (c) 2020 SenseTime. All Rights Reserved. -# Licensed under the Apache License, Version 2.0 [see LICENSE for details] -# ------------------------------------------------------------------------------------------------ -# Modified from https://github.com/chengdazhi/Deformable-Convolution-V2-PyTorch/tree/pytorch_1.0.0 -# ------------------------------------------------------------------------------------------------ - -# Copyright (c) Facebook, Inc. and its affiliates. -# Modified by Bowen Cheng from https://github.com/fundamentalvision/Deformable-DETR - -from .ms_deform_attn_func import MSDeformAttnFunction - diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/visualization/image.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/visualization/image.py deleted file mode 100644 index 61a56c75b67f593c298408462c63c0468be8e276..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/visualization/image.py +++ /dev/null @@ -1,152 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import cv2 -import numpy as np - -from annotator.uniformer.mmcv.image import imread, imwrite -from .color import color_val - - -def imshow(img, win_name='', wait_time=0): - """Show an image. - - Args: - img (str or ndarray): The image to be displayed. - win_name (str): The window name. - wait_time (int): Value of waitKey param. - """ - cv2.imshow(win_name, imread(img)) - if wait_time == 0: # prevent from hanging if windows was closed - while True: - ret = cv2.waitKey(1) - - closed = cv2.getWindowProperty(win_name, cv2.WND_PROP_VISIBLE) < 1 - # if user closed window or if some key pressed - if closed or ret != -1: - break - else: - ret = cv2.waitKey(wait_time) - - -def imshow_bboxes(img, - bboxes, - colors='green', - top_k=-1, - thickness=1, - show=True, - win_name='', - wait_time=0, - out_file=None): - """Draw bboxes on an image. - - Args: - img (str or ndarray): The image to be displayed. 
- bboxes (list or ndarray): A list of ndarray of shape (k, 4). - colors (list[str or tuple or Color]): A list of colors. - top_k (int): Plot the first k bboxes only if set positive. - thickness (int): Thickness of lines. - show (bool): Whether to show the image. - win_name (str): The window name. - wait_time (int): Value of waitKey param. - out_file (str, optional): The filename to write the image. - - Returns: - ndarray: The image with bboxes drawn on it. - """ - img = imread(img) - img = np.ascontiguousarray(img) - - if isinstance(bboxes, np.ndarray): - bboxes = [bboxes] - if not isinstance(colors, list): - colors = [colors for _ in range(len(bboxes))] - colors = [color_val(c) for c in colors] - assert len(bboxes) == len(colors) - - for i, _bboxes in enumerate(bboxes): - _bboxes = _bboxes.astype(np.int32) - if top_k <= 0: - _top_k = _bboxes.shape[0] - else: - _top_k = min(top_k, _bboxes.shape[0]) - for j in range(_top_k): - left_top = (_bboxes[j, 0], _bboxes[j, 1]) - right_bottom = (_bboxes[j, 2], _bboxes[j, 3]) - cv2.rectangle( - img, left_top, right_bottom, colors[i], thickness=thickness) - - if show: - imshow(img, win_name, wait_time) - if out_file is not None: - imwrite(img, out_file) - return img - - -def imshow_det_bboxes(img, - bboxes, - labels, - class_names=None, - score_thr=0, - bbox_color='green', - text_color='green', - thickness=1, - font_scale=0.5, - show=True, - win_name='', - wait_time=0, - out_file=None): - """Draw bboxes and class labels (with scores) on an image. - - Args: - img (str or ndarray): The image to be displayed. - bboxes (ndarray): Bounding boxes (with scores), shaped (n, 4) or - (n, 5). - labels (ndarray): Labels of bboxes. - class_names (list[str]): Names of each classes. - score_thr (float): Minimum score of bboxes to be shown. - bbox_color (str or tuple or :obj:`Color`): Color of bbox lines. - text_color (str or tuple or :obj:`Color`): Color of texts. - thickness (int): Thickness of lines. - font_scale (float): Font scales of texts. - show (bool): Whether to show the image. - win_name (str): The window name. - wait_time (int): Value of waitKey param. - out_file (str or None): The filename to write the image. - - Returns: - ndarray: The image with bboxes drawn on it. 
- """ - assert bboxes.ndim == 2 - assert labels.ndim == 1 - assert bboxes.shape[0] == labels.shape[0] - assert bboxes.shape[1] == 4 or bboxes.shape[1] == 5 - img = imread(img) - img = np.ascontiguousarray(img) - - if score_thr > 0: - assert bboxes.shape[1] == 5 - scores = bboxes[:, -1] - inds = scores > score_thr - bboxes = bboxes[inds, :] - labels = labels[inds] - - bbox_color = color_val(bbox_color) - text_color = color_val(text_color) - - for bbox, label in zip(bboxes, labels): - bbox_int = bbox.astype(np.int32) - left_top = (bbox_int[0], bbox_int[1]) - right_bottom = (bbox_int[2], bbox_int[3]) - cv2.rectangle( - img, left_top, right_bottom, bbox_color, thickness=thickness) - label_text = class_names[ - label] if class_names is not None else f'cls {label}' - if len(bbox) > 4: - label_text += f'|{bbox[-1]:.02f}' - cv2.putText(img, label_text, (bbox_int[0], bbox_int[1] - 2), - cv2.FONT_HERSHEY_COMPLEX, font_scale, text_color) - - if show: - imshow(img, win_name, wait_time) - if out_file is not None: - imwrite(img, out_file) - return img diff --git a/spaces/crashedice/signify/SOURCE/yolo_files/utils/loss.py b/spaces/crashedice/signify/SOURCE/yolo_files/utils/loss.py deleted file mode 100644 index c577ca6a19de76599b7343603795c2672099c1b5..0000000000000000000000000000000000000000 --- a/spaces/crashedice/signify/SOURCE/yolo_files/utils/loss.py +++ /dev/null @@ -1,216 +0,0 @@ -# Loss functions - -import torch -import torch.nn as nn - -from SOURCE.yolo_files.utils.general import bbox_iou -from SOURCE.yolo_files.utils.torch_utils import is_parallel - - -def smooth_BCE(eps=0.1): # https://github.com/ultralytics/yolov3/issues/238#issuecomment-598028441 - # return positive, negative label smoothing BCE targets - return 1.0 - 0.5 * eps, 0.5 * eps - - -class BCEBlurWithLogitsLoss(nn.Module): - # BCEwithLogitLoss() with reduced missing label effects. - def __init__(self, alpha=0.05): - super(BCEBlurWithLogitsLoss, self).__init__() - self.loss_fcn = nn.BCEWithLogitsLoss(reduction='none') # must be nn.BCEWithLogitsLoss() - self.alpha = alpha - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - pred = torch.sigmoid(pred) # prob from logits - dx = pred - true # reduce only missing label effects - # dx = (pred - true).abs() # reduce missing label and false label effects - alpha_factor = 1 - torch.exp((dx - 1) / (self.alpha + 1e-4)) - loss *= alpha_factor - return loss.mean() - - -class FocalLoss(nn.Module): - # Wraps focal loss around existing loss_fcn(), i.e. 
criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5) - def __init__(self, loss_fcn, gamma=1.5, alpha=0.25): - super(FocalLoss, self).__init__() - self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss() - self.gamma = gamma - self.alpha = alpha - self.reduction = loss_fcn.reduction - self.loss_fcn.reduction = 'none' # required to apply FL to each element - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - # p_t = torch.exp(-loss) - # loss *= self.alpha * (1.000001 - p_t) ** self.gamma # non-zero power for gradient stability - - # TF implementation https://github.com/tensorflow/addons/blob/v0.7.1/tensorflow_addons/losses/focal_loss.py - pred_prob = torch.sigmoid(pred) # prob from logits - p_t = true * pred_prob + (1 - true) * (1 - pred_prob) - alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha) - modulating_factor = (1.0 - p_t) ** self.gamma - loss *= alpha_factor * modulating_factor - - if self.reduction == 'mean': - return loss.mean() - elif self.reduction == 'sum': - return loss.sum() - else: # 'none' - return loss - - -class QFocalLoss(nn.Module): - # Wraps Quality focal loss around existing loss_fcn(), i.e. criteria = FocalLoss(nn.BCEWithLogitsLoss(), gamma=1.5) - def __init__(self, loss_fcn, gamma=1.5, alpha=0.25): - super(QFocalLoss, self).__init__() - self.loss_fcn = loss_fcn # must be nn.BCEWithLogitsLoss() - self.gamma = gamma - self.alpha = alpha - self.reduction = loss_fcn.reduction - self.loss_fcn.reduction = 'none' # required to apply FL to each element - - def forward(self, pred, true): - loss = self.loss_fcn(pred, true) - - pred_prob = torch.sigmoid(pred) # prob from logits - alpha_factor = true * self.alpha + (1 - true) * (1 - self.alpha) - modulating_factor = torch.abs(true - pred_prob) ** self.gamma - loss *= alpha_factor * modulating_factor - - if self.reduction == 'mean': - return loss.mean() - elif self.reduction == 'sum': - return loss.sum() - else: # 'none' - return loss - - -class ComputeLoss: - # Compute losses - def __init__(self, model, autobalance=False): - super(ComputeLoss, self).__init__() - device = next(model.parameters()).device # get model device - h = model.hyp # hyperparameters - - # Define criteria - BCEcls = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['cls_pw']], device=device)) - BCEobj = nn.BCEWithLogitsLoss(pos_weight=torch.tensor([h['obj_pw']], device=device)) - - # Class label smoothing https://arxiv.org/pdf/1902.04103.pdf eqn 3 - self.cp, self.cn = smooth_BCE(eps=h.get('label_smoothing', 0.0)) # positive, negative BCE targets - - # Focal loss - g = h['fl_gamma'] # focal loss gamma - if g > 0: - BCEcls, BCEobj = FocalLoss(BCEcls, g), FocalLoss(BCEobj, g) - - det = model.module.model[-1] if is_parallel(model) else model.model[-1] # Detect() module - self.balance = {3: [4.0, 1.0, 0.4]}.get(det.nl, [4.0, 1.0, 0.25, 0.06, .02]) # P3-P7 - self.ssi = list(det.stride).index(16) if autobalance else 0 # stride 16 index - self.BCEcls, self.BCEobj, self.gr, self.hyp, self.autobalance = BCEcls, BCEobj, model.gr, h, autobalance - for k in 'na', 'nc', 'nl', 'anchors': - setattr(self, k, getattr(det, k)) - - def __call__(self, p, targets): # predictions, targets, model - device = targets.device - lcls, lbox, lobj = torch.zeros(1, device=device), torch.zeros(1, device=device), torch.zeros(1, device=device) - tcls, tbox, indices, anchors = self.build_targets(p, targets) # targets - - # Losses - for i, pi in enumerate(p): # layer index, layer predictions - b, a, gj, gi = indices[i] # image, anchor, gridy, gridx - tobj = 
torch.zeros_like(pi[..., 0], device=device) # target obj - - n = b.shape[0] # number of targets - if n: - ps = pi[b, a, gj, gi] # prediction subset corresponding to targets - - # Regression - pxy = ps[:, :2].sigmoid() * 2. - 0.5 - pwh = (ps[:, 2:4].sigmoid() * 2) ** 2 * anchors[i] - pbox = torch.cat((pxy, pwh), 1) # predicted box - iou = bbox_iou(pbox.T, tbox[i], x1y1x2y2=False, CIoU=True) # iou(prediction, target) - lbox += (1.0 - iou).mean() # iou loss - - # Objectness - tobj[b, a, gj, gi] = (1.0 - self.gr) + self.gr * iou.detach().clamp(0).type(tobj.dtype) # iou ratio - - # Classification - if self.nc > 1: # cls loss (only if multiple classes) - t = torch.full_like(ps[:, 5:], self.cn, device=device) # targets - t[range(n), tcls[i]] = self.cp - lcls += self.BCEcls(ps[:, 5:], t) # BCE - - # Append targets to text file - # with open('targets.txt', 'a') as file: - # [file.write('%11.5g ' * 4 % tuple(x) + '\n') for x in torch.cat((txy[i], twh[i]), 1)] - - obji = self.BCEobj(pi[..., 4], tobj) - lobj += obji * self.balance[i] # obj loss - if self.autobalance: - self.balance[i] = self.balance[i] * 0.9999 + 0.0001 / obji.detach().item() - - if self.autobalance: - self.balance = [x / self.balance[self.ssi] for x in self.balance] - lbox *= self.hyp['box'] - lobj *= self.hyp['obj'] - lcls *= self.hyp['cls'] - bs = tobj.shape[0] # batch size - - loss = lbox + lobj + lcls - return loss * bs, torch.cat((lbox, lobj, lcls, loss)).detach() - - def build_targets(self, p, targets): - # Build targets for compute_loss(), input targets(image,class,x,y,w,h) - na, nt = self.na, targets.shape[0] # number of anchors, targets - tcls, tbox, indices, anch = [], [], [], [] - gain = torch.ones(7, device=targets.device) # normalized to gridspace gain - ai = torch.arange(na, device=targets.device).float().view(na, 1).repeat(1, nt) # same as .repeat_interleave(nt) - targets = torch.cat((targets.repeat(na, 1, 1), ai[:, :, None]), 2) # append anchor indices - - g = 0.5 # bias - off = torch.tensor([[0, 0], - [1, 0], [0, 1], [-1, 0], [0, -1], # j,k,l,m - # [1, 1], [1, -1], [-1, 1], [-1, -1], # jk,jm,lk,lm - ], device=targets.device).float() * g # offsets - - for i in range(self.nl): - anchors = self.anchors[i] - gain[2:6] = torch.tensor(p[i].shape)[[3, 2, 3, 2]] # xyxy gain - - # Match targets to anchors - t = targets * gain - if nt: - # Matches - r = t[:, :, 4:6] / anchors[:, None] # wh ratio - j = torch.max(r, 1. / r).max(2)[0] < self.hyp['anchor_t'] # compare - # j = wh_iou(anchors, t[:, 4:6]) > model.hyp['iou_t'] # iou(3,n)=wh_iou(anchors(3,2), gwh(n,2)) - t = t[j] # filter - - # Offsets - gxy = t[:, 2:4] # grid xy - gxi = gain[[2, 3]] - gxy # inverse - j, k = ((gxy % 1. < g) & (gxy > 1.)).T - l, m = ((gxi % 1. 
< g) & (gxi > 1.)).T - j = torch.stack((torch.ones_like(j), j, k, l, m)) - t = t.repeat((5, 1, 1))[j] - offsets = (torch.zeros_like(gxy)[None] + off[:, None])[j] - else: - t = targets[0] - offsets = 0 - - # Define - b, c = t[:, :2].long().T # image, class - gxy = t[:, 2:4] # grid xy - gwh = t[:, 4:6] # grid wh - gij = (gxy - offsets).long() - gi, gj = gij.T # grid xy indices - - # Append - a = t[:, 6].long() # anchor indices - indices.append((b, a, gj.clamp_(0, gain[3] - 1), gi.clamp_(0, gain[2] - 1))) # image, anchor, grid indices - tbox.append(torch.cat((gxy - gij, gwh), 1)) # box - anch.append(anchors[a]) # anchors - tcls.append(c) # class - - return tcls, tbox, indices, anch diff --git a/spaces/csuer/vits/text/ngu_dialect.py b/spaces/csuer/vits/text/ngu_dialect.py deleted file mode 100644 index ce3e12bbf0469426872eed5f681985d3e1be9b26..0000000000000000000000000000000000000000 --- a/spaces/csuer/vits/text/ngu_dialect.py +++ /dev/null @@ -1,30 +0,0 @@ -import re -import opencc - - -dialects = {'SZ': 'suzhou', 'WX': 'wuxi', 'CZ': 'changzhou', 'HZ': 'hangzhou', - 'SX': 'shaoxing', 'NB': 'ningbo', 'JJ': 'jingjiang', 'YX': 'yixing', - 'JD': 'jiading', 'ZR': 'zhenru', 'PH': 'pinghu', 'TX': 'tongxiang', - 'JS': 'jiashan', 'HN': 'xiashi', 'LP': 'linping', 'XS': 'xiaoshan', - 'FY': 'fuyang', 'RA': 'ruao', 'CX': 'cixi', 'SM': 'sanmen', - 'TT': 'tiantai', 'WZ': 'wenzhou', 'SC': 'suichang', 'YB': 'youbu'} - -converters = {} - -for dialect in dialects.values(): - try: - converters[dialect] = opencc.OpenCC(dialect) - except: - pass - - -def ngu_dialect_to_ipa(text, dialect): - dialect = dialects[dialect] - text = converters[dialect].convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/daarumadx/bot/src/transform/opencv/mask.py b/spaces/daarumadx/bot/src/transform/opencv/mask.py deleted file mode 100644 index 16cb257b102efb52fca7dc88af3d1e11efed448e..0000000000000000000000000000000000000000 --- a/spaces/daarumadx/bot/src/transform/opencv/mask.py +++ /dev/null @@ -1,182 +0,0 @@ -"""OpenCV Mask Transforms.""" - -import cv2 -import numpy as np - -from transform.opencv import ImageTransformOpenCV -from transform.opencv.bodypart.extract import extract_annotations -from transform.gan.generator import tensor2im - - -class MaskImageTransformOpenCV(ImageTransformOpenCV): - """Mask Image Transform OpenCV.""" - - def __init__(self, input_index=(-2, -1)): - """ - Mask Image Transform OpenCV constructor. - - :param input_index: index where to take the inputs (default is (-2,-1) - for the two previous transformation) - :param args: args parameter to run the image transformation (default use Conf.args) - """ - super().__init__(input_index=input_index) - - -class MaskToMaskref(MaskImageTransformOpenCV): - """Mask & Correct -> MaskRef [OPENCV].""" - - def _execute(self, *args): - """ - Create mask ref. 
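The mask transforms in this file composite images with a simple green-screen pattern: isolate the green band with `cv2.inRange`, optionally dilate it, then cut and recombine the two sources with `cv2.bitwise_and` and `cv2.add`. A self-contained sketch of that pattern with made-up inputs (not the class's actual call path):

```python
# Green-mask compositing sketch (hypothetical inputs): keep original pixels where
# the mask is NOT green, and pure green where it is.
import cv2
import numpy as np

correct = np.full((100, 100, 3), 128, dtype=np.uint8)   # stand-in "correct" image
mask = np.zeros((100, 100, 3), dtype=np.uint8)
mask[20:60, 20:60] = (0, 255, 0)                        # a green region (B, G, R)

green = np.zeros_like(correct)
green[:, :, :] = (0, 255, 0)

green_mask = cv2.inRange(mask, np.asarray([0, 250, 0]), np.asarray([10, 255, 10]))
green_mask = cv2.dilate(green_mask, np.ones((5, 5), np.uint8), iterations=1)
green_mask_inv = cv2.bitwise_not(green_mask)

res1 = cv2.bitwise_and(correct, correct, mask=green_mask_inv)  # keep non-green pixels
res2 = cv2.bitwise_and(green, green, mask=green_mask)          # fill green region
maskref = cv2.add(res1, res2)                                  # composite result
```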
- - :param args: <[RGB,RGB]>image correct, image mask - :return: image - """ - # Create a total green image - green = np.zeros(args[0].shape, np.uint8) - green[:, :, :] = (0, 255, 0) # (B, G, R) - - # Define the green color filter - f1 = np.asarray([0, 250, 0]) # green color filter - f2 = np.asarray([10, 255, 10]) - - # From mask, extrapolate only the green mask - green_mask = cv2.inRange(args[1], f1, f2) # green is 0 - - # (OPTIONAL) Apply dilate and open to mask - kernel = np.ones((5, 5), np.uint8) # Try change it? - green_mask = cv2.dilate(green_mask, kernel, iterations=1) - # green_mask = cv2.morphologyEx(green_mask, cv2.MORPH_OPEN, kernel) - - # Create an inverted mask - green_mask_inv = cv2.bitwise_not(green_mask) - - # Cut correct and green image, using the green_mask & green_mask_inv - res1 = cv2.bitwise_and(args[0], args[0], mask=green_mask_inv) - res2 = cv2.bitwise_and(green, green, mask=green_mask) - - # Compone: - return cv2.add(res1, res2) - - -class MaskdetToMaskfin(MaskImageTransformOpenCV): - """Maskdet -> Maskfin [OPENCV].""" - - def __init__(self, input_index=(-2, -1)): - """ - Maskdet To Maskfin constructor. - - :param input_index: index where to take the inputs (default is (-2,-1) - for the two previous transformation) - :param args: args parameter to run the image transformation (default use Conf.args) - """ - super().__init__(input_index=input_index) - - def _setup(self, *args): - self.__aur_size = self._args["prefs"]["aursize"] - self.__nip_size = self._args["prefs"]["nipsize"] - self.__tit_size = self._args["prefs"]["titsize"] - self.__vag_size = self._args["prefs"]["vagsize"] - self.__hair_size = self._args["prefs"]["hairsize"] - - def _execute(self, *args): - """ - Create maskfin. - - steps: - 1. Extract annotation - 1.a: Filter by color - 1.b: Find ellipses - 1.c: Filter out ellipses by max size, and max total numbers - 1.d: Detect Problems - 1.e: Resolve the problems, or discard the transformation - 2. 
With the body list, draw maskfin, using maskref - - :param args: <[RGB, RGB]> maskref image, maskdet image - :return: image - """ - def to_int(a, b): - return int(round(a * float(b))) - - enable_pubes = (self.__hair_size > 0) - - # Create a total green image, in which draw details ellipses - details = np.zeros(args[0].shape, np.uint8) - details[:, :, :] = (0, 255, 0) # (B, G, R) - - # Extract body part features: - bodypart_list = extract_annotations(args[1], enable_pubes) - - # Check if the list is not empty: - if bodypart_list: - - self.__draw_bodypart_details(bodypart_list, details, to_int) - - # Define the green color filter - f1 = np.asarray([0, 250, 0]) # green color filter - f2 = np.asarray([10, 255, 10]) - - # From maskref, extrapolate only the green mask - green_mask = cv2.bitwise_not(cv2.inRange(args[0], f1, f2)) # green is 0 - - # Create an inverted mask - green_mask_inv = cv2.bitwise_not(green_mask) - - # Cut maskref and detail image, using the green_mask & green_mask_inv - res1 = cv2.bitwise_and(args[0], args[0], mask=green_mask) - res2 = cv2.bitwise_and(details, details, mask=green_mask_inv) - - # Compone: - maskfin = cv2.add(res1, res2) - return maskfin - - def __draw_bodypart_details(self, bodypart_list, details, to_int): - # Draw body part in details image: - for obj in bodypart_list: - - if obj.w < obj.h: - a_max = int(obj.h / 2) # asse maggiore - a_min = int(obj.w / 2) # asse minore - angle = 0 # angle - else: - a_max = int(obj.w / 2) - a_min = int(obj.h / 2) - angle = 90 - - x = int(obj.x) - y = int(obj.y) - - aurmax = to_int(self.__aur_size, a_max) - aurmin = to_int(self.__aur_size, a_min) - nipmax = to_int(self.__nip_size, a_max) - nipmin = to_int(self.__nip_size, a_min) - titmax = to_int(self.__tit_size, a_max) - titmin = to_int(self.__tit_size, a_min) - vagmax = to_int(self.__vag_size, a_max) - vagmin = to_int(self.__vag_size, a_min) - hairmax = to_int(self.__hair_size, a_max) - hairmin = to_int(self.__hair_size, a_min) - - self.__draw_ellipse(a_max, a_min, angle, aurmax, aurmin, details, hairmax, hairmin, nipmax, nipmin, obj, - titmax, titmin, vagmax, vagmin, x, y) - - @staticmethod - def __draw_ellipse(a_max, a_min, angle, aurmax, aurmin, details, hairmax, hairmin, nipmax, nipmin, obj, - titmax, titmin, vagmax, vagmin, x, y): - # Draw ellipse - if obj.name == "tit": - cv2.ellipse(details, (x, y), (titmax, titmin), angle, 0, 360, (0, 205, 0), -1) # (0,0,0,50) - elif obj.name == "aur": - cv2.ellipse(details, (x, y), (aurmax, aurmin), angle, 0, 360, (0, 0, 255), -1) # red - elif obj.name == "nip": - cv2.ellipse(details, (x, y), (nipmax, nipmin), angle, 0, 360, (255, 255, 255), -1) # white - elif obj.name == "belly": - cv2.ellipse(details, (x, y), (a_max, a_min), angle, 0, 360, (255, 0, 255), -1) # purple - elif obj.name == "vag": - cv2.ellipse(details, (x, y), (vagmax, vagmin), angle, 0, 360, (255, 0, 0), -1) # blue - elif obj.name == "hair": - xmin = x - hairmax - ymin = y - hairmin - xmax = x + hairmax - ymax = y + hairmax - cv2.rectangle(details, (xmin, ymin), (xmax, ymax), (100, 100, 100), -1) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiofiles/os.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiofiles/os.py deleted file mode 100644 index 29bc748fa91a6d3de6ec42842416de6af7134f5c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/aiofiles/os.py +++ /dev/null @@ -1,51 +0,0 @@ -"""Async executor versions of file 
functions from the os module.""" -import os - -from . import ospath as path -from .ospath import wrap - -__all__ = [ - "path", - "stat", - "statvfs", - "rename", - "renames", - "replace", - "remove", - "unlink", - "mkdir", - "makedirs", - "rmdir", - "removedirs", - "link", - "symlink", - "readlink", - "listdir", - "scandir", - "access", - "sendfile", - "wrap", -] - - -stat = wrap(os.stat) -rename = wrap(os.rename) -renames = wrap(os.renames) -replace = wrap(os.replace) -remove = wrap(os.remove) -unlink = wrap(os.unlink) -mkdir = wrap(os.mkdir) -makedirs = wrap(os.makedirs) -rmdir = wrap(os.rmdir) -removedirs = wrap(os.removedirs) -link = wrap(os.link) -symlink = wrap(os.symlink) -readlink = wrap(os.readlink) -listdir = wrap(os.listdir) -scandir = wrap(os.scandir) -access = wrap(os.access) - -if hasattr(os, "sendfile"): - sendfile = wrap(os.sendfile) -if hasattr(os, "statvfs"): - statvfs = wrap(os.statvfs) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-320faa81.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-320faa81.js deleted file mode 100644 index e292b1e2242e82aec3fcc2bbedf871391d21f82c..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-320faa81.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as s,e,s as o}from"./index-9e76ffee.js";class n extends s{constructor(t){super(),e(this,t,null,null,o,{})}}const c=n,r=["static"];export{c as Component,r as modes}; -//# sourceMappingURL=index-320faa81.js.map diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/repository.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/repository.py deleted file mode 100644 index 757995870b202b1a18a83df61e55cacdd7b21439..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/huggingface_hub/repository.py +++ /dev/null @@ -1,1461 +0,0 @@ -import atexit -import os -import re -import subprocess -import threading -import time -from contextlib import contextmanager -from pathlib import Path -from typing import Callable, Dict, Iterator, List, Optional, Tuple, Union -from urllib.parse import urlparse - -from huggingface_hub.constants import REPO_TYPES_URL_PREFIXES, REPOCARD_NAME -from huggingface_hub.repocard import metadata_load, metadata_save - -from .hf_api import HfApi, repo_type_and_id_from_hf_id -from .lfs import LFS_MULTIPART_UPLOAD_COMMAND -from .utils import ( - HfFolder, - SoftTemporaryDirectory, - logging, - run_subprocess, - tqdm, - validate_hf_hub_args, -) -from .utils._typing import TypedDict - - -logger = logging.get_logger(__name__) - - -class CommandInProgress: - """ - Utility to follow commands launched asynchronously. - """ - - def __init__( - self, - title: str, - is_done_method: Callable, - status_method: Callable, - process: subprocess.Popen, - post_method: Optional[Callable] = None, - ): - self.title = title - self._is_done = is_done_method - self._status = status_method - self._process = process - self._stderr = "" - self._stdout = "" - self._post_method = post_method - - @property - def is_done(self) -> bool: - """ - Whether the process is done. 
- """ - result = self._is_done() - - if result and self._post_method is not None: - self._post_method() - self._post_method = None - - return result - - @property - def status(self) -> int: - """ - The exit code/status of the current action. Will return `0` if the - command has completed successfully, and a number between 1 and 255 if - the process errored-out. - - Will return -1 if the command is still ongoing. - """ - return self._status() - - @property - def failed(self) -> bool: - """ - Whether the process errored-out. - """ - return self.status > 0 - - @property - def stderr(self) -> str: - """ - The current output message on the standard error. - """ - if self._process.stderr is not None: - self._stderr += self._process.stderr.read() - return self._stderr - - @property - def stdout(self) -> str: - """ - The current output message on the standard output. - """ - if self._process.stdout is not None: - self._stdout += self._process.stdout.read() - return self._stdout - - def __repr__(self): - status = self.status - - if status == -1: - status = "running" - - return ( - f"[{self.title} command, status code: {status}," - f" {'in progress.' if not self.is_done else 'finished.'} PID:" - f" {self._process.pid}]" - ) - - -def is_git_repo(folder: Union[str, Path]) -> bool: - """ - Check if the folder is the root or part of a git repository - - Args: - folder (`str`): - The folder in which to run the command. - - Returns: - `bool`: `True` if the repository is part of a repository, `False` - otherwise. - """ - folder_exists = os.path.exists(os.path.join(folder, ".git")) - git_branch = subprocess.run("git branch".split(), cwd=folder, stdout=subprocess.PIPE, stderr=subprocess.PIPE) - return folder_exists and git_branch.returncode == 0 - - -def is_local_clone(folder: Union[str, Path], remote_url: str) -> bool: - """ - Check if the folder is a local clone of the remote_url - - Args: - folder (`str` or `Path`): - The folder in which to run the command. - remote_url (`str`): - The url of a git repository. - - Returns: - `bool`: `True` if the repository is a local clone of the remote - repository specified, `False` otherwise. - """ - if not is_git_repo(folder): - return False - - remotes = run_subprocess("git remote -v", folder).stdout - - # Remove token for the test with remotes. - remote_url = re.sub(r"https://.*@", "https://", remote_url) - remotes = [re.sub(r"https://.*@", "https://", remote) for remote in remotes.split()] - return remote_url in remotes - - -def is_tracked_with_lfs(filename: Union[str, Path]) -> bool: - """ - Check if the file passed is tracked with git-lfs. - - Args: - filename (`str` or `Path`): - The filename to check. - - Returns: - `bool`: `True` if the file passed is tracked with git-lfs, `False` - otherwise. - """ - folder = Path(filename).parent - filename = Path(filename).name - - try: - p = run_subprocess("git check-attr -a".split() + [filename], folder) - attributes = p.stdout.strip() - except subprocess.CalledProcessError as exc: - if not is_git_repo(folder): - return False - else: - raise OSError(exc.stderr) - - if len(attributes) == 0: - return False - - found_lfs_tag = {"diff": False, "merge": False, "filter": False} - - for attribute in attributes.split("\n"): - for tag in found_lfs_tag.keys(): - if tag in attribute and "lfs" in attribute: - found_lfs_tag[tag] = True - - return all(found_lfs_tag.values()) - - -def is_git_ignored(filename: Union[str, Path]) -> bool: - """ - Check if file is git-ignored. Supports nested .gitignore files. 
- - Args: - filename (`str` or `Path`): - The filename to check. - - Returns: - `bool`: `True` if the file passed is ignored by `git`, `False` - otherwise. - """ - folder = Path(filename).parent - filename = Path(filename).name - - try: - p = run_subprocess("git check-ignore".split() + [filename], folder, check=False) - # Will return exit code 1 if not gitignored - is_ignored = not bool(p.returncode) - except subprocess.CalledProcessError as exc: - raise OSError(exc.stderr) - - return is_ignored - - -def is_binary_file(filename: Union[str, Path]) -> bool: - """ - Check if file is a binary file. - - Args: - filename (`str` or `Path`): - The filename to check. - - Returns: - `bool`: `True` if the file passed is a binary file, `False` otherwise. - """ - try: - with open(filename, "rb") as f: - content = f.read(10 * (1024**2)) # Read a maximum of 10MB - - # Code sample taken from the following stack overflow thread - # https://stackoverflow.com/questions/898669/how-can-i-detect-if-a-file-is-binary-non-text-in-python/7392391#7392391 - text_chars = bytearray({7, 8, 9, 10, 12, 13, 27} | set(range(0x20, 0x100)) - {0x7F}) - return bool(content.translate(None, text_chars)) - except UnicodeDecodeError: - return True - - -def files_to_be_staged(pattern: str = ".", folder: Union[str, Path, None] = None) -> List[str]: - """ - Returns a list of filenames that are to be staged. - - Args: - pattern (`str` or `Path`): - The pattern of filenames to check. Put `.` to get all files. - folder (`str` or `Path`): - The folder in which to run the command. - - Returns: - `List[str]`: List of files that are to be staged. - """ - try: - p = run_subprocess("git ls-files --exclude-standard -mo".split() + [pattern], folder) - if len(p.stdout.strip()): - files = p.stdout.strip().split("\n") - else: - files = [] - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - return files - - -def is_tracked_upstream(folder: Union[str, Path]) -> bool: - """ - Check if the current checked-out branch is tracked upstream. - - Args: - folder (`str` or `Path`): - The folder in which to run the command. - - Returns: - `bool`: `True` if the current checked-out branch is tracked upstream, - `False` otherwise. - """ - try: - run_subprocess("git rev-parse --symbolic-full-name --abbrev-ref @{u}", folder) - return True - except subprocess.CalledProcessError as exc: - if "HEAD" in exc.stderr: - raise OSError("No branch checked out") - - return False - - -def commits_to_push(folder: Union[str, Path], upstream: Optional[str] = None) -> int: - """ - Check the number of commits that would be pushed upstream - - Args: - folder (`str` or `Path`): - The folder in which to run the command. - upstream (`str`, *optional*): - The name of the upstream repository with which the comparison should be - made. - - Returns: - `int`: Number of commits that would be pushed upstream were a `git - push` to proceed. - """ - try: - result = run_subprocess(f"git cherry -v {upstream or ''}", folder) - return len(result.stdout.split("\n")) - 1 - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - -class PbarT(TypedDict): - # Used to store an opened progress bar in `_lfs_log_progress` - bar: tqdm - past_bytes: int - - -@contextmanager -def _lfs_log_progress(): - """ - This is a context manager that will log the Git LFS progress of cleaning, - smudging, pulling and pushing. 
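`_lfs_log_progress` works by pointing the `GIT_LFS_PROGRESS` environment variable at a temporary file and tailing it; git-lfs appends one whitespace-separated line per update (state, file progress, byte progress, filename). A minimal sketch of parsing one such line, using a made-up sample:

```python
# Minimal sketch: parse one git-lfs progress line the way the tail loop below does.
# Field layout: <state> <file_index>/<file_count> <bytes_done>/<bytes_total> <filename>
line = "download 3/7 1048576/4194304 weights/model.bin"   # made-up sample line

state, file_progress, byte_progress, filename = line.split()
bytes_done, bytes_total = (int(x) for x in byte_progress.split("/"))
percent = 100 * bytes_done / bytes_total
print(f"{state.capitalize()} file {filename}: {percent:.0f}%")
```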
- """ - - if logger.getEffectiveLevel() >= logging.ERROR: - try: - yield - except Exception: - pass - return - - def output_progress(stopping_event: threading.Event): - """ - To be launched as a separate thread with an event meaning it should stop - the tail. - """ - # Key is tuple(state, filename), value is a dict(tqdm bar and a previous value) - pbars: Dict[Tuple[str, str], PbarT] = {} - - def close_pbars(): - for pbar in pbars.values(): - pbar["bar"].update(pbar["bar"].total - pbar["past_bytes"]) - pbar["bar"].refresh() - pbar["bar"].close() - - def tail_file(filename) -> Iterator[str]: - """ - Creates a generator to be iterated through, which will return each - line one by one. Will stop tailing the file if the stopping_event is - set. - """ - with open(filename, "r") as file: - current_line = "" - while True: - if stopping_event.is_set(): - close_pbars() - break - - line_bit = file.readline() - if line_bit is not None and not len(line_bit.strip()) == 0: - current_line += line_bit - if current_line.endswith("\n"): - yield current_line - current_line = "" - else: - time.sleep(1) - - # If the file isn't created yet, wait for a few seconds before trying again. - # Can be interrupted with the stopping_event. - while not os.path.exists(os.environ["GIT_LFS_PROGRESS"]): - if stopping_event.is_set(): - close_pbars() - return - - time.sleep(2) - - for line in tail_file(os.environ["GIT_LFS_PROGRESS"]): - try: - state, file_progress, byte_progress, filename = line.split() - except ValueError as error: - # Try/except to ease debugging. See https://github.com/huggingface/huggingface_hub/issues/1373. - raise ValueError(f"Cannot unpack LFS progress line:\n{line}") from error - description = f"{state.capitalize()} file {filename}" - - current_bytes, total_bytes = byte_progress.split("/") - current_bytes_int = int(current_bytes) - total_bytes_int = int(total_bytes) - - pbar = pbars.get((state, filename)) - if pbar is None: - # Initialize progress bar - pbars[(state, filename)] = { - "bar": tqdm( - desc=description, - initial=current_bytes_int, - total=total_bytes_int, - unit="B", - unit_scale=True, - unit_divisor=1024, - ), - "past_bytes": int(current_bytes), - } - else: - # Update progress bar - pbar["bar"].update(current_bytes_int - pbar["past_bytes"]) - pbar["past_bytes"] = current_bytes_int - - current_lfs_progress_value = os.environ.get("GIT_LFS_PROGRESS", "") - - with SoftTemporaryDirectory() as tmpdir: - os.environ["GIT_LFS_PROGRESS"] = os.path.join(tmpdir, "lfs_progress") - logger.debug(f"Following progress in {os.environ['GIT_LFS_PROGRESS']}") - - exit_event = threading.Event() - x = threading.Thread(target=output_progress, args=(exit_event,), daemon=True) - x.start() - - try: - yield - finally: - exit_event.set() - x.join() - - os.environ["GIT_LFS_PROGRESS"] = current_lfs_progress_value - - -class Repository: - """ - Helper class to wrap the git and git-lfs commands. - - The aim is to facilitate interacting with huggingface.co hosted model or - dataset repos, though not a lot here (if any) is actually specific to - huggingface.co. - """ - - command_queue: List[CommandInProgress] - - @validate_hf_hub_args - def __init__( - self, - local_dir: Union[str, Path], - clone_from: Optional[str] = None, - repo_type: Optional[str] = None, - token: Union[bool, str] = True, - git_user: Optional[str] = None, - git_email: Optional[str] = None, - revision: Optional[str] = None, - skip_lfs_files: bool = False, - client: Optional[HfApi] = None, - ): - """ - Instantiate a local clone of a git repo. 
- - If `clone_from` is set, the repo will be cloned from an existing remote repository. - If the remote repo does not exist, a `EnvironmentError` exception will be thrown. - Please create the remote repo first using [`create_repo`]. - - `Repository` uses the local git credentials by default. If explicitly set, the `token` - or the `git_user`/`git_email` pair will be used instead. - - Args: - local_dir (`str` or `Path`): - path (e.g. `'my_trained_model/'`) to the local directory, where - the `Repository` will be initialized. - clone_from (`str`, *optional*): - Either a repository url or `repo_id`. - Example: - - `"https://huggingface.co/philschmid/playground-tests"` - - `"philschmid/playground-tests"` - repo_type (`str`, *optional*): - To set when cloning a repo from a repo_id. Default is model. - token (`bool` or `str`, *optional*): - A valid authentication token (see https://huggingface.co/settings/token). - If `None` or `True` and machine is logged in (through `huggingface-cli login` - or [`~huggingface_hub.login`]), token will be retrieved from the cache. - If `False`, token is not sent in the request header. - git_user (`str`, *optional*): - will override the `git config user.name` for committing and - pushing files to the hub. - git_email (`str`, *optional*): - will override the `git config user.email` for committing and - pushing files to the hub. - revision (`str`, *optional*): - Revision to checkout after initializing the repository. If the - revision doesn't exist, a branch will be created with that - revision name from the default branch's current HEAD. - skip_lfs_files (`bool`, *optional*, defaults to `False`): - whether to skip git-LFS files or not. - client (`HfApi`, *optional*): - Instance of [`HfApi`] to use when calling the HF Hub API. A new - instance will be created if this is left to `None`. - - Raises: - - [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) - if the remote repository set in `clone_from` does not exist. - """ - if isinstance(local_dir, Path): - local_dir = str(local_dir) - os.makedirs(local_dir, exist_ok=True) - self.local_dir = os.path.join(os.getcwd(), local_dir) - self._repo_type = repo_type - self.command_queue = [] - self.skip_lfs_files = skip_lfs_files - self.client = client if client is not None else HfApi() - - self.check_git_versions() - - if isinstance(token, str): - self.huggingface_token: Optional[str] = token - elif token is False: - self.huggingface_token = None - else: - # if `True` -> explicit use of the cached token - # if `None` -> implicit use of the cached token - self.huggingface_token = HfFolder.get_token() - - if clone_from is not None: - self.clone_from(repo_url=clone_from) - else: - if is_git_repo(self.local_dir): - logger.debug("[Repository] is a valid git repo") - else: - raise ValueError("If not specifying `clone_from`, you need to pass Repository a valid git clone.") - - if self.huggingface_token is not None and (git_email is None or git_user is None): - user = self.client.whoami(self.huggingface_token) - - if git_email is None: - git_email = user["email"] - - if git_user is None: - git_user = user["fullname"] - - if git_user is not None or git_email is not None: - self.git_config_username_and_email(git_user, git_email) - - self.lfs_enable_largefiles() - self.git_credential_helper_store() - - if revision is not None: - self.git_checkout(revision, create_branch_ok=True) - - # This ensures that all commands exit before exiting the Python runtime. 
- # This will ensure all pushes register on the hub, even if other errors happen in subsequent operations. - atexit.register(self.wait_for_commands) - - @property - def current_branch(self) -> str: - """ - Returns the current checked out branch. - - Returns: - `str`: Current checked out branch. - """ - try: - result = run_subprocess("git rev-parse --abbrev-ref HEAD", self.local_dir).stdout.strip() - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - return result - - def check_git_versions(self): - """ - Checks that `git` and `git-lfs` can be run. - - Raises: - - [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) - if `git` or `git-lfs` are not installed. - """ - try: - git_version = run_subprocess("git --version", self.local_dir).stdout.strip() - except FileNotFoundError: - raise EnvironmentError("Looks like you do not have git installed, please install.") - - try: - lfs_version = run_subprocess("git-lfs --version", self.local_dir).stdout.strip() - except FileNotFoundError: - raise EnvironmentError( - "Looks like you do not have git-lfs installed, please install." - " You can install from https://git-lfs.github.com/." - " Then run `git lfs install` (you only have to do this once)." - ) - logger.info(git_version + "\n" + lfs_version) - - @validate_hf_hub_args - def clone_from(self, repo_url: str, token: Union[bool, str, None] = None): - """ - Clone from a remote. If the folder already exists, will try to clone the - repository within it. - - If this folder is a git repository with linked history, will try to - update the repository. - - Args: - repo_url (`str`): - The URL from which to clone the repository - token (`Union[str, bool]`, *optional*): - Whether to use the authentication token. It can be: - - a string which is the token itself - - `False`, which would not use the authentication token - - `True`, which would fetch the authentication token from the - local folder and use it (you should be logged in for this to - work). - - `None`, which would retrieve the value of - `self.huggingface_token`. - - - - Raises the following error: - - - [`ValueError`](https://docs.python.org/3/library/exceptions.html#ValueError) - if an organization token (starts with "api_org") is passed. Use must use - your own personal access token (see https://hf.co/settings/tokens). - - - [`EnvironmentError`](https://docs.python.org/3/library/exceptions.html#EnvironmentError) - if you are trying to clone the repository in a non-empty folder, or if the - `git` operations raise errors. - - - """ - token = ( - token # str -> use it - if isinstance(token, str) - else ( - None # `False` -> explicit no token - if token is False - else self.huggingface_token # `None` or `True` -> use default - ) - ) - if token is not None and token.startswith("api_org"): - raise ValueError( - "You must use your personal access token, not an Organization token" - " (see https://hf.co/settings/tokens)." 
- ) - - hub_url = self.client.endpoint - if hub_url in repo_url or ("http" not in repo_url and len(repo_url.split("/")) <= 2): - repo_type, namespace, repo_name = repo_type_and_id_from_hf_id(repo_url, hub_url=hub_url) - repo_id = f"{namespace}/{repo_name}" if namespace is not None else repo_name - - if repo_type is not None: - self._repo_type = repo_type - - repo_url = hub_url + "/" - - if self._repo_type in REPO_TYPES_URL_PREFIXES: - repo_url += REPO_TYPES_URL_PREFIXES[self._repo_type] - - if token is not None: - # Add token in git url when provided - scheme = urlparse(repo_url).scheme - repo_url = repo_url.replace(f"{scheme}://", f"{scheme}://user:{token}@") - - repo_url += repo_id - - # For error messages, it's cleaner to show the repo url without the token. - clean_repo_url = re.sub(r"(https?)://.*@", r"\1://", repo_url) - try: - run_subprocess("git lfs install", self.local_dir) - - # checks if repository is initialized in a empty repository or in one with files - if len(os.listdir(self.local_dir)) == 0: - logger.warning(f"Cloning {clean_repo_url} into local empty directory.") - - with _lfs_log_progress(): - env = os.environ.copy() - - if self.skip_lfs_files: - env.update({"GIT_LFS_SKIP_SMUDGE": "1"}) - - run_subprocess( - # 'git lfs clone' is deprecated (will display a warning in the terminal) - # but we still use it as it provides a nicer UX when downloading large - # files (shows progress). - f"{'git clone' if self.skip_lfs_files else 'git lfs clone'} {repo_url} .", - self.local_dir, - env=env, - ) - else: - # Check if the folder is the root of a git repository - if not is_git_repo(self.local_dir): - raise EnvironmentError( - "Tried to clone a repository in a non-empty folder that isn't" - f" a git repository ('{self.local_dir}'). If you really want to" - f" do this, do it manually:\n cd {self.local_dir} && git init" - " && git remote add origin && git pull origin main\n or clone" - " repo to a new folder and move your existing files there" - " afterwards." - ) - - if is_local_clone(self.local_dir, repo_url): - logger.warning( - f"{self.local_dir} is already a clone of {clean_repo_url}." - " Make sure you pull the latest changes with" - " `repo.git_pull()`." - ) - else: - output = run_subprocess("git remote get-url origin", self.local_dir, check=False) - - error_msg = ( - f"Tried to clone {clean_repo_url} in an unrelated git" - " repository.\nIf you believe this is an error, please add" - f" a remote with the following URL: {clean_repo_url}." - ) - if output.returncode == 0: - clean_local_remote_url = re.sub(r"https://.*@", "https://", output.stdout) - error_msg += f"\nLocal path has its origin defined as: {clean_local_remote_url}" - raise EnvironmentError(error_msg) - - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - def git_config_username_and_email(self, git_user: Optional[str] = None, git_email: Optional[str] = None): - """ - Sets git username and email (only in the current repo). - - Args: - git_user (`str`, *optional*): - The username to register through `git`. - git_email (`str`, *optional*): - The email to register through `git`. 
- """ - try: - if git_user is not None: - run_subprocess("git config user.name".split() + [git_user], self.local_dir) - - if git_email is not None: - run_subprocess(f"git config user.email {git_email}".split(), self.local_dir) - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - def git_credential_helper_store(self): - """ - Sets the git credential helper to `store` - """ - try: - run_subprocess("git config credential.helper store", self.local_dir) - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - def git_head_hash(self) -> str: - """ - Get commit sha on top of HEAD. - - Returns: - `str`: The current checked out commit SHA. - """ - try: - p = run_subprocess("git rev-parse HEAD", self.local_dir) - return p.stdout.strip() - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - def git_remote_url(self) -> str: - """ - Get URL to origin remote. - - Returns: - `str`: The URL of the `origin` remote. - """ - try: - p = run_subprocess("git config --get remote.origin.url", self.local_dir) - url = p.stdout.strip() - # Strip basic auth info. - return re.sub(r"https://.*@", "https://", url) - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - def git_head_commit_url(self) -> str: - """ - Get URL to last commit on HEAD. We assume it's been pushed, and the url - scheme is the same one as for GitHub or HuggingFace. - - Returns: - `str`: The URL to the current checked-out commit. - """ - sha = self.git_head_hash() - url = self.git_remote_url() - if url.endswith("/"): - url = url[:-1] - return f"{url}/commit/{sha}" - - def list_deleted_files(self) -> List[str]: - """ - Returns a list of the files that are deleted in the working directory or - index. - - Returns: - `List[str]`: A list of files that have been deleted in the working - directory or index. - """ - try: - git_status = run_subprocess("git status -s", self.local_dir).stdout.strip() - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - if len(git_status) == 0: - return [] - - # Receives a status like the following - # D .gitignore - # D new_file.json - # AD new_file1.json - # ?? new_file2.json - # ?? new_file4.json - - # Strip each line of whitespaces - modified_files_statuses = [status.strip() for status in git_status.split("\n")] - - # Only keep files that are deleted using the D prefix - deleted_files_statuses = [status for status in modified_files_statuses if "D" in status.split()[0]] - - # Remove the D prefix and strip to keep only the relevant filename - deleted_files = [status.split()[-1].strip() for status in deleted_files_statuses] - - return deleted_files - - def lfs_track(self, patterns: Union[str, List[str]], filename: bool = False): - """ - Tell git-lfs to track files according to a pattern. - - Setting the `filename` argument to `True` will treat the arguments as - literal filenames, not as patterns. Any special glob characters in the - filename will be escaped when writing to the `.gitattributes` file. - - Args: - patterns (`Union[str, List[str]]`): - The pattern, or list of patterns, to track with git-lfs. - filename (`bool`, *optional*, defaults to `False`): - Whether to use the patterns as literal filenames. 
- """ - if isinstance(patterns, str): - patterns = [patterns] - try: - for pattern in patterns: - run_subprocess( - f"git lfs track {'--filename' if filename else ''} {pattern}", - self.local_dir, - ) - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - def lfs_untrack(self, patterns: Union[str, List[str]]): - """ - Tell git-lfs to untrack those files. - - Args: - patterns (`Union[str, List[str]]`): - The pattern, or list of patterns, to untrack with git-lfs. - """ - if isinstance(patterns, str): - patterns = [patterns] - try: - for pattern in patterns: - run_subprocess("git lfs untrack".split() + [pattern], self.local_dir) - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - def lfs_enable_largefiles(self): - """ - HF-specific. This enables upload support of files >5GB. - """ - try: - lfs_config = "git config lfs.customtransfer.multipart" - run_subprocess(f"{lfs_config}.path huggingface-cli", self.local_dir) - run_subprocess( - f"{lfs_config}.args {LFS_MULTIPART_UPLOAD_COMMAND}", - self.local_dir, - ) - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - def auto_track_binary_files(self, pattern: str = ".") -> List[str]: - """ - Automatically track binary files with git-lfs. - - Args: - pattern (`str`, *optional*, defaults to "."): - The pattern with which to track files that are binary. - - Returns: - `List[str]`: List of filenames that are now tracked due to being - binary files - """ - files_to_be_tracked_with_lfs = [] - - deleted_files = self.list_deleted_files() - - for filename in files_to_be_staged(pattern, folder=self.local_dir): - if filename in deleted_files: - continue - - path_to_file = os.path.join(os.getcwd(), self.local_dir, filename) - - if not (is_tracked_with_lfs(path_to_file) or is_git_ignored(path_to_file)): - size_in_mb = os.path.getsize(path_to_file) / (1024 * 1024) - - if size_in_mb >= 10: - logger.warning( - "Parsing a large file to check if binary or not. Tracking large" - " files using `repository.auto_track_large_files` is" - " recommended so as to not load the full file in memory." - ) - - is_binary = is_binary_file(path_to_file) - - if is_binary: - self.lfs_track(filename) - files_to_be_tracked_with_lfs.append(filename) - - # Cleanup the .gitattributes if files were deleted - self.lfs_untrack(deleted_files) - - return files_to_be_tracked_with_lfs - - def auto_track_large_files(self, pattern: str = ".") -> List[str]: - """ - Automatically track large files (files that weigh more than 10MBs) with - git-lfs. - - Args: - pattern (`str`, *optional*, defaults to "."): - The pattern with which to track files that are above 10MBs. - - Returns: - `List[str]`: List of filenames that are now tracked due to their - size. 
- """ - files_to_be_tracked_with_lfs = [] - - deleted_files = self.list_deleted_files() - - for filename in files_to_be_staged(pattern, folder=self.local_dir): - if filename in deleted_files: - continue - - path_to_file = os.path.join(os.getcwd(), self.local_dir, filename) - size_in_mb = os.path.getsize(path_to_file) / (1024 * 1024) - - if size_in_mb >= 10 and not is_tracked_with_lfs(path_to_file) and not is_git_ignored(path_to_file): - self.lfs_track(filename) - files_to_be_tracked_with_lfs.append(filename) - - # Cleanup the .gitattributes if files were deleted - self.lfs_untrack(deleted_files) - - return files_to_be_tracked_with_lfs - - def lfs_prune(self, recent=False): - """ - git lfs prune - - Args: - recent (`bool`, *optional*, defaults to `False`): - Whether to prune files even if they were referenced by recent - commits. See the following - [link](https://github.com/git-lfs/git-lfs/blob/f3d43f0428a84fc4f1e5405b76b5a73ec2437e65/docs/man/git-lfs-prune.1.ronn#recent-files) - for more information. - """ - try: - with _lfs_log_progress(): - result = run_subprocess(f"git lfs prune {'--recent' if recent else ''}", self.local_dir) - logger.info(result.stdout) - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - def git_pull(self, rebase: bool = False, lfs: bool = False): - """ - git pull - - Args: - rebase (`bool`, *optional*, defaults to `False`): - Whether to rebase the current branch on top of the upstream - branch after fetching. - lfs (`bool`, *optional*, defaults to `False`): - Whether to fetch the LFS files too. This option only changes the - behavior when a repository was cloned without fetching the LFS - files; calling `repo.git_pull(lfs=True)` will then fetch the LFS - file from the remote repository. - """ - command = "git pull" if not lfs else "git lfs pull" - if rebase: - command += " --rebase" - try: - with _lfs_log_progress(): - result = run_subprocess(command, self.local_dir) - logger.info(result.stdout) - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - def git_add(self, pattern: str = ".", auto_lfs_track: bool = False): - """ - git add - - Setting the `auto_lfs_track` parameter to `True` will automatically - track files that are larger than 10MB with `git-lfs`. - - Args: - pattern (`str`, *optional*, defaults to "."): - The pattern with which to add files to staging. - auto_lfs_track (`bool`, *optional*, defaults to `False`): - Whether to automatically track large and binary files with - git-lfs. Any file over 10MB in size, or in binary format, will - be automatically tracked. - """ - if auto_lfs_track: - # Track files according to their size (>=10MB) - tracked_files = self.auto_track_large_files(pattern) - - # Read the remaining files and track them if they're binary - tracked_files.extend(self.auto_track_binary_files(pattern)) - - if tracked_files: - logger.warning( - f"Adding files tracked by Git LFS: {tracked_files}. This may take a" - " bit of time if the files are large." - ) - - try: - result = run_subprocess("git add -v".split() + [pattern], self.local_dir) - logger.info(f"Adding to index:\n{result.stdout}\n") - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - def git_commit(self, commit_message: str = "commit files to HF hub"): - """ - git commit - - Args: - commit_message (`str`, *optional*, defaults to "commit files to HF hub"): - The message attributed to the commit. 
- """ - try: - result = run_subprocess("git commit -v -m".split() + [commit_message], self.local_dir) - logger.info(f"Committed:\n{result.stdout}\n") - except subprocess.CalledProcessError as exc: - if len(exc.stderr) > 0: - raise EnvironmentError(exc.stderr) - else: - raise EnvironmentError(exc.stdout) - - def git_push( - self, - upstream: Optional[str] = None, - blocking: bool = True, - auto_lfs_prune: bool = False, - ) -> Union[str, Tuple[str, CommandInProgress]]: - """ - git push - - If used without setting `blocking`, will return url to commit on remote - repo. If used with `blocking=True`, will return a tuple containing the - url to commit and the command object to follow for information about the - process. - - Args: - upstream (`str`, *optional*): - Upstream to which this should push. If not specified, will push - to the lastly defined upstream or to the default one (`origin - main`). - blocking (`bool`, *optional*, defaults to `True`): - Whether the function should return only when the push has - finished. Setting this to `False` will return an - `CommandInProgress` object which has an `is_done` property. This - property will be set to `True` when the push is finished. - auto_lfs_prune (`bool`, *optional*, defaults to `False`): - Whether to automatically prune files once they have been pushed - to the remote. - """ - command = "git push" - - if upstream: - command += f" --set-upstream {upstream}" - - number_of_commits = commits_to_push(self.local_dir, upstream) - - if number_of_commits > 1: - logger.warning(f"Several commits ({number_of_commits}) will be pushed upstream.") - if blocking: - logger.warning("The progress bars may be unreliable.") - - try: - with _lfs_log_progress(): - process = subprocess.Popen( - command.split(), - stderr=subprocess.PIPE, - stdout=subprocess.PIPE, - encoding="utf-8", - cwd=self.local_dir, - ) - - if blocking: - stdout, stderr = process.communicate() - return_code = process.poll() - process.kill() - - if len(stderr): - logger.warning(stderr) - - if return_code: - raise subprocess.CalledProcessError(return_code, process.args, output=stdout, stderr=stderr) - - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - if not blocking: - - def status_method(): - status = process.poll() - if status is None: - return -1 - else: - return status - - command_in_progress = CommandInProgress( - "push", - is_done_method=lambda: process.poll() is not None, - status_method=status_method, - process=process, - post_method=self.lfs_prune if auto_lfs_prune else None, - ) - - self.command_queue.append(command_in_progress) - - return self.git_head_commit_url(), command_in_progress - - if auto_lfs_prune: - self.lfs_prune() - - return self.git_head_commit_url() - - def git_checkout(self, revision: str, create_branch_ok: bool = False): - """ - git checkout a given revision - - Specifying `create_branch_ok` to `True` will create the branch to the - given revision if that revision doesn't exist. - - Args: - revision (`str`): - The revision to checkout. - create_branch_ok (`str`, *optional*, defaults to `False`): - Whether creating a branch named with the `revision` passed at - the current checked-out reference if `revision` isn't an - existing revision is allowed. 
- """ - try: - result = run_subprocess(f"git checkout {revision}", self.local_dir) - logger.warning(f"Checked out {revision} from {self.current_branch}.") - logger.warning(result.stdout) - except subprocess.CalledProcessError as exc: - if not create_branch_ok: - raise EnvironmentError(exc.stderr) - else: - try: - result = run_subprocess(f"git checkout -b {revision}", self.local_dir) - logger.warning( - f"Revision `{revision}` does not exist. Created and checked out branch `{revision}`." - ) - logger.warning(result.stdout) - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - def tag_exists(self, tag_name: str, remote: Optional[str] = None) -> bool: - """ - Check if a tag exists or not. - - Args: - tag_name (`str`): - The name of the tag to check. - remote (`str`, *optional*): - Whether to check if the tag exists on a remote. This parameter - should be the identifier of the remote. - - Returns: - `bool`: Whether the tag exists. - """ - if remote: - try: - result = run_subprocess(f"git ls-remote origin refs/tags/{tag_name}", self.local_dir).stdout.strip() - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - return len(result) != 0 - else: - try: - git_tags = run_subprocess("git tag", self.local_dir).stdout.strip() - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - git_tags = git_tags.split("\n") - return tag_name in git_tags - - def delete_tag(self, tag_name: str, remote: Optional[str] = None) -> bool: - """ - Delete a tag, both local and remote, if it exists - - Args: - tag_name (`str`): - The tag name to delete. - remote (`str`, *optional*): - The remote on which to delete the tag. - - Returns: - `bool`: `True` if deleted, `False` if the tag didn't exist. - If remote is not passed, will just be updated locally - """ - delete_locally = True - delete_remotely = True - - if not self.tag_exists(tag_name): - delete_locally = False - - if not self.tag_exists(tag_name, remote=remote): - delete_remotely = False - - if delete_locally: - try: - run_subprocess(["git", "tag", "-d", tag_name], self.local_dir).stdout.strip() - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - if remote and delete_remotely: - try: - run_subprocess(f"git push {remote} --delete {tag_name}", self.local_dir).stdout.strip() - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - return True - - def add_tag(self, tag_name: str, message: Optional[str] = None, remote: Optional[str] = None): - """ - Add a tag at the current head and push it - - If remote is None, will just be updated locally - - If no message is provided, the tag will be lightweight. if a message is - provided, the tag will be annotated. - - Args: - tag_name (`str`): - The name of the tag to be added. - message (`str`, *optional*): - The message that accompanies the tag. The tag will turn into an - annotated tag if a message is passed. - remote (`str`, *optional*): - The remote on which to add the tag. 
- """ - if message: - tag_args = ["git", "tag", "-a", tag_name, "-m", message] - else: - tag_args = ["git", "tag", tag_name] - - try: - run_subprocess(tag_args, self.local_dir).stdout.strip() - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - if remote: - try: - run_subprocess(f"git push {remote} {tag_name}", self.local_dir).stdout.strip() - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - def is_repo_clean(self) -> bool: - """ - Return whether or not the git status is clean or not - - Returns: - `bool`: `True` if the git status is clean, `False` otherwise. - """ - try: - git_status = run_subprocess("git status --porcelain", self.local_dir).stdout.strip() - except subprocess.CalledProcessError as exc: - raise EnvironmentError(exc.stderr) - - return len(git_status) == 0 - - def push_to_hub( - self, - commit_message: str = "commit files to HF hub", - blocking: bool = True, - clean_ok: bool = True, - auto_lfs_prune: bool = False, - ) -> Union[None, str, Tuple[str, CommandInProgress]]: - """ - Helper to add, commit, and push files to remote repository on the - HuggingFace Hub. Will automatically track large files (>10MB). - - Args: - commit_message (`str`): - Message to use for the commit. - blocking (`bool`, *optional*, defaults to `True`): - Whether the function should return only when the `git push` has - finished. - clean_ok (`bool`, *optional*, defaults to `True`): - If True, this function will return None if the repo is - untouched. Default behavior is to fail because the git command - fails. - auto_lfs_prune (`bool`, *optional*, defaults to `False`): - Whether to automatically prune files once they have been pushed - to the remote. - """ - if clean_ok and self.is_repo_clean(): - logger.info("Repo currently clean. Ignoring push_to_hub") - return None - self.git_add(auto_lfs_track=True) - self.git_commit(commit_message) - return self.git_push( - upstream=f"origin {self.current_branch}", - blocking=blocking, - auto_lfs_prune=auto_lfs_prune, - ) - - @contextmanager - def commit( - self, - commit_message: str, - branch: Optional[str] = None, - track_large_files: bool = True, - blocking: bool = True, - auto_lfs_prune: bool = False, - ): - """ - Context manager utility to handle committing to a repository. This - automatically tracks large files (>10Mb) with git-lfs. Set the - `track_large_files` argument to `False` if you wish to ignore that - behavior. - - Args: - commit_message (`str`): - Message to use for the commit. - branch (`str`, *optional*): - The branch on which the commit will appear. This branch will be - checked-out before any operation. - track_large_files (`bool`, *optional*, defaults to `True`): - Whether to automatically track large files or not. Will do so by - default. - blocking (`bool`, *optional*, defaults to `True`): - Whether the function should return only when the `git push` has - finished. - auto_lfs_prune (`bool`, defaults to `True`): - Whether to automatically prune files once they have been pushed - to the remote. - - Examples: - - ```python - >>> with Repository( - ... "text-files", - ... clone_from="/text-files", - ... token=True, - >>> ).commit("My first file :)"): - ... with open("file.txt", "w+") as f: - ... f.write(json.dumps({"hey": 8})) - - >>> import torch - - >>> model = torch.nn.Transformer() - >>> with Repository( - ... "torch-model", - ... clone_from="/torch-model", - ... token=True, - >>> ).commit("My cool model :)"): - ... 
torch.save(model.state_dict(), "model.pt") - ``` - - """ - - files_to_stage = files_to_be_staged(".", folder=self.local_dir) - - if len(files_to_stage): - if len(files_to_stage) > 5: - files_in_msg = str(files_to_stage[:5])[:-1] + ", ...]" - - logger.error( - "There exists some updated files in the local repository that are not" - f" committed: {files_in_msg}. This may lead to errors if checking out" - " a branch. These files and their modifications will be added to the" - " current commit." - ) - - if branch is not None: - self.git_checkout(branch, create_branch_ok=True) - - if is_tracked_upstream(self.local_dir): - logger.warning("Pulling changes ...") - self.git_pull(rebase=True) - else: - logger.warning(f"The current branch has no upstream branch. Will push to 'origin {self.current_branch}'") - - current_working_directory = os.getcwd() - os.chdir(os.path.join(current_working_directory, self.local_dir)) - - try: - yield self - finally: - self.git_add(auto_lfs_track=track_large_files) - - try: - self.git_commit(commit_message) - except OSError as e: - # If no changes are detected, there is nothing to commit. - if "nothing to commit" not in str(e): - raise e - - try: - self.git_push( - upstream=f"origin {self.current_branch}", - blocking=blocking, - auto_lfs_prune=auto_lfs_prune, - ) - except OSError as e: - # If no changes are detected, there is nothing to commit. - if "could not read Username" in str(e): - raise OSError("Couldn't authenticate user for push. Did you set `token` to `True`?") from e - else: - raise e - - os.chdir(current_working_directory) - - def repocard_metadata_load(self) -> Optional[Dict]: - filepath = os.path.join(self.local_dir, REPOCARD_NAME) - if os.path.isfile(filepath): - return metadata_load(filepath) - return None - - def repocard_metadata_save(self, data: Dict) -> None: - return metadata_save(os.path.join(self.local_dir, REPOCARD_NAME), data) - - @property - def commands_failed(self): - """ - Returns the asynchronous commands that failed. - """ - return [c for c in self.command_queue if c.status > 0] - - @property - def commands_in_progress(self): - """ - Returns the asynchronous commands that are currently in progress. - """ - return [c for c in self.command_queue if not c.is_done] - - def wait_for_commands(self): - """ - Blocking method: blocks all subsequent execution until all commands have - been processed. - """ - index = 0 - for command_failed in self.commands_failed: - logger.error(f"The {command_failed.title} command with PID {command_failed._process.pid} failed.") - logger.error(command_failed.stderr) - - while self.commands_in_progress: - if index % 10 == 0: - logger.warning( - f"Waiting for the following commands to finish before shutting down: {self.commands_in_progress}." 
- ) - - index += 1 - - time.sleep(1) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_inline/balance_pairs.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_inline/balance_pairs.py deleted file mode 100644 index bbb2101c7e1614dde2323d3a8a42b388f354789e..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/markdown_it/rules_inline/balance_pairs.py +++ /dev/null @@ -1,137 +0,0 @@ -"""Balance paired characters (*, _, etc) in inline tokens.""" -from __future__ import annotations - -from .state_inline import Delimiter, StateInline - - -def processDelimiters(state: StateInline, delimiters: list[Delimiter]) -> None: - """For each opening emphasis-like marker find a matching closing one.""" - if not delimiters: - return - - openersBottom = {} - maximum = len(delimiters) - - # headerIdx is the first delimiter of the current (where closer is) delimiter run - headerIdx = 0 - lastTokenIdx = -2 # needs any value lower than -1 - jumps: list[int] = [] - closerIdx = 0 - while closerIdx < maximum: - closer = delimiters[closerIdx] - - jumps.append(0) - - # markers belong to same delimiter run if: - # - they have adjacent tokens - # - AND markers are the same - # - if ( - delimiters[headerIdx].marker != closer.marker - or lastTokenIdx != closer.token - 1 - ): - headerIdx = closerIdx - lastTokenIdx = closer.token - - # Length is only used for emphasis-specific "rule of 3", - # if it's not defined (in strikethrough or 3rd party plugins), - # we can default it to 0 to disable those checks. - # - closer.length = closer.length or 0 - - if not closer.close: - closerIdx += 1 - continue - - # Previously calculated lower bounds (previous fails) - # for each marker, each delimiter length modulo 3, - # and for whether this closer can be an opener; - # https://github.com/commonmark/cmark/commit/34250e12ccebdc6372b8b49c44fab57c72443460 - if closer.marker not in openersBottom: - openersBottom[closer.marker] = [-1, -1, -1, -1, -1, -1] - - minOpenerIdx = openersBottom[closer.marker][ - (3 if closer.open else 0) + (closer.length % 3) - ] - - openerIdx = headerIdx - jumps[headerIdx] - 1 - - newMinOpenerIdx = openerIdx - - while openerIdx > minOpenerIdx: - opener = delimiters[openerIdx] - - if opener.marker != closer.marker: - openerIdx -= jumps[openerIdx] + 1 - continue - - if opener.open and opener.end < 0: - isOddMatch = False - - # from spec: - # - # If one of the delimiters can both open and close emphasis, then the - # sum of the lengths of the delimiter runs containing the opening and - # closing delimiters must not be a multiple of 3 unless both lengths - # are multiples of 3. - # - if ( - (opener.close or closer.open) - and ((opener.length + closer.length) % 3 == 0) - and (opener.length % 3 != 0 or closer.length % 3 != 0) - ): - isOddMatch = True - - if not isOddMatch: - # If previous delimiter cannot be an opener, we can safely skip - # the entire sequence in future checks. This is required to make - # sure algorithm has linear complexity (see *_*_*_*_*_... case). 
- # - if openerIdx > 0 and not delimiters[openerIdx - 1].open: - lastJump = jumps[openerIdx - 1] + 1 - else: - lastJump = 0 - - jumps[closerIdx] = closerIdx - openerIdx + lastJump - jumps[openerIdx] = lastJump - - closer.open = False - opener.end = closerIdx - opener.close = False - newMinOpenerIdx = -1 - - # treat next token as start of run, - # it optimizes skips in **<...>**a**<...>** pathological case - lastTokenIdx = -2 - - break - - openerIdx -= jumps[openerIdx] + 1 - - if newMinOpenerIdx != -1: - # If match for this delimiter run failed, we want to set lower bound for - # future lookups. This is required to make sure algorithm has linear - # complexity. - # - # See details here: - # https:#github.com/commonmark/cmark/issues/178#issuecomment-270417442 - # - openersBottom[closer.marker][ - (3 if closer.open else 0) + ((closer.length or 0) % 3) - ] = newMinOpenerIdx - - closerIdx += 1 - - -def link_pairs(state: StateInline) -> None: - tokens_meta = state.tokens_meta - maximum = len(state.tokens_meta) - - processDelimiters(state, state.delimiters) - - curr = 0 - while curr < maximum: - curr_meta = tokens_meta[curr] - if curr_meta and "delimiters" in curr_meta: - processDelimiters(state, curr_meta["delimiters"]) - curr += 1 diff --git a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/Bing.py b/spaces/dcq/freegpt-webui/g4f/Provider/Providers/Bing.py deleted file mode 100644 index 87e04ac82293c7e22068af431ac407bdee435a1b..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/g4f/Provider/Providers/Bing.py +++ /dev/null @@ -1,349 +0,0 @@ -import os -import json -import random -import json -import os -import uuid -import ssl -import certifi -import aiohttp -import asyncio - -import requests -from ...typing import sha256, Dict, get_type_hints - -url = 'https://bing.com/chat' -model = ['gpt-4'] -supports_stream = True -needs_auth = False - -ssl_context = ssl.create_default_context() -ssl_context.load_verify_locations(certifi.where()) - - -class optionsSets: - optionSet: dict = { - 'tone': str, - 'optionsSets': list - } - - jailbreak: dict = { - "optionsSets": [ - 'saharasugg', - 'enablenewsfc', - 'clgalileo', - 'gencontentv3', - "nlu_direct_response_filter", - "deepleo", - "disable_emoji_spoken_text", - "responsible_ai_policy_235", - "enablemm", - "h3precise" - # "harmonyv3", - "dtappid", - "cricinfo", - "cricinfov2", - "dv3sugg", - "nojbfedge" - ] - } - - -class Defaults: - delimiter = '\x1e' - ip_address = f'13.{random.randint(104, 107)}.{random.randint(0, 255)}.{random.randint(0, 255)}' - - allowedMessageTypes = [ - 'Chat', - 'Disengaged', - 'AdsQuery', - 'SemanticSerp', - 'GenerateContentQuery', - 'SearchQuery', - 'ActionRequest', - 'Context', - 'Progress', - 'AdsQuery', - 'SemanticSerp' - ] - - sliceIds = [ - - # "222dtappid", - # "225cricinfo", - # "224locals0" - - 'winmuid3tf', - 'osbsdusgreccf', - 'ttstmout', - 'crchatrev', - 'winlongmsgtf', - 'ctrlworkpay', - 'norespwtf', - 'tempcacheread', - 'temptacache', - '505scss0', - '508jbcars0', - '515enbotdets0', - '5082tsports', - '515vaoprvs', - '424dagslnv1s0', - 'kcimgattcf', - '427startpms0' - ] - - location = { - 'locale': 'en-US', - 'market': 'en-US', - 'region': 'US', - 'locationHints': [ - { - 'country': 'United States', - 'state': 'California', - 'city': 'Los Angeles', - 'timezoneoffset': 8, - 'countryConfidence': 8, - 'Center': { - 'Latitude': 34.0536909, - 'Longitude': -118.242766 - }, - 'RegionType': 2, - 'SourceType': 1 - } - ], - } - - -def _format(msg: dict) -> str: - return json.dumps(msg, 
ensure_ascii=False) + Defaults.delimiter - - -async def create_conversation(): - for _ in range(5): - create = requests.get('https://www.bing.com/turing/conversation/create', - headers={ - 'authority': 'edgeservices.bing.com', - 'accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7', - 'accept-language': 'en-US,en;q=0.9', - 'cache-control': 'max-age=0', - 'sec-ch-ua': '"Chromium";v="110", "Not A(Brand";v="24", "Microsoft Edge";v="110"', - 'sec-ch-ua-arch': '"x86"', - 'sec-ch-ua-bitness': '"64"', - 'sec-ch-ua-full-version': '"110.0.1587.69"', - 'sec-ch-ua-full-version-list': '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-model': '""', - 'sec-ch-ua-platform': '"Windows"', - 'sec-ch-ua-platform-version': '"15.0.0"', - 'sec-fetch-dest': 'document', - 'sec-fetch-mode': 'navigate', - 'sec-fetch-site': 'none', - 'sec-fetch-user': '?1', - 'upgrade-insecure-requests': '1', - 'user-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/110.0.0.0 Safari/537.36 Edg/110.0.1587.69', - 'x-edge-shopping-flag': '1', - 'x-forwarded-for': Defaults.ip_address - }) - - conversationId = create.json().get('conversationId') - clientId = create.json().get('clientId') - conversationSignature = create.json().get('conversationSignature') - - if not conversationId or not clientId or not conversationSignature and _ == 4: - raise Exception('Failed to create conversation.') - - return conversationId, clientId, conversationSignature - - -async def stream_generate(prompt: str, mode: optionsSets.optionSet = optionsSets.jailbreak, context: bool or str = False): - timeout = aiohttp.ClientTimeout(total=900) - session = aiohttp.ClientSession(timeout=timeout) - - conversationId, clientId, conversationSignature = await create_conversation() - - wss = await session.ws_connect('wss://sydney.bing.com/sydney/ChatHub', ssl=ssl_context, autoping=False, - headers={ - 'accept': 'application/json', - 'accept-language': 'en-US,en;q=0.9', - 'content-type': 'application/json', - 'sec-ch-ua': '"Not_A Brand";v="99", "Microsoft Edge";v="110", "Chromium";v="110"', - 'sec-ch-ua-arch': '"x86"', - 'sec-ch-ua-bitness': '"64"', - 'sec-ch-ua-full-version': '"109.0.1518.78"', - 'sec-ch-ua-full-version-list': '"Chromium";v="110.0.5481.192", "Not A(Brand";v="24.0.0.0", "Microsoft Edge";v="110.0.1587.69"', - 'sec-ch-ua-mobile': '?0', - 'sec-ch-ua-model': '', - 'sec-ch-ua-platform': '"Windows"', - 'sec-ch-ua-platform-version': '"15.0.0"', - 'sec-fetch-dest': 'empty', - 'sec-fetch-mode': 'cors', - 'sec-fetch-site': 'same-origin', - 'x-ms-client-request-id': str(uuid.uuid4()), - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - 'Referer': 'https://www.bing.com/search?q=Bing+AI&showconv=1&FORM=hpcodx', - 'Referrer-Policy': 'origin-when-cross-origin', - 'x-forwarded-for': Defaults.ip_address - }) - - await wss.send_str(_format({'protocol': 'json', 'version': 1})) - await wss.receive(timeout=900) - - struct = { - 'arguments': [ - { - **mode, - 'source': 'cib', - 'allowedMessageTypes': Defaults.allowedMessageTypes, - 'sliceIds': Defaults.sliceIds, - 'traceId': os.urandom(16).hex(), - 'isStartOfSession': True, - 'message': Defaults.location | { - 'author': 'user', - 'inputMethod': 'Keyboard', - 'text': prompt, - 'messageType': 'Chat' - }, - 'conversationSignature': conversationSignature, - 'participant': 
{ - 'id': clientId - }, - 'conversationId': conversationId - } - ], - 'invocationId': '0', - 'target': 'chat', - 'type': 4 - } - - if context: - struct['arguments'][0]['previousMessages'] = [ - { - "author": "user", - "description": context, - "contextType": "WebPage", - "messageType": "Context", - "messageId": "discover-web--page-ping-mriduna-----" - } - ] - - await wss.send_str(_format(struct)) - - final = False - draw = False - resp_txt = '' - result_text = '' - resp_txt_no_link = '' - cache_text = '' - - while not final: - msg = await wss.receive(timeout=900) - objects = msg.data.split(Defaults.delimiter) - - for obj in objects: - if obj is None or not obj: - continue - - response = json.loads(obj) - if response.get('type') == 1 and response['arguments'][0].get('messages',): - if not draw: - if (response['arguments'][0]['messages'][0]['contentOrigin'] != 'Apology') and not draw: - resp_txt = result_text + \ - response['arguments'][0]['messages'][0]['adaptiveCards'][0]['body'][0].get( - 'text', '') - resp_txt_no_link = result_text + \ - response['arguments'][0]['messages'][0].get( - 'text', '') - - if response['arguments'][0]['messages'][0].get('messageType',): - resp_txt = ( - resp_txt - + response['arguments'][0]['messages'][0]['adaptiveCards'][0]['body'][0]['inlines'][0].get('text') - + '\n' - ) - result_text = ( - result_text - + response['arguments'][0]['messages'][0]['adaptiveCards'][0]['body'][0]['inlines'][0].get('text') - + '\n' - ) - - if cache_text.endswith(' '): - final = True - if wss and not wss.closed: - await wss.close() - if session and not session.closed: - await session.close() - - yield (resp_txt.replace(cache_text, '')) - cache_text = resp_txt - - elif response.get('type') == 2: - if response['item']['result'].get('error'): - if wss and not wss.closed: - await wss.close() - if session and not session.closed: - await session.close() - - raise Exception( - f"{response['item']['result']['value']}: {response['item']['result']['message']}") - - if draw: - cache = response['item']['messages'][1]['adaptiveCards'][0]['body'][0]['text'] - response['item']['messages'][1]['adaptiveCards'][0]['body'][0]['text'] = ( - cache + resp_txt) - - if (response['item']['messages'][-1]['contentOrigin'] == 'Apology' and resp_txt): - response['item']['messages'][-1]['text'] = resp_txt_no_link - response['item']['messages'][-1]['adaptiveCards'][0]['body'][0]['text'] = resp_txt - - # print('Preserved the message from being deleted', file=sys.stderr) - - final = True - if wss and not wss.closed: - await wss.close() - if session and not session.closed: - await session.close() - - -def run(generator): - loop = asyncio.new_event_loop() - asyncio.set_event_loop(loop) - gen = generator.__aiter__() - - while True: - try: - next_val = loop.run_until_complete(gen.__anext__()) - yield next_val - - except StopAsyncIteration: - break - #print('Done') - -def convert(messages): - context = "" - - for message in messages: - context += "[%s](#message)\n%s\n\n" % (message['role'], - message['content']) - - return context - - -def _create_completion(model: str, messages: list, stream: bool, **kwargs): - if len(messages) < 2: - prompt = messages[0]['content'] - context = False - - else: - prompt = messages[-1]['content'] - context = convert(messages[:-1]) - - response = run(stream_generate(prompt, optionsSets.jailbreak, context)) - for token in response: - yield (token) - - #print('Done') - - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join( - [f"{name}: 
{get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/models/embeddings.py b/spaces/declare-lab/tango/diffusers/src/diffusers/models/embeddings.py deleted file mode 100644 index 7fbadb471f9275545f687b0f631eab74582f32d8..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/models/embeddings.py +++ /dev/null @@ -1,379 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -import math -from typing import Optional - -import numpy as np -import torch -from torch import nn - - -def get_timestep_embedding( - timesteps: torch.Tensor, - embedding_dim: int, - flip_sin_to_cos: bool = False, - downscale_freq_shift: float = 1, - scale: float = 1, - max_period: int = 10000, -): - """ - This matches the implementation in Denoising Diffusion Probabilistic Models: Create sinusoidal timestep embeddings. - - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. - :param embedding_dim: the dimension of the output. :param max_period: controls the minimum frequency of the - embeddings. :return: an [N x dim] Tensor of positional embeddings. 
- """ - assert len(timesteps.shape) == 1, "Timesteps should be a 1d-array" - - half_dim = embedding_dim // 2 - exponent = -math.log(max_period) * torch.arange( - start=0, end=half_dim, dtype=torch.float32, device=timesteps.device - ) - exponent = exponent / (half_dim - downscale_freq_shift) - - emb = torch.exp(exponent) - emb = timesteps[:, None].float() * emb[None, :] - - # scale embeddings - emb = scale * emb - - # concat sine and cosine embeddings - emb = torch.cat([torch.sin(emb), torch.cos(emb)], dim=-1) - - # flip sine and cosine embeddings - if flip_sin_to_cos: - emb = torch.cat([emb[:, half_dim:], emb[:, :half_dim]], dim=-1) - - # zero pad - if embedding_dim % 2 == 1: - emb = torch.nn.functional.pad(emb, (0, 1, 0, 0)) - return emb - - -def get_2d_sincos_pos_embed(embed_dim, grid_size, cls_token=False, extra_tokens=0): - """ - grid_size: int of the grid height and width return: pos_embed: [grid_size*grid_size, embed_dim] or - [1+grid_size*grid_size, embed_dim] (w/ or w/o cls_token) - """ - grid_h = np.arange(grid_size, dtype=np.float32) - grid_w = np.arange(grid_size, dtype=np.float32) - grid = np.meshgrid(grid_w, grid_h) # here w goes first - grid = np.stack(grid, axis=0) - - grid = grid.reshape([2, 1, grid_size, grid_size]) - pos_embed = get_2d_sincos_pos_embed_from_grid(embed_dim, grid) - if cls_token and extra_tokens > 0: - pos_embed = np.concatenate([np.zeros([extra_tokens, embed_dim]), pos_embed], axis=0) - return pos_embed - - -def get_2d_sincos_pos_embed_from_grid(embed_dim, grid): - if embed_dim % 2 != 0: - raise ValueError("embed_dim must be divisible by 2") - - # use half of dimensions to encode grid_h - emb_h = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[0]) # (H*W, D/2) - emb_w = get_1d_sincos_pos_embed_from_grid(embed_dim // 2, grid[1]) # (H*W, D/2) - - emb = np.concatenate([emb_h, emb_w], axis=1) # (H*W, D) - return emb - - -def get_1d_sincos_pos_embed_from_grid(embed_dim, pos): - """ - embed_dim: output dimension for each position pos: a list of positions to be encoded: size (M,) out: (M, D) - """ - if embed_dim % 2 != 0: - raise ValueError("embed_dim must be divisible by 2") - - omega = np.arange(embed_dim // 2, dtype=np.float64) - omega /= embed_dim / 2.0 - omega = 1.0 / 10000**omega # (D/2,) - - pos = pos.reshape(-1) # (M,) - out = np.einsum("m,d->md", pos, omega) # (M, D/2), outer product - - emb_sin = np.sin(out) # (M, D/2) - emb_cos = np.cos(out) # (M, D/2) - - emb = np.concatenate([emb_sin, emb_cos], axis=1) # (M, D) - return emb - - -class PatchEmbed(nn.Module): - """2D Image to Patch Embedding""" - - def __init__( - self, - height=224, - width=224, - patch_size=16, - in_channels=3, - embed_dim=768, - layer_norm=False, - flatten=True, - bias=True, - ): - super().__init__() - - num_patches = (height // patch_size) * (width // patch_size) - self.flatten = flatten - self.layer_norm = layer_norm - - self.proj = nn.Conv2d( - in_channels, embed_dim, kernel_size=(patch_size, patch_size), stride=patch_size, bias=bias - ) - if layer_norm: - self.norm = nn.LayerNorm(embed_dim, elementwise_affine=False, eps=1e-6) - else: - self.norm = None - - pos_embed = get_2d_sincos_pos_embed(embed_dim, int(num_patches**0.5)) - self.register_buffer("pos_embed", torch.from_numpy(pos_embed).float().unsqueeze(0), persistent=False) - - def forward(self, latent): - latent = self.proj(latent) - if self.flatten: - latent = latent.flatten(2).transpose(1, 2) # BCHW -> BNC - if self.layer_norm: - latent = self.norm(latent) - return latent + self.pos_embed - - -class 
TimestepEmbedding(nn.Module): - def __init__( - self, - in_channels: int, - time_embed_dim: int, - act_fn: str = "silu", - out_dim: int = None, - post_act_fn: Optional[str] = None, - cond_proj_dim=None, - ): - super().__init__() - - self.linear_1 = nn.Linear(in_channels, time_embed_dim) - - if cond_proj_dim is not None: - self.cond_proj = nn.Linear(cond_proj_dim, in_channels, bias=False) - else: - self.cond_proj = None - - if act_fn == "silu": - self.act = nn.SiLU() - elif act_fn == "mish": - self.act = nn.Mish() - elif act_fn == "gelu": - self.act = nn.GELU() - else: - raise ValueError(f"{act_fn} does not exist. Make sure to define one of 'silu', 'mish', or 'gelu'") - - if out_dim is not None: - time_embed_dim_out = out_dim - else: - time_embed_dim_out = time_embed_dim - self.linear_2 = nn.Linear(time_embed_dim, time_embed_dim_out) - - if post_act_fn is None: - self.post_act = None - elif post_act_fn == "silu": - self.post_act = nn.SiLU() - elif post_act_fn == "mish": - self.post_act = nn.Mish() - elif post_act_fn == "gelu": - self.post_act = nn.GELU() - else: - raise ValueError(f"{post_act_fn} does not exist. Make sure to define one of 'silu', 'mish', or 'gelu'") - - def forward(self, sample, condition=None): - if condition is not None: - sample = sample + self.cond_proj(condition) - sample = self.linear_1(sample) - - if self.act is not None: - sample = self.act(sample) - - sample = self.linear_2(sample) - - if self.post_act is not None: - sample = self.post_act(sample) - return sample - - -class Timesteps(nn.Module): - def __init__(self, num_channels: int, flip_sin_to_cos: bool, downscale_freq_shift: float): - super().__init__() - self.num_channels = num_channels - self.flip_sin_to_cos = flip_sin_to_cos - self.downscale_freq_shift = downscale_freq_shift - - def forward(self, timesteps): - t_emb = get_timestep_embedding( - timesteps, - self.num_channels, - flip_sin_to_cos=self.flip_sin_to_cos, - downscale_freq_shift=self.downscale_freq_shift, - ) - return t_emb - - -class GaussianFourierProjection(nn.Module): - """Gaussian Fourier embeddings for noise levels.""" - - def __init__( - self, embedding_size: int = 256, scale: float = 1.0, set_W_to_weight=True, log=True, flip_sin_to_cos=False - ): - super().__init__() - self.weight = nn.Parameter(torch.randn(embedding_size) * scale, requires_grad=False) - self.log = log - self.flip_sin_to_cos = flip_sin_to_cos - - if set_W_to_weight: - # to delete later - self.W = nn.Parameter(torch.randn(embedding_size) * scale, requires_grad=False) - - self.weight = self.W - - def forward(self, x): - if self.log: - x = torch.log(x) - - x_proj = x[:, None] * self.weight[None, :] * 2 * np.pi - - if self.flip_sin_to_cos: - out = torch.cat([torch.cos(x_proj), torch.sin(x_proj)], dim=-1) - else: - out = torch.cat([torch.sin(x_proj), torch.cos(x_proj)], dim=-1) - return out - - -class ImagePositionalEmbeddings(nn.Module): - """ - Converts latent image classes into vector embeddings. Sums the vector embeddings with positional embeddings for the - height and width of the latent space. - - For more details, see figure 10 of the dall-e paper: https://arxiv.org/abs/2102.12092 - - For VQ-diffusion: - - Output vector embeddings are used as input for the transformer. - - Note that the vector embeddings for the transformer are different than the vector embeddings from the VQVAE. - - Args: - num_embed (`int`): - Number of embeddings for the latent pixels embeddings. - height (`int`): - Height of the latent image i.e. the number of height embeddings. 
- width (`int`): - Width of the latent image i.e. the number of width embeddings. - embed_dim (`int`): - Dimension of the produced vector embeddings. Used for the latent pixel, height, and width embeddings. - """ - - def __init__( - self, - num_embed: int, - height: int, - width: int, - embed_dim: int, - ): - super().__init__() - - self.height = height - self.width = width - self.num_embed = num_embed - self.embed_dim = embed_dim - - self.emb = nn.Embedding(self.num_embed, embed_dim) - self.height_emb = nn.Embedding(self.height, embed_dim) - self.width_emb = nn.Embedding(self.width, embed_dim) - - def forward(self, index): - emb = self.emb(index) - - height_emb = self.height_emb(torch.arange(self.height, device=index.device).view(1, self.height)) - - # 1 x H x D -> 1 x H x 1 x D - height_emb = height_emb.unsqueeze(2) - - width_emb = self.width_emb(torch.arange(self.width, device=index.device).view(1, self.width)) - - # 1 x W x D -> 1 x 1 x W x D - width_emb = width_emb.unsqueeze(1) - - pos_emb = height_emb + width_emb - - # 1 x H x W x D -> 1 x L xD - pos_emb = pos_emb.view(1, self.height * self.width, -1) - - emb = emb + pos_emb[:, : emb.shape[1], :] - - return emb - - -class LabelEmbedding(nn.Module): - """ - Embeds class labels into vector representations. Also handles label dropout for classifier-free guidance. - - Args: - num_classes (`int`): The number of classes. - hidden_size (`int`): The size of the vector embeddings. - dropout_prob (`float`): The probability of dropping a label. - """ - - def __init__(self, num_classes, hidden_size, dropout_prob): - super().__init__() - use_cfg_embedding = dropout_prob > 0 - self.embedding_table = nn.Embedding(num_classes + use_cfg_embedding, hidden_size) - self.num_classes = num_classes - self.dropout_prob = dropout_prob - - def token_drop(self, labels, force_drop_ids=None): - """ - Drops labels to enable classifier-free guidance. 
- """ - if force_drop_ids is None: - drop_ids = torch.rand(labels.shape[0], device=labels.device) < self.dropout_prob - else: - drop_ids = torch.tensor(force_drop_ids == 1) - labels = torch.where(drop_ids, self.num_classes, labels) - return labels - - def forward(self, labels: torch.LongTensor, force_drop_ids=None): - use_dropout = self.dropout_prob > 0 - if (self.training and use_dropout) or (force_drop_ids is not None): - labels = self.token_drop(labels, force_drop_ids) - embeddings = self.embedding_table(labels) - return embeddings - - -class CombinedTimestepLabelEmbeddings(nn.Module): - def __init__(self, num_classes, embedding_dim, class_dropout_prob=0.1): - super().__init__() - - self.time_proj = Timesteps(num_channels=256, flip_sin_to_cos=True, downscale_freq_shift=1) - self.timestep_embedder = TimestepEmbedding(in_channels=256, time_embed_dim=embedding_dim) - self.class_embedder = LabelEmbedding(num_classes, embedding_dim, class_dropout_prob) - - def forward(self, timestep, class_labels, hidden_dtype=None): - timesteps_proj = self.time_proj(timestep) - timesteps_emb = self.timestep_embedder(timesteps_proj.to(dtype=hidden_dtype)) # (N, D) - - class_labels = self.class_embedder(class_labels) # (N, D) - - conditioning = timesteps_emb + class_labels # (N, D) - - return conditioning diff --git a/spaces/derek-thomas/dataset-creator-reddit-bestofredditorupdates/app.py b/spaces/derek-thomas/dataset-creator-reddit-bestofredditorupdates/app.py deleted file mode 100644 index 080d6fccb73e6bb648ac5dfcc8b0a2759dbb65d8..0000000000000000000000000000000000000000 --- a/spaces/derek-thomas/dataset-creator-reddit-bestofredditorupdates/app.py +++ /dev/null @@ -1,120 +0,0 @@ -import os -from pathlib import Path - -import gradio as gr -from bs4 import BeautifulSoup -from rich.console import Console -from rich.syntax import Syntax - -proj_dir = Path(__name__).parent - -subreddit = os.environ["SUBREDDIT"] -username = os.environ["USERNAME"] -dataset_name = f"{username}/dataset-creator-reddit-{subreddit}" - -frequency = os.environ.get("FREQUENCY", '').lower() -if frequency not in ["daily", "hourly"]: - raise gr.Error("FREQUENCY environment variable must be 'daily' or 'hourly'") - - -def log_file_to_html_string(): - log_file = "mylog.log" - num_lines_visualize = 50 - - console = Console(record=True, width=150, style="#272822") - with open(log_file, "rt") as f: - # Seek to the end of the file minus 300 lines - # Read the last 300 lines of the file - lines = f.readlines() - lines = lines[-num_lines_visualize:] - - # Syntax-highlight the last 300 lines of the file using the Python lexer and Monokai style - output = "".join(lines) - syntax = Syntax(output, "python", theme="monokai", word_wrap=True) - - console.print(syntax); - html_content = console.export_html(inline_styles=True) - - # Parse the HTML content using BeautifulSoup - soup = BeautifulSoup(html_content, 'lxml') - - # Modify the
 tag
-    pre_tag = soup.pre
-    pre_tag['class'] = 'scrollable'
-    del pre_tag['style']
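-    # the inline style exported by rich is dropped so the custom .scrollable CSS (added to the page below) controls how the log output scrolls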
-
-    # Add your custom styles and the .scrollable CSS to the 
-
-
-
-    
- - - - - - - - - - - - - - - - - - - - - - - - -
题目
答案
正误
得分
-
- - - - diff --git a/spaces/simonduerr/gradio-2dmoleculeeditor/README.md b/spaces/simonduerr/gradio-2dmoleculeeditor/README.md deleted file mode 100644 index 7179eda175e43c7090fcedf0f85d463d9a3e3b0e..0000000000000000000000000000000000000000 --- a/spaces/simonduerr/gradio-2dmoleculeeditor/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Gradio 2D Molecule Editor (SMILES) -emoji: ⚛️ -colorFrom: green -colorTo: red -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit ---- - -This repo contains a sample on how to use the Ketcher Molecule Editor with gradio. - -To adapt simply add your ML model in the run function. - -Ketcher is licensed under Apache2.0 License https://github.com/epam/ketcher diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Free Download Stickman Rope Hero 2 Old Version APK for Android Devices.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Free Download Stickman Rope Hero 2 Old Version APK for Android Devices.md deleted file mode 100644 index 10d5a0ae759fdec0a27d161020d0b18fa12ef015..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Free Download Stickman Rope Hero 2 Old Version APK for Android Devices.md +++ /dev/null @@ -1,102 +0,0 @@ -
-

Stickman Rope Hero 2 Old Version APK: A Fun and Action-Packed Game for Android Devices

-

If you are looking for a fun and action-packed game to play on your Android device, you might want to check out Stickman Rope Hero 2. This is a game where you can control a stickman with a rope and use various weapons and vehicles to fight against enemies and complete missions. In this article, we will tell you more about this game and how you can download and install the old version apk file on your device.

-

stickman rope hero 2 old version apk


Download ⇒⇒⇒ https://ssurll.com/2uNYlD



-

What is Stickman Rope Hero 2?

-

Stickman Rope Hero 2 is a game developed by Naxeex Action & RPG Games, a company that specializes in creating games with stickman characters. The game was released in 2020 and has received over 10 million downloads on Google Play Store. The game is rated 4.3 out of 5 stars by more than 200,000 users.

-

Features of Stickman Rope Hero 2

-

The game has many features that make it fun and exciting to play. Here are some of them:

-

Amazing graphics and animations

-

The game has high-quality graphics and animations that make the stickman character look realistic and lively. You can see the stickman swing his rope, shoot his guns, drive his cars, and perform other actions with smooth and fluid movements. The game also has different environments and scenarios that add variety and interest to the gameplay.

-

Various weapons and vehicles to use

-

The game gives you access to a wide range of weapons and vehicles that you can use to fight against your enemies and complete your missions. You can use pistols, rifles, shotguns, grenades, rocket launchers, and more to shoot your way through the obstacles. You can also use cars, bikes, helicopters, tanks, and more to drive or fly around the city. You can even customize your weapons and vehicles with different skins and upgrades.

-


-

Multiple missions and challenges to complete

-

The game has many missions and challenges that you can complete to earn rewards and unlock new features. You can rescue hostages, rob banks, destroy enemy bases, chase criminals, and more. You can also explore the city and find hidden items, secrets, and easter eggs. The game has different levels of difficulty that you can adjust according to your preference.

-

Free to play and offline mode available

-

The game is free to play and does not require any registration or subscription. You can download it from Google Play Store or other sources without paying anything. The game also has an offline mode that allows you to play without an internet connection. You can enjoy the game anytime and anywhere you want.

-

Why download the old version of Stickman Rope Hero 2?

-

While the latest version of Stickman Rope Hero 2 has more features and improvements than the old version, there are some reasons why you might want to download the old version instead. Here are some of them:

-

Benefits of downloading the old version of Stickman Rope Hero 2

-

Downloading the old version of Stickman Rope Hero 2 can have some benefits for you, such as:

-

Compatible with older devices and operating systems

-

If you have an older device or operating system that cannot run the latest version of Stickman Rope Hero 2, you can still enjoy the game by downloading the old version. The old version has lower system requirements and can run smoothly on most devices. You don't have to worry about compatibility issues or performance problems.

-

Smaller file size and faster installation

-

The old version of Stickman Rope Hero 2 has a smaller file size than the latest version. This means that it will take up less space on your device and less time to download and install. You can save your storage space and your internet data by downloading the old version. You can also start playing the game sooner.

-

Fewer bugs and glitches

-

The old version of Stickman Rope Hero 2 may have fewer bugs and glitches than the latest version. Sometimes, new updates can introduce new errors or problems that can affect the gameplay. The old version may have more stability and reliability than the new version. You can play the game without worrying about crashes or freezes.

-

Nostalgic gameplay and experience

-

The old version of Stickman Rope Hero 2 may offer a nostalgic gameplay experience, especially if you have played it before. You can relive the memories and feelings that you had when you first played the game. You can also compare the differences and changes between the old and new versions. You can enjoy the game in a different way.

-

How to download and install the old version of Stickman Rope Hero 2?

-

If you want to download and install the old version of Stickman Rope Hero 2 on your device, you can follow these simple steps:

-

Step 1: Find a reliable source for the old version apk file

-

The first step is to find a reliable source for the old version apk file of Stickman Rope Hero 2. An apk file is the package format that Android uses to distribute and install apps. You can find many sources online that offer apk files for various apps, but not all of them are safe and trustworthy. You should be careful and avoid downloading apk files from unknown or suspicious sources, as they may contain viruses or malware that can harm your device or steal your personal information.

-

One of the reliable sources that we recommend is APKPure, a website that provides apk files for various Android apps and games. You can visit their website and search for Stickman Rope Hero 2. You will see a list of different versions of the game, including the old versions. You can choose the version that you want to download and click on it. You will see a download button that will allow you to download the apk file to your device.

-

Step 2: Enable unknown sources on your device settings

-

The second step is to enable unknown sources on your device settings. This is a security feature that prevents you from installing apps from sources other than Google Play Store. Since you are downloading an apk file from a third-party source, you need to enable this option to allow the installation.

-

To enable unknown sources, you need to go to your device settings and look for security or privacy options. You will see an option called unknown sources or allow installation from unknown sources. You need to toggle this option on and confirm your choice. This will allow you to install apps from apk files.

-

Step 3: Download and install the old version apk file

-

The third step is to download and install the old version apk file of Stickman Rope Hero 2. After you have enabled unknown sources, you can go back to APKPure website and download the apk file that you want. You will see a notification or a pop-up window that will ask you to open or install the file. You need to tap on it and follow the instructions on your screen. The installation process will take a few minutes, depending on your device and internet speed.

-

Step 4: Enjoy playing Stickman Rope Hero 2 old version on your device

-

The fourth step is to enjoy playing Stickman Rope Hero 2 old version on your device. After the installation is complete, you will see an icon of the game on your home screen or app drawer. You can tap on it and launch the game. You will see the main menu of the game, where you can choose to start a new game, continue an existing game, or change some settings. You can also see some ads or offers from other games or apps, which you can ignore or close if you want. You can then start playing the game and enjoy the fun and action-packed gameplay of Stickman Rope Hero 2 old version.

-

Conclusion

-

Stickman Rope Hero 2 is a fun and action-packed game for Android devices that lets you control a stickman with a rope and use various weapons and vehicles to fight against enemies and complete missions. The game has amazing graphics and animations, various weapons and vehicles to use, multiple missions and challenges to complete, and free to play and offline mode available. You can download the old version of Stickman Rope Hero 2 if you want to enjoy the benefits of compatibility, smaller file size, fewer bugs, and nostalgic gameplay. You can download the old version apk file from APKPure website and install it on your device by following the simple steps that we have explained in this article. We hope that you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below. Thank you for reading and happy gaming!

-

FAQs

-

Here are some frequently asked questions about Stickman Rope Hero 2 old version apk:

-

Q: Is Stickman Rope Hero 2 old version apk safe to download and install?

-

A: Yes, Stickman Rope Hero 2 old version apk is safe to download and install, as long as you download it from a reliable source like APKPure. You should avoid downloading apk files from unknown or suspicious sources, as they may contain viruses or malware that can harm your device or steal your personal information.

-

Q: What is the difference between Stickman Rope Hero 2 old version apk and the latest version?

-

A: The main difference between Stickman Rope Hero 2 old version apk and the latest version is that the old version has lower system requirements, smaller file size, fewer bugs, and nostalgic gameplay. The latest version has more features, improvements, updates, and fixes. You can choose the version that suits your preference and device.

-

Q: How can I update Stickman Rope Hero 2 old version apk to the latest version?

-

A: If you want to update Stickman Rope Hero 2 old version apk to the latest version, you can do so by visiting Google Play Store or APKPure website and downloading the latest version apk file. You can then install it on your device by following the same steps that we have explained in this article. However, you should note that updating the game may overwrite your existing data and settings, so you may want to back up your data before updating.

-

Q: How can I uninstall Stickman Rope Hero 2 old version apk from my device?

-

A: If you want to uninstall Stickman Rope Hero 2 old version apk from your device, you can do so by going to your device settings and looking for apps or applications options. You will see a list of apps that are installed on your device. You can find Stickman Rope Hero 2 old version apk and tap on it. You will see an option to uninstall or remove the app. You can tap on it and confirm your choice. This will uninstall the app from your device.

-

Q: Where can I find more information about Stickman Rope Hero 2 old version apk?

-

A: If you want to find more information about Stickman Rope Hero 2 old version apk, you can visit APKPure website and read the description, reviews, ratings, screenshots, videos, and other details of the game. You can also visit Naxeex Action & RPG Games website and social media pages to learn more about the game developer and their other games.

-
-
\ No newline at end of file diff --git a/spaces/simplyjaga/neural_style_tranfer_using_dense_net/model.py b/spaces/simplyjaga/neural_style_tranfer_using_dense_net/model.py deleted file mode 100644 index 40f7f6b44de891608da6c55dccd2d3758a9ba143..0000000000000000000000000000000000000000 --- a/spaces/simplyjaga/neural_style_tranfer_using_dense_net/model.py +++ /dev/null @@ -1,85 +0,0 @@ -#imports -import gradio as gr -import torch -import torch.nn as nn -import torchvision.models as models -from torchvision import transforms - - -#modelling -class NeuralStyleTransfer(nn.Module): - def __init__(self): - super(NeuralStyleTransfer, self).__init__() - self.model = models.densenet121(weights=models.DenseNet121_Weights.IMAGENET1K_V1).features - # picking the feature layers from the conv layers in the model - # this is choosen manually going through all the layers in the model, we can also experiment with this selection - self.feature_layers = [4, 6, 8, 10] - - def forward(self, x): - features = [] - for layer_num, layer in enumerate(self.model): - x = layer(x) - #getting the selected layer's output from the model as features - if layer_num in self.feature_layers: - features.append(x) - return features - - -def get_output(style_image, content_image, alpha, beta, step, progress=gr.Progress()): - device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - loader = transforms.Compose([transforms.Resize((224,224)), - transforms.ToTensor()]) #converting to tensor automatically scales the values to 0 and 1 - style = loader(style_image).to(device) - content = loader(content_image).to(device) - #starting with content image instead of some random noise image to speed up the process - generated = loader(content_image).to(device) - - # setting the generated images values to be tracked and modified while training - generated.requires_grad_(True) - - # densenets weigths need not to be updated - model = NeuralStyleTransfer() - model.to(device) - model.eval() - - #setting parameters - step_count = int(step) - learning_rate = 0.001 - - #custom loss is defined inside the training loop - #the values in the generated matrix needs to be updated by the optimizer - optimizer = torch.optim.Adam([generated], lr = learning_rate) - - #training - for i in progress.tqdm(range(step_count)): - style_features = model(style.unsqueeze(0)) - content_features = model(content.unsqueeze(0)) - generated_features = model(generated.unsqueeze(0)) - - #content loss - content_loss = 0 - for cf, gf in zip(content_features, generated_features): - content_loss += torch.sum((cf-gf)**2) - - #style loss - style_loss = 0 - for sf, gf in zip(style_features, generated_features): - bs, c, h, w = sf.shape - s_gram = torch.mm(sf.view(c, h*w), sf.view(c, h*w).T) - g_gram = torch.mm(gf.view(c, h*w), gf.view(c, h*w).T) - style_loss += torch.sum((s_gram - g_gram)**2) - - #total_loss - loss = alpha * content_loss + beta * style_loss - - #update values in the generated image - optimizer.zero_grad() - loss.backward() - optimizer.step() - - if (i+1) % 5 == 0: - print(f"\nLoss at {i+1} epoch -----> {loss.item()}", end='') - - convertor = transforms.ToPILImage() #converts tensor to pil image formate used for displaying in gradio - return convertor(generated) diff --git a/spaces/sino72/Passenger_Reconization/deep_sort/deep_sort/deep/train.py b/spaces/sino72/Passenger_Reconization/deep_sort/deep_sort/deep/train.py deleted file mode 100644 index c95b55d7dce1f2f12a6c315bec9101faaeb45d6b..0000000000000000000000000000000000000000 --- 
a/spaces/sino72/Passenger_Reconization/deep_sort/deep_sort/deep/train.py +++ /dev/null @@ -1,192 +0,0 @@ -import argparse -import os -import time - -import numpy as np -import matplotlib.pyplot as plt -import torch -import torch.backends.cudnn as cudnn -import torchvision - -from model import Net - -parser = argparse.ArgumentParser(description="Train on market1501") -parser.add_argument("--data-dir",default='data',type=str) -parser.add_argument("--no-cuda",action="store_true") -parser.add_argument("--gpu-id",default=0,type=int) -parser.add_argument("--lr",default=0.1, type=float) -parser.add_argument("--interval",'-i',default=20,type=int) -parser.add_argument('--resume', '-r',action='store_true') -args = parser.parse_args() - -# device -device = "cuda:{}".format(args.gpu_id) if torch.cuda.is_available() and not args.no_cuda else "cpu" -if torch.cuda.is_available() and not args.no_cuda: - cudnn.benchmark = True - -# data loading -root = args.data_dir -train_dir = os.path.join(root,"train") -test_dir = os.path.join(root,"test") - -transform_train = torchvision.transforms.Compose([ - torchvision.transforms.RandomCrop((128,64),padding=4), - torchvision.transforms.RandomHorizontalFlip(), - torchvision.transforms.ToTensor(), - torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) -]) -transform_test = torchvision.transforms.Compose([ - torchvision.transforms.Resize((128,64)), - torchvision.transforms.ToTensor(), - torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]) -]) -trainloader = torch.utils.data.DataLoader( - torchvision.datasets.ImageFolder(train_dir, transform=transform_train), - batch_size=64,shuffle=True -) -testloader = torch.utils.data.DataLoader( - torchvision.datasets.ImageFolder(test_dir, transform=transform_test), - batch_size=64,shuffle=True -) -num_classes = max(len(trainloader.dataset.classes), len(testloader.dataset.classes)) -print("num_classes = %s" %num_classes) - -# net definition -start_epoch = 0 -net = Net(num_classes=num_classes) -if args.resume: - assert os.path.isfile("./checkpoint/ckpt.t7"), "Error: no checkpoint file found!" - print('Loading from checkpoint/ckpt.t7') - checkpoint = torch.load("./checkpoint/ckpt.t7") - # import ipdb; ipdb.set_trace() - net_dict = checkpoint['net_dict'] - net.load_state_dict(net_dict) - best_acc = checkpoint['acc'] - start_epoch = checkpoint['epoch'] -net.to(device) - -# loss and optimizer -criterion = torch.nn.CrossEntropyLoss() -optimizer = torch.optim.SGD(net.parameters(), args.lr, momentum=0.9, weight_decay=5e-4) -best_acc = 0. - -# train function for each epoch -def train(epoch): - print("\nEpoch : %d"%(epoch+1)) - net.train() - training_loss = 0. - train_loss = 0. - correct = 0 - total = 0 - interval = args.interval - start = time.time() - for idx, (inputs, labels) in enumerate(trainloader): - # forward - inputs,labels = inputs.to(device),labels.to(device) - outputs = net(inputs) - loss = criterion(outputs, labels) - - # backward - optimizer.zero_grad() - loss.backward() - optimizer.step() - - # accumurating - training_loss += loss.item() - train_loss += loss.item() - correct += outputs.max(dim=1)[1].eq(labels).sum().item() - total += labels.size(0) - - # print - if (idx+1)%interval == 0: - end = time.time() - print("[progress:{:.1f}%]time:{:.2f}s Loss:{:.5f} Correct:{}/{} Acc:{:.3f}%".format( - 100.*(idx+1)/len(trainloader), end-start, training_loss/interval, correct, total, 100.*correct/total - )) - training_loss = 0. 
- start = time.time() - - return train_loss/len(trainloader), 1.- correct/total - -def test(epoch): - global best_acc - net.eval() - test_loss = 0. - correct = 0 - total = 0 - start = time.time() - with torch.no_grad(): - for idx, (inputs, labels) in enumerate(testloader): - inputs, labels = inputs.to(device), labels.to(device) - outputs = net(inputs) - loss = criterion(outputs, labels) - - test_loss += loss.item() - correct += outputs.max(dim=1)[1].eq(labels).sum().item() - total += labels.size(0) - - print("Testing ...") - end = time.time() - print("[progress:{:.1f}%]time:{:.2f}s Loss:{:.5f} Correct:{}/{} Acc:{:.3f}%".format( - 100.*(idx+1)/len(testloader), end-start, test_loss/len(testloader), correct, total, 100.*correct/total - )) - - # saving checkpoint - acc = 100.*correct/total - if acc > best_acc: - best_acc = acc - print("Saving parameters to checkpoint/ckpt.t7") - checkpoint = { - 'net_dict':net.state_dict(), - 'acc':acc, - 'epoch':epoch, - } - if not os.path.isdir('checkpoint'): - os.mkdir('checkpoint') - torch.save(checkpoint, './checkpoint/ckpt.t7') - - return test_loss/len(testloader), 1.- correct/total - -# plot figure -x_epoch = [] -record = {'train_loss':[], 'train_err':[], 'test_loss':[], 'test_err':[]} -fig = plt.figure() -ax0 = fig.add_subplot(121, title="loss") -ax1 = fig.add_subplot(122, title="top1err") -def draw_curve(epoch, train_loss, train_err, test_loss, test_err): - global record - record['train_loss'].append(train_loss) - record['train_err'].append(train_err) - record['test_loss'].append(test_loss) - record['test_err'].append(test_err) - - x_epoch.append(epoch) - ax0.plot(x_epoch, record['train_loss'], 'bo-', label='train') - ax0.plot(x_epoch, record['test_loss'], 'ro-', label='val') - ax1.plot(x_epoch, record['train_err'], 'bo-', label='train') - ax1.plot(x_epoch, record['test_err'], 'ro-', label='val') - if epoch == 0: - ax0.legend() - ax1.legend() - fig.savefig("train.jpg") - -# lr decay -def lr_decay(): - global optimizer - for params in optimizer.param_groups: - params['lr'] *= 0.1 - lr = params['lr'] - print("Learning rate adjusted to {}".format(lr)) - -def main(): - total_epoches = 40 - for epoch in range(start_epoch, start_epoch+total_epoches): - train_loss, train_err = train(epoch) - test_loss, test_err = test(epoch) - draw_curve(epoch, train_loss, train_err, test_loss, test_err) - if (epoch+1)%(total_epoches//2)==0: - lr_decay() - - -if __name__ == '__main__': - main() diff --git a/spaces/skf15963/summary/fengshen/models/DAVAE/run_latent_generation.py b/spaces/skf15963/summary/fengshen/models/DAVAE/run_latent_generation.py deleted file mode 100644 index f9f099d205279d883df589fe5031ff0fdbcfb32d..0000000000000000000000000000000000000000 --- a/spaces/skf15963/summary/fengshen/models/DAVAE/run_latent_generation.py +++ /dev/null @@ -1,302 +0,0 @@ -import re -import torch -import torch.nn.functional as F -from torch.nn.utils.rnn import pad_sequence -import numpy as np -import json -import jsonlines -from tqdm import tqdm, trange - -def set_seed(args): - np.random.seed(args.seed) - torch.manual_seed(args.seed) - if args.n_gpu > 0: - torch.cuda.manual_seed_all(args.seed) - -def filter_noise(text): - space_pattern = '([\u4e00-\u9fa5|0-9|,|。|?|!|@|¥|……|——|《|》|“|”|、|;|:|‘|’|(|)|「|」|【|】|·|~|-|+])\s+([\u4e00-\u9fa5|0-9|,|。|?|!|@|¥|……|——|《|》|“|”|、|;|:|‘|’|(|)|「|」|【|】|·|~|-|+])' - text = re.sub(space_pattern, r'\1\2', text) - text = re.sub(space_pattern, r'\1\2', text) - patterns = ['引用日期.*$', '参考资料.*$', '\[.*\]', '【.*】', '原文地址:', '原文转载:', '本文转自:', '本文摘要:', ''] - 
for pattern in patterns: - text = re.sub(pattern, "", text) - return text.strip() - -def get_raw_data(raw_data): - train_data = {} - with open(raw_data, 'r', encoding='utf8') as fh: - for line in fh: - line = json.loads(line) - for key in line.keys(): - if key not in train_data.keys(): - train_data[key] = [line[key]] - else: - train_data[key].append(line[key]) - return train_data - -def save_output(input_text, output, output_file): - with jsonlines.open(output_file, mode='a') as writer: - for text_in,text_out in zip(input_text, output): - otc = {} - otc['text_a'] = str(text_in) - otc['text_b'] = str(text_out) - writer.write(otc) - -def enforce_repetition_penalty(lprobs, prev_output_tokens, repetition_penalty = 1.5): - """repetition penalty (from CTRL paper https://arxiv.org/abs/1909.05858). """ - for i in range(len(prev_output_tokens)): - for previous_token in set(prev_output_tokens[i]): - # if score < 0 then repetition penalty has to multiplied to reduce the previous token probability - if lprobs[i, previous_token] < 0: - lprobs[i, previous_token] *= repetition_penalty - else: - lprobs[i, previous_token] /= repetition_penalty - -def top_k_top_p_filtering(logits, top_k=0, top_p=0.0, filter_value=-float('Inf')): - """ Filter a distribution of logits using top-k and/or nucleus (top-p) filtering - Args: - logits: logits distribution shape (vocabulary size) - top_k > 0: keep only top k tokens with highest probability (top-k filtering). - top_p > 0.0: keep the top tokens with cumulative probability >= top_p (nucleus filtering). - Nucleus filtering is described in Holtzman et al. (http://arxiv.org/abs/1904.09751) - From: https://gist.github.com/thomwolf/1a5a29f6962089e871b94cbd09daf317 - """ - # assert logits.dim() == 1# batch size 1 for now - could be updated for more but the code would be less clear - top_k = min(top_k, logits.size(-1)) # Safety check - if top_k > 0: - # Remove all tokens with a probability less than the last token of the top-k - indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None] - logits[indices_to_remove] = filter_value - - if top_p > 0.0: - sorted_logits, sorted_indices = torch.sort(logits, dim=-1, descending=True) - cumulative_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1) - - # Remove tokens with cumulative probability above the threshold - sorted_indices_to_remove = cumulative_probs > top_p - # Shift the indices to the right to keep also the first token above the threshold - sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone() - sorted_indices_to_remove[..., 0] = 0 - - for i in range(sorted_indices.size()[0]): - indices_to_remove = sorted_indices[i][sorted_indices_to_remove[i]] - logits[i][indices_to_remove] = filter_value - # indices_to_remove = sorted_indices[sorted_indices_to_remove] - # logits[indices_to_remove] = filter_value - return logits - -def sample_sequence_conditional(model, length, context, latent_z=None, temperature=1, top_k=0, top_p=0.0, repetition_penalty=1.0, device='cpu'): - - context = torch.tensor(context, dtype=torch.long, device=device) - context = context.unsqueeze(0) - generated = context - with torch.no_grad(): - for i in trange(length): - if i == 2: - generated[generated[:, 1] == 127, 1] = 0 - attention_mask = model.get_attn_mask(generated.shape[1]).to(device) - inputs = {'input_ids': generated, 'latent_state': latent_z, 'attention_mask':attention_mask, 'mems':None} - outputs = model(**inputs) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet (cached hidden-states) - 
next_token_logits = outputs[0][:, -1, :] / temperature - filtered_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, top_p=top_p) - - log_probs = F.softmax(filtered_logits, dim=-1) - if repetition_penalty != 1.0: - enforce_repetition_penalty(log_probs, generated, repetition_penalty) - next_token = torch.multinomial(log_probs, num_samples=1) - generated = torch.cat((generated, next_token), dim=1) - # pdb.set_trace() - # if next_token[0,0].item() == decoder_tokenizer.encode('')[0]: - if next_token[0, 0] == 50000: # end of token 50000 - break - - return generated - -def latent_code_from_text(text, tokenizer_encoder, model_vae, args, scale=1.0): - tokenized1 = tokenizer_encoder.encode(text) - coded = torch.Tensor([tokenized1]).long() - with torch.no_grad(): - coded = coded.to(device) - outputs = model_vae.encoder(coded, attention_mask=(coded > 0).float()) - pooled_hidden_fea = outputs[1] - - mean, logvar = model_vae.encoder.linear(pooled_hidden_fea).chunk(2, -1) - std = logvar.mul(0.5).exp() - eps = torch.zeros_like(std).normal_() - - return mean + torch.mul(eps, std)*scale - -def text_from_latent_code(latent_z, model_vae, args, tokenizer_decoder, prompt=None): - bos_token = tokenizer_decoder.convert_tokens_to_ids(tokenizer_decoder.bos_token) - context_tokens = [bos_token] - - if prompt is not None: - context_tokens.append(tokenizer_decoder.encode(prompt)[:-1]) # remove eos token - - out = sample_sequence_conditional( - model=model_vae.decoder, - context=context_tokens, - latent_z=latent_z, - length= args.max_out_length, # Chunyuan: Fix length; or use to complete a sentence - temperature=args.temperature, - top_k=args.top_k, - top_p=args.top_p, - repetition_penalty=args.repetition_penalty, - device=device - ) - - out_tokens = out[0, :].tolist() - out_tokens = out_tokens[1:out_tokens.index(50000)] if 50000 in out_tokens else out_tokens # remove bos and eos - text_x1 = tokenizer_decoder.decode(out_tokens, clean_up_tokenization_spaces=True) - - return text_x1 - - -def simulate(model_vae, tokenizer_encoder, tokenizer_decoder, args, sent_input, prompt=None): - latent_z, _ = latent_code_from_text(sent_input, tokenizer_encoder, model_vae, args) - text_analogy = text_from_latent_code(latent_z, model_vae, args, tokenizer_decoder, prompt=prompt) - - return text_analogy - -def switch(next_value, init, is_update): - is_update = is_update.type_as(next_value) - return (1-is_update)*init + is_update*next_value - -def sample_sequence_conditional_batch(model, max_out_length, context_tokens_tensor, context_length_tensor, latent_z=None, temperature=1, top_k=0, top_p=0.0, repetition_penalty=1.0, device='cpu', end_token=50000): - org_context_length = torch.min(context_length_tensor).item() - batch_size = context_tokens_tensor.shape[0] - - generated = context_tokens_tensor[:,:org_context_length] - counter = org_context_length - - output_tokens_lists = [] - output_order = [] - orig_order = torch.LongTensor(list(range(batch_size))) - - with torch.no_grad(): - while counter < max_out_length: - if counter == org_context_length+2: - generated[generated[:,org_context_length] == 127, org_context_length] = 0 - attention_mask = model.get_attn_mask(generated.shape[1]).to(device) - inputs = {'input_ids': generated, 'latent_state': latent_z, 'attention_mask': attention_mask} - outputs = model(**inputs) # Note: we could also use 'past' with GPT-2/Transfo-XL/XLNet (cached hidden-states) - next_token_logits = outputs[0][:, -1, :] / temperature - filtered_logits = top_k_top_p_filtering(next_token_logits, top_k=top_k, 
top_p=top_p) - - # if counter == org_context_length: - # filtered_logits[:, 43488] = -float('Inf') # forbid starting with '《' - log_probs = F.softmax(filtered_logits, dim=-1) - - if repetition_penalty != 1.0: - enforce_repetition_penalty(log_probs, generated, repetition_penalty) - - if any(log_probs.sum(dim=-1) <= 0.0) : - break - next_token = torch.multinomial(log_probs, num_samples=1).view(-1) - next_token = switch(next_token, context_tokens_tensor[:, counter], context_length_tensor<=counter) - - if torch.all(next_token == end_token).item(): - break - - stop_idx = next_token == end_token - output_order.extend(orig_order[stop_idx].tolist()) - - finished = generated[stop_idx] - output_tokens_lists.extend(finished.detach().cpu().tolist()) - # continue with non-ending tokens - conti_idx = next_token != end_token - orig_order = orig_order[conti_idx] - generated = generated[conti_idx] - latent_z = latent_z[conti_idx] - - next_token = next_token[conti_idx] - context_tokens_tensor = context_tokens_tensor[conti_idx] - context_length_tensor = context_length_tensor[conti_idx] - batch_size = generated.shape[0] - - generated = torch.cat((generated, next_token.view(batch_size, 1)), dim=-1) - counter += 1 - - output_order.extend(orig_order.tolist()) - generated = generated.detach().cpu().tolist() - output_tokens_lists.extend(generated) - output_tokens_lists = [tokens[:tokens.index(end_token)] if end_token in tokens else tokens for tokens in output_tokens_lists] - - output_tokens_lists = [tokens for _,tokens in sorted(zip(output_order, output_tokens_lists))] - - return output_tokens_lists - -def latent_code_from_text_batch(texts, tokenizer_encoder, model_vae, args): - tokens_tensor_list = [] - for text in texts: - tokens = tokenizer_encoder.encode(text)[:510] - tokens_tensor_list.append(torch.tensor([101]+tokens+[102])) - - coded = pad_sequence(tokens_tensor_list, batch_first=True, padding_value=0).long() - with torch.no_grad(): - coded = coded.to(device) - pooled_hidden_fea = model_vae.encoder(coded, attention_mask=(coded > 0).float())[1] - mean, logvar = model_vae.encoder.linear(pooled_hidden_fea).chunk(2, -1) - - std = logvar.mul(0.5).exp() - eps = torch.zeros_like(std).normal_() - - latent_z = mean + torch.mul(eps, std)*args.std_scale - - return latent_z - -def text_from_latent_code_batch(latent_z, model_vae, args, tokenizer_decoder, prompt=None): - past = latent_z - batch_size = latent_z.shape[0] - bos_token = tokenizer_decoder.convert_tokens_to_ids(tokenizer_decoder.bos_token) - end_token = tokenizer_decoder.convert_tokens_to_ids(tokenizer_decoder.eos_token) - - if prompt is not None: - prompt = [[bos_token] + tokenizer_decoder.encode(text)[:-1] for text in prompt] - else: - prompt = [[bos_token]]*batch_size - - context_tokens_tensor = torch.tensor([[end_token]*args.max_out_length]*batch_size).to(device) # 2-d tensor - context_length_tensor = torch.tensor([1]*batch_size).to(device) - for i in range(batch_size): - context_tokens_tensor[i,:len(prompt[i])] = torch.tensor(prompt[i]).long().to(device) - context_length_tensor[i] = len(prompt[i]) - - # length = 128 # maximum length, but not used - out = sample_sequence_conditional_batch( - model=model_vae.decoder, - max_out_length= args.max_out_length, # Chunyuan: Fix length; or use to complete a sentence - context_tokens_tensor=context_tokens_tensor, - context_length_tensor=context_length_tensor, - latent_z=latent_z, - temperature=args.temperature, - top_k=args.top_k, - top_p=args.top_p, - repetition_penalty=args.repetition_penalty, - device=device - ) 
- - out_text = [] - for i, tokens in enumerate(out): - tokens = tokens[len(prompt[i]):] - tokens = tokens[:tokens.index(end_token)] if end_token in tokens else tokens - text = tokenizer_decoder.decode(tokens, clean_up_tokenization_spaces=True) - out_text.append(filter_noise(text)) - return out_text - - -def simulate_batch(model_vae, tokenizer_encoder, tokenizer_decoder, args, sent_inputs, prompt=None): - latent_z = latent_code_from_text_batch(sent_inputs, tokenizer_encoder, model_vae, args) - text_analogy = text_from_latent_code_batch(latent_z, model_vae, args, tokenizer_decoder, prompt=prompt) - return text_analogy - -def simulate_bz(model_vae, tokenizer_encoder, tokenizer_decoder, args, sent_inputs, prompt=None): - latent_z = latent_code_from_text_batch(sent_inputs, tokenizer_encoder, model_vae, args) - return latent_z - -def my_shuffle(x, index): - result = [] - for field in index: - result.append(x[field]) - return result - diff --git a/spaces/sparanoid/milky-green-sovits-4/cluster/__init__.py b/spaces/sparanoid/milky-green-sovits-4/cluster/__init__.py deleted file mode 100644 index f1b9bde04e73e9218a5d534227caa4c25332f424..0000000000000000000000000000000000000000 --- a/spaces/sparanoid/milky-green-sovits-4/cluster/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -import numpy as np -import torch -from sklearn.cluster import KMeans - -def get_cluster_model(ckpt_path): - checkpoint = torch.load(ckpt_path) - kmeans_dict = {} - for spk, ckpt in checkpoint.items(): - km = KMeans(ckpt["n_features_in_"]) - km.__dict__["n_features_in_"] = ckpt["n_features_in_"] - km.__dict__["_n_threads"] = ckpt["_n_threads"] - km.__dict__["cluster_centers_"] = ckpt["cluster_centers_"] - kmeans_dict[spk] = km - return kmeans_dict - -def get_cluster_result(model, x, speaker): - """ - x: np.array [t, 256] - return cluster class result - """ - return model[speaker].predict(x) - -def get_cluster_center_result(model, x,speaker): - """x: np.array [t, 256]""" - predict = model[speaker].predict(x) - return model[speaker].cluster_centers_[predict] - -def get_center(model, x,speaker): - return model[speaker].cluster_centers_[x] diff --git a/spaces/srikotha/bigscience-bloom/app.py b/spaces/srikotha/bigscience-bloom/app.py deleted file mode 100644 index e2baf29247fdd75903697d71a498e9de137f37bc..0000000000000000000000000000000000000000 --- a/spaces/srikotha/bigscience-bloom/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/bigscience/bloom").launch() \ No newline at end of file diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/adaptive_span/__init__.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/adaptive_span/__init__.py deleted file mode 100644 index e0a142a769360e1140bf814c532eaf841f1d52d8..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/adaptive_span/__init__.py +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import importlib -import os - -# automatically import any Python files in the current directory -cur_dir = os.path.dirname(__file__) -for file in os.listdir(cur_dir): - path = os.path.join(cur_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - mod_name = file[: file.find(".py")] if file.endswith(".py") else file - module = importlib.import_module(__name__ + "." + mod_name) diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/quantization/scalar/ops.py b/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/quantization/scalar/ops.py deleted file mode 100644 index c74f530380b393ffc53ecfb1398000079495772f..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/fairseq/modules/quantization/scalar/ops.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch - - -def emulate_int(w, bits, method, scale=None, zero_point=None): - q = globals()[f"emulate_int8_{method}"] - return q(w, scale=scale, zero_point=zero_point, bits=bits) - - -def quantize(w, scale, zero_point, bits=8): - # In the default behavior, max_val = 255. - max_val = 2 ** bits - 1 - return ( - torch.clamp(torch.round(w / scale + zero_point), 0, max_val) - zero_point - ) * scale - - -def emulate_int8_histogram(w, scale=None, zero_point=None, bits=8): - if scale is None: - obs = torch.quantization.observer.HistogramObserver() - obs.to(device=w.device) - _ = obs(w.float()) - scale, zero_point = obs.calculate_qparams() - scale = scale.cuda().type_as(w) - zero_point = zero_point.cuda().type_as(w) - return quantize(w, scale, zero_point, bits=bits), scale, zero_point - - -def emulate_int8_channel(w, scale=None, zero_point=None, bits=8): - if scale is None: - obs = torch.quantization.observer.PerChannelMinMaxObserver( - ch_axis=-1, qscheme=torch.per_channel_symmetric - ) - obs.to(device=w.device) - _ = obs(w) - scale, zero_point, ch_axis = obs.get_qparams() - scale = scale.cuda().type_as(w) - zero_point = zero_point.cuda().type_as(w) - return quantize(w, scale, zero_point, bits=bits), scale, zero_point - - -def emulate_int8_tensor(w, scale=None, zero_point=None, bits=8): - if scale is None: - obs = torch.quantization.observer.MinMaxObserver() - obs.to(device=w.device) - _ = obs(w) - scale, zero_point = obs.calculate_qparams() - scale = scale.cuda().type_as(w) - zero_point = zero_point.cuda().type_as(w) - return quantize(w, scale, zero_point, bits=bits), scale, zero_point diff --git a/spaces/stomexserde/gpt4-ui/Examples/Crack Schemaplic V 3.0 TOP.md b/spaces/stomexserde/gpt4-ui/Examples/Crack Schemaplic V 3.0 TOP.md deleted file mode 100644 index 9498714974993985a572c8264a24ce68b7744cf4..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/Crack Schemaplic V 3.0 TOP.md +++ /dev/null @@ -1,33 +0,0 @@ - -

What is Schemaplic v 3.0 and How to Use It?

-

Schemaplic v 3.0 is software that allows you to create and simulate electrical schematics in a simple and intuitive way. It is a powerful tool for electrical engineering students, teachers, and professionals who want to design, test, and troubleshoot electrical circuits.

-

In this article, we will show you some of the features and benefits of Schemaplic v 3.0, as well as how to download, install and use it.

-

Crack schemaplic v 3.0


Download Zip > https://urlgoal.com/2uI6jV



-

Features and Benefits of Schemaplic v 3.0

-

Schemaplic v 3.0 offers a range of features and benefits for creating and simulating electrical schematics, such as:

-
    -
  • An unlimited number of folios for your schematics, with a large library of components to choose from.
  • -
  • A simulation mode that allows you to test your circuits in real time, with analog and temporal (chronogram) analysis, electrical safety and fault detection.
  • -
  • A technological resource that helps you find information and solutions for your projects.
  • -
  • An integration of pedagogical exercises to enhance your learning and skills.
  • -
  • A custom component editor that lets you create your own components and applications.
  • -
-

How to Download and Install Schemaplic v 3.0

-

To download and install Schemaplic v 3.0, you need to follow these steps:

-
    -
  1. Go to the official website of Schemaplic at https://www.schemaplic.fr/telechargements/ [^1^] and click on the button "Accédez à la version gratuite" (Access the free version).
  2. -
  3. Fill in the form with your name, email address, phone number and country, and click on "Envoyer" (Send).
  4. -
  5. You will receive an email with a link to download the software. Click on the link and save the file on your computer.
  6. -
  7. Run the file and follow the instructions to install the software. You will need to enter your email address and a license key that you will receive by email.
  8. -
  9. Once the installation is complete, you can launch the software from your desktop or start menu.
  10. -
-

How to Use Schemaplic v 3.0

-

To use Schemaplic v 3.0, you need to follow these steps:

-
    -
  1. Open the software and choose a project type from the menu: "Nouveau projet" (New project), "Ouvrir un projet" (Open a project), "Exercices pédagogiques" (Pedagogical exercises) or "Ressources technologiques" (Technological resources).
  2. -
  3. If you choose "Nouveau projet" or "Ouvrir un projet", you will see a blank workspace where you can create or edit your schematic. You can use the toolbar on the left to select components from different categories, such as power sources, switches, relays, motors, sensors, etc. You can also use the toolbar on the top to zoom in or out, undo or redo actions, copy or paste elements, etc.
  4. -
  5. To place a component on your schematic, simply drag it from the toolbar to the workspace. To connect components, click on their terminals and drag a wire between them. To delete a component or a wire, right-click on it and choose "Supprimer" (Delete).
  6. -
  7. To edit a component's properties, such as its name, value or symbol, double-click on it and modify the fields in the pop-up window. To edit a wire's properties, such as its color or thickness, right-click on it and choose "Propriétés" (Properties).
  8. -
  9. To simulate your schematic, click on the button "Simuler" (Simulate) on the top toolbar. You will see your circuit come to life with animated symbols, voltage and current values, chronograms and fault indicators. You can also use the buttons on the bottom toolbar to pause, resume or stop the simulation, change the simulation

    -
    -
    \ No newline at end of file diff --git a/spaces/stomexserde/gpt4-ui/Examples/HACK Windows Embedded Industry Pro 8.1 X64 En-US OCT 2018 Gen2.md b/spaces/stomexserde/gpt4-ui/Examples/HACK Windows Embedded Industry Pro 8.1 X64 En-US OCT 2018 Gen2.md deleted file mode 100644 index 126300934699482420ba2ceee8a89fe62bc5c731..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/HACK Windows Embedded Industry Pro 8.1 X64 En-US OCT 2018 Gen2.md +++ /dev/null @@ -1,62 +0,0 @@ -
    -

    HACK Windows Embedded Industry Pro 8.1 X64 En-US OCT 2018 Gen2

    | HACK Windows Embedded Industry Pro 8.1 X64 En-US OCT 2018 Gen2 | |

    Windows Embedded Industry Pro 8.1 is a special edition of Windows 8.1 that is designed for embedded devices such as ATMs, kiosks, POS terminals, medical equipment, and industrial machines. It offers many features that allow users to customize, secure, and optimize their systems for specific scenarios. However, it also has some limitations and restrictions that might prevent users from enjoying the full potential of their devices.

    -

    HACK Windows Embedded Industry Pro 8.1 X64 En-US OCT 2018 Gen2


    Download Filehttps://urlgoal.com/2uIaCu



    In this article, we will show you how to hack Windows Embedded Industry Pro 8.1 X64 En-US OCT 2018 Gen2, which is a pre-activated ISO image that contains the latest updates and enhancements for this edition. By hacking this edition, you will be able to access enterprise features, run productivity apps, bypass activation, or modify the system according to your needs and preferences.

    We will also discuss why you might want to hack Windows Embedded Industry Pro 8.1

    Before we proceed, let us first explain what is Windows Embedded Industry Pro 8.1 and what are its features.

    -

    What is Windows Embedded Industry Pro 8.1?

    -

    Windows Embedded Industry Pro 8.1 is an embedded edition of Windows 8.1 that extends the OS to a range of edge devices across various industries. It is based on the same core as Windows 8.1 Pro, but it has some additional features and capabilities that make it suitable for embedded scenarios. Some of these features are:

    -
      -
    • Lockdown: This feature allows users to restrict access to certain parts of the system, such as the Start screen, the taskbar, the charms bar, the keyboard, the mouse, or the power button. This can help prevent unauthorized or accidental changes to the system configuration or user experience.
    • -
    • Branding: This feature allows users to customize the appearance and behavior of the system, such as the boot logo, the wallpaper, the color scheme, the sounds, or the notifications. This can help create a consistent and professional image for the device and the organization.
    • -
    • Security: This feature allows users to protect the system and the data from unauthorized access or tampering, such as encryption, BitLocker, AppLocker, Secure Boot, or Trusted Boot. This can help ensure the integrity and confidentiality of the system and the data.
    • -
    • Customization: This feature allows users to tailor the system to their specific needs and preferences, such as language packs, regional settings, keyboard layouts, or input methods. This can help improve the usability and accessibility of the system for different users and markets.
    • -
    -

    These are just some of the features that Windows Embedded Industry Pro 8.1 offers. There are many more features that you can explore and use to enhance your device and your business. However, not all features are available or enabled by default. Some features require activation, licensing, or configuration before they can be used. This is where hacking comes in.

    -

    -

    Why hack Windows Embedded Industry Pro 8.1?

    -

    Hacking Windows Embedded Industry Pro 8.1 means modifying or bypassing some of the limitations or restrictions that are imposed by Microsoft or by the device manufacturer on this edition. There are many reasons why someone might want to hack Windows Embedded Industry Pro 8.1, such as:

    -
      -
    • Accessing enterprise features: Some of the features that are available in Windows 8.1 Enterprise edition are not available or enabled in Windows Embedded Industry Pro 8.1 edition. For example, features such as DirectAccess, BranchCache, App-V, or UE-V are not included or activated in Windows Embedded Industry Pro 8.1 edition. By hacking this edition, you might be able to access these features and use them on your device .
    • -
    • Running productivity apps: Some of the apps that are available in Windows Store or other sources are not compatible or supported on Windows Embedded Industry Pro 8.1 edition. For example, apps such as Office 365, Skype, OneDrive, or Netflix are not designed or tested for this edition. By hacking this edition, you might be able to run these apps and use them on your device .
    • -
    • Bypassing activation: Some of the devices that run Windows Embedded Industry Pro 8.1 edition require activation before they can be used fully. For example, devices that use a volume license key (VLK) need to connect to a Key Management Service (KMS) server every 180 days to renew their activation status . By hacking this edition, you might be able to bypass this requirement and use your device without activation .
    • -
    • Modifying the system: Some of the settings or components that are part of Windows Embedded Industry Pro 8.1 edition are not accessible or editable by default. For example, settings such as registry entries, group policies, services, drivers, or files are locked or hidden by default. By hacking this edition, you might be able to modify these settings or components and change them according to your needs and preferences .
    • -
    -

    These are just some of the possible motivations for hacking Windows Embedded Industry Pro 8.1. There might be other reasons that you have for hacking Windows Embedded Industry Pro 8.1. However, hacking this edition is not without challenges and risks. Some of the challenges and risks that you might face are:

    -
      -
    • Finding a compatible activator: Not all activators that work for Windows 8.1 Pro or Enterprise edition work for Windows Embedded Industry Pro 8.1 edition. Some activators might fail to activate this edition, or might cause errors or problems on the system. You need to find an activator that is compatible and reliable for this edition, and that does not contain any malware or viruses.
    • -
    • Avoiding detection: Microsoft or the device manufacturer might detect that you have hacked Windows Embedded Industry Pro 8.1 edition, and might take actions to prevent or punish you. For example, they might block your access to updates, services, or support, or they might revoke your license, disable your device, or sue you for violating the terms and conditions of use. You need to avoid detection by using stealthy and safe methods and tools, and by following the best practices and precautions.
    • -
    • Dealing with updates: Updates are important for keeping your system secure and up-to-date, but they might also interfere with your hacking methods or results. For example, updates might patch the vulnerabilities that you have exploited, or they might overwrite the changes that you have made. You need to deal with updates by choosing whether to install them or not, and by checking their compatibility and impact on your system.
    • -
    -

    These are just some of the challenges and risks that you might encounter when hacking Windows Embedded Industry Pro 8.1. There might be other issues that you need to consider and resolve depending on your situation and goals. Therefore, you need to be careful and cautious when hacking this edition, and do proper research, backup, testing, and troubleshooting before and after attempting any hacking method.

    -

    How to hack Windows Embedded Industry Pro 8.1?

    -

    Now that we have discussed why and how to hack Windows Embedded Industry Pro 8.1, let us look at some of the methods and tools that can be used to hack this edition. There are many methods and tools that can be found online or offline, but we will focus on three of the most popular and effective ones: Microsoft Toolkit, KMSpico, and other KMS emulators. These methods and tools are based on the Key Management Service (KMS) technology that Microsoft uses to activate volume licensed editions of Windows and Office. By using these methods and tools, you can emulate a KMS server on your device or network, and activate your Windows Embedded Industry Pro 8.1 edition without connecting to a real KMS server.

    -

    Microsoft Toolkit

    -

    Microsoft Toolkit is a multifunctional tool that can activate Windows Embedded Industry Pro 8.1 edition using either KMS activation or EZ-Activator (AutoKMS) activation. KMS activation requires you to manually enter a KMS key for your edition, and then connect to a KMS server (either online or offline) to activate your system. EZ-Activator (AutoKMS) activation automatically installs a KMS key for your edition, and then creates a scheduled task that runs a KMS emulator every 24 hours to renew your activation status.

    -

    To use Microsoft Toolkit to activate Windows Embedded Industry Pro 8.1 edition, follow these steps:

    -
      -
    1. Download Microsoft Toolkit from this link. Make sure you download the latest version (currently 2.6.4) from a trusted source.
    2. -
    3. Extract the downloaded file using WinRAR or 7-Zip. You will get a folder named Microsoft Toolkit.
    4. -
    5. Run the Microsoft Toolkit.exe file as administrator. You will see a window with two icons: one for Windows and one for Office.
    6. -
    7. Click on the Windows icon to open the Windows Toolkit tab.
    8. -
    9. Click on the Activation tab.
    10. -
    11. Choose either KMS Activation or EZ-Activator (AutoKMS) activation according to your preference.
    12. -
    13. If you choose KMS Activation, click on Install to install a KMS key for your edition, and then click on Activate to connect to a KMS server (either online or offline) to activate your system.
    14. -
    15. If you choose EZ-Activator (AutoKMS) activation, click on Install/Uninstall KMService to install AutoKMS on your system, and then wait for the activation process to complete.
    16. -
    17. Once the activation is done, you will see a message saying "Product Activation Successful". You can also check your activation status by clicking on Check in the Information tab.
    18. -
    -

    Congratulations! You have successfully activated backup, testing, and caution before and after attempting any hacking method. We also advise that you respect the rights and responsibilities of Microsoft and the device manufacturer, and that you use your hacked system or device for legitimate and ethical purposes only.

    -

    We hope that this article has been helpful and informative for you. If you have any questions or comments, please feel free to contact us or leave a comment below. Thank you for reading!

    -

    FAQs

    -

    Here are some frequently asked questions that readers might have about hacking Windows Embedded Industry Pro 8.1 X64 En-US OCT 2018 Gen2:

    -
      -
    1. Is hacking Windows Embedded Industry Pro 8.1 legal?
    2. -

      No, hacking Windows Embedded Industry Pro 8.1 is not legal, as it violates the terms and conditions of use that Microsoft or the device manufacturer has set for this edition. By hacking this edition, you might face legal consequences, such as fines, lawsuits, or criminal charges.

      -
    3. Is hacking Windows Embedded Industry Pro 8.1 safe?
    4. -

      No, hacking Windows Embedded Industry Pro 8.1 is not safe, as it exposes your system and your data to potential threats or attacks from hackers, viruses, malware, or spyware. By hacking this edition, you might compromise the security and privacy of your system and your data.

      -
    5. Can I update my hacked Windows Embedded Industry Pro 8.1?
    6. -

      Yes, you can update your hacked Windows Embedded Industry Pro 8.1, but you need to be careful and cautious when doing so, as updates might interfere with your hacking methods or results. For example, updates might patch the vulnerabilities that you have exploited, or they might overwrite the changes that you have made. You need to deal with updates by choosing whether to install them or not, and by checking their compatibility and impact on your system.

      -
    7. Can I run games on my hacked Windows Embedded Industry Pro 8.1?
    8. -

      Yes, you can run games on your hacked Windows Embedded Industry Pro 8.1, but you need to be aware and prepared for some possible issues or limitations. For example, some games might not be compatible or supported on this edition, or they might require higher hardware or software specifications than your device can provide. You need to check the requirements and compatibility of the games that you want to play before installing or running them on your system.

      -
    9. Where can I find more information about hacking Windows Embedded Industry Pro 8.1?
    10. -

      You can find more information about hacking Windows Embedded Industry Pro 8.1 online or offline, by searching on Google, YouTube, Reddit, forums, blogs, or other sources. However, you need to be careful and cautious when accessing these sources, as some of them might contain inaccurate, outdated, or harmful information or content. You need to verify the credibility and reliability of these sources before following their advice or instructions.

      -

    -
    -
    \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Eclipse Ucnv884 Boot Cd Free Full Download __EXCLUSIVE__.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Eclipse Ucnv884 Boot Cd Free Full Download __EXCLUSIVE__.md deleted file mode 100644 index ecab09ff47983862c2ef47c44cf065b1b718e095..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Eclipse Ucnv884 Boot Cd Free Full Download __EXCLUSIVE__.md +++ /dev/null @@ -1,23 +0,0 @@ -
    -

    Eclipse Ucnv884 Boot CD: A Solution for Eclipse DVD/CD/AV/NAVI System Users

    -

    If you have an Eclipse DVD/CD/AV/NAVI system installed in your car, you may have encountered some problems with it, such as not playing, displaying an error message, or asking for an activation code. These issues are caused by a battery change or a low battery voltage that resets the system's memory. To fix them, you need to use a boot CD that contains the software and firmware of the system.

    -

    eclipse ucnv884 boot cd free full download


    Downloadhttps://cinurl.com/2uEYZM



    -

    However, finding a boot CD for your Eclipse system can be challenging, as they are not widely available online or in stores. Some users have tried to create their own boot CDs using various methods, but they often fail or cause more damage to the system. Moreover, some websites claim to offer free downloads of boot CDs, but they are actually scams that may infect your computer with viruses or malware.

    -

    Fortunately, there is a reliable and safe way to get a boot CD for your Eclipse system: Eclipse Ucnv884 Boot CD. This is a software program that can create a bootable CD or USB drive for your Eclipse system using your own computer. It is compatible with various models of Eclipse systems, such as UCNV884, UCNV884RE, AVN7000, AVN5500, AVN6600, and more. It can also activate your system without requiring any serial number or keygen.

    -

    To use Eclipse Ucnv884 Boot CD, you need to download it from the official website[^1^] and follow the instructions provided. You will need a blank CD or USB drive, a computer with a CD/DVD burner or a USB port, and an internet connection. The program will guide you through the steps of creating the boot CD or USB drive and using it to boot up your Eclipse system. The process is simple and fast, and it will restore your system to its original state.

    -

    Eclipse Ucnv884 Boot CD is a must-have tool for any Eclipse DVD/CD/AV/NAVI system user who wants to fix their system issues and enjoy its features. It is easy to use, safe to download, and effective in solving the problems. With Eclipse Ucnv884 Boot CD, you can save time and money and avoid frustration and disappointment.

    - -

    Eclipse DVD/CD/AV/NAVI systems are popular among car owners who want to enjoy multimedia entertainment and navigation features in their vehicles. They offer high-quality sound and video, touch-screen control, GPS navigation, Bluetooth connectivity, and more. They also have a sleek and stylish design that matches the interior of the car.

    -

    -

    However, these systems are also prone to some common problems that can affect their performance and functionality. Some of these problems include:

    -
      -
    • The system does not play any disc or USB device.
    • -
• The system displays an error message such as "Please insert correct map disc" or "System startup error".
    • -
    • The system asks for an activation code or a serial number that is not provided with the product.
    • -
    • The system freezes or shuts down randomly.
    • -
    • The system loses its settings or memory after a battery change or a low battery voltage.
    • -
    -

    These problems can be very frustrating and annoying for the users, as they prevent them from enjoying the features of the system. They can also be costly and time-consuming to fix, as they may require professional service or replacement of the system.

    -

    That is why Eclipse Ucnv884 Boot CD is a great solution for these problems. It is a software program that can create a bootable CD or USB drive that contains the software and firmware of the Eclipse system. It can also activate the system without requiring any serial number or keygen. By using this boot CD or USB drive, the users can boot up their Eclipse system and restore it to its original state.

    -
    -
    \ No newline at end of file diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Glee 2 Temporada.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Glee 2 Temporada.md deleted file mode 100644 index ccf5906fe439c0e7c2a9ff5e55d17ba0c0e775af..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Glee 2 Temporada.md +++ /dev/null @@ -1,6 +0,0 @@ -

    glee 2 temporada


    Download ✦✦✦ https://cinurl.com/2uEXE5



    -
-
    -
    -
    -

    diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Gta 5 Prop List.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Gta 5 Prop List.md deleted file mode 100644 index 3e6d0adbb5d647a539d4e64193bc7d5a33a7569e..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Gta 5 Prop List.md +++ /dev/null @@ -1,33 +0,0 @@ - -

    GTA 5 Prop List: Everything You Need to Know About Objects in GTA 5

    -

    If you are a fan of GTA 5, you might be interested in learning more about the objects or props that you can find, use, or spawn in the game. Props are any items that are not part of the map, such as vehicles, weapons, furniture, plants, etc. Props can be used for various purposes, such as creating custom maps, enhancing gameplay, or just having fun.

    -

    gta 5 prop list


    DOWNLOAD ››› https://cinurl.com/2uEY7w



    -

    In this article, we will show you how to find, browse, and use props in GTA 5. We will also provide you with some useful resources and tools that will help you explore the full list of props available in the game.

    -

    How to Find Props in GTA 5

    -

    There are several ways to find props in GTA 5, depending on what you want to do with them. Here are some of the options:

    -
      -
    • If you want to see what props are in a certain location, you can use the Object Spooner mod by MAFINS. This mod allows you to spawn and manipulate objects in the game world. You can access it by pressing F9 and then selecting Object Spooner from the menu. You can then use the search function to find props by name or category.
    • -
• If you want to see what props are used in a certain interior, such as Michael's house or a nightclub, you can use the enableInteriorProp, disableInteriorProp and isInteriorPropEnabled functions on the client side. These functions allow you to toggle different props that are part of an interior on and off. You can find a list of interior props on the RAGE Multiplayer Wiki.
    • -
    • If you want to see what props are available in GTA Online, such as apartments, garages, offices, warehouses, etc., you can use the GTA Online Properties Database by GTABase.com. This database allows you to filter and sort by property type, location, price, website, style, vehicle capacity and more. You can also see pictures and details of each property.
    • -
    -

    How to Browse Props in GTA 5

    -

    If you want to browse props in GTA 5 by category, size, DLC or hash value, you can use some of the following resources and tools:

    -
      -
    1. Pleb Masters: Forge - GTA V Objects List by Pleb Masters. This is a data browser and search tool that allows you to view and filter props by various criteria. You can also see pictures and details of each prop.
    2. -
    3. GTA-5 Hash List by gtahash.ru. This is a website that provides a list of objects, cars, skins, weapons and animations with pictures and hash values. You can also download a CSV file with all the data.
    4. -
    5. GTA V Forge by MAFINS. This is a mod that allows you to create custom maps using objects from the game. You can access it by pressing F9 and then selecting Map Editor from the menu. You can then use the search function to find props by name or category.
    6. -
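If you grab one of these prop lists as a data file (for example, the CSV download mentioned for the GTA-5 Hash List above), you can also search it offline with a few lines of Python instead of browsing the websites. The snippet below is only a minimal sketch: the file name gta5_props.csv and the name/hash column headers are assumptions about how such an export might be laid out, so adjust them to match the file you actually download.

```python
import csv
import sys

def find_props(csv_path, keyword):
    """Return (name, hash) pairs whose prop name contains the keyword."""
    matches = []
    with open(csv_path, newline="", encoding="utf-8") as fh:
        for row in csv.DictReader(fh):
            # Column names are assumed; rename "name"/"hash" to match your export.
            if keyword.lower() in row["name"].lower():
                matches.append((row["name"], row["hash"]))
    return matches

if __name__ == "__main__":
    keyword = sys.argv[1] if len(sys.argv) > 1 else "bench"
    for name, prop_hash in find_props("gta5_props.csv", keyword):
        print(f"{name}\t{prop_hash}")
```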
    -

    How to Use Props in GTA 5

    -

    If you want to use props in GTA 5 for single player or online modes, you can use some of the following methods:

    -
      -
• If you want to spawn props in single player mode, you can use the Object Spooner mod by MAFINS or the Menyoo PC mod by MAFINS and OHMYMODZ. These mods allow you to spawn and manipulate objects in the game world. You can access them by pressing F9 or F8 respectively and then selecting the Object Spooner option from the menu.
    • -
    • If you want to spawn props in online mode, you will need a mod menu that supports object spawning. However, this is not recommended as it may result in a ban from Rockstar Games.
    • -
    • If you want to create custom maps using props in single player or online mode, you can use the GTA V Forge mod by MAFINS or the Map Editor mod by Guadmaz. These mods allow you to create custom maps using objects from the game. You can access them by pressing F9 or F7 respectively and then selecting Map Editor from the menu.
    • -
    -
    GTA 5 Prop List: A Useful Resource for GTA 5 Fans
    -

In conclusion, the GTA 5 prop list is a useful resource for GTA 5 fans who want to learn more about the objects or props that they can find, use, or spawn in the game. Props can be used for various purposes such as creating custom maps, enhancing gameplay, or just having fun.

    -

    -

    So what are you waiting for? Start exploring the GTA 5 prop list today and enjoy this amazing game.

    -


    -
    -
    \ No newline at end of file diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/bricks/scale.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/bricks/scale.py deleted file mode 100644 index c905fffcc8bf998d18d94f927591963c428025e2..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/mmcv/cnn/bricks/scale.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - - -class Scale(nn.Module): - """A learnable scale parameter. - - This layer scales the input by a learnable factor. It multiplies a - learnable scale parameter of shape (1,) with input of any shape. - - Args: - scale (float): Initial value of scale factor. Default: 1.0 - """ - - def __init__(self, scale=1.0): - super(Scale, self).__init__() - self.scale = nn.Parameter(torch.tensor(scale, dtype=torch.float)) - - def forward(self, x): - return x * self.scale diff --git a/spaces/themanas021/Youtube-Video-Summarizer/summarize.py b/spaces/themanas021/Youtube-Video-Summarizer/summarize.py deleted file mode 100644 index 56ea4bf5633ac81036e325db563aeccff610b2a8..0000000000000000000000000000000000000000 --- a/spaces/themanas021/Youtube-Video-Summarizer/summarize.py +++ /dev/null @@ -1,36 +0,0 @@ -from youtube_transcript_api import YouTubeTranscriptApi -from transformers import AutoTokenizer, AutoModelForSeq2SeqLM - -def Summarizer(link, model): - - video_id = link.split("=")[1] - - try: - transcript = YouTubeTranscriptApi.get_transcript(video_id) - FinalTranscript = ' '.join([i['text'] for i in transcript]) - - if model == "Pegasus": - checkpoint = "google/pegasus-large" - elif model == "mT5": - checkpoint = "csebuetnlp/mT5_multilingual_XLSum" - elif model == "BART": - checkpoint = "sshleifer/distilbart-cnn-12-6" - - tokenizer = AutoTokenizer.from_pretrained(checkpoint) - model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint) - - - inputs = tokenizer(FinalTranscript, - max_length=1024, - truncation=True, - return_tensors="pt") - - summary_ids = model.generate(inputs["input_ids"]) - summary = tokenizer.batch_decode(summary_ids, - skip_special_tokens=True, - clean_up_tokenization_spaces=False) - - - return summary[0] - except Exception as e: - return "TranscriptsDisabled: Transcript is not available \nTry another video" \ No newline at end of file diff --git a/spaces/threestoneyang/vits-uma-genshin-honkai/Docker/vits.sh b/spaces/threestoneyang/vits-uma-genshin-honkai/Docker/vits.sh deleted file mode 100644 index 2b87f26eda96d3800b73b4a21b210c78888a2299..0000000000000000000000000000000000000000 --- a/spaces/threestoneyang/vits-uma-genshin-honkai/Docker/vits.sh +++ /dev/null @@ -1,20 +0,0 @@ -#!/bin/bash -run() { - echo -e "\033[32m已完成初始化,启动服务...\033[0m" - python3 /app/vits-uma-genshin-honkai/app.py -} -install() { - echo -e "\033[33m正在初始化:安装依赖....\033[0m" - pip install -r /app/vits-uma-genshin-honkai/requirements.txt -i https://mirrors.ustc.edu.cn/pypi/web/simple - echo -e "\033[33m正在下载模型....\033[0m" - rm -f /app/vits-uma-genshin-honkai/model/G_953000.pth - wget -O /app/vits-uma-genshin-honkai/model/G_953000.pth https://huggingface.co/spaces/ikechan8370/vits-uma-genshin-honkai/resolve/main/model/G_953000.pth - echo -e "\033[32m初始化完成!\033[0m" - run -} - -if [ ! 
-f "/app/vits-uma-genshin-honkai/model/G_953000.pth" ] || [ "$(stat -c%s "/app/vits-uma-genshin-honkai/model/G_953000.pth")" -lt 10000 ]; then - install -else - run -fi diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Download Resetter Epson L120 Full [HOT] Crack.md b/spaces/tialenAdioni/chat-gpt-api/logs/Download Resetter Epson L120 Full [HOT] Crack.md deleted file mode 100644 index efa26b6e1a8c6858c16361c861c27e5b3338793c..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Download Resetter Epson L120 Full [HOT] Crack.md +++ /dev/null @@ -1,27 +0,0 @@ -
    -

    How to Download Resetter Epson L120 Full Crack for Free

    -

If you are using an Epson L120 printer, you may encounter some problems that require you to reset the printer. For example, you may see an error message that says "The Printer's Ink Pads are at the end of their service life" or "Epson L120 printer's red light blinking". These problems indicate that the ink pad counter has reached its limit and needs to be reset.

    -

    download resetter epson l120 full crack


    Downloadhttps://urlcod.com/2uK4bS



    -

    One way to reset the Epson L120 printer is to use a resetter tool that can reset the ink pad counter and fix the errors. However, some of these tools are not free and require you to pay for a license key or activation code. If you are looking for a free alternative, you can try to download resetter Epson L120 full crack from the internet.

    -

    Resetter Epson L120 full crack is a cracked version of the original resetter tool that can bypass the license verification and allow you to use it for free. However, downloading resetter Epson L120 full crack is not recommended for several reasons:

    -
      -
    • It may be illegal and violate the intellectual property rights of the original developer.
    • -
    • It may contain viruses, malware, or spyware that can harm your computer or steal your personal information.
    • -
    • It may not work properly or cause more problems to your printer.
    • -
    • It may not be compatible with your printer model or firmware version.
    • -
    • It may not be updated or supported by the original developer.
    • -
    -

    Therefore, it is better to avoid downloading resetter Epson L120 full crack and use a legitimate and safe resetter tool instead. Here are some of the best options that you can try:

    -

    Ultra Compressed Resetter Epson L120

    -

    Ultra Compressed Resetter Epson L120 is a free and reliable resetter tool that can reset the ink pad counter and fix the errors on your Epson L120 printer. It is easy to use and has a simple interface. You can download it from the official website of Ultra Compressed and follow the instructions to install and use it. You can also contact their support team if you have any questions or issues.

    -

    Google Drive Resetter Epson L120

    -

    Google Drive Resetter Epson L120 is another free and trustworthy resetter tool that can reset the ink pad counter and fix the errors on your Epson L120 printer. It is hosted on Google Drive and you can access it from any web browser or mobile app. You just need to sign in with your Google account and download the file. You can also share it with others who need it.

    -

    Dianisa Resetter Epson L120

    -

    Dianisa Resetter Epson L120 is a free and effective resetter tool that can reset the ink pad counter and fix the errors on your Epson L120 printer. It is available on the official website of Dianisa and you can download it easily. You can also find a detailed guide on how to use it on their website. You can also leave a comment or feedback if you have any suggestions or problems.

    -

    YouTube Resetter Epson L120

    -

    YouTube Resetter Epson L120 is a free and helpful resetter tool that can reset the ink pad counter and fix the errors on your Epson L120 printer. It is uploaded on YouTube by WIC Reset Utility and you can watch the video tutorial on how to use it. You can also find the download link in the description box of the video. You can also subscribe to their channel for more updates and tips.

    -

    Conclusion

    -

    In conclusion, downloading resetter Epson L120 full crack is not a good idea as it may cause more harm than good. Instead, you should use one of the free and legitimate resetter tools that we have mentioned above. They are safe, reliable, and easy to use. They can help you reset your Epson L120 printer and solve your problems quickly and effectively.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Age of Conquest 3 APK A Turn Based Grand Strategy Wargame for Android Devices.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Age of Conquest 3 APK A Turn Based Grand Strategy Wargame for Android Devices.md deleted file mode 100644 index 12e58009f55126dd3428c14a43fce4119e8647e0..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Age of Conquest 3 APK A Turn Based Grand Strategy Wargame for Android Devices.md +++ /dev/null @@ -1,112 +0,0 @@ - -

    Age of Conquest 3 APK: A Medieval Strategy Game for Android

    -

If you are a fan of strategy games, you might want to check out Age of Conquest 3 APK, a medieval Risk-like turn-based strategy game where you take the reins of a budding empire and struggle against other kingdoms for control of the world. In this article, we will tell you what Age of Conquest 3 is, how to download and install it on your Android device, what its features and benefits are, and some tips and tricks for playing it.

    -

    age of conquest 3 apk


    Download Filehttps://bltlly.com/2uOmCc



    -

    What is Age of Conquest 3?

    -

Age of Conquest 3 is a turn-based strategy game where you control a kingdom in a medieval world. You can choose from over 100 maps and scenarios, ranging from Europe, Asia, Africa, and America to fantasy worlds. You can also create your own maps and scenarios using the map editor. You can play the game in online multiplayer mode with up to a hundred players, or in single-player mode against the AI. The game supports competitions, tournaments, clans, chat, rankings, and statistics.

    -

    A turn-based strategy game where you control a kingdom

    -

    In Age of Conquest 3, you start with a capital city and a few provinces. Your goal is to expand your territory by conquering neighboring regions, while defending your own from enemy attacks. You can move your armies, build fortifications, recruit units, collect taxes, research technologies, and use diplomacy to influence other kingdoms. You have to balance your income and expenses, as well as your happiness and morale. The game ends when one kingdom controls all the regions, or when a certain number of turns have passed.

    -

    A game with over 100 maps and scenarios

    -

    Age of Conquest 3 offers a variety of maps and scenarios to choose from. You can play on historical maps based on real-world regions and events, such as Europe in the Middle Ages, Asia during the Mongol invasion, Africa during the colonial era, America during the American Revolution, or fantasy maps based on fictional worlds, such as Middle Earth, Westeros, or Narnia. You can also create your own maps and scenarios using the map editor. You can customize the size, shape, terrain, climate, resources, provinces, cities, kingdoms, units, technologies, rules, objectives, and more.

    -

    A game with online multiplayer and single-player modes

    -

Age of Conquest 3 supports both online multiplayer and single-player modes. You can play online with up to a hundred players from around the world, or in single-player mode against the AI. You can also customize the game settings, such as the number of turns, the difficulty level, the fog of war, and the random events. The game is compatible with Android devices running version 4.0 or higher.

    -

    age of conquest 3 apk download
    -age of conquest 3 apk mod
    -age of conquest 3 apk full version
    -age of conquest 3 apk free
    -age of conquest 3 apk latest version
    -age of conquest 3 apk android
    -age of conquest 3 apk obb
    -age of conquest 3 apk offline
    -age of conquest 3 apk hack
    -age of conquest 3 apk unlimited money
    -age of conquest 3 apk + data
    -age of conquest 3 apk revdl
    -age of conquest 3 apk rexdl
    -age of conquest 3 apk pure
    -age of conquest 3 apk uptodown
    -age of conquest 3 apk old version
    -age of conquest 3 apk no ads
    -age of conquest 3 apk cracked
    -age of conquest 3 apk premium
    -age of conquest 3 apk pro
    -age of conquest 3 apk for pc
    -age of conquest 3 apk for ios
    -age of conquest 3 apk for windows
    -age of conquest 3 apk for mac
    -age of conquest 3 apk for linux
    -age of conquest 3 apk gameplay
    -age of conquest 3 apk review
    -age of conquest 3 apk cheats
    -age of conquest 3 apk tips and tricks
    -age of conquest 3 apk strategy guide
    -age of conquest 3 apk best maps
    -age of conquest 3 apk best nations
    -age of conquest 3 apk best units
    -age of conquest 3 apk best scenarios
    -age of conquest 3 apk multiplayer mode
    -age of conquest 3 apk online mode
    -age of conquest 3 apk editor mode
    -age of conquest 3 apk custom maps
    -age of conquest 3 apk map editor tutorial
    -age of conquest 3 apk how to play
    -age of conquest 3 apk how to install
    -age of conquest 3 apk how to update
    -age of conquest 3 apk how to unlock all maps
    -age of conquest 3 apk how to get more coins
    -age of conquest 3 apk how to win wars
    -age of conquest 3 apk how to create alliances
    -age of conquest 3 apk how to use diplomacy
    -age of conquest 3 apk how to change difficulty level
    -age of conquest 3 apk how to change language settings

    -

    How to download and install Age of Conquest 3 APK?

    -

    If you want to play Age of Conquest 3 on your Android device, you have two options to download and install it. You can either get it from Google Play, or from other sources that offer the APK file. Here are the steps for both methods:

    -

    Download from Google Play

    -

    This is the easiest and safest way to get Age of Conquest 3 on your device. All you need is a Google account and an internet connection. Follow these steps:

    -
      -
1. Open Google Play on your device and search for "Age of Conquest 3".
    2. -
3. Select the game from the results and tap on "Install".
    4. -
    5. Wait for the download and installation to complete.
    6. -
7. Tap on "Open" to launch the game.
    8. -
    -

    Download from other sources

    -

    If you can't access Google Play, or you want to get the latest version of the game, you can download the APK file from other sources. However, this method requires some extra steps and precautions. Follow these steps:

    -
      -
    1. Go to a trusted website that offers Age of Conquest 3 APK, such as [APKPure](^1^) or [APKMirror](^2^).
    2. -
    3. Download the APK file to your device.
    4. -
5. Before installing the APK file, you need to enable "Unknown sources" in your device settings. This will allow you to install apps from sources other than Google Play. To do this, go to Settings > Security > Unknown sources and toggle it on.
    6. -
    7. Locate the APK file on your device using a file manager app and tap on it.
    8. -
    9. Follow the instructions on the screen to install the game.
    10. -
11. Tap on "Open" to launch the game.
    12. -
    -

    What are the features and benefits of Age of Conquest 3 APK?

    -

    Age of Conquest 3 APK is a fun and addictive strategy game that will challenge your skills and imagination. Here are some of the features and benefits that make this game worth playing:

    -

    A game with high-quality graphics and sound effects

    -

The game has colorful and detailed graphics that bring the medieval world to life. You can zoom in and out of the map and see the terrain types, the weather effects, the unit animations, and the battle scenes. The game also has realistic and immersive sound effects that enhance the gameplay. You can hear the marching of troops, the clashing of swords, the roaring of cannons, and the cheering of crowds.

    -

    A game with easy-to-use interface and controls

    -

The game has a simple and intuitive interface that lets you access all the features and options with ease. You can navigate through the menus, select your actions, view your information, and chat with other players with just a few taps. The game also has smooth, responsive controls that let you move your units, attack your enemies, build your structures, and manage your kingdom with just a swipe or a pinch.

    -

    A game with challenging and varied gameplay

    -

The game has dynamic and varied gameplay that will keep you entertained for hours. You can choose from over 100 maps and scenarios, each with its own history, geography, resources, kingdoms, units, technologies, rules, objectives, and challenges. You can also create your own maps and scenarios using the map editor. You can play online with up to a hundred players from around the world, or in single-player mode against the AI. You can also customize the game settings, such as the number of turns, the difficulty level, the fog of war, and the random events. The game is compatible with Android devices running version 4.0 or higher.

    -

    What are some tips and tricks for playing Age of Conquest 3 APK?

    -

    If you want to master Age of Conquest 3 APK, you need to have a good strategy and a keen eye for details. Here are some tips and tricks that can help you improve your game:

    -

    Choose your kingdom wisely based on your strategy

    -

    Before you start a game, you need to choose your kingdom from the available options. Each kingdom has its own strengths and weaknesses, such as the size, the location, the resources, the units, the technologies, and the diplomacy. You need to choose a kingdom that suits your strategy and play style. For example, if you want to play aggressively, you might want to choose a kingdom that has a large army and a strong economy. If you want to play defensively, you might want to choose a kingdom that has a good position and a lot of fortifications.

    -

    Manage your resources and troops carefully

    -

During the game, you need to manage your resources and troops carefully. You need to balance your income and expenses, as well as your happiness and morale. You need to collect taxes from your provinces, but not so much that they become unhappy and rebel. You need to recruit units from your cities, but not so many that they become overcrowded and starve. You need to research technologies from your capital, but not so fast that they become expensive and obsolete. You need to move your armies from one region to another, but not so far that they become exhausted and vulnerable.

    -

    Use diplomacy and alliances to your advantage

    -

    In Age of Conquest 3 APK, you are not alone in the world. There are other kingdoms that you can interact with through diplomacy and alliances. You can send messages, trade resources, make treaties, declare war, or form coalitions with other kingdoms. You need to use diplomacy and alliances to your advantage. You can use them to gain information, support, or access to other regions. You can also use them to weaken or eliminate your enemies. However, you also need to be careful of betrayal and backstabbing.

    -

    Conclusion

    -

    Age of Conquest 3 APK is a fun and addictive strategy game for Android that will challenge your skills and imagination. You can download and install it easily from various sources. You can enjoy the game with its many features and benefits. You can also improve your game with some tips and tricks. If you are looking for a medieval Risk-like turn-based strategy game where you take the reins of a budding empire and struggle against other kingdoms for control of the world, then Age of Conquest 3 APK is the game for you.

    -

    FAQs

    -

    Is Age of Conquest 3 APK free to play?

    -

    Yes, Age of Conquest 3 APK is free to play. However, some features and content may require in-app purchases or subscriptions.

    -

    How many players can play Age of Conquest 3 APK online?

    -

You can play Age of Conquest 3 APK online with up to a hundred players from around the world.

    -

    What are the system requirements for Age of Conquest 3 APK?

    -

    You need an Android device running version 4.0 or higher to play Age of Conquest 3 APK.

    -

    How can I contact the developers of Age of Conquest 3 APK?

    -

You can contact the developers of Age of Conquest 3 APK through their website, their email, or their social media accounts.

    -

    Where can I find more information about Age of Conquest 3 APK?

    -

You can find more information about Age of Conquest 3 APK on their website, their wiki, or their forum.

    \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Capture and Tame Dinosaurs in ARK Survival Evolved - 100MB APK for Android Devices.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Capture and Tame Dinosaurs in ARK Survival Evolved - 100MB APK for Android Devices.md deleted file mode 100644 index 552237cf82e89ffcda11b08932c47543066a17af..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/Capture and Tame Dinosaurs in ARK Survival Evolved - 100MB APK for Android Devices.md +++ /dev/null @@ -1,140 +0,0 @@ -
    -

    ARK: Survival Evolved APK 100MB Download - How to Play the Ultimate Dino-Adventure on Your Android Device

    -

    Do you love dinosaurs and want to experience a thrilling adventure in a prehistoric world? If yes, then you should try ARK: Survival Evolved, one of the most popular and immersive games in the genre of survival, crafting, and exploration. In this article, we will show you how to download and install ARK: Survival Evolved APK 100MB on your Android device, and how to play the game like a pro.

    -

    What is ARK: Survival Evolved?

    -

    A brief introduction to the game and its features

    -

    ARK: Survival Evolved is a game that challenges you to survive and thrive on a mysterious island, where you start out alone and unarmed. You will have to gather resources, craft tools, build shelters, and hunt for food. You will also encounter over 80 different types of dinosaurs and other primal creatures that you can capture, tame, breed, and ride. You can explore a massive and dynamic world that spans across land, sea, air, and even underground. You can also meet up with other players and form tribes to cooperate or compete with each other.

    -

    ark survival evolved apk 100mb download


Download: https://bltlly.com/2uOsig



    -

The game was originally released for PC and consoles in 2017, and has received many updates and expansions since then. It was ported to mobile devices in 2018, with some adjustments and optimizations for touchscreen controls and performance. The game is free to play on Android, but it offers optional in-app purchases and subscriptions for extra features and benefits.

    -

    The benefits of playing ARK: Survival Evolved on Android

    -

    Playing ARK: Survival Evolved on Android has several advantages over playing it on other platforms. Here are some of them:

• You can play the game anytime and anywhere, as long as you have an internet connection and a compatible device.
• You can enjoy the same gameplay and content as the PC and console versions, with some minor differences in graphics and interface.
• You can access exclusive servers and slots for mobile players, as well as preferred servers and slots for subscribers.
• You can use your Google account to log in, save your progress, and sync your data across devices.
• You can use voice chat, text chat, or emotes to communicate with other players in the game.
• You can customize your character, your dinosaurs, your buildings, and your settings according to your preferences.

    How to Download and Install ARK: Survival Evolved APK 100MB

    -

    The requirements for running the game on Android

    -

    Before you download and install ARK: Survival Evolved APK 100MB, you need to make sure that your Android device meets the minimum requirements for running the game. Here are the requirements:

• Your device must have Android 7.0 or higher installed.
• Your device must have at least 3 GB of RAM and 2.5 GB of free storage space.
• Your device must have a quad-core processor and a GPU that supports OpenGL ES 3.1 or higher.
• Your device must have a stable internet connection and a Google account.

    If your device does not meet these requirements, you may not be able to download, install, or play the game properly. You may also experience crashes, glitches, or lag while playing the game.
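If you are unsure what your phone actually has, you can query it from a computer with adb rather than digging through menus. The sketch below is a rough helper, not anything shipped with ARK: it assumes adb is installed and USB debugging is enabled, and it only reads the Android version, the device model, and the free space on the data partition; RAM and GPU capabilities still need to be checked against your device's spec sheet.

```python
import subprocess


def adb_prop(name: str) -> str:
    """Read a system property from the connected Android device via adb."""
    result = subprocess.run(
        ["adb", "shell", "getprop", name],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    # ARK needs Android 7.0 or higher.
    print("Android version:", adb_prop("ro.build.version.release"))
    print("Device model:", adb_prop("ro.product.model"))
    # Show free space on the data partition (the game needs about 2.5 GB free).
    subprocess.run(["adb", "shell", "df", "-h", "/data"], check=True)
```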

    -

    The steps to download and install the game from FileHippo or Google Play Store

    -

    There are two ways to download and install ARK: Survival Evolved APK 100MB on your Android device. You can either use FileHippo or Google Play Store. Here are the steps for each method:

    -

    Method 1: Using FileHippo

1. Go to FileHippo and search for ARK: Survival Evolved APK 100MB.
2. Select the latest version of the game and click on the download button.
3. Wait for the download to finish and then open the APK file on your device.
4. Allow the installation of unknown sources if prompted by your device.
5. Follow the instructions on the screen to install the game on your device.
6. Launch the game and log in with your Google account.

    Method 2: Using Google Play Store

1. Go to Google Play Store and search for ARK: Survival Evolved.
2. Select the official app of the game and click on the install button.
3. Wait for the installation to finish and then launch the game on your device.
4. Log in with your Google account and grant the necessary permissions for the game.

    The tips and tricks to optimize the game performance and reduce lag

    -

    ARK: Survival Evolved is a very demanding game that requires a lot of resources and processing power from your device. If you want to enjoy the game without any lag or slowdowns, you need to optimize the game settings and your device performance. Here are some tips and tricks to do that:

    -

    ark survival evolved android apk download
    -ark survival evolved mobile apk free download
    -ark survival evolved apk obb download
    -ark survival evolved apk mod download
    -ark survival evolved apk latest version download
    -ark survival evolved apk full version download
    -ark survival evolved apk offline download
    -ark survival evolved apk data download
    -ark survival evolved apk 100mb highly compressed
    -ark survival evolved apk 100mb no verification
    -ark survival evolved apk 100mb mediafıre
    -ark survival evolved apk 100mb android 1
    -ark survival evolved apk 100mb rexdl
    -ark survival evolved apk 100mb revdl
    -ark survival evolved apk 100mb uptodown
-ark survival evolved apk 100mb filehippo
    -ark survival evolved apk 100mb apkpure
    -ark survival evolved apk 100mb apkcombo
    -ark survival evolved apk 100mb wizcase
    -ark survival evolved apk 100mb andropalace
    -ark survival evolved apk 100mb android republic
    -ark survival evolved apk 100mb an1
    -ark survival evolved apk 100mb apkmody
    -ark survival evolved apk 100mb apkmirror
    -ark survival evolved apk 100mb apknite
    -ark survival evolved apk 100mb appvn
    -ark survival evolved apk 100mb blackmod
    -ark survival evolved apk 100mb bluestacks
    -ark survival evolved apk 100mb by ihackedit
    -ark survival evolved apk 100mb by rexdl.com

• Lower the graphics quality, resolution, and frame rate in the game settings according to your device capabilities.
• Turn off unnecessary features such as shadows, reflections, ambient occlusion, motion blur, etc. in the game settings.
• Use a Wi-Fi connection instead of a mobile data connection for playing the game online.
• Close all other apps and background processes that are running on your device while playing the game.
• Clear your device cache and memory regularly to free up some space and improve performance.
• Update your device software and drivers to the latest versions available.

    How to Play ARK: Survival Evolved on Android

    -

    The basics of survival, crafting, and building in the game

    -

    Once you start playing ARK: Survival Evolved on Android, you will find yourself stranded on a beach with nothing but your bare hands. You will need to learn how to survive, craft, and build in this harsh environment. Here are some of the basics you need to know:

• To survive, you need to monitor your health, hunger, thirst, stamina, oxygen, temperature, and weight. You can replenish these stats by eating food, drinking water, resting, breathing air, wearing clothes, and dropping items.
• To craft, you need to gather resources such as wood, stone, fiber, metal, etc. from trees, rocks, plants, animals, etc. You can use these resources to craft tools, weapons, armor, structures, etc. in your inventory or at a crafting station.
• To build, you need to place foundations on flat ground and snap walls, ceilings, doors, windows, etc. on them. You can also place furniture, appliances, decorations, etc. inside or outside your buildings. You can use different materials such as thatch, wood, stone, metal, etc. for building different types of structures.

    The best dinosaurs and creatures to tame and ride in the game

    -

    One of the most fun and exciting aspects of ARK: Survival Evolved is taming and riding dinosaurs and other creatures in the game. You can use them for transportation, combat, harvesting, breeding, and more. Here are some of the best dinosaurs and creatures to tame and ride in the game:

• Triceratops: A herbivorous dinosaur that can charge at enemies with its horns, knock down trees with its head, and carry a lot of weight. It is easy to tame and can be ridden with a saddle.
• Raptor: A carnivorous dinosaur that can run fast, jump high, and pounce on prey. It is agile and can be ridden without a saddle.
• Pteranodon: A flying reptile that can soar through the air, dive at enemies, and grab small creatures. It is useful for scouting and traveling long distances. It can be ridden with a saddle.
• T-Rex: A fearsome carnivorous dinosaur that can roar, bite, and stomp on enemies. It is powerful and can be ridden with a saddle.
• Brontosaurus: A gigantic herbivorous dinosaur that can swing its tail, crush enemies, and harvest resources. It is durable and can be ridden with a platform saddle.
• Woolly Mammoth: A hairy elephant-like creature that can charge, gore, and trumpet at enemies. It is strong and can be ridden with a saddle.
• Saber-Toothed Tiger: A furry carnivorous cat that can run fast, leap at enemies, and inflict bleeding damage. It is stealthy and can be ridden without a saddle.
• Argentavis: A large flying bird that can carry medium-sized creatures, attack enemies, and scavenge corpses. It is versatile and can be ridden with a saddle.
• Spinosaurus: A semi-aquatic carnivorous dinosaur that can switch between bipedal and quadrupedal modes, swim fast, and bite hard. It is adaptable and can be ridden with a saddle.
• Quetzalcoatlus: A colossal flying reptile that can carry large creatures, structures, and players. It is majestic and can be ridden with a platform saddle.

    The best ways to join and cooperate with other players in the game

    -

    ARK: Survival Evolved is not only a solo game, but also a multiplayer game. You can join and cooperate with other players in the game to have more fun and success. Here are some of the best ways to do that:

• Join a server: You can choose from thousands of servers hosted by other players or official providers. You can filter the servers by region, mode, map, rules, etc. You can also create your own server if you want.
• Join a tribe: You can form or join a tribe with other players who share your goals and interests. You can share resources, buildings, dinosaurs, chat messages, etc. with your tribe members. You can also ally or war with other tribes.
• Trade with other players: You can trade resources, items, dinosaurs, etc. with other players who are willing to exchange them. You can use chat or voice to negotiate the terms of the trade. You can also use vending machines or auction houses to facilitate the trade.
• Participate in events: You can participate in various events that are organized by the developers or the community. These events may include special challenges, rewards, themes, etc. You can also create your own events if you want.
• Have fun with other players: You can have fun with other players by chatting, joking, role-playing, exploring, fighting, racing, etc. You can also use emotes, gestures, skins, costumes, etc. to express yourself.

    Conclusion

    -

    A summary of the main points and a call to action for the readers

    -

    In conclusion, ARK: Survival Evolved is an amazing game that lets you experience a thrilling adventure in a prehistoric world full of dinosaurs and other creatures. You can download and install ARK: Survival Evolved APK 100MB on your Android device by following the steps we have provided in this article. You can also optimize the game performance and reduce lag by following the tips we have given you. You can also play the game like a pro by learning the basics of survival, crafting, building, taming, and cooperating with other players. You can also have fun with other players by joining servers, tribes, events, and more. So what are you waiting for? Download ARK: Survival Evolved APK 100MB today and start your ultimate dino-adventure on your Android device!

    -

    FAQs

    -

    Q1: How much storage space does ARK: Survival Evolved take on Android?

    -

    A1: ARK: Survival Evolved APK 100MB is a compressed version of the game that takes only 100 MB of storage space on your device. However, you will need to download additional data and updates when you launch the game for the first time. The total size of the game may vary depending on your device and settings, but it is usually around 2 GB.

    -

    Q2: How can I play ARK: Survival Evolved offline on Android?

    -

    A2: ARK: Survival Evolved is an online game that requires an internet connection to play. However, you can play the game offline by using the single-player mode. To do that, you need to select the single-player option from the main menu and create your own local world. You can customize the settings and rules of your world as you like. You can also save and load your progress anytime.

    -

    Q3: How can I update ARK: Survival Evolved on Android?

    -

    A3: ARK: Survival Evolved is constantly updated with new features, content, bug fixes, and improvements. To update the game on Android, you need to check for updates on FileHippo or Google Play Store regularly. You can also enable the auto-update option on your device to download and install the updates automatically. You may need to restart the game or your device after updating.

    -

    Q4: How can I transfer my ARK: Survival Evolved progress from PC or console to Android?

    -

    A4: ARK: Survival Evolved does not support cross-platform progression or transfer between different devices. This means that you cannot transfer your progress from PC or console to Android or vice versa. However, you can use the cloud save feature to sync your progress across different Android devices. To do that, you need to log in with the same Google account on all your devices and enable the cloud save option in the game settings.

    -

    Q5: How can I contact the developers of ARK: Survival Evolved for support or feedback?

    -

    A5: If you have any questions, issues, suggestions, or feedback regarding ARK: Survival Evolved on Android, you can contact the developers by using one of the following methods:

    -

    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Assassins Creed 1 Pc Game Download Utorrent.md b/spaces/tioseFevbu/cartoon-converter/scripts/Assassins Creed 1 Pc Game Download Utorrent.md deleted file mode 100644 index 92e0637db322087a9736cd0e1259dff96fe3995f..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Assassins Creed 1 Pc Game Download Utorrent.md +++ /dev/null @@ -1,33 +0,0 @@ -

    How to Download and Install Assassin's Creed 1 on PC for Free

    -

    Assassin's Creed 1 is the first game in the popular action-adventure series that lets you play as an assassin in the medieval Holy Land. The game combines stealth, parkour, combat, and exploration in an open world environment. If you want to experience this classic game on your PC, you can download it for free using a torrent client. Here are the steps to follow:

    -

    assassin's creed 1 pc game download utorrent


Download: https://urlcod.com/2uHxO2



1. Download and install a torrent client, such as uTorrent or BitTorrent.
2. Go to one of the following websites and click on the download link or magnet link for Assassin's Creed 1:
   • robgamers.net: This site offers a direct torrent download for Assassin's Creed 1 with no password required. The game is in Russian, but you can download an English pack from the same site.
   • art4gamez.com: This site offers a direct download or a magnet link for Assassin's Creed 1. The game is cracked and ready to play.
   • archive.org: This site offers a direct download for Assassin's Creed 1. The game is in English and has no DRM protection.
3. Open the torrent file or magnet link with your torrent client and start downloading the game.
4. Once the download is complete, extract the game files using a program like WinRAR or 7-Zip.
5. Run the setup.exe file and follow the instructions to install the game on your PC.
6. If the game is in Russian, copy and paste the English pack files into the game folder and overwrite the existing files.
7. If the game asks for a serial key, use one of these:
   • 8HVCM-TJ7Q7-XCSAD-RSND9-XACGX
   • YH8DW-JP9F3-8GEHM-HUWB9-FJH98
   • QW4HD-DQCRG-HM64M-6GJRK-8K83T
8. Launch the game from the desktop shortcut or the game folder and enjoy!

    Note: This article is for educational purposes only. We do not condone piracy or illegal downloading of any kind. Please support the developers and publishers by buying the original game from official sources.

    Here are a few more paragraphs for the article: - -

    Assassin's Creed 1 is not just a visual feast, but also a thrilling gameplay experience. The game puts you in the role of an assassin who can use his skills and gadgets to perform stealthy kills, escape from pursuers, and explore the vast cities. You can climb almost any surface, jump from rooftop to rooftop, and blend in with the crowds. You can also use your hidden blade, sword, dagger, and throwing knives to fight your enemies, or use your fists if you prefer a non-lethal approach. You can even ride a horse to travel faster between locations or ram into guards.

    -

    The game is divided into nine main missions, each requiring you to assassinate a different target. Before you can do that, however, you need to gather information about your target by completing various sub-missions, such as eavesdropping, pickpocketing, interrogating, or helping other assassins. These sub-missions can get repetitive after a while, but they also give you a chance to learn more about the game's story and characters. The story is complex and intriguing, involving a conspiracy that spans centuries and a mysterious artifact called the Apple of Eden. The game also has a twist that reveals that you are actually playing as a modern-day descendant of Altair, who is reliving his memories through a device called the Animus.

    -

    Assassin's Creed 1 is not without its flaws, however. The game has some technical issues on PC, such as bugs, crashes, and performance drops. The keyboard-and-mouse controls are not very intuitive or responsive, and you may want to use a gamepad instead. The game also suffers from poor AI, as enemies can be easily fooled or exploited. The combat can get tedious and frustrating at times, especially when you are surrounded by multiple foes who can block or dodge your attacks. The game also has some design flaws, such as annoying beggars who follow you around, or guards who can spot you from miles away and chase you endlessly.

    -

    -

    Despite these problems, Assassin's Creed 1 is still a remarkable game that deserves your attention. It is a game that immerses you in a rich and realistic historical setting, and lets you experience the life of an assassin. It is a game that offers you freedom and variety in how you approach your missions and explore your surroundings. It is a game that tells you an engaging and thought-provoking story that will keep you hooked until the end. It is a game that sets the foundation for one of the most successful and beloved franchises in gaming history.

    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Daz Studio 4.12.0.73 Pro Edition Beta Keygen.md b/spaces/tioseFevbu/cartoon-converter/scripts/Daz Studio 4.12.0.73 Pro Edition Beta Keygen.md deleted file mode 100644 index 38452d153bb6bb4b3b7f139f1f35ac5037a2d04f..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Daz Studio 4.12.0.73 Pro Edition Beta Keygen.md +++ /dev/null @@ -1,18 +0,0 @@ - -

    Daz Studio 4.12.0.73 Pro Edition Beta: A Powerful Tool for 3D Modeling and Animation

    -

Daz Studio is professional software for creating 3D characters, interior items, and any other 3D models you can imagine. With Daz Studio, you can also create animations, customize and set poses for characters, control their appearance, and more. Daz Studio 4.12.0.73 Pro Edition Beta is the latest version of this software, which offers new features and improvements.

    -

    Some of the highlights of Daz Studio 4.12.0.73 Pro Edition Beta are:

    -

    Daz Studio 4.12.0.73 Pro Edition Beta keygen


Download Zip: https://urlcod.com/2uHwXw



• Improved rendering performance and quality with NVIDIA Iray.
• Support for Daz Connect, a new service that allows you to browse and install content from the Daz 3D store directly in Daz Studio.
• Enhanced compatibility with other 3D applications, such as Blender, Maya, and 3ds Max.
• New tools for creating realistic hair, fur, and feathers.
• New morphs and expressions for Genesis 8 characters.

If you want to try out Daz Studio 4.12.0.73 Pro Edition Beta, you can download it from the official website or use a keygen to generate a license key. However, be aware that this is a beta version and may contain bugs or errors. It is recommended that you back up your content database and user data before installing this version. You can also check out the official forums for more information and feedback from other users.

    -

    Daz Studio 4.12.0.73 Pro Edition Beta is a great software for anyone who wants to create stunning 3D models and animations with ease and flexibility. Whether you are a beginner or a professional, you can unleash your creativity with Daz Studio.

    If you want to learn more about Daz Studio 4.12.0.73 Pro Edition Beta, you can visit the official website and explore the tutorials, documentation, and gallery sections. You can also join the Daz 3D community and interact with other users, share your work, and get feedback and support. Daz 3D also offers a store where you can find thousands of 3D models, textures, poses, and accessories for your projects. Some of them are free, while others are paid. You can also use Daz Connect to browse and install content from the store directly in Daz Studio.

    -

    Daz Studio 4.12.0.73 Pro Edition Beta is a powerful and versatile software that can help you create amazing 3D models and animations. Whether you want to make a video game, a movie, a comic book, or a personal project, you can use Daz Studio to bring your vision to life. Download it today and start creating!


    -


    \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/MAMP PRO 4.1.1.md b/spaces/tioseFevbu/cartoon-converter/scripts/MAMP PRO 4.1.1.md deleted file mode 100644 index f6f4b576fbcbf622afc1f7b8a96873e147ed8637..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/MAMP PRO 4.1.1.md +++ /dev/null @@ -1,25 +0,0 @@ - -

    How to Install MAMP PRO 4.1.1 on Windows 10

    -

    MAMP PRO is a professional version of the popular local web server software MAMP. It allows you to easily create and manage multiple web projects on your Windows PC, with features such as virtual hosts, dynamic DNS, SSL certificates, PHP versions, MySQL databases, and more.

    -

In this article, we will show you how to install MAMP PRO 4.1.1 on Windows 10, which is the latest version available for Windows as of April 2023. If you are looking for a newer version of MAMP PRO for macOS, you can check out the official website for more information.

    -

    MAMP PRO 4.1.1


DOWNLOAD: https://urlcod.com/2uHvMN



    -

    Step 1: Download MAMP PRO 4.1.1

    -

The first step is to download the MAMP PRO 4.1.1 installer from the official website. The file size is about 400 MB, so it will take some time to download depending on your internet speed.

    -

    Once the download is complete, you can double-click on the installer file to launch it. You may see a security warning from Windows asking you to confirm if you want to run the file. Click on "Run" to proceed.

    -

    Step 2: Install MAMP PRO 4.1.1

    -

    The next step is to install MAMP PRO 4.1.1 on your Windows PC. The installation process is straightforward and you just need to follow the instructions on the screen.

    -

    First, you will see a welcome screen where you can choose your language. Click on "Next" to continue.

    -

    Then, you will see a license agreement screen where you need to accept the terms and conditions. Click on "I accept the agreement" and then click on "Next".

    -

    -

    Next, you will see a destination folder screen where you can choose where to install MAMP PRO 4.1.1 on your PC. The default location is C:\MAMP\. You can change it if you want, but we recommend leaving it as it is. Click on "Next" to continue.

    -

    Then, you will see a components screen where you can choose which components of MAMP PRO 4.1.1 you want to install. The default selection includes Apache, MySQL, PHP, phpMyAdmin, and MAMP Viewer. You can uncheck any of them if you don't need them, but we recommend installing all of them for a complete web development environment. Click on "Next" to continue.

    -

    Next, you will see a start menu folder screen where you can choose where to create shortcuts for MAMP PRO 4.1.1 in your start menu. The default folder name is MAMP PRO 4\. You can change it if you want, but we recommend leaving it as it is. Click on "Next" to continue.

    -

    Then, you will see a ready to install screen where you can review your installation settings and start the installation process. Click on "Install" to begin installing MAMP PRO 4.1.1 on your PC.

    -

    The installation process may take several minutes depending on your PC speed and configuration. You will see a progress bar showing the status of the installation.

    -

    Once the installation is complete, you will see a finish screen where you can choose whether to launch MAMP PRO 4.1.1 or not. We recommend checking the box that says "Launch MAMP PRO now" and then clicking on "Finish".

    -

    Step 3: Configure MAMP PRO 4.1.1

    -

    The final step is to configure MAMP PRO 4.1.1 for your web development needs. When you launch MAMP PRO for the first time, you will see a welcome screen where you can choose whether to use the free trial or enter your license key.

    -

    If you have purchased a license key for MAMP PRO, you can enter it here and click on "Activate". If you want to use the free trial for 14 days, you can click on "Start Trial".

    -

After that, you will see the main interface of MAMP PRO, where you can manage your hosts and server settings.
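Once the servers are started, a quick way to confirm that Apache and MySQL are actually listening is to probe their ports from a small script. The sketch below is just a sanity check, not part of MAMP PRO itself; the ports are assumptions (80 for Apache and 3306 for MySQL are common defaults), so match them to whatever the Ports settings in your own MAMP PRO installation show.

```python
import socket

# Assumed defaults -- adjust to the ports shown in MAMP PRO's port settings.
CHECKS = {
    "Apache": ("127.0.0.1", 80),
    "MySQL": ("127.0.0.1", 3306),
}


def is_listening(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if something accepts a TCP connection on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    for name, (host, port) in CHECKS.items():
        state = "reachable" if is_listening(host, port) else "not reachable"
        print(f"{name} on {host}:{port} is {state}")
```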

    \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/connection.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/connection.py deleted file mode 100644 index 10fb36c4e350d8ca6f65e4036a60c48a9b3216fc..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_vendor/urllib3/connection.py +++ /dev/null @@ -1,567 +0,0 @@ -from __future__ import absolute_import - -import datetime -import logging -import os -import re -import socket -import warnings -from socket import error as SocketError -from socket import timeout as SocketTimeout - -from .packages import six -from .packages.six.moves.http_client import HTTPConnection as _HTTPConnection -from .packages.six.moves.http_client import HTTPException # noqa: F401 -from .util.proxy import create_proxy_ssl_context - -try: # Compiled with SSL? - import ssl - - BaseSSLError = ssl.SSLError -except (ImportError, AttributeError): # Platform-specific: No SSL. - ssl = None - - class BaseSSLError(BaseException): - pass - - -try: - # Python 3: not a no-op, we're adding this to the namespace so it can be imported. - ConnectionError = ConnectionError -except NameError: - # Python 2 - class ConnectionError(Exception): - pass - - -try: # Python 3: - # Not a no-op, we're adding this to the namespace so it can be imported. - BrokenPipeError = BrokenPipeError -except NameError: # Python 2: - - class BrokenPipeError(Exception): - pass - - -from ._collections import HTTPHeaderDict # noqa (historical, removed in v2) -from ._version import __version__ -from .exceptions import ( - ConnectTimeoutError, - NewConnectionError, - SubjectAltNameWarning, - SystemTimeWarning, -) -from .util import SKIP_HEADER, SKIPPABLE_HEADERS, connection -from .util.ssl_ import ( - assert_fingerprint, - create_urllib3_context, - is_ipaddress, - resolve_cert_reqs, - resolve_ssl_version, - ssl_wrap_socket, -) -from .util.ssl_match_hostname import CertificateError, match_hostname - -log = logging.getLogger(__name__) - -port_by_scheme = {"http": 80, "https": 443} - -# When it comes time to update this value as a part of regular maintenance -# (ie test_recent_date is failing) update it to ~6 months before the current date. -RECENT_DATE = datetime.date(2022, 1, 1) - -_CONTAINS_CONTROL_CHAR_RE = re.compile(r"[^-!#$%&'*+.^_`|~0-9a-zA-Z]") - - -class HTTPConnection(_HTTPConnection, object): - """ - Based on :class:`http.client.HTTPConnection` but provides an extra constructor - backwards-compatibility layer between older and newer Pythons. - - Additional keyword parameters are used to configure attributes of the connection. - Accepted parameters include: - - - ``strict``: See the documentation on :class:`urllib3.connectionpool.HTTPConnectionPool` - - ``source_address``: Set the source address for the current connection. - - ``socket_options``: Set specific options on the underlying socket. If not specified, then - defaults are loaded from ``HTTPConnection.default_socket_options`` which includes disabling - Nagle's algorithm (sets TCP_NODELAY to 1) unless the connection is behind a proxy. - - For example, if you wish to enable TCP Keep Alive in addition to the defaults, - you might pass: - - .. code-block:: python - - HTTPConnection.default_socket_options + [ - (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1), - ] - - Or you may want to disable the defaults by passing an empty list (e.g., ``[]``). 
- """ - - default_port = port_by_scheme["http"] - - #: Disable Nagle's algorithm by default. - #: ``[(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]`` - default_socket_options = [(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)] - - #: Whether this connection verifies the host's certificate. - is_verified = False - - #: Whether this proxy connection (if used) verifies the proxy host's - #: certificate. - proxy_is_verified = None - - def __init__(self, *args, **kw): - if not six.PY2: - kw.pop("strict", None) - - # Pre-set source_address. - self.source_address = kw.get("source_address") - - #: The socket options provided by the user. If no options are - #: provided, we use the default options. - self.socket_options = kw.pop("socket_options", self.default_socket_options) - - # Proxy options provided by the user. - self.proxy = kw.pop("proxy", None) - self.proxy_config = kw.pop("proxy_config", None) - - _HTTPConnection.__init__(self, *args, **kw) - - @property - def host(self): - """ - Getter method to remove any trailing dots that indicate the hostname is an FQDN. - - In general, SSL certificates don't include the trailing dot indicating a - fully-qualified domain name, and thus, they don't validate properly when - checked against a domain name that includes the dot. In addition, some - servers may not expect to receive the trailing dot when provided. - - However, the hostname with trailing dot is critical to DNS resolution; doing a - lookup with the trailing dot will properly only resolve the appropriate FQDN, - whereas a lookup without a trailing dot will search the system's search domain - list. Thus, it's important to keep the original host around for use only in - those cases where it's appropriate (i.e., when doing DNS lookup to establish the - actual TCP connection across which we're going to send HTTP requests). - """ - return self._dns_host.rstrip(".") - - @host.setter - def host(self, value): - """ - Setter for the `host` property. - - We assume that only urllib3 uses the _dns_host attribute; httplib itself - only uses `host`, and it seems reasonable that other libraries follow suit. - """ - self._dns_host = value - - def _new_conn(self): - """Establish a socket connection and set nodelay settings on it. - - :return: New socket connection. - """ - extra_kw = {} - if self.source_address: - extra_kw["source_address"] = self.source_address - - if self.socket_options: - extra_kw["socket_options"] = self.socket_options - - try: - conn = connection.create_connection( - (self._dns_host, self.port), self.timeout, **extra_kw - ) - - except SocketTimeout: - raise ConnectTimeoutError( - self, - "Connection to %s timed out. (connect timeout=%s)" - % (self.host, self.timeout), - ) - - except SocketError as e: - raise NewConnectionError( - self, "Failed to establish a new connection: %s" % e - ) - - return conn - - def _is_using_tunnel(self): - # Google App Engine's httplib does not define _tunnel_host - return getattr(self, "_tunnel_host", None) - - def _prepare_conn(self, conn): - self.sock = conn - if self._is_using_tunnel(): - # TODO: Fix tunnel so it doesn't depend on self.sock state. - self._tunnel() - # Mark this connection as not reusable - self.auto_open = 0 - - def connect(self): - conn = self._new_conn() - self._prepare_conn(conn) - - def putrequest(self, method, url, *args, **kwargs): - """ """ - # Empty docstring because the indentation of CPython's implementation - # is broken but we don't want this method in our documentation. 
- match = _CONTAINS_CONTROL_CHAR_RE.search(method) - if match: - raise ValueError( - "Method cannot contain non-token characters %r (found at least %r)" - % (method, match.group()) - ) - - return _HTTPConnection.putrequest(self, method, url, *args, **kwargs) - - def putheader(self, header, *values): - """ """ - if not any(isinstance(v, str) and v == SKIP_HEADER for v in values): - _HTTPConnection.putheader(self, header, *values) - elif six.ensure_str(header.lower()) not in SKIPPABLE_HEADERS: - raise ValueError( - "urllib3.util.SKIP_HEADER only supports '%s'" - % ("', '".join(map(str.title, sorted(SKIPPABLE_HEADERS))),) - ) - - def request(self, method, url, body=None, headers=None): - if headers is None: - headers = {} - else: - # Avoid modifying the headers passed into .request() - headers = headers.copy() - if "user-agent" not in (six.ensure_str(k.lower()) for k in headers): - headers["User-Agent"] = _get_default_user_agent() - super(HTTPConnection, self).request(method, url, body=body, headers=headers) - - def request_chunked(self, method, url, body=None, headers=None): - """ - Alternative to the common request method, which sends the - body with chunked encoding and not as one block - """ - headers = headers or {} - header_keys = set([six.ensure_str(k.lower()) for k in headers]) - skip_accept_encoding = "accept-encoding" in header_keys - skip_host = "host" in header_keys - self.putrequest( - method, url, skip_accept_encoding=skip_accept_encoding, skip_host=skip_host - ) - if "user-agent" not in header_keys: - self.putheader("User-Agent", _get_default_user_agent()) - for header, value in headers.items(): - self.putheader(header, value) - if "transfer-encoding" not in header_keys: - self.putheader("Transfer-Encoding", "chunked") - self.endheaders() - - if body is not None: - stringish_types = six.string_types + (bytes,) - if isinstance(body, stringish_types): - body = (body,) - for chunk in body: - if not chunk: - continue - if not isinstance(chunk, bytes): - chunk = chunk.encode("utf8") - len_str = hex(len(chunk))[2:] - to_send = bytearray(len_str.encode()) - to_send += b"\r\n" - to_send += chunk - to_send += b"\r\n" - self.send(to_send) - - # After the if clause, to always have a closed body - self.send(b"0\r\n\r\n") - - -class HTTPSConnection(HTTPConnection): - """ - Many of the parameters to this constructor are passed to the underlying SSL - socket by means of :py:func:`urllib3.util.ssl_wrap_socket`. - """ - - default_port = port_by_scheme["https"] - - cert_reqs = None - ca_certs = None - ca_cert_dir = None - ca_cert_data = None - ssl_version = None - assert_fingerprint = None - tls_in_tls_required = False - - def __init__( - self, - host, - port=None, - key_file=None, - cert_file=None, - key_password=None, - strict=None, - timeout=socket._GLOBAL_DEFAULT_TIMEOUT, - ssl_context=None, - server_hostname=None, - **kw - ): - - HTTPConnection.__init__(self, host, port, strict=strict, timeout=timeout, **kw) - - self.key_file = key_file - self.cert_file = cert_file - self.key_password = key_password - self.ssl_context = ssl_context - self.server_hostname = server_hostname - - # Required property for Google AppEngine 1.9.0 which otherwise causes - # HTTPS requests to go out as HTTP. 
(See Issue #356) - self._protocol = "https" - - def set_cert( - self, - key_file=None, - cert_file=None, - cert_reqs=None, - key_password=None, - ca_certs=None, - assert_hostname=None, - assert_fingerprint=None, - ca_cert_dir=None, - ca_cert_data=None, - ): - """ - This method should only be called once, before the connection is used. - """ - # If cert_reqs is not provided we'll assume CERT_REQUIRED unless we also - # have an SSLContext object in which case we'll use its verify_mode. - if cert_reqs is None: - if self.ssl_context is not None: - cert_reqs = self.ssl_context.verify_mode - else: - cert_reqs = resolve_cert_reqs(None) - - self.key_file = key_file - self.cert_file = cert_file - self.cert_reqs = cert_reqs - self.key_password = key_password - self.assert_hostname = assert_hostname - self.assert_fingerprint = assert_fingerprint - self.ca_certs = ca_certs and os.path.expanduser(ca_certs) - self.ca_cert_dir = ca_cert_dir and os.path.expanduser(ca_cert_dir) - self.ca_cert_data = ca_cert_data - - def connect(self): - # Add certificate verification - self.sock = conn = self._new_conn() - hostname = self.host - tls_in_tls = False - - if self._is_using_tunnel(): - if self.tls_in_tls_required: - self.sock = conn = self._connect_tls_proxy(hostname, conn) - tls_in_tls = True - - # Calls self._set_hostport(), so self.host is - # self._tunnel_host below. - self._tunnel() - # Mark this connection as not reusable - self.auto_open = 0 - - # Override the host with the one we're requesting data from. - hostname = self._tunnel_host - - server_hostname = hostname - if self.server_hostname is not None: - server_hostname = self.server_hostname - - is_time_off = datetime.date.today() < RECENT_DATE - if is_time_off: - warnings.warn( - ( - "System time is way off (before {0}). This will probably " - "lead to SSL verification errors" - ).format(RECENT_DATE), - SystemTimeWarning, - ) - - # Wrap socket using verification with the root certs in - # trusted_root_certs - default_ssl_context = False - if self.ssl_context is None: - default_ssl_context = True - self.ssl_context = create_urllib3_context( - ssl_version=resolve_ssl_version(self.ssl_version), - cert_reqs=resolve_cert_reqs(self.cert_reqs), - ) - - context = self.ssl_context - context.verify_mode = resolve_cert_reqs(self.cert_reqs) - - # Try to load OS default certs if none are given. - # Works well on Windows (requires Python3.4+) - if ( - not self.ca_certs - and not self.ca_cert_dir - and not self.ca_cert_data - and default_ssl_context - and hasattr(context, "load_default_certs") - ): - context.load_default_certs() - - self.sock = ssl_wrap_socket( - sock=conn, - keyfile=self.key_file, - certfile=self.cert_file, - key_password=self.key_password, - ca_certs=self.ca_certs, - ca_cert_dir=self.ca_cert_dir, - ca_cert_data=self.ca_cert_data, - server_hostname=server_hostname, - ssl_context=context, - tls_in_tls=tls_in_tls, - ) - - # If we're using all defaults and the connection - # is TLSv1 or TLSv1.1 we throw a DeprecationWarning - # for the host. - if ( - default_ssl_context - and self.ssl_version is None - and hasattr(self.sock, "version") - and self.sock.version() in {"TLSv1", "TLSv1.1"} - ): - warnings.warn( - "Negotiating TLSv1/TLSv1.1 by default is deprecated " - "and will be disabled in urllib3 v2.0.0. 
Connecting to " - "'%s' with '%s' can be enabled by explicitly opting-in " - "with 'ssl_version'" % (self.host, self.sock.version()), - DeprecationWarning, - ) - - if self.assert_fingerprint: - assert_fingerprint( - self.sock.getpeercert(binary_form=True), self.assert_fingerprint - ) - elif ( - context.verify_mode != ssl.CERT_NONE - and not getattr(context, "check_hostname", False) - and self.assert_hostname is not False - ): - # While urllib3 attempts to always turn off hostname matching from - # the TLS library, this cannot always be done. So we check whether - # the TLS Library still thinks it's matching hostnames. - cert = self.sock.getpeercert() - if not cert.get("subjectAltName", ()): - warnings.warn( - ( - "Certificate for {0} has no `subjectAltName`, falling back to check for a " - "`commonName` for now. This feature is being removed by major browsers and " - "deprecated by RFC 2818. (See https://github.com/urllib3/urllib3/issues/497 " - "for details.)".format(hostname) - ), - SubjectAltNameWarning, - ) - _match_hostname(cert, self.assert_hostname or server_hostname) - - self.is_verified = ( - context.verify_mode == ssl.CERT_REQUIRED - or self.assert_fingerprint is not None - ) - - def _connect_tls_proxy(self, hostname, conn): - """ - Establish a TLS connection to the proxy using the provided SSL context. - """ - proxy_config = self.proxy_config - ssl_context = proxy_config.ssl_context - if ssl_context: - # If the user provided a proxy context, we assume CA and client - # certificates have already been set - return ssl_wrap_socket( - sock=conn, - server_hostname=hostname, - ssl_context=ssl_context, - ) - - ssl_context = create_proxy_ssl_context( - self.ssl_version, - self.cert_reqs, - self.ca_certs, - self.ca_cert_dir, - self.ca_cert_data, - ) - - # If no cert was provided, use only the default options for server - # certificate validation - socket = ssl_wrap_socket( - sock=conn, - ca_certs=self.ca_certs, - ca_cert_dir=self.ca_cert_dir, - ca_cert_data=self.ca_cert_data, - server_hostname=hostname, - ssl_context=ssl_context, - ) - - if ssl_context.verify_mode != ssl.CERT_NONE and not getattr( - ssl_context, "check_hostname", False - ): - # While urllib3 attempts to always turn off hostname matching from - # the TLS library, this cannot always be done. So we check whether - # the TLS Library still thinks it's matching hostnames. - cert = socket.getpeercert() - if not cert.get("subjectAltName", ()): - warnings.warn( - ( - "Certificate for {0} has no `subjectAltName`, falling back to check for a " - "`commonName` for now. This feature is being removed by major browsers and " - "deprecated by RFC 2818. (See https://github.com/urllib3/urllib3/issues/497 " - "for details.)".format(hostname) - ), - SubjectAltNameWarning, - ) - _match_hostname(cert, hostname) - - self.proxy_is_verified = ssl_context.verify_mode == ssl.CERT_REQUIRED - return socket - - -def _match_hostname(cert, asserted_hostname): - # Our upstream implementation of ssl.match_hostname() - # only applies this normalization to IP addresses so it doesn't - # match DNS SANs so we do the same thing! - stripped_hostname = asserted_hostname.strip("u[]") - if is_ipaddress(stripped_hostname): - asserted_hostname = stripped_hostname - - try: - match_hostname(cert, asserted_hostname) - except CertificateError as e: - log.warning( - "Certificate did not match expected hostname: %s. 
Certificate: %s", - asserted_hostname, - cert, - ) - # Add cert to exception and reraise so client code can inspect - # the cert when catching the exception, if they want to - e._peer_cert = cert - raise - - -def _get_default_user_agent(): - return "python-urllib3/%s" % __version__ - - -class DummyConnection(object): - """Used to detect a failed ConnectionCls import.""" - - pass - - -if not ssl: - HTTPSConnection = DummyConnection # noqa: F811 - - -VerifiedHTTPSConnection = HTTPSConnection diff --git a/spaces/tomofi/MMOCR/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2017.py b/spaces/tomofi/MMOCR/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2017.py deleted file mode 100644 index b6b46ba4af194b6ffa406d9b0abc97149ac4e1df..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/configs/textdet/maskrcnn/mask_rcnn_r50_fpn_160e_icdar2017.py +++ /dev/null @@ -1,33 +0,0 @@ -_base_ = [ - '../../_base_/runtime_10e.py', - '../../_base_/det_models/ocr_mask_rcnn_r50_fpn_ohem.py', - '../../_base_/schedules/schedule_sgd_160e.py', - '../../_base_/det_datasets/icdar2017.py', - '../../_base_/det_pipelines/maskrcnn_pipeline.py' -] - -train_list = {{_base_.train_list}} -test_list = {{_base_.test_list}} - -train_pipeline = {{_base_.train_pipeline}} -test_pipeline_icdar2015 = {{_base_.test_pipeline_icdar2015}} - -data = dict( - samples_per_gpu=8, - workers_per_gpu=4, - val_dataloader=dict(samples_per_gpu=1), - test_dataloader=dict(samples_per_gpu=1), - train=dict( - type='UniformConcatDataset', - datasets=train_list, - pipeline=train_pipeline), - val=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015), - test=dict( - type='UniformConcatDataset', - datasets=test_list, - pipeline=test_pipeline_icdar2015)) - -evaluation = dict(interval=10, metric='hmean-iou') diff --git a/spaces/tomofi/MMOCR/tests/test_utils/test_box.py b/spaces/tomofi/MMOCR/tests/test_utils/test_box.py deleted file mode 100644 index 9af23cc51a04b48ee04658be2afffa03e4dc1532..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/tests/test_utils/test_box.py +++ /dev/null @@ -1,128 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
-import numpy as np -import pytest - -from mmocr.utils import (bezier_to_polygon, is_on_same_line, sort_points, - stitch_boxes_into_lines) - - -def test_box_on_line(): - # regular boxes - box1 = [0, 0, 1, 0, 1, 1, 0, 1] - box2 = [2, 0.5, 3, 0.5, 3, 1.5, 2, 1.5] - box3 = [4, 0.8, 5, 0.8, 5, 1.8, 4, 1.8] - assert is_on_same_line(box1, box2, 0.5) - assert not is_on_same_line(box1, box3, 0.5) - - # irregular box4 - box4 = [0, 0, 1, 1, 1, 2, 0, 1] - box5 = [2, 1.5, 3, 1.5, 3, 2.5, 2, 2.5] - box6 = [2, 1.6, 3, 1.6, 3, 2.6, 2, 2.6] - assert is_on_same_line(box4, box5, 0.5) - assert not is_on_same_line(box4, box6, 0.5) - - -def test_stitch_boxes_into_lines(): - boxes = [ # regular boxes - [0, 0, 1, 0, 1, 1, 0, 1], - [2, 0.5, 3, 0.5, 3, 1.5, 2, 1.5], - [3, 1.2, 4, 1.2, 4, 2.2, 3, 2.2], - [5, 0.5, 6, 0.5, 6, 1.5, 5, 1.5], - # irregular box - [6, 1.5, 7, 1.25, 7, 1.75, 6, 1.75] - ] - raw_input = [{'box': boxes[i], 'text': str(i)} for i in range(len(boxes))] - result = stitch_boxes_into_lines(raw_input, 1, 0.5) - # Final lines: [0, 1], [2], [3, 4] - # box 0, 1, 3, 4 are on the same line but box 3 is 2 pixels away from box 1 - # box 3 and 4 are on the same line since the length of overlapping part >= - # 0.5 * the y-axis length of box 5 - expected_result = [{ - 'box': [0, 0, 3, 0, 3, 1.5, 0, 1.5], - 'text': '0 1' - }, { - 'box': [3, 1.2, 4, 1.2, 4, 2.2, 3, 2.2], - 'text': '2' - }, { - 'box': [5, 0.5, 7, 0.5, 7, 1.75, 5, 1.75], - 'text': '3 4' - }] - result.sort(key=lambda x: x['box'][0]) - expected_result.sort(key=lambda x: x['box'][0]) - assert result == expected_result - - -def test_bezier_to_polygon(): - bezier_points = [ - 37.0, 249.0, 72.5, 229.55, 95.34, 220.65, 134.0, 216.0, 132.0, 233.0, - 82.11, 240.2, 72.46, 247.16, 38.0, 263.0 - ] - pts = bezier_to_polygon(bezier_points) - target = np.array([[37.0, 249.0], [42.50420761043885, 246.01570199737577], - [47.82291296107305, 243.2012392477038], - [52.98102930456334, 240.5511007435486], - [58.00346989357049, 238.05977547747486], - [62.91514798075522, 235.721752442047], - [67.74097681877824, 233.53152062982943], - [72.50586966030032, 231.48356903338674], - [77.23473975798221, 229.57238664528356], - [81.95250036448464, 227.79246245808432], - [86.68406473246829, 226.13828546435346], - [91.45434611459396, 224.60434465665548], - [96.28825776352238, 223.18512902755504], - [101.21071293191426, 221.87512756961655], - [106.24662487243039, 220.6688292754046], - [111.42090683773145, 219.5607231374836], - [116.75847208047819, 218.5452981484181], - [122.28423385333137, 217.6170433007727], - [128.02310540895172, 216.77044758711182], - [134.0, 216.0], [132.0, 233.0], - [124.4475521213005, 234.13617728531858], - [117.50700976818779, 235.2763434903047], - [111.12146960198277, 236.42847645429362], - [105.2340282840064, 237.6005540166205], - [99.78778247557953, 238.80055401662054], - [94.72582883802303, 240.0364542936288], - [89.99126403265781, 241.31623268698053], - [85.52718472080478, 242.64786703601104], - [81.27668756378483, 244.03933518005545], - [77.1828692229188, 245.49861495844874], - [73.18882635952762, 247.0336842105263], - [69.23765563493221, 248.65252077562326], - [65.27245371045342, 250.3631024930748], - [61.23631724741216, 252.17340720221605], - [57.07234290712931, 254.09141274238226], - [52.723627350925796, 256.12509695290856], - [48.13326724012247, 258.2824376731302], - [43.24435923604024, 260.5714127423822], [38.0, 263.0]]) - assert np.allclose(pts, target) - - bezier_points = [0, 0, 0, 1, 0, 2, 0, 3, 1, 0, 1, 1, 1, 2, 1, 3] - pts = 
bezier_to_polygon(bezier_points, num_sample=3) - target = np.array([[0, 0], [0, 1.5], [0, 3], [1, 0], [1, 1.5], [1, 3]]) - assert np.allclose(pts, target) - - with pytest.raises(AssertionError): - bezier_to_polygon(bezier_points, num_sample=-1) - - bezier_points = [0, 1] - with pytest.raises(AssertionError): - bezier_to_polygon(bezier_points) - - -def test_sort_points(): - points = np.array([[1, 1], [0, 0], [1, -1], [2, -2], [0, 2], [1, 1], - [0, 1], [-1, 1], [-1, -1]]) - target = np.array([[-1, -1], [0, 0], [-1, 1], [0, 1], [0, 2], [1, 1], - [1, 1], [2, -2], [1, -1]]) - assert np.allclose(target, sort_points(points)) - - points = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]]) - target = np.array([[-1, -1], [-1, 1], [1, 1], [1, -1]]) - assert np.allclose(target, sort_points(points)) - - points = [[1, 1], [1, -1], [-1, 1], [-1, -1]] - assert np.allclose(target, sort_points(points)) - - with pytest.raises(AssertionError): - sort_points([1, 2]) diff --git a/spaces/tomofi/NDLOCR/src/deskew_HT/alyn3/deskew.py b/spaces/tomofi/NDLOCR/src/deskew_HT/alyn3/deskew.py deleted file mode 100644 index 7f5ece0e13dbdf632faf153899a6aa9fea7ca505..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/deskew_HT/alyn3/deskew.py +++ /dev/null @@ -1,163 +0,0 @@ -""" Deskews file after getting skew angle """ -""" -This code is based on the following file: -https://github.com/kakul/Alyn/blob/master/alyn/deskew.py -""" -import optparse -import numpy as np -import os - -from alyn3.skew_detect import SkewDetect -import cv2 - - -class Deskew: - - def __init__(self, input_file, output_file, r_angle=0, - skew_max=4.0, acc_deg=0.1, method=1, - roi_w=1.0, roi_h=1.0, - gray=1.0, quality=100, short=None): - self.input_file = input_file - self.output_file = output_file - self.r_angle = r_angle - self.method = method - self.gray = gray - self.quality = quality - self.short = short - self.skew_obj = SkewDetect(self.input_file, - skew_max=skew_max, acc_deg=acc_deg, - roi_w=roi_w, roi_h=roi_h) - - def deskew(self): - print('input: '+self.input_file) - - res = self.skew_obj.process_single_file() - angle = res['Estimated Angle'] - rot_angle = angle + self.r_angle - - img = cv2.imread(self.input_file, cv2.IMREAD_COLOR) - g = self.gray * 255 - rotated = self.rotate_expand(img, rot_angle, g) - - if self.short: - h = rotated.shape[0] - w = rotated.shape[1] - print('origin w,h: {}, {}'.format(w, h)) - if w < h: - h = int(h*self.short/w+0.5) - w = self.short - else: - w = int(w*self.short/h+0.5) - h = self.short - print('resized w,h: {}, {}'.format(w, h)) - rotated = cv2.resize(rotated, (w, h)) - - if self.output_file: - self.save_image(rotated) - - return res - - def deskew_on_memory(self, input_data): - res = self.skew_obj.determine_skew_on_memory(input_data) - angle = res['Estimated Angle'] - rot_angle = angle + self.r_angle - - img = input_data - g = self.gray * 255 - rotated = self.rotate_expand(img, rot_angle, g) - - if self.short: - h = rotated.shape[0] - w = rotated.shape[1] - print('origin w,h: {}, {}'.format(w, h)) - if w < h: - h = int(h*self.short/w+0.5) - w = self.short - else: - w = int(w*self.short/h+0.5) - h = self.short - print('resized w,h: {}, {}'.format(w, h)) - rotated = cv2.resize(rotated, (w, h)) - - return rotated - - def save_image(self, img): - path = self.skew_obj.check_path(self.output_file) - if os.path.splitext(path)[1] in ['.jpg', '.JPG', '.jpeg', '.JPEG']: - cv2.imwrite(path, img, [cv2.IMWRITE_JPEG_QUALITY, 100]) - else: - cv2.imwrite(path, img) - - def rotate_expand(self, img, angle=0, 
g=255): - h = img.shape[0] - w = img.shape[1] - angle_rad = angle/180.0*np.pi - w_rot = int(np.round(h*np.absolute(np.sin(angle_rad)) + - w*np.absolute(np.cos(angle_rad)))) - h_rot = int(np.round(h*np.absolute(np.cos(angle_rad)) + - w*np.absolute(np.sin(angle_rad)))) - size_rot = (w_rot, h_rot) - mat = cv2.getRotationMatrix2D((w/2, h/2), angle, 1.0) - mat[0][2] = mat[0][2] - w/2 + w_rot/2 - mat[1][2] = mat[1][2] - h/2 + h_rot/2 - rotated = cv2.warpAffine(img, mat, size_rot, borderValue=(g, g, g)) - - return rotated - - def run(self): - if self.input_file: - return self.deskew() - - -def optparse_args(): - parser = optparse.OptionParser() - - parser.add_option( - '-i', - '--input', - default=None, - dest='input_file', - help='Input file name') - parser.add_option( - '-o', '--output', - default=None, - dest='output_file', - help='Output file name') - parser.add_option( - '-r', '--rotate', - default=0, - dest='r_angle', - help='Rotate the image to desired axis', - type=int) - parser.add_option( - '-g', '--gray', - default=1.0, - dest='gray', - help='Gray level outside the input image boundaries.\n' - 'between 0.0(black) and 1.0(white)\n' - '[0.0, 1.0], default: 1.0', - type=float) - parser.add_option( - '-q', '--quality', - default=100, - dest='quality', - help='output jpeg image quality. i\n' - '1 is worst quality and smallest file size,\n' - 'and 100 is best quality and largest file size.\n' - '[1, 100], default: 100', - type=int) - - return parser.parse_args() - - -if __name__ == '__main__': - options, args = optparse_args() - deskew_obj = Deskew( - options.input_file, - options.display_image, - options.output_file, - options.r_angle, - options.gray, - options.quality) - - deskew_obj.run() diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/free_anchor/retinanet_free_anchor_x101_32x4d_fpn_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/free_anchor/retinanet_free_anchor_x101_32x4d_fpn_1x_coco.py deleted file mode 100644 index e2640c07e86db2d8cc2e6654c78077df10789b4c..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/free_anchor/retinanet_free_anchor_x101_32x4d_fpn_1x_coco.py +++ /dev/null @@ -1,12 +0,0 @@ -_base_ = './retinanet_free_anchor_r50_fpn_1x_coco.py' -model = dict( - pretrained='open-mmlab://resnext101_32x4d', - backbone=dict( - type='ResNeXt', - depth=101, - groups=32, - base_width=4, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - style='pytorch')) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/grid_rcnn/grid_rcnn_r101_fpn_gn-head_2x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/grid_rcnn/grid_rcnn_r101_fpn_gn-head_2x_coco.py deleted file mode 100644 index cf8b648a4291db4a172bf031f301110963f38dd6..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/grid_rcnn/grid_rcnn_r101_fpn_gn-head_2x_coco.py +++ /dev/null @@ -1,3 +0,0 @@ -_base_ = './grid_rcnn_r50_fpn_gn-head_2x_coco.py' - -model = dict(pretrained='torchvision://resnet101', backbone=dict(depth=101)) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/grid_rcnn/grid_rcnn_r50_fpn_gn-head_1x_coco.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/grid_rcnn/grid_rcnn_r50_fpn_gn-head_1x_coco.py deleted file mode 100644 index 4aa00ece55280697fc67bd727077a8c9a58cfa44..0000000000000000000000000000000000000000 --- 
a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/configs/grid_rcnn/grid_rcnn_r50_fpn_gn-head_1x_coco.py +++ /dev/null @@ -1,11 +0,0 @@ -_base_ = ['grid_rcnn_r50_fpn_gn-head_2x_coco.py'] -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[8, 11]) -checkpoint_config = dict(interval=1) -# runtime settings -runner = dict(type='EpochBasedRunner', max_epochs=12) diff --git a/spaces/trttung1610/musicgen/audiocraft/metrics/kld.py b/spaces/trttung1610/musicgen/audiocraft/metrics/kld.py deleted file mode 100644 index 18260bf974bf47d8381223ac39be0c47c031bf8a..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/audiocraft/metrics/kld.py +++ /dev/null @@ -1,218 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import contextlib -from functools import partial -import logging -import os -import typing as tp - -import torch -import torchmetrics - -from ..data.audio_utils import convert_audio - - -logger = logging.getLogger(__name__) - - -class _patch_passt_stft: - """Decorator to patch torch.stft in PaSST.""" - def __init__(self): - self.old_stft = torch.stft - - def __enter__(self): - # return_complex is a mandatory parameter in latest torch versions - # torch is throwing RuntimeErrors when not set - torch.stft = partial(torch.stft, return_complex=False) - - def __exit__(self, *exc): - torch.stft = self.old_stft - - -def kl_divergence(pred_probs: torch.Tensor, target_probs: torch.Tensor, epsilon: float = 1e-6) -> torch.Tensor: - """Computes the elementwise KL-Divergence loss between probability distributions - from generated samples and target samples. - - Args: - pred_probs (torch.Tensor): Probabilities for each label obtained - from a classifier on generated audio. Expected shape is [B, num_classes]. - target_probs (torch.Tensor): Probabilities for each label obtained - from a classifier on target audio. Expected shape is [B, num_classes]. - epsilon (float): Epsilon value. - Returns: - kld (torch.Tensor): KLD loss between each generated sample and target pair. - """ - kl_div = torch.nn.functional.kl_div((pred_probs + epsilon).log(), target_probs, reduction="none") - return kl_div.sum(-1) - - -class KLDivergenceMetric(torchmetrics.Metric): - """Base implementation for KL Divergence metric. - - The KL divergence is measured between probability distributions - of class predictions returned by a pre-trained audio classification model. - When the KL-divergence is low, the generated audio is expected to - have similar acoustic characteristics as the reference audio, - according to the classifier. - """ - def __init__(self): - super().__init__() - self.add_state("kld_pq_sum", default=torch.tensor(0.), dist_reduce_fx="sum") - self.add_state("kld_qp_sum", default=torch.tensor(0.), dist_reduce_fx="sum") - self.add_state("kld_all_sum", default=torch.tensor(0.), dist_reduce_fx="sum") - self.add_state("weight", default=torch.tensor(0), dist_reduce_fx="sum") - - def _get_label_distribution(self, x: torch.Tensor, sizes: torch.Tensor, - sample_rates: torch.Tensor) -> tp.Optional[torch.Tensor]: - """Get model output given provided input tensor. - - Args: - x (torch.Tensor): Input audio tensor of shape [B, C, T]. - sizes (torch.Tensor): Actual audio sample length, of shape [B]. - sample_rates (torch.Tensor): Actual audio sample rate, of shape [B]. 
- Returns: - probs (torch.Tensor): Probabilities over labels, of shape [B, num_classes]. - """ - raise NotImplementedError("implement method to extract label distributions from the model.") - - def update(self, preds: torch.Tensor, targets: torch.Tensor, - sizes: torch.Tensor, sample_rates: torch.Tensor) -> None: - """Calculates running KL-Divergence loss between batches of audio - preds (generated) and target (ground-truth) - Args: - preds (torch.Tensor): Audio samples to evaluate, of shape [B, C, T]. - targets (torch.Tensor): Target samples to compare against, of shape [B, C, T]. - sizes (torch.Tensor): Actual audio sample length, of shape [B]. - sample_rates (torch.Tensor): Actual audio sample rate, of shape [B]. - """ - assert preds.shape == targets.shape - assert preds.size(0) > 0, "Cannot update the loss with empty tensors" - preds_probs = self._get_label_distribution(preds, sizes, sample_rates) - targets_probs = self._get_label_distribution(targets, sizes, sample_rates) - if preds_probs is not None and targets_probs is not None: - assert preds_probs.shape == targets_probs.shape - kld_scores = kl_divergence(preds_probs, targets_probs) - assert not torch.isnan(kld_scores).any(), "kld_scores contains NaN value(s)!" - self.kld_pq_sum += torch.sum(kld_scores) - kld_qp_scores = kl_divergence(targets_probs, preds_probs) - self.kld_qp_sum += torch.sum(kld_qp_scores) - self.weight += torch.tensor(kld_scores.size(0)) - - def compute(self) -> dict: - """Computes KL-Divergence across all evaluated pred/target pairs.""" - weight: float = float(self.weight.item()) # type: ignore - assert weight > 0, "Unable to compute with total number of comparisons <= 0" - logger.info(f"Computing KL divergence on a total of {weight} samples") - kld_pq = self.kld_pq_sum.item() / weight # type: ignore - kld_qp = self.kld_qp_sum.item() / weight # type: ignore - kld_both = kld_pq + kld_qp - return {'kld': kld_pq, 'kld_pq': kld_pq, 'kld_qp': kld_qp, 'kld_both': kld_both} - - -class PasstKLDivergenceMetric(KLDivergenceMetric): - """KL-Divergence metric based on pre-trained PASST classifier on AudioSet. - - From: PaSST: Efficient Training of Audio Transformers with Patchout - Paper: https://arxiv.org/abs/2110.05069 - Implementation: https://github.com/kkoutini/PaSST - - Follow instructions from the github repo: - ``` - pip install 'git+https://github.com/kkoutini/passt_hear21@0.0.19#egg=hear21passt' - ``` - - Args: - pretrained_length (float, optional): Audio duration used for the pretrained model. 
- """ - def __init__(self, pretrained_length: tp.Optional[float] = None): - super().__init__() - self._initialize_model(pretrained_length) - - def _initialize_model(self, pretrained_length: tp.Optional[float] = None): - """Initialize underlying PaSST audio classifier.""" - model, sr, max_frames, min_frames = self._load_base_model(pretrained_length) - self.min_input_frames = min_frames - self.max_input_frames = max_frames - self.model_sample_rate = sr - self.model = model - self.model.eval() - self.model.to(self.device) - - def _load_base_model(self, pretrained_length: tp.Optional[float]): - """Load pretrained model from PaSST.""" - try: - if pretrained_length == 30: - from hear21passt.base30sec import get_basic_model # type: ignore - max_duration = 30 - elif pretrained_length == 20: - from hear21passt.base20sec import get_basic_model # type: ignore - max_duration = 20 - else: - from hear21passt.base import get_basic_model # type: ignore - # Original PASST was trained on AudioSet with 10s-long audio samples - max_duration = 10 - min_duration = 0.15 - min_duration = 0.15 - except ModuleNotFoundError: - raise ModuleNotFoundError( - "Please install hear21passt to compute KL divergence: ", - "pip install 'git+https://github.com/kkoutini/passt_hear21@0.0.19#egg=hear21passt'" - ) - model_sample_rate = 32_000 - max_input_frames = int(max_duration * model_sample_rate) - min_input_frames = int(min_duration * model_sample_rate) - with open(os.devnull, 'w') as f, contextlib.redirect_stdout(f): - model = get_basic_model(mode='logits') - return model, model_sample_rate, max_input_frames, min_input_frames - - def _process_audio(self, wav: torch.Tensor, sample_rate: int, wav_len: int) -> tp.Optional[torch.Tensor]: - wav = wav.unsqueeze(0) - wav = wav[..., :wav_len] - wav = convert_audio(wav, from_rate=sample_rate, to_rate=self.model_sample_rate, to_channels=1) - wav = wav.squeeze(0) - # create chunks of audio to match the classifier processing length - segments = torch.split(wav, self.max_input_frames, dim=-1) - valid_segments = [] - for s in segments: - if s.size(-1) > self.min_input_frames: - s = torch.nn.functional.pad(s, (0, self.max_input_frames - s.shape[-1])) - valid_segments.append(s) - if len(valid_segments) > 0: - return torch.stack(valid_segments, dim=0) - else: - return None - - def _get_label_distribution(self, x: torch.Tensor, sizes: torch.Tensor, - sample_rates: torch.Tensor) -> tp.Optional[torch.Tensor]: - """Get model output given provided input tensor. - - Args: - x (torch.Tensor): Input audio tensor of shape [B, C, T]. - sizes (torch.Tensor): Actual audio sample length, of shape [B]. - sample_rates (torch.Tensor): Actual audio sample rate, of shape [B]. - Returns: - probs (torch.Tensor, optional): Probabilities over labels, of shape [B, num_classes]. 
- """ - all_probs: tp.List[torch.Tensor] = [] - for i, wav in enumerate(x): - sample_rate = int(sample_rates[i].item()) - wav_len = int(sizes[i].item()) - wav = self._process_audio(wav, sample_rate, wav_len) - if wav is not None: - assert wav.dim() == 3, f"Unexpected number of dims for preprocessed wav: {wav.shape}" - wav = wav.mean(dim=1) - # PaSST is printing a lot of infos that we are not interested in - with open(os.devnull, 'w') as f, contextlib.redirect_stdout(f): - with torch.no_grad(), _patch_passt_stft(): - logits = self.model(wav.to(self.device)) - probs = torch.softmax(logits, dim=-1) - probs = probs.mean(dim=0) - all_probs.append(probs) - if len(all_probs) > 0: - return torch.stack(all_probs, dim=0) - else: - return None diff --git a/spaces/tyoung560/ai-assist/app.py b/spaces/tyoung560/ai-assist/app.py deleted file mode 100644 index 7a391428c2fbfd9ea5a8072fd9e2a8781c37143f..0000000000000000000000000000000000000000 --- a/spaces/tyoung560/ai-assist/app.py +++ /dev/null @@ -1,66 +0,0 @@ -from llama_index import SimpleDirectoryReader, GPTSimpleVectorIndex, LLMPredictor, ServiceContext -from langchain.chat_models import ChatOpenAI -import gradio as gr -import os - -os.environ['OPENAI_API_KEY'] - -def construct_index(directory_path): - - llm_predictor = LLMPredictor(llm=ChatOpenAI(temperature=0.1, model_name="gpt-3.5-turbo", max_tokens=4096)) - - service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor) - - docs = SimpleDirectoryReader(directory_path).load_data() - - index = GPTSimpleVectorIndex.from_documents(docs, service_context=service_context) - - index.save_to_disk('index.json') - - return index - -def chatbot(input_text, chat_history): - index = GPTSimpleVectorIndex.load_from_disk('index.json') - prompt = "\n".join([f"User: {h[0]}\nSystem: {h[1]}" for h in chat_history]) - response = index.query(prompt + input_text, response_mode="compact") - print("Prompt:") - print(prompt) - return response.response - - -with gr.Blocks(theme=gr.themes.Base(),analytics_enabled=True) as demo: - gr.Markdown( - """ - > Tadabase AI Assist Bot is a chatbot powered by **GPT-3** - > - > - T.A.A.B can answer questions about Tadabase, it's features, and other relevant content. - > - This chatbot has a memory. It will remember prevous questions and responses within the chat session. - > - This chatbot **may elaborate** on it's answers with incorrect information. 
- > - > Be **natural**, be **specific**, and **have fun** :) - """) - chat_msg = gr.Chatbot(label="Tadabase AI Assist Bot") - msg = gr.Textbox(label="Start typing and press Enter to submit.") - clear = gr.Button("Start Over") - - def user(user_message, history): - print("Chat history before adding user message:") - print(history) - return "", history + [[user_message, None]] - - def bot(history): - bot_message = chatbot(history[-1][0], history) - history[-1][1] = bot_message - print("Chat history after generating bot response:") - print(history) - return history - - - msg.submit(user, [msg, chat_msg], [msg, chat_msg], queue=False).then( - bot, chat_msg, chat_msg - ) - clear.click(lambda: None, None, chat_msg, queue=False) - -index = construct_index("./docs") - -demo.launch() \ No newline at end of file diff --git a/spaces/ucalyptus/PTI/torch_utils/ops/bias_act.h b/spaces/ucalyptus/PTI/torch_utils/ops/bias_act.h deleted file mode 100644 index a32187e1fb7e3bae509d4eceaf900866866875a4..0000000000000000000000000000000000000000 --- a/spaces/ucalyptus/PTI/torch_utils/ops/bias_act.h +++ /dev/null @@ -1,38 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -//------------------------------------------------------------------------ -// CUDA kernel parameters. - -struct bias_act_kernel_params -{ - const void* x; // [sizeX] - const void* b; // [sizeB] or NULL - const void* xref; // [sizeX] or NULL - const void* yref; // [sizeX] or NULL - const void* dy; // [sizeX] or NULL - void* y; // [sizeX] - - int grad; - int act; - float alpha; - float gain; - float clamp; - - int sizeX; - int sizeB; - int stepB; - int loopX; -}; - -//------------------------------------------------------------------------ -// CUDA kernel selection. 
- -template void* choose_bias_act_kernel(const bias_act_kernel_params& p); - -//------------------------------------------------------------------------ diff --git a/spaces/unicorn345/bingo34778/Dockerfile b/spaces/unicorn345/bingo34778/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/unicorn345/bingo34778/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/unik-style/unik-ml/routers/__init__.py b/spaces/unik-style/unik-ml/routers/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/usbethFlerru/sovits-modelsV2/Motion Design School MAD VFX In After Effects UPDATED.md b/spaces/usbethFlerru/sovits-modelsV2/Motion Design School MAD VFX In After Effects UPDATED.md deleted file mode 100644 index 0c764834bcd614f2a932fc891342cc732c5f3ad2..0000000000000000000000000000000000000000 --- a/spaces/usbethFlerru/sovits-modelsV2/Motion Design School MAD VFX In After Effects UPDATED.md +++ /dev/null @@ -1,128 +0,0 @@ -## Motion Design School MAD VFX in After Effects UPDATED - - - - - - - - - -**Download File >>>>> [https://searchdisvipas.blogspot.com/?download=2txnO1](https://searchdisvipas.blogspot.com/?download=2txnO1)** - - - - - - - - - - - - Here is a possible title and article for the keyword "Motion Design School MAD VFX in After Effects UPDATED": - -# Motion Design School MAD VFX in After Effects UPDATED: Learn How to Create Stunning Visual Effects for Your Videos - - - -If you want to take your video editing skills to the next level, you need to master the art of visual effects. Visual effects, or VFX, are the techniques and tools that allow you to manipulate and enhance your footage with realistic or stylized elements, such as explosions, fire, smoke, magic, sci-fi, and more. - - - -But how do you learn VFX? Where do you start? What software do you need? - - - -The answer is Motion Design School MAD VFX in After Effects UPDATED. This is an online course that teaches you everything you need to know about creating amazing VFX in Adobe After Effects, the industry-standard software for motion graphics and compositing. - - - -In this course, you will learn from experienced instructors who have worked on Hollywood movies and TV shows, such as Avengers: Endgame, Stranger Things, The Witcher, and more. You will follow along with practical projects and exercises that will help you apply what you learn to your own videos. You will also get access to exclusive assets and resources that will make your work easier and faster. - - - -Some of the topics covered in this course are: - - - -- How to use basic and advanced tools and effects in After Effects - -- How to track and stabilize your footage - -- How to create realistic explosions, fire, smoke, and debris - -- How to use 3D elements and cameras in After Effects - -- How to create sci-fi and fantasy effects, such as portals, holograms, lasers, and magic - -- How to color grade and enhance your VFX shots - -- How to render and export your final video - - - -This course is suitable for beginners and intermediate users of After Effects who want to improve their VFX skills. You don't need any prior experience or knowledge of VFX to enroll in this course. All you need is a computer with After Effects installed and a passion for learning. 
- - - -Motion Design School MAD VFX in After Effects UPDATED is the ultimate course for anyone who wants to create stunning visual effects for their videos. Whether you are a filmmaker, a videographer, a YouTube creator, a social media influencer, or just a hobbyist, this course will help you take your videos to the next level. - - - -Don't miss this opportunity to learn from the best and join a community of thousands of students who have already enrolled in this course. Enroll today and get ready to unleash your creativity with Motion Design School MAD VFX in After Effects UPDATED. - -Here are some more paragraphs for the article: - -What are the benefits of learning VFX in After Effects? - - - -After Effects is one of the most popular and powerful software for creating motion graphics and visual effects. It is used by professionals and amateurs alike for a variety of projects, such as films, TV shows, commercials, music videos, animations, and more. - - - -Learning VFX in After Effects will give you many advantages, such as: - - - -- You will be able to create stunning and realistic effects that will make your videos stand out from the crowd. - -- You will be able to express your creativity and imagination in new and exciting ways. - -- You will be able to enhance your storytelling and communication skills by adding visual elements that support your message. - -- You will be able to increase your value and marketability as a video editor or creator. - -- You will be able to save time and money by doing VFX yourself instead of hiring someone else or using stock footage. - - - -How is Motion Design School MAD VFX in After Effects UPDATED different from other VFX courses? - - - -There are many VFX courses available online, but Motion Design School MAD VFX in After Effects UPDATED is unique and superior in many ways, such as: - - - -- It is updated regularly with new content and features to keep up with the latest trends and technologies in VFX. - -- It is taught by experienced and qualified instructors who have worked on some of the biggest and most successful VFX projects in the world. - -- It is based on real-world scenarios and examples that will help you apply what you learn to your own videos. - -- It is comprehensive and covers all the aspects of VFX in After Effects, from the basics to the advanced techniques. - -- It is interactive and engaging, with quizzes, assignments, feedback, and support from the instructors and the community. - - - -What are you waiting for? Enroll in Motion Design School MAD VFX in After Effects UPDATED today and start creating amazing VFX for your videos. You won't regret it! - - dfd1c89656 - - - - - diff --git a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/hub/session.md b/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/hub/session.md deleted file mode 100644 index 8fe82c4f20cc86a45dd2fa1e672779616ab428f7..0000000000000000000000000000000000000000 --- a/spaces/vaishanthr/Simultaneous-Segmented-Depth-Prediction/yolov8/docs/reference/hub/session.md +++ /dev/null @@ -1,9 +0,0 @@ ---- -description: Accelerate your AI development with the Ultralytics HUB Training Session. High-performance training of object detection models. -keywords: YOLOv5, object detection, HUBTrainingSession, custom models, Ultralytics Docs ---- - -## HUBTrainingSession ---- -### ::: ultralytics.hub.session.HUBTrainingSession -

    diff --git a/spaces/videfikri/aicover/uvr5_pack/lib_v5/layers_537227KB.py b/spaces/videfikri/aicover/uvr5_pack/lib_v5/layers_537227KB.py deleted file mode 100644 index 78e539250075d7fed2f349d05e3317dfe2c96804..0000000000000000000000000000000000000000 --- a/spaces/videfikri/aicover/uvr5_pack/lib_v5/layers_537227KB.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from uvr5_pack.lib_v5 import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv6 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv7 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - feat6 = self.conv6(x) - feat7 = self.conv7(x) - out = 
torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/vishnu0001/text2mesh/shap_e/models/nn/pointnet2_utils.py b/spaces/vishnu0001/text2mesh/shap_e/models/nn/pointnet2_utils.py deleted file mode 100644 index 73c63cfb18b2cc6543f9805db74ca8e26d90e4e1..0000000000000000000000000000000000000000 --- a/spaces/vishnu0001/text2mesh/shap_e/models/nn/pointnet2_utils.py +++ /dev/null @@ -1,370 +0,0 @@ -""" -Based on https://github.com/yanx27/Pointnet_Pointnet2_pytorch/blob/master/models/pointnet2_utils.py - -MIT License - -Copyright (c) 2019 benny - -Permission is hereby granted, free of charge, to any person obtaining a copy -of this software and associated documentation files (the "Software"), to deal -in the Software without restriction, including without limitation the rights -to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -copies of the Software, and to permit persons to whom the Software is -furnished to do so, subject to the following conditions: - -The above copyright notice and this permission notice shall be included in all -copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -SOFTWARE. -""" - -from time import time - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F - - -def timeit(tag, t): - print("{}: {}s".format(tag, time() - t)) - return time() - - -def pc_normalize(pc): - l = pc.shape[0] - centroid = np.mean(pc, axis=0) - pc = pc - centroid - m = np.max(np.sqrt(np.sum(pc**2, axis=1))) - pc = pc / m - return pc - - -def square_distance(src, dst): - """ - Calculate Euclid distance between each two points. 
- - src^T * dst = xn * xm + yn * ym + zn * zm; - sum(src^2, dim=-1) = xn*xn + yn*yn + zn*zn; - sum(dst^2, dim=-1) = xm*xm + ym*ym + zm*zm; - dist = (xn - xm)^2 + (yn - ym)^2 + (zn - zm)^2 - = sum(src**2,dim=-1)+sum(dst**2,dim=-1)-2*src^T*dst - - Input: - src: source points, [B, N, C] - dst: target points, [B, M, C] - Output: - dist: per-point square distance, [B, N, M] - """ - B, N, _ = src.shape - _, M, _ = dst.shape - dist = -2 * torch.matmul(src, dst.permute(0, 2, 1)) - dist += torch.sum(src**2, -1).view(B, N, 1) - dist += torch.sum(dst**2, -1).view(B, 1, M) - return dist - - -def index_points(points, idx): - """ - - Input: - points: input points data, [B, N, C] - idx: sample index data, [B, S] - Return: - new_points:, indexed points data, [B, S, C] - """ - device = points.device - B = points.shape[0] - view_shape = list(idx.shape) - view_shape[1:] = [1] * (len(view_shape) - 1) - repeat_shape = list(idx.shape) - repeat_shape[0] = 1 - batch_indices = ( - torch.arange(B, dtype=torch.long).to(device).view(view_shape).repeat(repeat_shape) - ) - new_points = points[batch_indices, idx, :] - return new_points - - -def farthest_point_sample(xyz, npoint, deterministic=False): - """ - Input: - xyz: pointcloud data, [B, N, 3] - npoint: number of samples - Return: - centroids: sampled pointcloud index, [B, npoint] - """ - device = xyz.device - B, N, C = xyz.shape - centroids = torch.zeros(B, npoint, dtype=torch.long).to(device) - distance = torch.ones(B, N).to(device) * 1e10 - if deterministic: - farthest = torch.arange(0, B, dtype=torch.long).to(device) - else: - farthest = torch.randint(0, N, (B,), dtype=torch.long).to(device) - batch_indices = torch.arange(B, dtype=torch.long).to(device) - for i in range(npoint): - centroids[:, i] = farthest - centroid = xyz[batch_indices, farthest, :].view(B, 1, 3) - dist = torch.sum((xyz - centroid) ** 2, -1) - mask = dist < distance - distance[mask] = dist[mask] - farthest = torch.max(distance, -1)[1] - return centroids - - -def query_ball_point(radius, nsample, xyz, new_xyz): - """ - Input: - radius: local region radius - nsample: max sample number in local region - xyz: all points, [B, N, 3] - new_xyz: query points, [B, S, 3] - Return: - group_idx: grouped points index, [B, S, nsample] - """ - device = xyz.device - B, N, C = xyz.shape - _, S, _ = new_xyz.shape - group_idx = torch.arange(N, dtype=torch.long).to(device).view(1, 1, N).repeat([B, S, 1]) - sqrdists = square_distance(new_xyz, xyz) - group_idx[sqrdists > radius**2] = N - group_idx = group_idx.sort(dim=-1)[0][:, :, :nsample] - group_first = group_idx[:, :, 0].view(B, S, 1).repeat([1, 1, nsample]) - mask = group_idx == N - group_idx[mask] = group_first[mask] - return group_idx - - -def sample_and_group( - npoint, - radius, - nsample, - xyz, - points, - returnfps=False, - deterministic=False, - fps_method: str = "fps", -): - """ - Input: - npoint: - radius: - nsample: - xyz: input points position data, [B, N, 3] - points: input points data, [B, N, D] - Return: - new_xyz: sampled points position data, [B, npoint, nsample, 3] - new_points: sampled points data, [B, npoint, nsample, 3+D] - """ - B, N, C = xyz.shape - S = npoint - if fps_method == "fps": - fps_idx = farthest_point_sample(xyz, npoint, deterministic=deterministic) # [B, npoint, C] - elif fps_method == "first": - fps_idx = torch.arange(npoint)[None].repeat(B, 1) - else: - raise ValueError(f"Unknown FPS method: {fps_method}") - new_xyz = index_points(xyz, fps_idx) - idx = query_ball_point(radius, nsample, xyz, new_xyz) - grouped_xyz = 
index_points(xyz, idx) # [B, npoint, nsample, C] - grouped_xyz_norm = grouped_xyz - new_xyz.view(B, S, 1, C) - - if points is not None: - grouped_points = index_points(points, idx) - new_points = torch.cat( - [grouped_xyz_norm, grouped_points], dim=-1 - ) # [B, npoint, nsample, C+D] - else: - new_points = grouped_xyz_norm - if returnfps: - return new_xyz, new_points, grouped_xyz, fps_idx - else: - return new_xyz, new_points - - -def sample_and_group_all(xyz, points): - """ - Input: - xyz: input points position data, [B, N, 3] - points: input points data, [B, N, D] - Return: - new_xyz: sampled points position data, [B, 1, 3] - new_points: sampled points data, [B, 1, N, 3+D] - """ - device = xyz.device - B, N, C = xyz.shape - new_xyz = torch.zeros(B, 1, C).to(device) - grouped_xyz = xyz.view(B, 1, N, C) - if points is not None: - new_points = torch.cat([grouped_xyz, points.view(B, 1, N, -1)], dim=-1) - else: - new_points = grouped_xyz - return new_xyz, new_points - - -class PointNetSetAbstraction(nn.Module): - def __init__(self, npoint, radius, nsample, in_channel, mlp, group_all): - super(PointNetSetAbstraction, self).__init__() - self.npoint = npoint - self.radius = radius - self.nsample = nsample - self.mlp_convs = nn.ModuleList() - self.mlp_bns = nn.ModuleList() - last_channel = in_channel - for out_channel in mlp: - self.mlp_convs.append(nn.Conv2d(last_channel, out_channel, 1)) - self.mlp_bns.append(nn.BatchNorm2d(out_channel)) - last_channel = out_channel - self.group_all = group_all - - def forward(self, xyz, points): - """ - Input: - xyz: input points position data, [B, C, N] - points: input points data, [B, D, N] - Return: - new_xyz: sampled points position data, [B, C, S] - new_points_concat: sample points feature data, [B, D', S] - """ - xyz = xyz.permute(0, 2, 1) - if points is not None: - points = points.permute(0, 2, 1) - - if self.group_all: - new_xyz, new_points = sample_and_group_all(xyz, points) - else: - new_xyz, new_points = sample_and_group( - self.npoint, self.radius, self.nsample, xyz, points, deterministic=not self.training - ) - # new_xyz: sampled points position data, [B, npoint, C] - # new_points: sampled points data, [B, npoint, nsample, C+D] - new_points = new_points.permute(0, 3, 2, 1) # [B, C+D, nsample,npoint] - for i, conv in enumerate(self.mlp_convs): - bn = self.mlp_bns[i] - new_points = F.relu(bn(conv(new_points))) - - new_points = torch.max(new_points, 2)[0] - new_xyz = new_xyz.permute(0, 2, 1) - return new_xyz, new_points - - -class PointNetSetAbstractionMsg(nn.Module): - def __init__(self, npoint, radius_list, nsample_list, in_channel, mlp_list): - super(PointNetSetAbstractionMsg, self).__init__() - self.npoint = npoint - self.radius_list = radius_list - self.nsample_list = nsample_list - self.conv_blocks = nn.ModuleList() - self.bn_blocks = nn.ModuleList() - for i in range(len(mlp_list)): - convs = nn.ModuleList() - bns = nn.ModuleList() - last_channel = in_channel + 3 - for out_channel in mlp_list[i]: - convs.append(nn.Conv2d(last_channel, out_channel, 1)) - bns.append(nn.BatchNorm2d(out_channel)) - last_channel = out_channel - self.conv_blocks.append(convs) - self.bn_blocks.append(bns) - - def forward(self, xyz, points): - """ - Input: - xyz: input points position data, [B, C, N] - points: input points data, [B, D, N] - Return: - new_xyz: sampled points position data, [B, C, S] - new_points_concat: sample points feature data, [B, D', S] - """ - xyz = xyz.permute(0, 2, 1) - if points is not None: - points = points.permute(0, 2, 1) - - B, N, C = 
xyz.shape - S = self.npoint - new_xyz = index_points(xyz, farthest_point_sample(xyz, S, deterministic=not self.training)) - new_points_list = [] - for i, radius in enumerate(self.radius_list): - K = self.nsample_list[i] - group_idx = query_ball_point(radius, K, xyz, new_xyz) - grouped_xyz = index_points(xyz, group_idx) - grouped_xyz -= new_xyz.view(B, S, 1, C) - if points is not None: - grouped_points = index_points(points, group_idx) - grouped_points = torch.cat([grouped_points, grouped_xyz], dim=-1) - else: - grouped_points = grouped_xyz - - grouped_points = grouped_points.permute(0, 3, 2, 1) # [B, D, K, S] - for j in range(len(self.conv_blocks[i])): - conv = self.conv_blocks[i][j] - bn = self.bn_blocks[i][j] - grouped_points = F.relu(bn(conv(grouped_points))) - new_points = torch.max(grouped_points, 2)[0] # [B, D', S] - new_points_list.append(new_points) - - new_xyz = new_xyz.permute(0, 2, 1) - new_points_concat = torch.cat(new_points_list, dim=1) - return new_xyz, new_points_concat - - -class PointNetFeaturePropagation(nn.Module): - def __init__(self, in_channel, mlp): - super(PointNetFeaturePropagation, self).__init__() - self.mlp_convs = nn.ModuleList() - self.mlp_bns = nn.ModuleList() - last_channel = in_channel - for out_channel in mlp: - self.mlp_convs.append(nn.Conv1d(last_channel, out_channel, 1)) - self.mlp_bns.append(nn.BatchNorm1d(out_channel)) - last_channel = out_channel - - def forward(self, xyz1, xyz2, points1, points2): - """ - Input: - xyz1: input points position data, [B, C, N] - xyz2: sampled input points position data, [B, C, S] - points1: input points data, [B, D, N] - points2: input points data, [B, D, S] - Return: - new_points: upsampled points data, [B, D', N] - """ - xyz1 = xyz1.permute(0, 2, 1) - xyz2 = xyz2.permute(0, 2, 1) - - points2 = points2.permute(0, 2, 1) - B, N, C = xyz1.shape - _, S, _ = xyz2.shape - - if S == 1: - interpolated_points = points2.repeat(1, N, 1) - else: - dists = square_distance(xyz1, xyz2) - dists, idx = dists.sort(dim=-1) - dists, idx = dists[:, :, :3], idx[:, :, :3] # [B, N, 3] - - dist_recip = 1.0 / (dists + 1e-8) - norm = torch.sum(dist_recip, dim=2, keepdim=True) - weight = dist_recip / norm - interpolated_points = torch.sum( - index_points(points2, idx) * weight.view(B, N, 3, 1), dim=2 - ) - - if points1 is not None: - points1 = points1.permute(0, 2, 1) - new_points = torch.cat([points1, interpolated_points], dim=-1) - else: - new_points = interpolated_points - - new_points = new_points.permute(0, 2, 1) - for i, conv in enumerate(self.mlp_convs): - bn = self.mlp_bns[i] - new_points = F.relu(bn(conv(new_points))) - return new_points diff --git a/spaces/vivien/clip-slip/models.py b/spaces/vivien/clip-slip/models.py deleted file mode 100644 index a08238422b2076280ed11d569ff1981312ebedd7..0000000000000000000000000000000000000000 --- a/spaces/vivien/clip-slip/models.py +++ /dev/null @@ -1,331 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. - -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -# Modified from github.com/openai/CLIP -from collections import OrderedDict - -import numpy as np -import timm -import torch -from torch import nn - -import losses - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__(self, d_model: int, n_head: int, attn_mask: torch.Tensor = None): - super().__init__() - - self.attn = nn.MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp = nn.Sequential(OrderedDict([ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)) - ])) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - - def attention(self, x: torch.Tensor): - self.attn_mask = self.attn_mask.to(dtype=x.dtype, device=x.device) if self.attn_mask is not None else None - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def forward(self, x: torch.Tensor): - x = x + self.attention(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__(self, width: int, layers: int, heads: int, attn_mask: torch.Tensor = None): - super().__init__() - self.width = width - self.layers = layers - self.resblocks = nn.Sequential(*[ResidualAttentionBlock(width, heads, attn_mask) for _ in range(layers)]) - - def forward(self, x: torch.Tensor): - return self.resblocks(x) - - -class CLIP(nn.Module): - def __init__(self, - embed_dim: int, - # vision - vision_width: int, - vision_model: nn.Module, - # text - context_length: int, - vocab_size: int, - transformer_width: int, - transformer_heads: int, - transformer_layers: int, - **kwargs, - ): - super().__init__() - - self.context_length = context_length - self.vision_width = vision_width - - self.visual = vision_model - - self.transformer = Transformer( - width=transformer_width, - layers=transformer_layers, - heads=transformer_heads, - attn_mask=self.build_attention_mask(), - ) - - self.vocab_size = vocab_size - self.token_embedding = nn.Embedding(vocab_size, transformer_width) - self.positional_embedding = nn.Parameter(torch.empty(self.context_length, transformer_width)) - self.ln_final = LayerNorm(transformer_width) - - self.image_projection = nn.Parameter(torch.empty(vision_width, embed_dim)) - self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim)) - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - - self.initialize_parameters() - - def initialize_parameters(self): - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.positional_embedding, std=0.01) - - proj_std = (self.transformer.width ** -0.5) * ((2 * self.transformer.layers) ** -0.5) - attn_std = self.transformer.width ** -0.5 - fc_std = (2 * self.transformer.width) ** -0.5 - for block in self.transformer.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - - nn.init.normal_(self.image_projection, std=self.vision_width ** -0.5) - nn.init.normal_(self.text_projection, std=self.transformer.width ** -0.5) - - def build_attention_mask(self): - # lazily 
create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - def encode_image(self, image): - x = self.visual(image) - x = x @ self.image_projection - - return x - - def encode_text(self, text): - x = self.token_embedding(text) # [batch_size, n_ctx, d_model] - x = x + self.positional_embedding - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - - return x - - def forward(self, image, text): - image_embed = self.encode_image(image) - text_embed = self.encode_text(text) - - return {'image_embed': image_embed, - 'text_embed': text_embed, - 'logit_scale': self.logit_scale.exp()} - - -class SIMCLR(nn.Module): - def __init__(self, - # vision - vision_width: int, - vision_model: nn.Module, - # ssl - ssl_mlp_dim: int, - ssl_emb_dim: int, - **kwargs, - ): - super().__init__() - - self.vision_width = vision_width - self.visual = vision_model - - self.image_mlp = self._build_mlp(in_dim=vision_width, mlp_dim=ssl_mlp_dim, out_dim=ssl_emb_dim) - - def _build_mlp(self, in_dim, mlp_dim, out_dim): - return nn.Sequential(OrderedDict([ - ("layer1", nn.Linear(in_dim, mlp_dim)), - ("bn1", nn.SyncBatchNorm(mlp_dim)), - ("relu1", nn.ReLU(inplace=True)), - ("layer2", nn.Linear(mlp_dim, mlp_dim)), - ("bn2", nn.SyncBatchNorm(mlp_dim)), - ("relu2", nn.ReLU(inplace=True)), - ("layer3", nn.Linear(mlp_dim, out_dim)), - ])) - - def encode_image(self, image): - x = self.visual(image) - - return x - - def forward(self, aug1, aug2): - h1 = self.visual(aug1) - h2 = self.visual(aug2) - - aug1_embed = self.image_mlp(h1) - aug2_embed = self.image_mlp(h2) - - return {'aug1_embed': aug1_embed, - 'aug2_embed': aug2_embed} - - -class SLIP(CLIP): - def __init__(self, - ssl_mlp_dim: int, - ssl_emb_dim: int, - **kwargs, - ): - super().__init__(**kwargs) - - self.image_mlp = self._build_mlp(in_dim=self.vision_width, mlp_dim=ssl_mlp_dim, out_dim=ssl_emb_dim) - - def _build_mlp(self, in_dim, mlp_dim, out_dim): - return nn.Sequential(OrderedDict([ - ("layer1", nn.Linear(in_dim, mlp_dim)), - ("bn1", nn.SyncBatchNorm(mlp_dim)), - ("relu1", nn.ReLU(inplace=True)), - ("layer2", nn.Linear(mlp_dim, mlp_dim)), - ("bn2", nn.SyncBatchNorm(mlp_dim)), - ("relu2", nn.ReLU(inplace=True)), - ("layer3", nn.Linear(mlp_dim, out_dim)), - ])) - - def forward(self, image, text, aug1, aug2): - aug1_embed = self.image_mlp(self.visual(aug1)) - aug2_embed = self.image_mlp(self.visual(aug2)) - - image_embed = self.encode_image(image) - text_embed = self.encode_text(text) - - return {'image_embed': image_embed, - 'text_embed': text_embed, - 'logit_scale': self.logit_scale.exp(), - 'aug1_embed': aug1_embed, - 'aug2_embed': aug2_embed} - - -def get_loss(model, ssl_temp, ssl_scale): - if model.startswith('SLIP'): - ssl_loss = losses.SIMCLRLoss(temperature=ssl_temp) - return losses.SLIPLoss(ssl_loss, ssl_scale) - if model.startswith('CLIP'): - return losses.CLIPLoss() - if model.startswith('SIMCLR'): - return losses.SIMCLRLoss(temperature=ssl_temp) - - -def get_metric_names(model): - if model.startswith('SLIP'): - return ['loss', 'clip_loss', 
'ssl_loss', 'clip_acc', 'ssl_acc'] - elif model.startswith('CLIP'): - return ['loss', 'clip_loss', 'clip_acc'] - else: - return ['loss', 'ssl_loss', 'ssl_acc'] - - -@timm.models.registry.register_model -def vit_small_mocov3_patch16_224(**kwargs): - model_kwargs = dict(patch_size=16, embed_dim=384, depth=12, num_heads=12, **kwargs) - model = timm.models.vision_transformer._create_vision_transformer('vit_small_patch16_224', **model_kwargs) - - return model - - -def CLIP_VITS16(**kwargs): - vision_model = timm.create_model('vit_small_mocov3_patch16_224', num_classes=0) - model = CLIP(embed_dim=512, vision_width=384, vision_model=vision_model, context_length=77, vocab_size=49408, - transformer_width=512, transformer_heads=8, transformer_layers=12, **kwargs) - - return model - - -def SIMCLR_VITS16(**kwargs): - vision_model = timm.create_model('vit_small_mocov3_patch16_224', num_classes=0) - model = SIMCLR(vision_width=384, vision_model=vision_model, **kwargs) - - return model - - -def SLIP_VITS16(**kwargs): - vision_model = timm.create_model('vit_small_mocov3_patch16_224', num_classes=0) - model = SLIP(embed_dim=512, vision_width=384, vision_model=vision_model, context_length=77, vocab_size=49408, - transformer_width=512, transformer_heads=8, transformer_layers=12, **kwargs) - - return model - - -def CLIP_VITB16(**kwargs): - vision_model = timm.create_model('vit_base_patch16_224', num_classes=0) - model = CLIP(embed_dim=512, vision_width=768, vision_model=vision_model, context_length=77, vocab_size=49408, - transformer_width=512, transformer_heads=8, transformer_layers=12, **kwargs) - - return model - - -def SIMCLR_VITB16(**kwargs): - vision_model = timm.create_model('vit_base_patch16_224', num_classes=0) - model = SIMCLR(vision_width=768, vision_model=vision_model, **kwargs) - - return model - - -def SLIP_VITB16(**kwargs): - vision_model = timm.create_model('vit_base_patch16_224', num_classes=0) - model = SLIP(embed_dim=512, vision_width=768, vision_model=vision_model, context_length=77, vocab_size=49408, - transformer_width=512, transformer_heads=8, transformer_layers=12, **kwargs) - - return model - - -def CLIP_VITL16(**kwargs): - vision_model = timm.create_model('vit_large_patch16_224', num_classes=0) - model = CLIP(embed_dim=512, vision_width=1024, vision_model=vision_model, context_length=77, vocab_size=49408, - transformer_width=512, transformer_heads=8, transformer_layers=12, **kwargs) - - return model - - -def SIMCLR_VITL16(**kwargs): - vision_model = timm.create_model('vit_large_patch16_224', num_classes=0) - model = SIMCLR(vision_width=1024, vision_model=vision_model, **kwargs) - - return model - - -def SLIP_VITL16(**kwargs): - vision_model = timm.create_model('vit_large_patch16_224', num_classes=0) - model = SLIP(embed_dim=512, vision_width=1024, vision_model=vision_model, context_length=77, vocab_size=49408, - transformer_width=512, transformer_heads=8, transformer_layers=12, **kwargs) - - return model diff --git a/spaces/vrajeshbhatt/Automated-Ticket-Management-System/templates/prediction.html b/spaces/vrajeshbhatt/Automated-Ticket-Management-System/templates/prediction.html deleted file mode 100644 index 1eaddfd1ef00698df30b967fca1667c0bae0eb2b..0000000000000000000000000000000000000000 --- a/spaces/vrajeshbhatt/Automated-Ticket-Management-System/templates/prediction.html +++ /dev/null @@ -1,586 +0,0 @@ - - - - - - - - Prediction - - - - - - - - - - - - - - - - - - - - -
    - - - - - - - - - - - - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/weide/ChuanhuChatGPT2/modules/llama_func.py b/spaces/weide/ChuanhuChatGPT2/modules/llama_func.py deleted file mode 100644 index 9f4f799882b4e7c34aa8df815ebeb90ed822ba46..0000000000000000000000000000000000000000 --- a/spaces/weide/ChuanhuChatGPT2/modules/llama_func.py +++ /dev/null @@ -1,137 +0,0 @@ -import os -import logging - -from llama_index import download_loader -from llama_index import ( - Document, - LLMPredictor, - PromptHelper, - QuestionAnswerPrompt, - RefinePrompt, -) -import colorama -import PyPDF2 -from tqdm import tqdm - -from modules.presets import * -from modules.utils import * - -def get_index_name(file_src): - file_paths = [x.name for x in file_src] - file_paths.sort(key=lambda x: os.path.basename(x)) - - md5_hash = hashlib.md5() - for file_path in file_paths: - with open(file_path, "rb") as f: - while chunk := f.read(8192): - md5_hash.update(chunk) - - return md5_hash.hexdigest() - -def block_split(text): - blocks = [] - while len(text) > 0: - blocks.append(Document(text[:1000])) - text = text[1000:] - return blocks - -def get_documents(file_src): - documents = [] - logging.debug("Loading documents...") - logging.debug(f"file_src: {file_src}") - for file in file_src: - filepath = file.name - filename = os.path.basename(filepath) - file_type = os.path.splitext(filepath)[1] - logging.info(f"loading file: {filename}") - if file_type == ".pdf": - logging.debug("Loading PDF...") - try: - from modules.pdf_func import parse_pdf - from modules.config import advance_docs - two_column = advance_docs["pdf"].get("two_column", False) - pdftext = parse_pdf(filepath, two_column).text - except: - pdftext = "" - with open(filepath, 'rb') as pdfFileObj: - pdfReader = PyPDF2.PdfReader(pdfFileObj) - for page in tqdm(pdfReader.pages): - pdftext += page.extract_text() - text_raw = pdftext - elif file_type == ".docx": - logging.debug("Loading Word...") - DocxReader = download_loader("DocxReader") - loader = DocxReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".epub": - logging.debug("Loading EPUB...") - EpubReader = download_loader("EpubReader") - loader = EpubReader() - text_raw = loader.load_data(file=filepath)[0].text - elif file_type == ".xlsx": - logging.debug("Loading Excel...") - text_raw = excel_to_string(filepath) - else: - logging.debug("Loading text file...") - with open(filepath, "r", encoding="utf-8") as f: - text_raw = f.read() - text = add_space(text_raw) - # text = block_split(text) - # documents += text - documents += [Document(text)] - logging.debug("Documents loaded.") - return documents - - -def construct_index( - api_key, - file_src, - max_input_size=4096, - num_outputs=5, - max_chunk_overlap=20, - chunk_size_limit=600, - embedding_limit=None, - separator=" " -): - from langchain.chat_models import ChatOpenAI - from llama_index import GPTSimpleVectorIndex, ServiceContext - - os.environ["OPENAI_API_KEY"] = api_key - chunk_size_limit = None if chunk_size_limit == 0 else chunk_size_limit - embedding_limit = None if embedding_limit == 0 else embedding_limit - separator = " " if separator == "" else separator - - llm_predictor = LLMPredictor( - llm=ChatOpenAI(model_name="gpt-3.5-turbo-0301", openai_api_key=api_key) - ) - prompt_helper = PromptHelper(max_input_size = max_input_size, num_output = num_outputs, max_chunk_overlap = max_chunk_overlap, embedding_limit=embedding_limit, chunk_size_limit=600, separator=separator) - index_name = 
get_index_name(file_src) - if os.path.exists(f"./index/{index_name}.json"): - logging.info("找到了缓存的索引文件,加载中……") - return GPTSimpleVectorIndex.load_from_disk(f"./index/{index_name}.json") - else: - try: - documents = get_documents(file_src) - logging.info("构建索引中……") - with retrieve_proxy(): - service_context = ServiceContext.from_defaults(llm_predictor=llm_predictor, prompt_helper=prompt_helper, chunk_size_limit=chunk_size_limit) - index = GPTSimpleVectorIndex.from_documents( - documents, service_context=service_context - ) - logging.debug("索引构建完成!") - os.makedirs("./index", exist_ok=True) - index.save_to_disk(f"./index/{index_name}.json") - logging.debug("索引已保存至本地!") - return index - - except Exception as e: - logging.error("索引构建失败!", e) - print(e) - return None - - -def add_space(text): - punctuations = {",": ", ", "。": "。 ", "?": "? ", "!": "! ", ":": ": ", ";": "; "} - for cn_punc, en_punc in punctuations.items(): - text = text.replace(cn_punc, en_punc) - return text diff --git a/spaces/weishao2019/ChuanhuChatGPT/custom.css b/spaces/weishao2019/ChuanhuChatGPT/custom.css deleted file mode 100644 index 97a1c2e681f4cc09e2237a92b37ab6cadd545a71..0000000000000000000000000000000000000000 --- a/spaces/weishao2019/ChuanhuChatGPT/custom.css +++ /dev/null @@ -1,184 +0,0 @@ -:root { - --chatbot-color-light: #F3F3F3; - --chatbot-color-dark: #121111; -} - -/* status_display */ -#status_display { - display: flex; - min-height: 2.5em; - align-items: flex-end; - justify-content: flex-end; -} -#status_display p { - font-size: .85em; - font-family: monospace; - color: var(--body-text-color-subdued); -} - -#chuanhu_chatbot, #status_display { - transition: all 0.6s; -} - -ol, ul { - list-style-position: inside; - padding-left: 0; -} - -ol li, ul:not(.options) li { - padding-left: 1.5em; - text-indent: -1.5em; -} - -/* 亮色 */ -@media (prefers-color-scheme: light) { - #chuanhu_chatbot { - background-color: var(--chatbot-color-light) !important; - } - [data-testid = "bot"] { - background-color: #FFFFFF !important; - } - [data-testid = "user"] { - background-color: #95EC69 !important; - } -} -/* 暗色 */ -@media (prefers-color-scheme: dark) { - #chuanhu_chatbot { - background-color: var(--chatbot-color-dark) !important; - } - [data-testid = "bot"] { - background-color: #2C2C2C !important; - } - [data-testid = "user"] { - background-color: #26B561 !important; - } - body { - background-color: var(--neutral-950) !important; - } -} - -/* 对话气泡 */ -[class *= "message"] { - border-radius: var(--radius-xl) !important; - border: none; - padding: var(--spacing-xl) !important; - font-size: var(--text-md) !important; - line-height: var(--line-md) !important; -} -[data-testid = "bot"] { - max-width: 85%; - border-bottom-left-radius: 0 !important; -} -[data-testid = "user"] { - max-width: 85%; - width: auto !important; - border-bottom-right-radius: 0 !important; -} -/* 表格 */ -table { - margin: 1em 0; - border-collapse: collapse; - empty-cells: show; -} -td,th { - border: 1.2px solid var(--border-color-primary) !important; - padding: 0.2em; -} -thead { - background-color: rgba(175,184,193,0.2); -} -thead th { - padding: .5em .2em; -} -/* 行内代码 */ -code { - display: inline; - white-space: break-spaces; - border-radius: 6px; - margin: 0 2px 0 2px; - padding: .2em .4em .1em .4em; - background-color: rgba(175,184,193,0.2); -} -/* 代码块 */ -pre code { - display: block; - overflow: auto; - white-space: pre; - background-color: hsla(0, 0%, 0%, 80%)!important; - border-radius: 10px; - padding: 1rem 1.2rem 1rem; - margin: 1.2em 2em 1.2em 
0.5em; - color: #FFF; - box-shadow: 6px 6px 16px hsla(0, 0%, 0%, 0.2); -} -/* 代码高亮样式 */ -.highlight .hll { background-color: #49483e } -.highlight .c { color: #75715e } /* Comment */ -.highlight .err { color: #960050; background-color: #1e0010 } /* Error */ -.highlight .k { color: #66d9ef } /* Keyword */ -.highlight .l { color: #ae81ff } /* Literal */ -.highlight .n { color: #f8f8f2 } /* Name */ -.highlight .o { color: #f92672 } /* Operator */ -.highlight .p { color: #f8f8f2 } /* Punctuation */ -.highlight .ch { color: #75715e } /* Comment.Hashbang */ -.highlight .cm { color: #75715e } /* Comment.Multiline */ -.highlight .cp { color: #75715e } /* Comment.Preproc */ -.highlight .cpf { color: #75715e } /* Comment.PreprocFile */ -.highlight .c1 { color: #75715e } /* Comment.Single */ -.highlight .cs { color: #75715e } /* Comment.Special */ -.highlight .gd { color: #f92672 } /* Generic.Deleted */ -.highlight .ge { font-style: italic } /* Generic.Emph */ -.highlight .gi { color: #a6e22e } /* Generic.Inserted */ -.highlight .gs { font-weight: bold } /* Generic.Strong */ -.highlight .gu { color: #75715e } /* Generic.Subheading */ -.highlight .kc { color: #66d9ef } /* Keyword.Constant */ -.highlight .kd { color: #66d9ef } /* Keyword.Declaration */ -.highlight .kn { color: #f92672 } /* Keyword.Namespace */ -.highlight .kp { color: #66d9ef } /* Keyword.Pseudo */ -.highlight .kr { color: #66d9ef } /* Keyword.Reserved */ -.highlight .kt { color: #66d9ef } /* Keyword.Type */ -.highlight .ld { color: #e6db74 } /* Literal.Date */ -.highlight .m { color: #ae81ff } /* Literal.Number */ -.highlight .s { color: #e6db74 } /* Literal.String */ -.highlight .na { color: #a6e22e } /* Name.Attribute */ -.highlight .nb { color: #f8f8f2 } /* Name.Builtin */ -.highlight .nc { color: #a6e22e } /* Name.Class */ -.highlight .no { color: #66d9ef } /* Name.Constant */ -.highlight .nd { color: #a6e22e } /* Name.Decorator */ -.highlight .ni { color: #f8f8f2 } /* Name.Entity */ -.highlight .ne { color: #a6e22e } /* Name.Exception */ -.highlight .nf { color: #a6e22e } /* Name.Function */ -.highlight .nl { color: #f8f8f2 } /* Name.Label */ -.highlight .nn { color: #f8f8f2 } /* Name.Namespace */ -.highlight .nx { color: #a6e22e } /* Name.Other */ -.highlight .py { color: #f8f8f2 } /* Name.Property */ -.highlight .nt { color: #f92672 } /* Name.Tag */ -.highlight .nv { color: #f8f8f2 } /* Name.Variable */ -.highlight .ow { color: #f92672 } /* Operator.Word */ -.highlight .w { color: #f8f8f2 } /* Text.Whitespace */ -.highlight .mb { color: #ae81ff } /* Literal.Number.Bin */ -.highlight .mf { color: #ae81ff } /* Literal.Number.Float */ -.highlight .mh { color: #ae81ff } /* Literal.Number.Hex */ -.highlight .mi { color: #ae81ff } /* Literal.Number.Integer */ -.highlight .mo { color: #ae81ff } /* Literal.Number.Oct */ -.highlight .sa { color: #e6db74 } /* Literal.String.Affix */ -.highlight .sb { color: #e6db74 } /* Literal.String.Backtick */ -.highlight .sc { color: #e6db74 } /* Literal.String.Char */ -.highlight .dl { color: #e6db74 } /* Literal.String.Delimiter */ -.highlight .sd { color: #e6db74 } /* Literal.String.Doc */ -.highlight .s2 { color: #e6db74 } /* Literal.String.Double */ -.highlight .se { color: #ae81ff } /* Literal.String.Escape */ -.highlight .sh { color: #e6db74 } /* Literal.String.Heredoc */ -.highlight .si { color: #e6db74 } /* Literal.String.Interpol */ -.highlight .sx { color: #e6db74 } /* Literal.String.Other */ -.highlight .sr { color: #e6db74 } /* Literal.String.Regex */ -.highlight .s1 { color: #e6db74 } 
/* Literal.String.Single */ -.highlight .ss { color: #e6db74 } /* Literal.String.Symbol */ -.highlight .bp { color: #f8f8f2 } /* Name.Builtin.Pseudo */ -.highlight .fm { color: #a6e22e } /* Name.Function.Magic */ -.highlight .vc { color: #f8f8f2 } /* Name.Variable.Class */ -.highlight .vg { color: #f8f8f2 } /* Name.Variable.Global */ -.highlight .vi { color: #f8f8f2 } /* Name.Variable.Instance */ -.highlight .vm { color: #f8f8f2 } /* Name.Variable.Magic */ -.highlight .il { color: #ae81ff } /* Literal.Number.Integer.Long */ diff --git a/spaces/weishao2019/ChuanhuChatGPT/overwrites.py b/spaces/weishao2019/ChuanhuChatGPT/overwrites.py deleted file mode 100644 index 436fcf46b5807ca045e77ac762039ba0ffc16f6d..0000000000000000000000000000000000000000 --- a/spaces/weishao2019/ChuanhuChatGPT/overwrites.py +++ /dev/null @@ -1,38 +0,0 @@ -from __future__ import annotations -import logging - -from llama_index import Prompt -from typing import List, Tuple -import mdtex2html - -from presets import * -from llama_func import * - - -def compact_text_chunks(self, prompt: Prompt, text_chunks: List[str]) -> List[str]: - logging.debug("Compacting text chunks...🚀🚀🚀") - combined_str = [c.strip() for c in text_chunks if c.strip()] - combined_str = [f"[{index+1}] {c}" for index, c in enumerate(combined_str)] - combined_str = "\n\n".join(combined_str) - # resplit based on self.max_chunk_overlap - text_splitter = self.get_text_splitter_given_prompt(prompt, 1, padding=1) - return text_splitter.split_text(combined_str) - - -def postprocess( - self, y: List[Tuple[str | None, str | None]] -) -> List[Tuple[str | None, str | None]]: - """ - Parameters: - y: List of tuples representing the message and response pairs. Each message and response should be a string, which may be in Markdown format. - Returns: - List of tuples representing the message and response. Each message and response will be a string of HTML. - """ - if y is None or y == []: - return [] - tag_regex = re.compile(r"^<\w+>[^<]+") - if tag_regex.search(y[-1][1]): - y[-1] = (y[-1][0].replace("\n", "
    "), y[-1][1]) - else: - y[-1] = (y[-1][0].replace("\n", "
    "), convert_mdtext(y[-1][1])) - return y diff --git a/spaces/wf-genius/Control-A-Video/model/annotator/hed/__init__.py b/spaces/wf-genius/Control-A-Video/model/annotator/hed/__init__.py deleted file mode 100644 index 2a40fb3452ba1c72039fa15813e06e4343e419d4..0000000000000000000000000000000000000000 --- a/spaces/wf-genius/Control-A-Video/model/annotator/hed/__init__.py +++ /dev/null @@ -1,133 +0,0 @@ -import numpy as np -import cv2 -import os -import torch -from einops import rearrange - - -class HEDNetwork(torch.nn.Module): - def __init__(self, model_path): - super().__init__() - - self.netVggOne = torch.nn.Sequential( - torch.nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, stride=1, padding=1), - torch.nn.ReLU(inplace=False), - torch.nn.Conv2d(in_channels=64, out_channels=64, kernel_size=3, stride=1, padding=1), - torch.nn.ReLU(inplace=False) - ) - - self.netVggTwo = torch.nn.Sequential( - torch.nn.MaxPool2d(kernel_size=2, stride=2), - torch.nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3, stride=1, padding=1), - torch.nn.ReLU(inplace=False), - torch.nn.Conv2d(in_channels=128, out_channels=128, kernel_size=3, stride=1, padding=1), - torch.nn.ReLU(inplace=False) - ) - - self.netVggThr = torch.nn.Sequential( - torch.nn.MaxPool2d(kernel_size=2, stride=2), - torch.nn.Conv2d(in_channels=128, out_channels=256, kernel_size=3, stride=1, padding=1), - torch.nn.ReLU(inplace=False), - torch.nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1), - torch.nn.ReLU(inplace=False), - torch.nn.Conv2d(in_channels=256, out_channels=256, kernel_size=3, stride=1, padding=1), - torch.nn.ReLU(inplace=False) - ) - - self.netVggFou = torch.nn.Sequential( - torch.nn.MaxPool2d(kernel_size=2, stride=2), - torch.nn.Conv2d(in_channels=256, out_channels=512, kernel_size=3, stride=1, padding=1), - torch.nn.ReLU(inplace=False), - torch.nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1), - torch.nn.ReLU(inplace=False), - torch.nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1), - torch.nn.ReLU(inplace=False) - ) - - self.netVggFiv = torch.nn.Sequential( - torch.nn.MaxPool2d(kernel_size=2, stride=2), - torch.nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1), - torch.nn.ReLU(inplace=False), - torch.nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1), - torch.nn.ReLU(inplace=False), - torch.nn.Conv2d(in_channels=512, out_channels=512, kernel_size=3, stride=1, padding=1), - torch.nn.ReLU(inplace=False) - ) - - self.netScoreOne = torch.nn.Conv2d(in_channels=64, out_channels=1, kernel_size=1, stride=1, padding=0) - self.netScoreTwo = torch.nn.Conv2d(in_channels=128, out_channels=1, kernel_size=1, stride=1, padding=0) - self.netScoreThr = torch.nn.Conv2d(in_channels=256, out_channels=1, kernel_size=1, stride=1, padding=0) - self.netScoreFou = torch.nn.Conv2d(in_channels=512, out_channels=1, kernel_size=1, stride=1, padding=0) - self.netScoreFiv = torch.nn.Conv2d(in_channels=512, out_channels=1, kernel_size=1, stride=1, padding=0) - - self.netCombine = torch.nn.Sequential( - torch.nn.Conv2d(in_channels=5, out_channels=1, kernel_size=1, stride=1, padding=0), - torch.nn.Sigmoid() - ) - - self.load_state_dict({strKey.replace('module', 'net'): tenWeight for strKey, tenWeight in torch.load(model_path).items()}) - - def forward(self, tenInput): - tenInput = tenInput * 255.0 - tenInput = tenInput - torch.tensor(data=[104.00698793, 116.66876762, 122.67891434], 
dtype=tenInput.dtype, device=tenInput.device).view(1, 3, 1, 1) - - tenVggOne = self.netVggOne(tenInput) - tenVggTwo = self.netVggTwo(tenVggOne) - tenVggThr = self.netVggThr(tenVggTwo) - tenVggFou = self.netVggFou(tenVggThr) - tenVggFiv = self.netVggFiv(tenVggFou) - - tenScoreOne = self.netScoreOne(tenVggOne) - tenScoreTwo = self.netScoreTwo(tenVggTwo) - tenScoreThr = self.netScoreThr(tenVggThr) - tenScoreFou = self.netScoreFou(tenVggFou) - tenScoreFiv = self.netScoreFiv(tenVggFiv) - - tenScoreOne = torch.nn.functional.interpolate(input=tenScoreOne, size=(tenInput.shape[2], tenInput.shape[3]), mode='bilinear', align_corners=False) - tenScoreTwo = torch.nn.functional.interpolate(input=tenScoreTwo, size=(tenInput.shape[2], tenInput.shape[3]), mode='bilinear', align_corners=False) - tenScoreThr = torch.nn.functional.interpolate(input=tenScoreThr, size=(tenInput.shape[2], tenInput.shape[3]), mode='bilinear', align_corners=False) - tenScoreFou = torch.nn.functional.interpolate(input=tenScoreFou, size=(tenInput.shape[2], tenInput.shape[3]), mode='bilinear', align_corners=False) - tenScoreFiv = torch.nn.functional.interpolate(input=tenScoreFiv, size=(tenInput.shape[2], tenInput.shape[3]), mode='bilinear', align_corners=False) - - return self.netCombine(torch.cat([ tenScoreOne, tenScoreTwo, tenScoreThr, tenScoreFou, tenScoreFiv ], 1)) - - -class HEDdetector: - def __init__(self, network ): - self.netNetwork = network - - def __call__(self, input_image): - if isinstance(input_image, torch.Tensor): - # 输入的就是 b c h w的tensor 范围是-1~1,需要转换为0~1 - input_image = (input_image + 1) / 2 - input_image = input_image.float().cuda() - edge = self.netNetwork(input_image) # 范围也是0~1, 不用转了直接用 - return edge - else: - assert input_image.ndim == 3 - input_image = input_image[:, :, ::-1].copy() - with torch.no_grad(): - image_hed = torch.from_numpy(input_image).float().cuda() - image_hed = image_hed / 255.0 - image_hed = rearrange(image_hed, 'h w c -> 1 c h w') - edge = self.netNetwork(image_hed)[0] - edge = (edge.cpu().numpy() * 255.0).clip(0, 255).astype(np.uint8) - return edge[0] - - -def nms(x, t, s): - x = cv2.GaussianBlur(x.astype(np.float32), (0, 0), s) - - f1 = np.array([[0, 0, 0], [1, 1, 1], [0, 0, 0]], dtype=np.uint8) - f2 = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]], dtype=np.uint8) - f3 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=np.uint8) - f4 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]], dtype=np.uint8) - - y = np.zeros_like(x) - - for f in [f1, f2, f3, f4]: - np.putmask(y, cv2.dilate(x, kernel=f) == x, x) - - z = np.zeros_like(y, dtype=np.uint8) - z[y > t] = 255 - return z diff --git a/spaces/whgwd2023/bingo/src/components/header.tsx b/spaces/whgwd2023/bingo/src/components/header.tsx deleted file mode 100644 index dc298b722154d1ac6d7a7e148204605562d6cc58..0000000000000000000000000000000000000000 --- a/spaces/whgwd2023/bingo/src/components/header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import * as React from 'react' -import { UserMenu } from './user-menu' - -export async function Header() { - return ( -
    -
    - -
    -
    - ) -} diff --git a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/body/decoder/__init__.py b/spaces/xdecoder/Instruct-X-Decoder/xdecoder/body/decoder/__init__.py deleted file mode 100644 index bbce50aad955329e5cba93e1d4d2f25e3cf694c7..0000000000000000000000000000000000000000 --- a/spaces/xdecoder/Instruct-X-Decoder/xdecoder/body/decoder/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .build import build_decoder \ No newline at end of file diff --git a/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/util/slconfig.py b/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/util/slconfig.py deleted file mode 100644 index 3f293e3aff215a3c7c2f7d21d27853493b6ebfbc..0000000000000000000000000000000000000000 --- a/spaces/xinyu1205/recognize-anything/GroundingDINO/groundingdino/util/slconfig.py +++ /dev/null @@ -1,427 +0,0 @@ -# ========================================================== -# Modified from mmcv -# ========================================================== -import ast -import os.path as osp -import shutil -import sys -import tempfile -from argparse import Action -from importlib import import_module -import platform - -from addict import Dict -from yapf.yapflib.yapf_api import FormatCode - -BASE_KEY = "_base_" -DELETE_KEY = "_delete_" -RESERVED_KEYS = ["filename", "text", "pretty_text", "get", "dump", "merge_from_dict"] - - -def check_file_exist(filename, msg_tmpl='file "{}" does not exist'): - if not osp.isfile(filename): - raise FileNotFoundError(msg_tmpl.format(filename)) - - -class ConfigDict(Dict): - def __missing__(self, name): - raise KeyError(name) - - def __getattr__(self, name): - try: - value = super(ConfigDict, self).__getattr__(name) - except KeyError: - ex = AttributeError(f"'{self.__class__.__name__}' object has no " f"attribute '{name}'") - except Exception as e: - ex = e - else: - return value - raise ex - - -class SLConfig(object): - """ - config files. - only support .py file as config now. 
- - ref: mmcv.utils.config - - Example: - >>> cfg = Config(dict(a=1, b=dict(b1=[0, 1]))) - >>> cfg.a - 1 - >>> cfg.b - {'b1': [0, 1]} - >>> cfg.b.b1 - [0, 1] - >>> cfg = Config.fromfile('tests/data/config/a.py') - >>> cfg.filename - "/home/kchen/projects/mmcv/tests/data/config/a.py" - >>> cfg.item4 - 'test' - >>> cfg - "Config [path: /home/kchen/projects/mmcv/tests/data/config/a.py]: " - "{'item1': [1, 2], 'item2': {'a': 0}, 'item3': True, 'item4': 'test'}" - """ - - @staticmethod - def _validate_py_syntax(filename): - with open(filename) as f: - content = f.read() - try: - ast.parse(content) - except SyntaxError: - raise SyntaxError("There are syntax errors in config " f"file {filename}") - - @staticmethod - def _file2dict(filename): - filename = osp.abspath(osp.expanduser(filename)) - check_file_exist(filename) - if filename.lower().endswith(".py"): - with tempfile.TemporaryDirectory() as temp_config_dir: - temp_config_file = tempfile.NamedTemporaryFile(dir=temp_config_dir, suffix=".py") - temp_config_name = osp.basename(temp_config_file.name) - if platform.system() == 'Windows': - temp_config_file.close() - shutil.copyfile(filename, osp.join(temp_config_dir, temp_config_name)) - temp_module_name = osp.splitext(temp_config_name)[0] - sys.path.insert(0, temp_config_dir) - SLConfig._validate_py_syntax(filename) - mod = import_module(temp_module_name) - sys.path.pop(0) - cfg_dict = { - name: value for name, value in mod.__dict__.items() if not name.startswith("__") - } - # delete imported module - del sys.modules[temp_module_name] - # close temp file - temp_config_file.close() - elif filename.lower().endswith((".yml", ".yaml", ".json")): - from .slio import slload - - cfg_dict = slload(filename) - else: - raise IOError("Only py/yml/yaml/json type are supported now!") - - cfg_text = filename + "\n" - with open(filename, "r") as f: - cfg_text += f.read() - - # parse the base file - if BASE_KEY in cfg_dict: - cfg_dir = osp.dirname(filename) - base_filename = cfg_dict.pop(BASE_KEY) - base_filename = base_filename if isinstance(base_filename, list) else [base_filename] - - cfg_dict_list = list() - cfg_text_list = list() - for f in base_filename: - _cfg_dict, _cfg_text = SLConfig._file2dict(osp.join(cfg_dir, f)) - cfg_dict_list.append(_cfg_dict) - cfg_text_list.append(_cfg_text) - - base_cfg_dict = dict() - for c in cfg_dict_list: - if len(base_cfg_dict.keys() & c.keys()) > 0: - raise KeyError("Duplicate key is not allowed among bases") - # TODO Allow the duplicate key while warnning user - base_cfg_dict.update(c) - - base_cfg_dict = SLConfig._merge_a_into_b(cfg_dict, base_cfg_dict) - cfg_dict = base_cfg_dict - - # merge cfg_text - cfg_text_list.append(cfg_text) - cfg_text = "\n".join(cfg_text_list) - - return cfg_dict, cfg_text - - @staticmethod - def _merge_a_into_b(a, b): - """merge dict `a` into dict `b` (non-inplace). - values in `a` will overwrite `b`. - copy first to avoid inplace modification - - Args: - a ([type]): [description] - b ([type]): [description] - - Returns: - [dict]: [description] - """ - # import ipdb; ipdb.set_trace() - if not isinstance(a, dict): - return a - - b = b.copy() - for k, v in a.items(): - if isinstance(v, dict) and k in b and not v.pop(DELETE_KEY, False): - - if not isinstance(b[k], dict) and not isinstance(b[k], list): - # if : - # import ipdb; ipdb.set_trace() - raise TypeError( - f"{k}={v} in child config cannot inherit from base " - f"because {k} is a dict in the child config but is of " - f"type {type(b[k])} in base config. 
You may set " - f"`{DELETE_KEY}=True` to ignore the base config" - ) - b[k] = SLConfig._merge_a_into_b(v, b[k]) - elif isinstance(b, list): - try: - _ = int(k) - except: - raise TypeError( - f"b is a list, " f"index {k} should be an int when input but {type(k)}" - ) - b[int(k)] = SLConfig._merge_a_into_b(v, b[int(k)]) - else: - b[k] = v - - return b - - @staticmethod - def fromfile(filename): - cfg_dict, cfg_text = SLConfig._file2dict(filename) - return SLConfig(cfg_dict, cfg_text=cfg_text, filename=filename) - - def __init__(self, cfg_dict=None, cfg_text=None, filename=None): - if cfg_dict is None: - cfg_dict = dict() - elif not isinstance(cfg_dict, dict): - raise TypeError("cfg_dict must be a dict, but " f"got {type(cfg_dict)}") - for key in cfg_dict: - if key in RESERVED_KEYS: - raise KeyError(f"{key} is reserved for config file") - - super(SLConfig, self).__setattr__("_cfg_dict", ConfigDict(cfg_dict)) - super(SLConfig, self).__setattr__("_filename", filename) - if cfg_text: - text = cfg_text - elif filename: - with open(filename, "r") as f: - text = f.read() - else: - text = "" - super(SLConfig, self).__setattr__("_text", text) - - @property - def filename(self): - return self._filename - - @property - def text(self): - return self._text - - @property - def pretty_text(self): - - indent = 4 - - def _indent(s_, num_spaces): - s = s_.split("\n") - if len(s) == 1: - return s_ - first = s.pop(0) - s = [(num_spaces * " ") + line for line in s] - s = "\n".join(s) - s = first + "\n" + s - return s - - def _format_basic_types(k, v, use_mapping=False): - if isinstance(v, str): - v_str = f"'{v}'" - else: - v_str = str(v) - - if use_mapping: - k_str = f"'{k}'" if isinstance(k, str) else str(k) - attr_str = f"{k_str}: {v_str}" - else: - attr_str = f"{str(k)}={v_str}" - attr_str = _indent(attr_str, indent) - - return attr_str - - def _format_list(k, v, use_mapping=False): - # check if all items in the list are dict - if all(isinstance(_, dict) for _ in v): - v_str = "[\n" - v_str += "\n".join( - f"dict({_indent(_format_dict(v_), indent)})," for v_ in v - ).rstrip(",") - if use_mapping: - k_str = f"'{k}'" if isinstance(k, str) else str(k) - attr_str = f"{k_str}: {v_str}" - else: - attr_str = f"{str(k)}={v_str}" - attr_str = _indent(attr_str, indent) + "]" - else: - attr_str = _format_basic_types(k, v, use_mapping) - return attr_str - - def _contain_invalid_identifier(dict_str): - contain_invalid_identifier = False - for key_name in dict_str: - contain_invalid_identifier |= not str(key_name).isidentifier() - return contain_invalid_identifier - - def _format_dict(input_dict, outest_level=False): - r = "" - s = [] - - use_mapping = _contain_invalid_identifier(input_dict) - if use_mapping: - r += "{" - for idx, (k, v) in enumerate(input_dict.items()): - is_last = idx >= len(input_dict) - 1 - end = "" if outest_level or is_last else "," - if isinstance(v, dict): - v_str = "\n" + _format_dict(v) - if use_mapping: - k_str = f"'{k}'" if isinstance(k, str) else str(k) - attr_str = f"{k_str}: dict({v_str}" - else: - attr_str = f"{str(k)}=dict({v_str}" - attr_str = _indent(attr_str, indent) + ")" + end - elif isinstance(v, list): - attr_str = _format_list(k, v, use_mapping) + end - else: - attr_str = _format_basic_types(k, v, use_mapping) + end - - s.append(attr_str) - r += "\n".join(s) - if use_mapping: - r += "}" - return r - - cfg_dict = self._cfg_dict.to_dict() - text = _format_dict(cfg_dict, outest_level=True) - # copied from setup.cfg - yapf_style = dict( - based_on_style="pep8", - 
blank_line_before_nested_class_or_def=True, - split_before_expression_after_opening_paren=True, - ) - text, _ = FormatCode(text, style_config=yapf_style, verify=True) - - return text - - def __repr__(self): - return f"Config (path: {self.filename}): {self._cfg_dict.__repr__()}" - - def __len__(self): - return len(self._cfg_dict) - - def __getattr__(self, name): - # # debug - # print('+'*15) - # print('name=%s' % name) - # print("addr:", id(self)) - # # print('type(self):', type(self)) - # print(self.__dict__) - # print('+'*15) - # if self.__dict__ == {}: - # raise ValueError - - return getattr(self._cfg_dict, name) - - def __getitem__(self, name): - return self._cfg_dict.__getitem__(name) - - def __setattr__(self, name, value): - if isinstance(value, dict): - value = ConfigDict(value) - self._cfg_dict.__setattr__(name, value) - - def __setitem__(self, name, value): - if isinstance(value, dict): - value = ConfigDict(value) - self._cfg_dict.__setitem__(name, value) - - def __iter__(self): - return iter(self._cfg_dict) - - def dump(self, file=None): - # import ipdb; ipdb.set_trace() - if file is None: - return self.pretty_text - else: - with open(file, "w") as f: - f.write(self.pretty_text) - - def merge_from_dict(self, options): - """Merge list into cfg_dict - - Merge the dict parsed by MultipleKVAction into this cfg. - - Examples: - >>> options = {'model.backbone.depth': 50, - ... 'model.backbone.with_cp':True} - >>> cfg = Config(dict(model=dict(backbone=dict(type='ResNet')))) - >>> cfg.merge_from_dict(options) - >>> cfg_dict = super(Config, self).__getattribute__('_cfg_dict') - >>> assert cfg_dict == dict( - ... model=dict(backbone=dict(depth=50, with_cp=True))) - - Args: - options (dict): dict of configs to merge from. - """ - option_cfg_dict = {} - for full_key, v in options.items(): - d = option_cfg_dict - key_list = full_key.split(".") - for subkey in key_list[:-1]: - d.setdefault(subkey, ConfigDict()) - d = d[subkey] - subkey = key_list[-1] - d[subkey] = v - - cfg_dict = super(SLConfig, self).__getattribute__("_cfg_dict") - super(SLConfig, self).__setattr__( - "_cfg_dict", SLConfig._merge_a_into_b(option_cfg_dict, cfg_dict) - ) - - # for multiprocess - def __setstate__(self, state): - self.__init__(state) - - def copy(self): - return SLConfig(self._cfg_dict.copy()) - - def deepcopy(self): - return SLConfig(self._cfg_dict.deepcopy()) - - -class DictAction(Action): - """ - argparse action to split an argument into KEY=VALUE form - on the first = and append to a dictionary. 
List options should - be passed as comma separated values, i.e KEY=V1,V2,V3 - """ - - @staticmethod - def _parse_int_float_bool(val): - try: - return int(val) - except ValueError: - pass - try: - return float(val) - except ValueError: - pass - if val.lower() in ["true", "false"]: - return True if val.lower() == "true" else False - if val.lower() in ["none", "null"]: - return None - return val - - def __call__(self, parser, namespace, values, option_string=None): - options = {} - for kv in values: - key, val = kv.split("=", maxsplit=1) - val = [self._parse_int_float_bool(v) for v in val.split(",")] - if len(val) == 1: - val = val[0] - options[key] = val - setattr(namespace, self.dest, options) diff --git a/spaces/xswu/HPSv2/src/training/zero_shot.py b/spaces/xswu/HPSv2/src/training/zero_shot.py deleted file mode 100644 index e5768b4a3ce26f0a9a12d8ee3a6d9490e778a78a..0000000000000000000000000000000000000000 --- a/spaces/xswu/HPSv2/src/training/zero_shot.py +++ /dev/null @@ -1,93 +0,0 @@ -import logging - -import torch -import torch.nn.functional as F -from tqdm import tqdm - -from open_clip import get_cast_dtype, get_tokenizer -from .precision import get_autocast -from .imagenet_zeroshot_data import imagenet_classnames, openai_imagenet_template - - -def zero_shot_classifier(model, classnames, templates, args): - tokenizer = get_tokenizer(args.model) - with torch.no_grad(): - zeroshot_weights = [] - for classname in tqdm(classnames): - texts = [template(classname) for template in templates] # format with class - texts = tokenizer(texts).to(args.device) # tokenize - if args.distributed and not args.horovod: - class_embeddings = model.module.encode_text(texts) - else: - class_embeddings = model.encode_text(texts) - class_embedding = F.normalize(class_embeddings, dim=-1).mean(dim=0) - class_embedding /= class_embedding.norm() - zeroshot_weights.append(class_embedding) - zeroshot_weights = torch.stack(zeroshot_weights, dim=1).to(args.device) - return zeroshot_weights - - -def accuracy(output, target, topk=(1,)): - pred = output.topk(max(topk), 1, True, True)[1].t() - correct = pred.eq(target.view(1, -1).expand_as(pred)) - return [float(correct[:k].reshape(-1).float().sum(0, keepdim=True).cpu().numpy()) for k in topk] - - -def run(model, classifier, dataloader, args): - autocast = get_autocast(args.precision) - cast_dtype = get_cast_dtype(args.precision) - with torch.no_grad(): - top1, top5, n = 0., 0., 0. - for images, target in tqdm(dataloader, unit_scale=args.batch_size): - images = images.to(args.device) - if cast_dtype is not None: - images = images.to(dtype=cast_dtype) - target = target.to(args.device) - - with autocast(): - # predict - if args.distributed and not args.horovod: - image_features = model.module.encode_image(images) - else: - image_features = model.encode_image(images) - image_features = F.normalize(image_features, dim=-1) - logits = 100. 
* image_features @ classifier - - # measure accuracy - acc1, acc5 = accuracy(logits, target, topk=(1, 5)) - top1 += acc1 - top5 += acc5 - n += images.size(0) - - top1 = (top1 / n) - top5 = (top5 / n) - return top1, top5 - - -def zero_shot_eval(model, data, epoch, args): - if 'imagenet-val' not in data and 'imagenet-v2' not in data: - return {} - if args.zeroshot_frequency == 0: - return {} - if (epoch % args.zeroshot_frequency) != 0 and epoch != args.epochs: - return {} - - logging.info('Starting zero-shot imagenet.') - - logging.info('Building zero-shot classifier') - classifier = zero_shot_classifier(model, imagenet_classnames, openai_imagenet_template, args) - - logging.info('Using classifier') - results = {} - if 'imagenet-val' in data: - top1, top5 = run(model, classifier, data['imagenet-val'].dataloader, args) - results['imagenet-zeroshot-val-top1'] = top1 - results['imagenet-zeroshot-val-top5'] = top5 - if 'imagenet-v2' in data: - top1, top5 = run(model, classifier, data['imagenet-v2'].dataloader, args) - results['imagenetv2-zeroshot-val-top1'] = top1 - results['imagenetv2-zeroshot-val-top5'] = top5 - - logging.info('Finished zero-shot imagenet.') - - return results diff --git a/spaces/xxie92/proteinml-demo-dssp-duplicate/app.py b/spaces/xxie92/proteinml-demo-dssp-duplicate/app.py deleted file mode 100644 index 6827426acf426a3f7ac3dc41f5b18ede3faa8a58..0000000000000000000000000000000000000000 --- a/spaces/xxie92/proteinml-demo-dssp-duplicate/app.py +++ /dev/null @@ -1,128 +0,0 @@ -import gradio as gr - -import py3Dmol -import torch -import pydssp - -import os - -lst_name = ["generated_hv_for_5dmg","5dmg"] -for i in range(0): - name = "b45_" + str(i)+"_0" - lst_name.append(name) - - - -def get_pdb(pdb_code="", filepath=""): - try: - return filepath.name - except AttributeError as e: - if pdb_code is None or pdb_code == "": - return None - elif pdb_code in lst_name: - return f"{pdb_code}.pdb" - else: - os.system(f"wget -qnc https://files.rcsb.org/view/{pdb_code}.pdb") - return f"{pdb_code}.pdb" - - -import gemmi - - -def get_seq(p): - st = gemmi.read_structure(p) - st.setup_entities() - polymer = st[0][0].get_polymer() - sequence = gemmi.one_letter_code(polymer) - return sequence - - - -# from abnumber import Chain - -# # no_index version -# def get_h1_h2_h3(seq): -# try: -# chain = Chain(seq, scheme='chothia') -# return chain.cdr1_seq, chain.cdr2_seq, chain.cdr3_seq -# except: -# return "no recoginized cdr1","no recoginized cdr2","no recoginized cdr3" - -def get_offset(pdb): - pdb_multiline = pdb.split("\n") - for line in pdb_multiline: - if line.startswith("ATOM"): - return int(line[22:27]) - - -def run(pdb_id, pdb_file, helix, sheet, loop): - path_to_pdb = get_pdb(pdb_code=pdb_id, filepath=pdb_file) - pdb = open(path_to_pdb, "r").read() - - offset = get_offset(pdb) - try: - coord = torch.tensor(pydssp.read_pdbtext(pdb)) - secondary_struct = pydssp.assign(coord) - view = py3Dmol.view(width=800, height=400) - view.addModel(pdb, "pdb",{'vibrate': {'frames':10,'amplitude':1}}) - colormap = {"H": helix, "-": loop, "E": sheet} - colors = {i + offset: colormap[resi] for i, resi in enumerate(secondary_struct)} - view.setStyle({"cartoon": {"colorscheme": {"prop": "resi", "map": colors}}}) - except: - secondary_struct = [] - view = py3Dmol.view(width=800, height=400) - view.addModel(pdb, "pdb",{'vibrate': {'frames':10,'amplitude':1}}) - colormap = {"H": helix, "-": loop, "E": sheet} - colors = {i + offset: colormap[resi] for i, resi in enumerate(secondary_struct)} - 
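# Editor's note: illustrative sketch only, not part of the original app.py; all
# `example_*` names are hypothetical. The color dict built above is keyed by PDB
# residue numbers, hence the `i + offset` shift (offset is the residue id of the
# first ATOM record). For a structure whose first residue is numbered 17:
example_offset = 17
example_ss = ["H", "H", "-", "E"]            # pydssp.assign-style labels
example_palette = {"H": "#ff0000", "-": "#00ff00", "E": "#0000ff"}
example_colors = {i + example_offset: example_palette[s] for i, s in enumerate(example_ss)}
# -> {17: '#ff0000', 18: '#ff0000', 19: '#00ff00', 20: '#0000ff'}, i.e. the
# "resi"-keyed map that py3Dmol's {"colorscheme": {"prop": "resi", "map": ...}} style expects.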
#view.setStyle({'chain':['A','H']},{'cartoon': {'color': 'orange'}}) # alpha subunits of hemoglobin - view.setStyle({'model': -1}, {"cartoon": {'color': 'pink'}}) - view.setStyle({'chain':['A','H']},{'cartoon': {'color': 'orange'}}) # alpha subunits of hemoglobin - view.setStyle({'chain':['P']},{'cartoon': {'color': 'green'}}) # alpha subunits of hemoglobin - - #view.setStyle({"cartoon": {"colorscheme": {"prop": "resi", "map": colors}}}) - view.zoomTo() - view.animate({'loop': "forward"}) - output = view._make_html().replace("'", '"') - - # the below is to include seq info - seq = get_seq(path_to_pdb) - - # h1,h2,h3 = get_h1_h2_h3(seq) - # p_seq = "Full seq: " + seq - # h1 = "cdr1: " +h1 - # h2 = "cdr2: " +h2 - # h3 = "cdr3: " +h3 - # sequence_html = f"

    {p_seq}
    {h1}
    {h2}
    {h3}

    " - sequence_html = f"

    {seq}

    " - - x = f""" {output} {sequence_html} """ # do not use ' in this input - return f"""""" - - -with gr.Blocks() as demo: - pdb_id = gr.Textbox(label="pdb code") - pdb_file = gr.File(label="pdb file") - with gr.Row(): - helix = gr.ColorPicker(label="Helix") - sheet = gr.ColorPicker(label="Sheet") - loop = gr.ColorPicker(label="Loop") - btn = gr.Button(label="run") - - html = gr.HTML() - gr.Examples( - # [["b45_0_0", "#ff0000", "#00ff00", "#0000ff"],["b45_1_0", "#ff0000", "#00ff00", "#0000ff"],["b45_2_0", "#ff0000", "#00ff00", "#0000ff"], - # ["b45_3_0", "#ff0000", "#00ff00", "#0000ff"],["b45_4_0", "#ff0000", "#00ff00", "#0000ff"],["b45_5_0", "#ff0000", "#00ff00", "#0000ff"], - # ["b45_6_0", "#ff0000", "#00ff00", "#0000ff"],["b45_7_0", "#ff0000", "#00ff00", "#0000ff"],["b45_8_0", "#ff0000", "#00ff00", "#0000ff"], - # ["1QYS", "#ff0000", "#00ff00", "#0000ff"]], - [["generated_hv_for_5dmg", "#ff0000", "#00ff00", "#0000ff"],["5dmg", "#ff0000", "#00ff00", "#0000ff"]], - inputs=[pdb_id, helix, sheet, loop], - outputs=[html], - fn=run, - ) - btn.click(fn=run, inputs=[pdb_id, pdb_file, helix, sheet, loop], outputs=[html]) - -demo.launch() diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/generation/beam_search.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/generation/beam_search.py deleted file mode 100644 index 03334b6b6145ab74b41c0a4026ca8e9f053bf13e..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/generation/beam_search.py +++ /dev/null @@ -1,978 +0,0 @@ -# coding=utf-8 -# Copyright 2020 The HuggingFace Inc. team -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from abc import ABC, abstractmethod -from collections import UserDict -from typing import Dict, List, Optional, Tuple, Union - -import numpy as np -import torch - -from ..utils import add_start_docstrings -from .beam_constraints import Constraint, ConstraintListState - - -PROCESS_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size * num_beams, sequence_length)`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using any class inheriting from [`PreTrainedTokenizer`]. See - [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - next_scores (`torch.FloatTensor` of shape `(batch_size, 2 * num_beams)`): - Current scores of the top `2 * num_beams` non-finished beam hypotheses. - next_tokens (`torch.LongTensor` of shape `(batch_size, 2 * num_beams)`): - `input_ids` of the tokens corresponding to the top `2 * num_beams` non-finished beam hypotheses. - next_indices (`torch.LongTensor` of shape `(batch_size, 2 * num_beams)`): - Beam indices indicating to which beam hypothesis the `next_tokens` correspond. - pad_token_id (`int`, *optional*): - The id of the *padding* token. - eos_token_id (`Union[int, List[int]]`, *optional*): - The id of the *end-of-sequence* token. 
Optionally, use a list to set multiple *end-of-sequence* tokens. - beam_indices (`torch.LongTensor`, *optional*): - Beam indices indicating to which beam hypothesis each token correspond. - group_index (`int`, *optional*): - The index of the group of beams. Used with [`~PreTrainedModel.group_beam_search`]. - - Return: - `UserDict`: A dictionary composed of the fields as defined above: - - - **next_beam_scores** (`torch.FloatTensor` of shape `(batch_size * num_beams)`) -- Updated scores of all - non-finished beams. - - **next_beam_tokens** (`torch.FloatTensor` of shape `(batch_size * num_beams)`) -- Next tokens to be added - to the non-finished beam_hypotheses. - - **next_beam_indices** (`torch.FloatTensor` of shape `(batch_size * num_beams)`) -- Beam indices - indicating to which beam the next tokens shall be added. - -""" - -FINALIZE_INPUTS_DOCSTRING = r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size * num_beams, sequence_length)`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using any class inheriting from [`PreTrainedTokenizer`]. See - [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - final_beam_scores (`torch.FloatTensor` of shape `(batch_size * num_beams)`): - The final scores of all non-finished beams. - final_beam_tokens (`torch.FloatTensor` of shape `(batch_size * num_beams)`): - The last tokens to be added to the non-finished beam_hypotheses. - final_beam_indices (`torch.FloatTensor` of shape `(batch_size * num_beams)`): - The beam indices indicating to which beam the `final_beam_tokens` shall be added. - pad_token_id (`int`, *optional*): - The id of the *padding* token. - eos_token_id (`Union[int, List[int]]`, *optional*): - The id of the *end-of-sequence* token. Optionally, use a list to set multiple *end-of-sequence* tokens. - - Return: - `torch.LongTensor` of shape `(batch_size * num_return_sequences, sequence_length)`: The generated sequences. - The second dimension (sequence_length) is either equal to `max_length` or shorter if all batches finished early - due to the `eos_token_id`. - -""" - - -class BeamScorer(ABC): - """ - Abstract base class for all beam scorers that are used for [`~PreTrainedModel.beam_search`] and - [`~PreTrainedModel.beam_sample`]. - """ - - @abstractmethod - @add_start_docstrings(PROCESS_INPUTS_DOCSTRING) - def process( - self, - input_ids: torch.LongTensor, - next_scores: torch.FloatTensor, - next_tokens: torch.LongTensor, - next_indices: torch.LongTensor, - **kwargs, - ) -> Tuple[torch.Tensor]: - raise NotImplementedError("This is an abstract method.") - - @abstractmethod - @add_start_docstrings(FINALIZE_INPUTS_DOCSTRING) - def finalize( - self, - input_ids: torch.LongTensor, - next_scores: torch.FloatTensor, - next_tokens: torch.LongTensor, - next_indices: torch.LongTensor, - max_length: int, - **kwargs, - ) -> torch.LongTensor: - raise NotImplementedError("This is an abstract method.") - - -class BeamSearchScorer(BeamScorer): - r""" - [`BeamScorer`] implementing standard beam search decoding. - - Adapted in part from [Facebook's XLM beam search - code](https://github.com/facebookresearch/XLM/blob/9e6f6814d17be4fe5b15f2e6c43eb2b2d76daeb4/src/model/transformer.py#L529). 
- - Reference for the diverse beam search algorithm and implementation [Ashwin Kalyan's DBS - implementation](https://github.com/ashwinkalyan/dbs/blob/master/dbs/beam_utils.lua) - - Args: - batch_size (`int`): - Batch Size of `input_ids` for which standard beam search decoding is run in parallel. - num_beams (`int`): - Number of beams for beam search. - device (`torch.device`): - Defines the device type (*e.g.*, `"cpu"` or `"cuda"`) on which this instance of `BeamSearchScorer` will be - allocated. - length_penalty (`float`, *optional*, defaults to 1.0): - Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to - the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log - likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while - `length_penalty` < 0.0 encourages shorter sequences. - do_early_stopping (`bool` or `str`, *optional*, defaults to `False`): - Controls the stopping condition for beam-based methods, like beam-search. It accepts the following values: - `True`, where the generation stops as soon as there are `num_beams` complete candidates; `False`, where an - heuristic is applied and the generation stops when is it very unlikely to find better candidates; - `"never"`, where the beam search procedure only stops when there cannot be better candidates (canonical - beam search algorithm). - num_beam_hyps_to_keep (`int`, *optional*, defaults to 1): - The number of beam hypotheses that shall be returned upon calling - [`~transformer.BeamSearchScorer.finalize`]. - num_beam_groups (`int`, *optional*, defaults to 1): - Number of groups to divide `num_beams` into in order to ensure diversity among different groups of beams. - See [this paper](https://arxiv.org/pdf/1610.02424.pdf) for more details. - max_length (`int`, *optional*): - The maximum length of the sequence to be generated. - """ - - def __init__( - self, - batch_size: int, - num_beams: int, - device: torch.device, - length_penalty: Optional[float] = 1.0, - do_early_stopping: Optional[Union[bool, str]] = False, - num_beam_hyps_to_keep: Optional[int] = 1, - num_beam_groups: Optional[int] = 1, - max_length: Optional[int] = None, - ): - self.num_beams = num_beams - self.device = device - self.length_penalty = length_penalty - self.do_early_stopping = do_early_stopping - self.num_beam_hyps_to_keep = num_beam_hyps_to_keep - self.num_beam_groups = num_beam_groups - self.group_size = self.num_beams // self.num_beam_groups - - self._is_init = False - # self._beam_hyps[i*self.num_beam_groups+j] is the beam_hyps of the j-th group in the i-th mini-batch. - # If group_beam_search is not used, the list consists of `batch_size` beam_hyps. - self._beam_hyps = [ - BeamHypotheses( - num_beams=self.group_size, - length_penalty=self.length_penalty, - early_stopping=self.do_early_stopping, - max_length=max_length, - ) - for _ in range(batch_size * self.num_beam_groups) - ] - # self._done[i*self.num_beam_groups+j] indicates whether the generation of the beam_hyps of the j-th group - # in the i-th mini-batch is complete. - self._done = torch.tensor( - [False for _ in range(batch_size * self.num_beam_groups)], dtype=torch.bool, device=self.device - ) - - if not isinstance(num_beams, int) or num_beams <= 1: - raise ValueError( - f"`num_beams` has to be an integer strictly greater than 1, but is {num_beams}. For `num_beams` == 1," - " one should make use of `greedy_search` instead." 
- ) - - if not isinstance(num_beam_groups, int) or (num_beam_groups > num_beams) or (num_beams % num_beam_groups != 0): - raise ValueError( - "`num_beam_groups` has to be an integer smaller or equal than `num_beams` and `num_beams` has to be" - f" divisible by `num_beam_groups`, but is {num_beam_groups} with `num_beams` being {num_beams}." - ) - - @property - def is_done(self) -> bool: - return self._done.all() - - def process( - self, - input_ids: torch.LongTensor, - next_scores: torch.FloatTensor, - next_tokens: torch.LongTensor, - next_indices: torch.LongTensor, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[Union[int, List[int]]] = None, - beam_indices: Optional[torch.LongTensor] = None, - group_index: Optional[int] = 0, - ) -> Dict[str, torch.Tensor]: - cur_len = input_ids.shape[-1] + 1 # add up to the length which the next_scores is calculated on - batch_size = len(self._beam_hyps) // self.num_beam_groups - - if not (batch_size == (input_ids.shape[0] // self.group_size)): - if self.num_beam_groups > 1: - raise ValueError( - f"A group beam size of {input_ids.shape[0]} is used as the input, but a group beam " - f"size of {self.group_size} is expected by the beam scorer." - ) - else: - raise ValueError( - f"A beam size of {input_ids.shape[0]} is used as the input, but a beam size of " - f"{self.group_size} is expected by the beam scorer." - ) - - device = input_ids.device - next_beam_scores = torch.zeros((batch_size, self.group_size), dtype=next_scores.dtype, device=device) - next_beam_tokens = torch.zeros((batch_size, self.group_size), dtype=next_tokens.dtype, device=device) - next_beam_indices = torch.zeros((batch_size, self.group_size), dtype=next_indices.dtype, device=device) - - if isinstance(eos_token_id, int): - eos_token_id = [eos_token_id] - - for batch_idx in range(batch_size): - batch_group_idx = batch_idx * self.num_beam_groups + group_index - if self._done[batch_group_idx]: - if self.num_beams < len(self._beam_hyps[batch_group_idx]): - raise ValueError(f"Batch can only be done if at least {self.num_beams} beams have been generated") - if eos_token_id is None or pad_token_id is None: - raise ValueError("Generated beams >= num_beams -> eos_token_id and pad_token have to be defined") - # pad the batch - next_beam_scores[batch_idx, :] = 0 - next_beam_tokens[batch_idx, :] = pad_token_id - next_beam_indices[batch_idx, :] = 0 - continue - - # next tokens for this sentence - beam_idx = 0 - for beam_token_rank, (next_token, next_score, next_index) in enumerate( - zip(next_tokens[batch_idx], next_scores[batch_idx], next_indices[batch_idx]) - ): - batch_beam_idx = batch_idx * self.group_size + next_index - # add to generated hypotheses if end of sentence - if (eos_token_id is not None) and (next_token.item() in eos_token_id): - # if beam_token does not belong to top num_beams tokens, it should not be added - is_beam_token_worse_than_top_num_beams = beam_token_rank >= self.group_size - if is_beam_token_worse_than_top_num_beams: - continue - if beam_indices is not None: - beam_index = beam_indices[batch_beam_idx] - beam_index = beam_index + (batch_beam_idx,) - else: - beam_index = None - - self._beam_hyps[batch_group_idx].add( - input_ids[batch_beam_idx].clone(), - next_score.item(), - beam_indices=beam_index, - ) - else: - # add next predicted token since it is not eos_token - next_beam_scores[batch_idx, beam_idx] = next_score - next_beam_tokens[batch_idx, beam_idx] = next_token - next_beam_indices[batch_idx, beam_idx] = batch_beam_idx - beam_idx += 1 - - # once the 
beam for next step is full, don't add more tokens to it. - if beam_idx == self.group_size: - break - - if beam_idx < self.group_size: - raise ValueError( - f"At most {self.group_size} tokens in {next_tokens[batch_idx]} can be equal to `eos_token_id:" - f" {eos_token_id}`. Make sure {next_tokens[batch_idx]} are corrected." - ) - - # Check if we are done so that we can save a pad step if all(done) - self._done[batch_group_idx] = self._done[batch_group_idx] or self._beam_hyps[batch_group_idx].is_done( - next_scores[batch_idx].max().item(), cur_len - ) - - return UserDict( - { - "next_beam_scores": next_beam_scores.view(-1), - "next_beam_tokens": next_beam_tokens.view(-1), - "next_beam_indices": next_beam_indices.view(-1), - } - ) - - def finalize( - self, - input_ids: torch.LongTensor, - final_beam_scores: torch.FloatTensor, - final_beam_tokens: torch.LongTensor, - final_beam_indices: torch.LongTensor, - max_length: int, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[Union[int, List[int]]] = None, - beam_indices: Optional[torch.LongTensor] = None, - ) -> Tuple[torch.LongTensor]: - batch_size = len(self._beam_hyps) // self.num_beam_groups - - if isinstance(eos_token_id, int): - eos_token_id = [eos_token_id] - - # finalize all open beam hypotheses and add to generated hypotheses - for batch_group_idx, beam_hyp in enumerate(self._beam_hyps): - if self._done[batch_group_idx]: - continue - - # all open beam hypotheses are added to the beam hypothesis - # beam hypothesis class automatically keeps the best beams - for index_per_group in range(self.group_size): - batch_beam_idx = batch_group_idx * self.group_size + index_per_group - final_score = final_beam_scores[batch_beam_idx].item() - final_tokens = input_ids[batch_beam_idx] - beam_index = beam_indices[batch_beam_idx] if beam_indices is not None else None - beam_hyp.add(final_tokens, final_score, beam_indices=beam_index) - - # select the best hypotheses - sent_lengths = input_ids.new(batch_size * self.num_beam_hyps_to_keep) - best = [] - best_indices = [] - best_scores = torch.zeros(batch_size * self.num_beam_hyps_to_keep, device=self.device, dtype=torch.float32) - - # retrieve best hypotheses - for i in range(batch_size): - beam_hyps_in_batch = self._beam_hyps[i * self.num_beam_groups : (i + 1) * self.num_beam_groups] - candidate_beams = [beam for beam_hyp in beam_hyps_in_batch for beam in beam_hyp.beams] - sorted_hyps = sorted(candidate_beams, key=lambda x: x[0]) - for j in range(self.num_beam_hyps_to_keep): - best_hyp_tuple = sorted_hyps.pop() - best_score = best_hyp_tuple[0] - best_hyp = best_hyp_tuple[1] - best_index = best_hyp_tuple[2] - sent_lengths[self.num_beam_hyps_to_keep * i + j] = len(best_hyp) - - # append hyp to lists - best.append(best_hyp) - - # append indices to list - best_indices.append(best_index) - - best_scores[i * self.num_beam_hyps_to_keep + j] = best_score - - # prepare for adding eos - sent_lengths_max = sent_lengths.max().item() + 1 - sent_max_len = min(sent_lengths_max, max_length) if max_length is not None else sent_lengths_max - decoded: torch.LongTensor = input_ids.new(batch_size * self.num_beam_hyps_to_keep, sent_max_len) - - if len(best_indices) > 0 and best_indices[0] is not None: - indices: torch.LongTensor = input_ids.new(batch_size * self.num_beam_hyps_to_keep, sent_max_len) - else: - indices = None - - # shorter batches are padded if needed - if sent_lengths.min().item() != sent_lengths.max().item(): - if pad_token_id is None: - raise ValueError("`pad_token_id` has to be defined") - 
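# Editor's note: illustrative sketch only, not part of the original module; the
# helper name is hypothetical. It shows how the `length_penalty` documented above
# rescales a finished hypothesis' score inside BeamHypotheses: the cumulative
# log-probability is divided by the sequence length raised to `length_penalty`.
def example_length_penalized_score(sum_logprobs: float, length: int, length_penalty: float) -> float:
    # Log-probabilities are negative, so dividing by a larger factor (length_penalty > 0)
    # makes long sequences comparatively better; length_penalty < 0 favours short ones,
    # and length_penalty == 0 ignores length entirely.
    return sum_logprobs / (length ** length_penalty)

# e.g. a 10-token beam with total log-prob -12.0 scores -1.2 at length_penalty=1.0,
# -12.0 at length_penalty=0.0, and about -3.79 at length_penalty=0.5.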
decoded.fill_(pad_token_id) - - if indices is not None: - indices.fill_(-1) - - # fill with hypotheses and eos_token_id if the latter fits in - for i, (hypo, best_idx) in enumerate(zip(best, best_indices)): - decoded[i, : sent_lengths[i]] = hypo - - if indices is not None: - indices[i, : len(best_idx)] = torch.tensor(best_idx) - - if sent_lengths[i] < sent_max_len: - # inserting only the first eos_token_id - decoded[i, sent_lengths[i]] = eos_token_id[0] - - return UserDict( - { - "sequences": decoded, - "sequence_scores": best_scores, - "beam_indices": indices, - } - ) - - -class ConstrainedBeamSearchScorer(BeamScorer): - r""" - [`BeamScorer`] implementing constrained beam search decoding. - - - Args: - batch_size (`int`): - Batch Size of `input_ids` for which standard beam search decoding is run in parallel. - num_beams (`int`): - Number of beams for beam search. - constraints (`List[Constraint]`): - A list of positive constraints represented as `Constraint` objects that must be fulfilled in the generation - output. For more information, the documentation of [`Constraint`] should be read. - device (`torch.device`): - Defines the device type (*e.g.*, `"cpu"` or `"cuda"`) on which this instance of `BeamSearchScorer` will be - allocated. - length_penalty (`float`, *optional*, defaults to 1.0): - Exponential penalty to the length that is used with beam-based generation. It is applied as an exponent to - the sequence length, which in turn is used to divide the score of the sequence. Since the score is the log - likelihood of the sequence (i.e. negative), `length_penalty` > 0.0 promotes longer sequences, while - `length_penalty` < 0.0 encourages shorter sequences. - do_early_stopping (`bool` or `str`, *optional*, defaults to `False`): - Controls the stopping condition for beam-based methods, like beam-search. It accepts the following values: - `True`, where the generation stops as soon as there are `num_beams` complete candidates; `False`, where an - heuristic is applied and the generation stops when is it very unlikely to find better candidates; - `"never"`, where the beam search procedure only stops when there cannot be better candidates (canonical - beam search algorithm). - num_beam_hyps_to_keep (`int`, *optional*, defaults to 1): - The number of beam hypotheses that shall be returned upon calling - [`~transformer.BeamSearchScorer.finalize`]. - num_beam_groups (`int`, *optional*, defaults to 1): - Number of groups to divide `num_beams` into in order to ensure diversity among different groups of beams. - See [this paper](https://arxiv.org/pdf/1610.02424.pdf) for more details. - max_length (`int`, *optional*): - The maximum length of the sequence to be generated. 
- """ - - def __init__( - self, - batch_size: int, - num_beams: int, - constraints: List[Constraint], - device: torch.device, - length_penalty: Optional[float] = 1.0, - do_early_stopping: Optional[Union[bool, str]] = False, - num_beam_hyps_to_keep: Optional[int] = 1, - num_beam_groups: Optional[int] = 1, - max_length: Optional[int] = None, - ): - self.num_beams = num_beams - self.device = device - self.length_penalty = length_penalty - self.do_early_stopping = do_early_stopping - self.num_beam_hyps_to_keep = num_beam_hyps_to_keep - self.num_beam_groups = num_beam_groups - self.group_size = self.num_beams // self.num_beam_groups - self.constraints = constraints - - self._is_init = False - self._beam_hyps = [ - BeamHypotheses( - num_beams=self.num_beams, - length_penalty=self.length_penalty, - early_stopping=self.do_early_stopping, - max_length=max_length, - ) - for _ in range(batch_size) - ] - self._done = torch.tensor([False for _ in range(batch_size)], dtype=torch.bool, device=self.device) - - if not isinstance(num_beams, int) or num_beams <= 1: - raise ValueError( - f"`num_beams` has to be an integer strictly greater than 1, but is {num_beams}. For `num_beams` == 1," - " one should make use of `greedy_search` instead." - ) - - if not isinstance(num_beam_groups, int) or (num_beam_groups > num_beams) or (num_beams % num_beam_groups != 0): - raise ValueError( - "`num_beam_groups` has to be an integer smaller or equal than `num_beams` and `num_beams` has to be" - f" divisible by `num_beam_groups`, but is {num_beam_groups} with `num_beams` being {num_beams}." - ) - - @property - def is_done(self) -> bool: - return self._done.all() - - def make_constraint_states(self, n): - return [ConstraintListState([constraint.copy() for constraint in self.constraints]) for _ in range(n)] - - def check_completes_constraints(self, sequence): - new_state = self.make_constraint_states(1)[0] - new_state.reset(sequence) - return new_state.completed - - def process( - self, - input_ids: torch.LongTensor, - next_scores: torch.FloatTensor, - next_tokens: torch.LongTensor, - next_indices: torch.LongTensor, - scores_for_all_vocab: torch.FloatTensor, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[Union[int, List[int]]] = None, - beam_indices: Optional[torch.LongTensor] = None, - ) -> Tuple[torch.Tensor]: - r""" - Args: - input_ids (`torch.LongTensor` of shape `(batch_size * num_beams, sequence_length)`): - Indices of input sequence tokens in the vocabulary. - - Indices can be obtained using any class inheriting from [`PreTrainedTokenizer`]. See - [`PreTrainedTokenizer.encode`] and [`PreTrainedTokenizer.__call__`] for details. - - [What are input IDs?](../glossary#input-ids) - next_scores (`torch.FloatTensor` of shape `(batch_size, 2 * num_beams)`): - Current scores of the top `2 * num_beams` non-finished beam hypotheses. - next_tokens (`torch.LongTensor` of shape `(batch_size, 2 * num_beams)`): - `input_ids` of the tokens corresponding to the top `2 * num_beams` non-finished beam hypotheses. - next_indices (`torch.LongTensor` of shape `(batch_size, 2 * num_beams)`): - Beam indices indicating to which beam hypothesis the `next_tokens` correspond. - scores_for_all_vocab (`torch.FloatTensor` of shape `(batch_size * num_beams, sequence_length)`): - The scores of all tokens in the vocabulary for each of the beam hypotheses. - pad_token_id (`int`, *optional*): - The id of the *padding* token. - eos_token_id (`Union[int, List[int]]`, *optional*): - The id of the *end-of-sequence* token. 
Optionally, use a list to set multiple *end-of-sequence* tokens. - beam_indices (`torch.LongTensor`, *optional*): - Beam indices indicating to which beam hypothesis each token corresponds. - - Return: - `UserDict`: A dictionary composed of the fields as defined above: - - - **next_beam_scores** (`torch.FloatTensor` of shape `(batch_size * num_beams)`) -- Updated scores of all - non-finished beams. - - - **next_beam_tokens** (`torch.FloatTensor` of shape `(batch_size * num_beams)`) -- Next tokens to be added - to the non-finished beam_hypotheses. - - **next_beam_indices** (`torch.FloatTensor` of shape `(batch_size * num_beams)`) -- Beam indices - indicating to which beam the next tokens shall be added. - """ - - cur_len = input_ids.shape[-1] + 1 # one more than input_ids, since next_scores is computed for the next token - batch_size = len(self._beam_hyps) - if not (batch_size == (input_ids.shape[0] // self.group_size)): - if self.num_beam_groups > 1: - raise ValueError( - f"A group beam size of {input_ids.shape[0]} is used as the input, but a group beam " - f"size of {self.group_size} is expected by the beam scorer." - ) - else: - raise ValueError( - f"A beam size of {input_ids.shape[0]} is used as the input, but a beam size of " - f"{self.group_size} is expected by the beam scorer." - ) - - device = input_ids.device - - next_beam_scores = torch.zeros((batch_size, self.group_size), dtype=next_scores.dtype, device=device) - next_beam_tokens = torch.zeros((batch_size, self.group_size), dtype=next_tokens.dtype, device=device) - next_beam_indices = torch.zeros((batch_size, self.group_size), dtype=next_indices.dtype, device=device) - - if isinstance(eos_token_id, int): - eos_token_id = [eos_token_id] - - for batch_idx, beam_hyp in enumerate(self._beam_hyps): - if self._done[batch_idx]: - if self.num_beams < len(beam_hyp): - raise ValueError(f"Batch can only be done if at least {self.num_beams} beams have been generated") - if eos_token_id is None or pad_token_id is None: - raise ValueError("Generated beams >= num_beams -> `eos_token_id` and `pad_token_id` have to be defined") - # pad the batch - next_beam_scores[batch_idx, :] = 0 - next_beam_tokens[batch_idx, :] = pad_token_id - next_beam_indices[batch_idx, :] = 0 - continue - - # next tokens for this sentence. - beam_idx = 0 - for beam_token_rank, (next_token, next_score, next_index) in enumerate( - zip(next_tokens[batch_idx], next_scores[batch_idx], next_indices[batch_idx]) - ): - batch_beam_idx = batch_idx * self.group_size + next_index - # add to generated hypotheses if end of sentence - if (eos_token_id is not None) and (next_token.item() in eos_token_id): - # if beam_token does not belong to top num_beams tokens, it should not be added - is_beam_token_worse_than_top_num_beams = beam_token_rank >= self.group_size - if is_beam_token_worse_than_top_num_beams: - continue - - completes_constraint = self.check_completes_constraints(input_ids[batch_beam_idx].cpu().tolist()) - if completes_constraint: - if beam_indices is not None: - beam_index = beam_indices[batch_beam_idx] - beam_index = beam_index + (batch_beam_idx,) - else: - beam_index = None - - beam_hyp.add( - input_ids[batch_beam_idx].clone(), - next_score.item(), - beam_indices=beam_index, - ) - else: - # add next predicted token since it is not eos_token - next_beam_scores[batch_idx, beam_idx] = next_score - next_beam_tokens[batch_idx, beam_idx] = next_token - next_beam_indices[batch_idx, beam_idx] = batch_beam_idx - beam_idx += 1 - - # once the beam for next step is full, don't add more tokens to it. 
- if beam_idx == self.group_size: - break - - new_scores, new_tokens, new_indices = self.step_sentence_constraint( - batch_idx, - input_ids, - scores_for_all_vocab, - next_beam_scores[batch_idx], - next_beam_tokens[batch_idx], - next_beam_indices[batch_idx], - ) - - next_beam_scores[batch_idx] = new_scores - next_beam_tokens[batch_idx] = new_tokens - next_beam_indices[batch_idx] = new_indices - - if beam_idx < self.group_size: - raise ValueError( - f"At most {self.group_size} tokens in {next_tokens[batch_idx]} can be equal to `eos_token_id:" - f" {eos_token_id}`. Make sure {next_tokens[batch_idx]} are corrected." - ) - - # Check if we are done so that we can save a pad step if all(done) - self._done[batch_idx] = self._done[batch_idx] or beam_hyp.is_done( - next_scores[batch_idx].max().item(), cur_len - ) - - return UserDict( - { - "next_beam_scores": next_beam_scores.view(-1), - "next_beam_tokens": next_beam_tokens.view(-1), - "next_beam_indices": next_beam_indices.view(-1), - } - ) - - def step_sentence_constraint( - self, - batch_idx: int, - input_ids: torch.LongTensor, - vocab_scores: torch.FloatTensor, - sent_beam_scores: torch.FloatTensor, - sent_beam_tokens: torch.LongTensor, - sent_beam_indices: torch.LongTensor, - push_progress: bool = False, - ): - # sent_beam_tokens are the next {num_beams} number of tokens that are under consideration for this beam - # (candidate next tokens) - - # 1. Adding "advance_tokens" - # using ConstraintStateList.advance(), we propose new tokens to be added into this "candidate list" that will - # advance us in fulfilling the constraints. - - # 2. Selecting best candidates such that we end up with highest probable candidates - # that fulfill our constraints. - - orig_len = sent_beam_indices.size(0) - device = sent_beam_indices.device - - # initialize states - topk_contraint_states = self.make_constraint_states(orig_len) - advance_constraint_states = self.make_constraint_states(orig_len) - - sidx, eidx = batch_idx * orig_len, (batch_idx + 1) * orig_len - this_batch_input_ids = input_ids[sidx:eidx] - this_batch_token_scores = vocab_scores[sidx:eidx] - full_hypotheses = torch.cat((input_ids[sent_beam_indices], sent_beam_tokens.unsqueeze(-1)), dim=-1) - - # need to make new hypothesis that advance the constraints - track_new = { - "new_seqs": full_hypotheses.tolist(), - "new_states": [], - "new_indices": [], - "new_tokens": [], - "new_scores": [], - } - for seq_idx, pre_seq in enumerate(this_batch_input_ids): - # pre_seq = ith sequence generated before this step. - - # input_ids -> (topk) generic beam search best model next tokens - # -> (advance) constraints forcing the next token - # either way, we need to sort them into "banks" later, so store a "ConstraintListState" for all types of - # hypotheses. - - topk_state = topk_contraint_states[seq_idx] - topk_state.reset(full_hypotheses[seq_idx].cpu().tolist()) - - advance_state = advance_constraint_states[seq_idx] - advance_state.reset(pre_seq.cpu().tolist()) - - if not advance_state.completed: - advance_tokens = torch.LongTensor(advance_state.advance()).to(device) - for advance_token in advance_tokens: - # since adding each `advance_token` leads to a different hypothesis, create new state instance. - new_state = advance_state.copy(stateful=True) - new_state.add(advance_token.cpu().tolist()) - - advance_seq = torch.cat((pre_seq, advance_token.unsqueeze(0)), -1).cpu().tolist() - if advance_seq not in track_new["new_seqs"]: - # prevent duplicates, which are basically bound to happen in this process. 
- track_new["new_seqs"].append(advance_seq) - track_new["new_indices"].append(sidx + seq_idx) # idx -> global idx across all the batches - track_new["new_tokens"].append(advance_token) - track_new["new_scores"].append(this_batch_token_scores[seq_idx].take(advance_token)) - track_new["new_states"].append(new_state) - elif push_progress: - # Basically, `sent_beam_indices` often chooses very little among `input_ids` the generated sequences that - # actually fulfill our constraints. For example, let constraints == ["loves pies"] and - - # pre_seq_1 = "The child loves pies and" pre_seq_2 = "The child plays in the playground and" - - # Without this step, if `sent_beam_indices` is something like [1,1], then - # 1. `pre_seq_1` won't be added to the list of (topk) hypothesis since it's not in the indices and - # 2. it won't be added to the list of (advance) hypothesis since it's completed already. (this is - # the else part of `if constraints_completed[seq_idx]`) - # 3. it ends up simply getting removed from consideration. - - # #3 might be fine and actually desired, since it's likely that it's a low-probability output anyways, - # especially if it's not in the list of `sent_beam_indices`. But this often leads to lengthened beam - # search times, since completed sequences keep getting removed after all this effort for constrained - # generation. - - # Here, we basically take `pre_seq_1` and to "push" it into the considered list of hypotheses, by simply - # appending the next likely token in the vocabulary and adding it to the list of hypotheses. - - new_score, new_token = torch.max(this_batch_token_scores[seq_idx], 0) # some next probable token - advance_seq = torch.cat((pre_seq, new_token.unsqueeze(0)), -1) - - advance_state = advance_constraint_states[seq_idx] - - advance_seq = advance_seq.cpu().tolist() - - advance_state.reset(advance_seq) - if advance_seq not in track_new["new_seqs"]: - # but still don't want to have duplicates - track_new["new_seqs"].append(advance_seq) - track_new["new_indices"].append(seq_idx) - track_new["new_tokens"].append(new_token) - track_new["new_scores"].append(new_score) - track_new["new_states"].append(advance_state) - - if len(track_new["new_indices"]) > 0: - new_indices = torch.tensor(track_new["new_indices"]).to(device) - new_tokens = torch.stack(track_new["new_tokens"]).to(device) - new_scores = torch.stack(track_new["new_scores"]).to(device) - - all_states = topk_contraint_states + track_new["new_states"] - all_tokens = torch.cat((sent_beam_tokens, new_tokens), -1) - all_scores = torch.cat((sent_beam_scores, new_scores), -1) - all_banks = torch.tensor([one.get_bank() for one in all_states]).to(device) - - zipped = all_banks * 100 + all_scores - indices = zipped.sort(descending=True).indices - sorted_banks = all_banks[indices] - - # Then we end up with {sorted among bank C}, {sorted among bank C-1}, ..., {sorted among bank 0} - - counter = -1 - cur_bank = sorted_banks[0] - increments = [] - for bank in sorted_banks: - if bank == cur_bank: - counter += 1 - else: - counter = 0 - cur_bank = bank - increments.append(counter) - rearrangers = torch.tensor(np.argsort(increments, kind="mergesort")) - - indices = indices[rearrangers][:orig_len] - - sent_beam_scores = all_scores[indices] - sent_beam_tokens = all_tokens[indices] - sent_beam_indices = torch.cat((sent_beam_indices, new_indices))[indices] - - return sent_beam_scores, sent_beam_tokens, sent_beam_indices - - def finalize( - self, - input_ids: torch.LongTensor, - final_beam_scores: torch.FloatTensor, - 
final_beam_tokens: torch.LongTensor, - final_beam_indices: torch.LongTensor, - max_length: int, - pad_token_id: Optional[int] = None, - eos_token_id: Optional[Union[int, List[int]]] = None, - beam_indices: Optional[torch.LongTensor] = None, - ) -> Tuple[torch.LongTensor]: - batch_size = len(self._beam_hyps) - - if isinstance(eos_token_id, int): - eos_token_id = [eos_token_id] - - # finalize all open beam hypotheses and add to generated hypotheses - for batch_idx, beam_hyp in enumerate(self._beam_hyps): - if self._done[batch_idx]: - continue - - # all open beam hypotheses are added to the beam hypothesis - # beam hypothesis class automatically keeps the best beams - - ids_collect = [] - for beam_id in range(self.num_beams): - batch_beam_idx = batch_idx * self.num_beams + beam_id - final_score = final_beam_scores[batch_beam_idx].item() - final_tokens = input_ids[batch_beam_idx] - - completes_constraint = self.check_completes_constraints(final_tokens.cpu().tolist()) - if completes_constraint: - beam_index = beam_indices[batch_beam_idx] if beam_indices is not None else None - beam_hyp.add(final_tokens, final_score, beam_indices=beam_index) - ids_collect.append(beam_id) - - # due to overly complex constraints or other factors, sometimes we can't gaurantee a successful - # generation. In these cases we simply return the highest scoring outputs. - if len(ids_collect) < self.num_beam_hyps_to_keep: - for beam_id in range(self.num_beams): - if beam_id not in ids_collect: - batch_beam_idx = batch_idx * self.num_beams + beam_id - final_score = final_beam_scores[batch_beam_idx].item() - final_tokens = input_ids[batch_beam_idx] - beam_hyp.add(final_tokens, final_score) - if len(ids_collect) >= self.num_beam_hyps_to_keep: - break - - # select the best hypotheses - sent_lengths = input_ids.new(batch_size * self.num_beam_hyps_to_keep) - best = [] - best_indices = [] - best_scores = torch.zeros(batch_size * self.num_beam_hyps_to_keep, device=self.device, dtype=torch.float32) - - # retrieve best hypotheses - for i, beam_hyp in enumerate(self._beam_hyps): - sorted_hyps = sorted(beam_hyp.beams, key=lambda x: x[0]) - for j in range(self.num_beam_hyps_to_keep): - best_hyp_tuple = sorted_hyps.pop() - best_score = best_hyp_tuple[0] - best_hyp = best_hyp_tuple[1] - best_index = best_hyp_tuple[2] - sent_lengths[self.num_beam_hyps_to_keep * i + j] = len(best_hyp) - - # append to lists - best.append(best_hyp) - - # append indices to list - best_indices.append(best_index) - - best_scores[i * self.num_beam_hyps_to_keep + j] = best_score - - # prepare for adding eos - sent_lengths_max = sent_lengths.max().item() + 1 - - sent_max_len = min(sent_lengths_max, max_length) if max_length is not None else sent_lengths_max - decoded: torch.LongTensor = input_ids.new(batch_size * self.num_beam_hyps_to_keep, sent_max_len) - - if len(best_indices) > 0 and best_indices[0] is not None: - indices: torch.LongTensor = input_ids.new(batch_size * self.num_beam_hyps_to_keep, sent_max_len) - else: - indices = None - - # shorter batches are padded if needed - if sent_lengths.min().item() != sent_lengths.max().item(): - if pad_token_id is None: - raise ValueError("`pad_token_id` has to be defined") - decoded.fill_(pad_token_id) - - if indices is not None: - indices.fill_(-1) - - # fill with hypotheses and eos_token_id if the latter fits in - for i, (hypo, best_idx) in enumerate(zip(best, best_indices)): - decoded[i, : sent_lengths[i]] = hypo - - if indices is not None: - indices[i, : len(best_idx)] = torch.tensor(best_idx) - - if 
sent_lengths[i] < sent_max_len: - # inserting only the first eos_token_id - decoded[i, sent_lengths[i]] = eos_token_id[0] - - return UserDict( - { - "sequences": decoded, - "sequence_scores": best_scores, - "beam_indices": indices, - } - ) - - -class BeamHypotheses: - def __init__(self, num_beams: int, length_penalty: float, early_stopping: bool, max_length: Optional[int] = None): - """ - Initialize n-best list of hypotheses. - """ - self.length_penalty = length_penalty - self.early_stopping = early_stopping - self.max_length = max_length - self.num_beams = num_beams - self.beams = [] - self.worst_score = 1e9 - - if not isinstance(self.early_stopping, bool) and self.max_length is None: - raise ValueError( - "When `do_early_stopping` is set to a string, `max_length` must be defined. Ensure it is passed to the" - " BeamScorer class instance at initialization time." - ) - - def __len__(self): - """ - Number of hypotheses in the list. - """ - return len(self.beams) - - def add(self, hyp: torch.LongTensor, sum_logprobs: float, beam_indices: Optional[torch.LongTensor] = None): - """ - Add a new hypothesis to the list. - """ - score = sum_logprobs / (hyp.shape[-1] ** self.length_penalty) - if len(self) < self.num_beams or score > self.worst_score: - self.beams.append((score, hyp, beam_indices)) - if len(self) > self.num_beams: - sorted_next_scores = sorted([(s, idx) for idx, (s, _, _) in enumerate(self.beams)]) - del self.beams[sorted_next_scores[0][1]] - self.worst_score = sorted_next_scores[1][0] - else: - self.worst_score = min(score, self.worst_score) - - def is_done(self, best_sum_logprobs: float, cur_len: int) -> bool: - """ - If there are enough hypotheses and that none of the hypotheses being generated can become better than the worst - one in the heap, then we are done with this sentence. - """ - - if len(self) < self.num_beams: - return False - - # `True`: stop as soon as at least `num_beams` hypotheses are finished - if self.early_stopping is True: - return True - # `False`: heuristic -- compute best possible score from `cur_len`, even though it is not entirely accurate - # when `length_penalty` is positive. See the discussion below for more details. 
- # https://github.com/huggingface/transformers/pull/20901#issuecomment-1369845565 - elif self.early_stopping is False: - highest_attainable_score = best_sum_logprobs / cur_len**self.length_penalty - ret = self.worst_score >= highest_attainable_score - return ret - # `"never"`: compute the best possible score, depending on the sign of `length_penalty` - else: - # `length_penalty` > 0.0 -> max denominator is obtained from `max_length`, not from `cur_len` -> min - # abs(`highest_attainable_score`) is obtained -> `highest_attainable_score` is negative, hence we obtain - # its max this way - if self.length_penalty > 0.0: - highest_attainable_score = best_sum_logprobs / self.max_length**self.length_penalty - # the opposite logic applies here (max `highest_attainable_score` from `cur_len`) - else: - highest_attainable_score = best_sum_logprobs / cur_len**self.length_penalty - ret = self.worst_score >= highest_attainable_score - return ret diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/ctrl/configuration_ctrl.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/ctrl/configuration_ctrl.py deleted file mode 100644 index 553e919b4a77d85c733cc4f0f303fe7664bf437f..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/ctrl/configuration_ctrl.py +++ /dev/null @@ -1,117 +0,0 @@ -# coding=utf-8 -# Copyright 2018 Salesforce and HuggingFace Inc. team. -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Salesforce CTRL configuration""" - -from ...configuration_utils import PretrainedConfig -from ...utils import logging - - -logger = logging.get_logger(__name__) - -CTRL_PRETRAINED_CONFIG_ARCHIVE_MAP = { - "Salesforce/ctrl": "https://huggingface.co/Salesforce/ctrl/resolve/main/config.json" -} - - -class CTRLConfig(PretrainedConfig): - """ - This is the configuration class to store the configuration of a [`CTRLModel`] or a [`TFCTRLModel`]. It is used to - instantiate a CTRL model according to the specified arguments, defining the model architecture. Instantiating a - configuration with the defaults will yield a similar configuration to that of the - [Salesforce/ctrl](https://huggingface.co/Salesforce/ctrl) architecture from Salesforce. - - Configuration objects inherit from [`PretrainedConfig`] and can be used to control the model outputs. Read the - documentation from [`PretrainedConfig`] for more information. - - Args: - vocab_size (`int`, *optional*, defaults to 246534): - Vocabulary size of the CTRL model. Defines the number of different tokens that can be represented by the - `inputs_ids` passed when calling [`CTRLModel`] or [`TFCTRLModel`]. - n_positions (`int`, *optional*, defaults to 256): - The maximum sequence length that this model might ever be used with. Typically set this to something large - just in case (e.g., 512 or 1024 or 2048). 
- n_embd (`int`, *optional*, defaults to 1280): - Dimensionality of the embeddings and hidden states. - dff (`int`, *optional*, defaults to 8192): - Dimensionality of the inner dimension of the feed forward networks (FFN). - n_layer (`int`, *optional*, defaults to 48): - Number of hidden layers in the Transformer encoder. - n_head (`int`, *optional*, defaults to 16): - Number of attention heads for each attention layer in the Transformer encoder. - resid_pdrop (`float`, *optional*, defaults to 0.1): - The dropout probability for all fully connected layers in the embeddings, encoder, and pooler. - embd_pdrop (`int`, *optional*, defaults to 0.1): - The dropout ratio for the embeddings. - layer_norm_epsilon (`float`, *optional*, defaults to 1e-06): - The epsilon to use in the layer normalization layers - initializer_range (`float`, *optional*, defaults to 0.02): - The standard deviation of the truncated_normal_initializer for initializing all weight matrices. - use_cache (`bool`, *optional*, defaults to `True`): - Whether or not the model should return the last key/values attentions (not used by all models). - - - Examples: - - ```python - >>> from transformers import CTRLConfig, CTRLModel - - >>> # Initializing a CTRL configuration - >>> configuration = CTRLConfig() - - >>> # Initializing a model (with random weights) from the configuration - >>> model = CTRLModel(configuration) - - >>> # Accessing the model configuration - >>> configuration = model.config - ```""" - - model_type = "ctrl" - keys_to_ignore_at_inference = ["past_key_values"] - attribute_map = { - "max_position_embeddings": "n_positions", - "hidden_size": "n_embd", - "num_attention_heads": "n_head", - "num_hidden_layers": "n_layer", - } - - def __init__( - self, - vocab_size=246534, - n_positions=256, - n_embd=1280, - dff=8192, - n_layer=48, - n_head=16, - resid_pdrop=0.1, - embd_pdrop=0.1, - layer_norm_epsilon=1e-6, - initializer_range=0.02, - use_cache=True, - **kwargs, - ): - self.vocab_size = vocab_size - self.n_positions = n_positions - self.n_embd = n_embd - self.n_layer = n_layer - self.n_head = n_head - self.dff = dff - self.resid_pdrop = resid_pdrop - self.embd_pdrop = embd_pdrop - self.layer_norm_epsilon = layer_norm_epsilon - self.initializer_range = initializer_range - - self.use_cache = use_cache - - super().__init__(**kwargs) diff --git a/spaces/yl12053/so-vits-4.1-Grass-Wonder/train_index.py b/spaces/yl12053/so-vits-4.1-Grass-Wonder/train_index.py deleted file mode 100644 index a8d8cae451b9c2a18dce3db6e2023bc29d48a021..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Grass-Wonder/train_index.py +++ /dev/null @@ -1,30 +0,0 @@ -import utils -import pickle -import os -import argparse - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument( - "--root_dir", type=str, default="dataset/44k", help="path to root dir" - ) - parser.add_argument('-c', '--config', type=str, default="./configs/config.json", - help='JSON file for configuration') - parser.add_argument( - "--output_dir", type=str, default="logs/44k", help="path to output dir" - ) - - args = parser.parse_args() - - hps = utils.get_hparams_from_file(args.config) - spk_dic = hps.spk - result = {} - - for k,v in spk_dic.items(): - print(f"now, index {k} feature...") - index = utils.train_index(k,args.root_dir) - result[v] = index - - with open(os.path.join(args.output_dir,"feature_and_index.pkl"),"wb") as f: - pickle.dump(result,f) \ No newline at end of file diff --git 
a/spaces/yseop/financial-relation-extractor-demo/app.py b/spaces/yseop/financial-relation-extractor-demo/app.py deleted file mode 100644 index 22871b42ce878a2f2e213c4ace8af56438c19270..0000000000000000000000000000000000000000 --- a/spaces/yseop/financial-relation-extractor-demo/app.py +++ /dev/null @@ -1,73 +0,0 @@ -from transformers import AutoTokenizer, AutoModelForSequenceClassification, AutoConfig -import gradio as gr -from torch.nn import functional as F -import seaborn -import matplotlib -import platform -from transformers.file_utils import ModelOutput -if platform.system() == "Darwin": - print("MacOS") - matplotlib.use('Agg') -import matplotlib.pyplot as plt -import io -from PIL import Image -import matplotlib.font_manager as fm - -# global var -MODEL_NAME = 'yseop/distilbert-base-financial-relation-extraction' -tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME) -model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME) -config = AutoConfig.from_pretrained(MODEL_NAME) -MODEL_BUF = { - "name": MODEL_NAME, - "tokenizer": tokenizer, - "model": model, - "config": config -} -font_dir = ['./'] -for font in fm.findSystemFonts(font_dir): - print(font) - fm.fontManager.addfont(font) -plt.rcParams["font.family"] = 'NanumGothicCoding' - -def change_model_name(name): - MODEL_BUF["name"] = name - MODEL_BUF["tokenizer"] = AutoTokenizer.from_pretrained(name) - MODEL_BUF["model"] = AutoModelForSequenceClassification.from_pretrained(name) - MODEL_BUF["config"] = AutoConfig.from_pretrained(name) -def predict(model_name, text): - if model_name != MODEL_NAME: - change_model_name(model_name) - - tokenizer = MODEL_BUF["tokenizer"] - model = MODEL_BUF["model"] - config = MODEL_BUF["config"] - tokenized_text = tokenizer([text], return_tensors='pt') - model.eval() - output, attention = model(**tokenized_text, output_attentions=True, return_dict=False) - output = F.softmax(output, dim=-1) - result = {} - - for idx, label in enumerate(output[0].detach().numpy()): - result[config.id2label[idx]] = float(label) - return result -if __name__ == '__main__': - text1 = 'An A-B trust is a joint trust created by a married couple for the purpose of minimizing estate taxes.' - text2 = 'For example, if the supply of reserves in the fed funds market is greater than the demand, then the fed funds rate falls, and if the supply of reserves is less than the demand, the rate rises.' - text3 = 'Coupon dates are the dates on which the bond issuer will make interest payments.' - text4 = "Two features of a bond—credit quality and time to maturity—are the principal determinants of a bond's coupon rate." - text5 = "When an investment sale is less than a standard lot, it's referred to as a job lot." - text6 = 'Most bonds can be sold by the initial bondholder to other investors after they have been issued.' - text7 = 'A bond could be thought of as an I.O.U. between the lender and borrower.' 
- model_name_list = [ - 'yseop/distilbert-base-financial-relation-extraction' - ] - #Create a gradio app with a button that calls predict() - app = gr.Interface( - fn=predict, - inputs=[gr.inputs.Dropdown(model_name_list, label="Model Name"), 'text'], outputs=['label'], - examples = [[MODEL_BUF["name"], text1], [MODEL_BUF["name"], text2], [MODEL_BUF["name"], text3], [MODEL_BUF["name"], text4], [MODEL_BUF["name"], text5], [MODEL_BUF["name"], text6], [MODEL_BUF["name"], text7]], - title="FReE (Financial Relation Extraction)", - description="A model capable of detecting the presence of a relationship between financial terms and qualifying the relationship in case of its presence." - ) - app.launch(inline=False) diff --git a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/postprocessing/dula/layout.py b/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/postprocessing/dula/layout.py deleted file mode 100644 index a8e62b0eac42c145753d585bf9b2e5a617a3b9fb..0000000000000000000000000000000000000000 --- a/spaces/zhigangjiang/3D-Room-Layout-Estimation_LGT-Net/postprocessing/dula/layout.py +++ /dev/null @@ -1,226 +0,0 @@ -""" -@Date: 2021/10/06 -@description: Use the approach proposed by DuLa-Net -""" -import cv2 -import numpy as np -import math -import matplotlib.pyplot as plt - -from visualization.floorplan import draw_floorplan - - -def merge_near(lst, diag): - group = [[0, ]] - for i in range(1, len(lst)): - if lst[i][1] == 0 and lst[i][0] - np.mean(group[-1]) < diag * 0.02: - group[-1].append(lst[i][0]) - else: - group.append([lst[i][0], ]) - if len(group) == 1: - group = [lst[0][0], lst[-1][0]] - else: - group = [int(np.mean(x)) for x in group] - return group - - -def fit_layout(floor_xz, need_cube=False, show=False, block_eps=0.2): - show_radius = np.linalg.norm(floor_xz, axis=-1).max() - side_l = 512 - floorplan = draw_floorplan(xz=floor_xz, show_radius=show_radius, show=show, scale=1, side_l=side_l).astype(np.uint8) - center = np.array([side_l / 2, side_l / 2]) - polys = cv2.findContours(floorplan, 1, 2) - if isinstance(polys, tuple): - if len(polys) == 3: - # opencv 3 - polys = list(polys[1]) - else: - polys = list(polys[0]) - polys.sort(key=lambda x: cv2.contourArea(x), reverse=True) - poly = polys[0] - sub_x, sub_y, w, h = cv2.boundingRect(poly) - floorplan_sub = floorplan[sub_y:sub_y + h, sub_x:sub_x + w] - sub_center = center - np.array([sub_x, sub_y]) - polys = cv2.findContours(floorplan_sub, 1, 2) - if isinstance(polys, tuple): - if len(polys) == 3: - polys = list(polys[1]) - else: - polys = list(polys[0]) - poly = polys[0] - epsilon = 0.005 * cv2.arcLength(poly, True) - poly = cv2.approxPolyDP(poly, epsilon, True) - - x_lst = [[0, 0], ] - y_lst = [[0, 0], ] - - ans = np.zeros((floorplan_sub.shape[0], floorplan_sub.shape[1])) - - for i in range(len(poly)): - p1 = poly[i][0] - p2 = poly[(i + 1) % len(poly)][0] - # We added occlusion detection - cp1 = p1 - sub_center - cp2 = p2 - sub_center - p12 = p2 - p1 - l1 = np.linalg.norm(cp1) - l2 = np.linalg.norm(cp2) - l3 = np.linalg.norm(p12) - # We added occlusion detection - is_block1 = abs(np.cross(cp1/l1, cp2/l2)) < block_eps - is_block2 = abs(np.cross(cp2/l2, p12/l3)) < block_eps*2 - is_block = is_block1 and is_block2 - - if (p2[0] - p1[0]) == 0: - slope = 10 - else: - slope = abs((p2[1] - p1[1]) / (p2[0] - p1[0])) - - if is_block: - s = p1[1] if l1 < l2 else p2[1] - y_lst.append([s, 1]) - s = p1[0] if l1 < l2 else p2[0] - x_lst.append([s, 1]) - - left = p1[0] if p1[0] < p2[0] else p2[0] - right = p1[0] if p1[0] > p2[0] else 
p2[0] - top = p1[1] if p1[1] < p2[1] else p2[1] - bottom = p1[1] if p1[1] > p2[1] else p2[1] - sample = floorplan_sub[top:bottom, left:right] - score = 0 if sample.size == 0 else sample.mean() - if score >= 0.3: - ans[top:bottom, left:right] = 1 - - else: - if slope <= 1: - s = int((p1[1] + p2[1]) / 2) - y_lst.append([s, 0]) - elif slope > 1: - s = int((p1[0] + p2[0]) / 2) - x_lst.append([s, 0]) - - debug_show = False - if debug_show: - plt.figure(dpi=300) - plt.axis('off') - a = cv2.drawMarker(floorplan_sub.copy()*0.5, tuple([floorplan_sub.shape[1] // 2, floorplan_sub.shape[0] // 2]), [1], markerType=0, markerSize=10, thickness=2) - plt.imshow(cv2.drawContours(a, [poly], 0, 1, 1)) - plt.savefig('src/1.png', bbox_inches='tight', transparent=True, pad_inches=0) - plt.show() - - plt.figure(dpi=300) - plt.axis('off') - a = cv2.drawMarker(ans.copy()*0.5, tuple([floorplan_sub.shape[1] // 2, floorplan_sub.shape[0] // 2]), [1], markerType=0, markerSize=10, thickness=2) - plt.imshow(cv2.drawContours(a, [poly], 0, 1, 1)) - # plt.show() - plt.savefig('src/2.png', bbox_inches='tight', transparent=True, pad_inches=0) - plt.show() - - x_lst.append([floorplan_sub.shape[1], 0]) - y_lst.append([floorplan_sub.shape[0], 0]) - x_lst.sort(key=lambda x: x[0]) - y_lst.sort(key=lambda x: x[0]) - - diag = math.sqrt(math.pow(floorplan_sub.shape[1], 2) + math.pow(floorplan_sub.shape[0], 2)) - x_lst = merge_near(x_lst, diag) - y_lst = merge_near(y_lst, diag) - if need_cube and len(x_lst) > 2: - x_lst = [x_lst[0], x_lst[-1]] - if need_cube and len(y_lst) > 2: - y_lst = [y_lst[0], y_lst[-1]] - - for i in range(len(x_lst) - 1): - for j in range(len(y_lst) - 1): - sample = floorplan_sub[y_lst[j]:y_lst[j + 1], x_lst[i]:x_lst[i + 1]] - score = 0 if sample.size == 0 else sample.mean() - if score >= 0.3: - ans[y_lst[j]:y_lst[j + 1], x_lst[i]:x_lst[i + 1]] = 1 - - if debug_show: - plt.figure(dpi=300) - plt.axis('off') - a = cv2.drawMarker(ans.copy() * 0.5, tuple([floorplan_sub.shape[1] // 2, floorplan_sub.shape[0] // 2]), [1], - markerType=0, markerSize=10, thickness=2) - plt.imshow(cv2.drawContours(a, [poly], 0, 1, 1)) - # plt.show() - plt.savefig('src/3.png', bbox_inches='tight', transparent=True, pad_inches=0) - plt.show() - - pred = np.uint8(ans) - pred_polys = cv2.findContours(pred, 1, 3) - if isinstance(pred_polys, tuple): - if len(pred_polys) == 3: - pred_polys = list(pred_polys[1]) - else: - pred_polys = list(pred_polys[0]) - - pred_polys.sort(key=lambda x: cv2.contourArea(x), reverse=True) - pred_polys = pred_polys[0] - - if debug_show: - plt.figure(dpi=300) - plt.axis('off') - a = cv2.drawMarker(ans.copy() * 0.5, tuple([floorplan_sub.shape[1] // 2, floorplan_sub.shape[0] // 2]), [1], - markerType=0, markerSize=10, thickness=2) - a = cv2.drawContours(a, [poly], 0, 0.8, 1) - a = cv2.drawContours(a, [pred_polys], 0, 1, 1) - plt.imshow(a) - # plt.show() - plt.savefig('src/4.png', bbox_inches='tight', transparent=True, pad_inches=0) - plt.show() - - polygon = [(p[0][1], p[0][0]) for p in pred_polys[::-1]] - - v = np.array([p[0] + sub_y for p in polygon]) - u = np.array([p[1] + sub_x for p in polygon]) - # side_l - # v<-----------|o - # | | | - # | ----|----z | side_l - # | | | - # | x \|/ - # |------------u - side_l = floorplan.shape[0] - pred_xz = np.concatenate((u[:, np.newaxis] - side_l // 2, side_l // 2 - v[:, np.newaxis]), axis=1) - - pred_xz = pred_xz * show_radius / (side_l // 2) - if show: - draw_floorplan(pred_xz, show_radius=show_radius, show=show) - - show_process = False - if show_process: - img = 
np.zeros((floorplan_sub.shape[0], floorplan_sub.shape[1], 3)) - for x in x_lst: - cv2.line(img, (x, 0), (x, floorplan_sub.shape[0]), (0, 255, 0), 1) - for y in y_lst: - cv2.line(img, (0, y), (floorplan_sub.shape[1], y), (255, 0, 0), 1) - - fig = plt.figure() - plt.axis('off') - ax1 = fig.add_subplot(2, 2, 1) - ax1.imshow(floorplan) - ax3 = fig.add_subplot(2, 2, 2) - ax3.imshow(floorplan_sub) - ax4 = fig.add_subplot(2, 2, 3) - ax4.imshow(img) - ax5 = fig.add_subplot(2, 2, 4) - ax5.imshow(ans) - plt.show() - - return pred_xz - - -if __name__ == '__main__': - from utils.conversion import uv2xyz - - pano_img = np.zeros([512, 1024, 3]) - corners = np.array([[0.1, 0.7], - [0.4, 0.7], - [0.3, 0.6], - [0.6, 0.6], - [0.8, 0.7]]) - xz = uv2xyz(corners)[..., ::2] - draw_floorplan(xz, show=True, marker_color=None, center_color=0.8) - - xz = fit_layout(xz) - draw_floorplan(xz, show=True, marker_color=None, center_color=0.8) diff --git a/spaces/zideliu/styledrop/timm/__init__.py b/spaces/zideliu/styledrop/timm/__init__.py deleted file mode 100644 index db3d3f22f4defb0f2f6ee7ef53a7e88fe3a7d380..0000000000000000000000000000000000000000 --- a/spaces/zideliu/styledrop/timm/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .version import __version__ -from .models import create_model, list_models, is_model, list_modules, model_entrypoint, \ - is_scriptable, is_exportable, set_scriptable, set_exportable
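The deleted `timm/__init__.py` above simply re-exports the model-factory helpers from `timm.models`. As a point of reference, here is a minimal sketch of how downstream code typically consumes those re-exported helpers; the `"resnet*"` filter, the `"resnet18"` architecture name, and `num_classes=10` are illustrative values, not taken from this repository.

```python
import timm

# Browse registered architectures via the re-exported `list_models` helper
# (the "resnet*" filter is just an illustrative pattern).
print(timm.list_models("resnet*")[:5])

# `create_model` is the factory re-exported in __init__.py; `pretrained=False`
# skips any weight download and `num_classes` resizes the classifier head.
model = timm.create_model("resnet18", pretrained=False, num_classes=10)

# `is_model` queries the registry, while `is_scriptable` / `is_exportable`
# report the global flags toggled by `set_scriptable` / `set_exportable`.
print(timm.is_model("resnet18"), timm.is_scriptable(), timm.is_exportable())
```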