diff --git a/spaces/0xSpleef/openchat-openchat_8192/README.md b/spaces/0xSpleef/openchat-openchat_8192/README.md
deleted file mode 100644
index e1b70e9d6ab0890f68b390e52bcb8fd266de4f75..0000000000000000000000000000000000000000
--- a/spaces/0xSpleef/openchat-openchat_8192/README.md
+++ /dev/null
@@ -1,12 +0,0 @@
---
title: Openchat-openchat 8192
emoji: 🌍
colorFrom: red
colorTo: pink
sdk: gradio
sdk_version: 3.35.2
app_file: app.py
pinned: false
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/0xtanmoysamanta/espnet-kan-bayashi_ljspeech_vits/README.md b/spaces/0xtanmoysamanta/espnet-kan-bayashi_ljspeech_vits/README.md
deleted file mode 100644
index 1babeafb73ad9310a235d269b7e6a7e8d8d9b012..0000000000000000000000000000000000000000
--- a/spaces/0xtanmoysamanta/espnet-kan-bayashi_ljspeech_vits/README.md
+++ /dev/null
@@ -1,13 +0,0 @@
---
title: Espnet-kan-bayashi Ljspeech Vits
emoji: 🐨
colorFrom: yellow
colorTo: gray
sdk: gradio
sdk_version: 3.24.1
app_file: app.py
pinned: false
license: mit
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download and Install Microsoft Office 32-bit Version Online or Offline.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download and Install Microsoft Office 32-bit Version Online or Offline.md
deleted file mode 100644
index b15092b1e5fbc2c4626c265ed92a667ac70cc322..0000000000000000000000000000000000000000
--- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Download and Install Microsoft Office 32-bit Version Online or Offline.md
+++ /dev/null
@@ -1,36 +0,0 @@
-

How to Install Microsoft Office 32-bit Version on Your PC

-

Microsoft Office is a popular suite of productivity applications that includes Word, Excel, PowerPoint, Outlook, and more. You can install Microsoft Office on your PC either online or offline, depending on your preference and internet connection. However, before you install Microsoft Office, you need to choose between the 64-bit and 32-bit versions of the software. In this article, we will explain how to install the 32-bit version of Microsoft Office on your PC and why you might want to do so.

-

microsoft office install 32 bit


DOWNLOADhttps://byltly.com/2uKxVM



-

What is the difference between 64-bit and 32-bit versions of Microsoft Office?

-

The main difference between 64-bit and 32-bit versions of Microsoft Office is the amount of memory they can use. The 64-bit version can access more memory than the 32-bit version, which can improve the performance and stability of the software when working with large files and data sets. However, the 64-bit version also requires more disk space and may not be compatible with some older add-ins or customizations.

-

Why choose the 32-bit version of Microsoft Office?

-

There are some reasons why you might want to choose the 32-bit version of Microsoft Office over the 64-bit version. For example, you might want to choose the 32-bit version if:

  - You rely on older add-ins, ActiveX controls, or VBA customizations that are only available in 32-bit versions.
  - You are running a 32-bit edition of Windows, which can only run 32-bit Office.
  - Your PC has limited memory or disk space.
-

How to install Microsoft Office 32-bit version online?

-

If you have a stable internet connection and a Microsoft account, you can install Microsoft Office 32-bit version online by following these steps:

-
    -
  1. Go to https://www.office.com and sign in with your Microsoft account.
  2. From the home page, select Install Office > Other install options.
  3. Under Language and install options, select Additional install options.
  4. Under Version, choose 32-bit and then select Install.
  5. The installation file will be downloaded to your PC. Run it and follow the instructions on the screen to complete the installation.
-

How to install Microsoft Office 32-bit version offline?

-

If you don't have a stable internet connection or prefer to install Microsoft Office offline, you can use the offline installer by following these steps:

-

-
    -
  1. Go to https://www.microsoft.com/en-us/download/details.aspx?id=49117 and download the offline installer for your language and region.
  2. The offline installer will be downloaded as an ISO file. You can either burn it to a DVD or mount it as a virtual drive on your PC.
  3. Select the Microsoft 365 folder from the virtual drive and then double-click Setup32.exe to install the 32-bit version of Microsoft Office. If you prefer to script these steps, see the sketch after this list.
  4. Follow the instructions on the screen to complete the installation.
-
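For readers who prefer to script the offline installation, here is a minimal sketch that mounts the ISO and launches the 32-bit setup. The ISO location and the folder name inside the image are assumptions, so adjust both paths to match your download. Mount-DiskImage is a built-in PowerShell cmdlet on Windows 8 and later; on older systems you would burn the ISO or use a third-party mounting tool instead, as described above.

```python
import subprocess

# Hypothetical path to the downloaded Office ISO; change it to your actual file.
iso_path = r"C:\Downloads\OfficeOffline.iso"

# Mount the ISO and read back the drive letter it was assigned.
# Mount-DiskImage ships with Windows 8 and later.
result = subprocess.run(
    ["powershell", "-NoProfile", "-Command",
     f"(Mount-DiskImage -ImagePath '{iso_path}' -PassThru | Get-Volume).DriveLetter"],
    capture_output=True, text=True, check=True,
)
drive = result.stdout.strip()

# Launch the 32-bit installer; the folder name inside the ISO is an assumption,
# so browse the mounted drive first if the path differs.
subprocess.run([f"{drive}:\\Office\\Setup32.exe"], check=True)
```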

Conclusion

-

Microsoft Office is a powerful and versatile suite of productivity applications that you can install on your PC either online or offline. However, before you install Microsoft Office, you need to choose between the 64-bit or 32-bit version of the software depending on your device specifications and compatibility needs. In this article, we explained how to install Microsoft Office 32-bit version on your PC and why you might want to do so. We hope this article was helpful and informative for you.

-
-
\ No newline at end of file
diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Facebook Password Hacker V4 0 Free Download.md b/spaces/1gistliPinn/ChatGPT4/Examples/Facebook Password Hacker V4 0 Free Download.md
deleted file mode 100644
index 4811e91738fc85bf6676fc03a0829e8f1a9fa75d..0000000000000000000000000000000000000000
--- a/spaces/1gistliPinn/ChatGPT4/Examples/Facebook Password Hacker V4 0 Free Download.md
+++ /dev/null
@@ -1,48 +0,0 @@
-

facebook password hacker v4 0 free download


Download Zip ->>->>->> https://imgfil.com/2uy0uu



This hack tool includes access to all the information on your Facebook profile. You can use it to read the messages from all your Facebook friends and the messages in the social network.
-
Below, I will tell you how you can download the APK file.
-
Download the APK of Facebook Password Hacker for Android for free.
-
The Facebook Password Hacker app is very easy to use. You will be able to hack the passwords of your Facebook account in the next 30 seconds.
-
After you have logged in to your Facebook account, the app will allow you to hack any Facebook account within a few seconds. You can hack multiple accounts simultaneously, and the tool works with all the Facebook accounts on your Android phone.
-
There are many reasons why you might use this app. For example, if you need to access a file or document on Facebook that is locked, you can use the app to recover your password and open the file.
-
This app is not designed to hack the account of a Facebook admin or Facebook staff, and you cannot access the data of other people's Facebook accounts.
-
If you have a Facebook account on your Android phone, you can use Facebook Password Hacker to hack it, including accounts that are shared with you. The app gives you access to your Facebook account and all the data on it.
-
You can hack Facebook accounts by email or phone. You can get the phone number from your Facebook friends, or you can also access the phone numbers of your Facebook friends directly.
-
App Details:
-
Name: Facebook Password Hacker
Version: 1.4.3
Developer: JAN
Email: info@janhack.com
File Size: 38 MB
Requires Android: 3.0 and up
-
Overview:
-
You can use this Facebook Password Hacker app to hack Facebook accounts.
-
-
-

diff --git a/spaces/1phancelerku/anime-remove-background/AP KGBV Teaching Jobs 2023 How to Apply for Principal PGT CRT PET Posts in KGBV Schools.md b/spaces/1phancelerku/anime-remove-background/AP KGBV Teaching Jobs 2023 How to Apply for Principal PGT CRT PET Posts in KGBV Schools.md
deleted file mode 100644
index af5b3d8177624785bdf2b9aa613378170cd1ab8a..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/AP KGBV Teaching Jobs 2023 How to Apply for Principal PGT CRT PET Posts in KGBV Schools.md
+++ /dev/null
@@ -1,185 +0,0 @@
-
-

APKGVB Notification 2023: Everything You Need to Know

-

If you are interested in working as a teacher or studying in a residential school for girls in Andhra Pradesh, then you should not miss the APKGVB notification 2023. This notification is released by the Samagra Shiksha, Government of Andhra Pradesh, to invite online applications for filling vacant teaching staff posts and for admitting students in all the Kasturba Gandhi Balika Vidyalayas (KGBVs) located across the state. In this article, we will tell you everything you need to know about the APKGVB notification 2023, including what APKGVB is, how the recruitment and admission processes work, how to apply online, and more.

-

apkgbv notification 2023


Download >>>>> https://jinyurl.com/2uNQpT



-

What is APKGVB?

-

APKGVB stands for Andhra Pradesh Kasturba Gandhi Balika Vidyalaya. It is a scheme launched by the Government of India in August 2004, under the Sarva Shiksha Abhiyan (SSA), to provide quality education to girls from disadvantaged sections of society. The scheme aims to set up residential schools at upper primary level for girls belonging to SC, ST, OBC, minority communities and families below the poverty line (BPL) in educationally backward blocks. The scheme was later extended to cover girls in secondary level as well.

-

The objectives and features of APKGVB

-

The main objectives of APKGVB are:

- -

Some of the key features of APKGVB are:

- -

The benefits and achievements of APKGVB

-

APKGVB has been successful in achieving its goals and bringing positive changes in the lives of girls. Some of the benefits and achievements of APKGVB are:

- -

What is the APKGVB notification 2023?

The APKGVB notification 2023 is a document that contains all the information regarding the recruitment of teaching staff and the admission of students in the KGBVs for the academic year 2023-24. The notification is released by the Samagra Shiksha, Government of Andhra Pradesh, on its official website apkgbv.apcfss.in. The notification covers two aspects:

-


-

The recruitment process for teaching staff in KGBVs

-

The Samagra Shiksha invites online applications from eligible women candidates to fill 1,358 vacant teaching staff posts in all the KGBVs across the state. The posts include Principal, Post Graduate Teacher (PGT), Contract Residential Teacher (CRT) and Physical Education Teacher (PET). Recruitment is on a contractual basis for a period of one year, or until regular recruitment is made, whichever is earlier.

-

Eligibility criteria and application fee

-

The candidates who wish to apply for the teaching staff recruitment must fulfill the following eligibility criteria:

- - - - - - -
| Post | Qualification | Age Limit | Application Fee |
| --- | --- | --- | --- |
| Principal | Post Graduation Degree with B.Ed. from a recognized university with at least 50% marks in aggregate for OCs, 45% for BCs and 40% for SC/ST/Differently abled persons. | Not more than 45 years as on 01.07.2023 | Rs. 500/- |
| PGT | Post Graduation Degree in the relevant subject with B.Ed. from a recognized university with at least 50% marks in aggregate for OCs, 45% for BCs and 40% for SC/ST/Differently abled persons. | Not more than 44 years as on 01.07.2023 | Rs. 500/- |
| CRT | Graduation Degree in the relevant subject with B.Ed. from a recognized university with at least 50% marks in aggregate for OCs, 45% for BCs and 40% for SC/ST/Differently abled persons. | Not more than 39 years as on 01.07.2023 | Rs. 500/- |
| PET | Intermediate with D.P.Ed./B.P.Ed./M.P.Ed. from a recognized board or university. | Not more than 39 years as on 01.07.2023 | Rs. 250/- |
-

Timeline and selection procedure

-

The candidates who are interested and eligible can apply online through the official website apkgbv.apcfss.in from May 30, 2023 to June 05, 2023. The candidates have to pay the application fee through online mode only using debit card/credit card/net banking etc. The candidates have to upload their scanned copies of photograph, signature and relevant documents while applying online.

-

The selection of the candidates will be done on the basis of merit list prepared by the State Office at the ratio of 1:3 for each post. The merit list will be based on the academic qualifications, professional qualifications and experience of the candidates as per the weightage given below:

- - - - - - -
| Post | Academic Qualifications (Max Marks) | Professional Qualifications (Max Marks) | Experience (Max Marks) |
| --- | --- | --- | --- |
| Principal | 30 (10 marks each for SSC, Intermediate and Graduation) | 20 (10 marks each for Post Graduation and B.Ed.) | 50 (10 marks each for one year of experience as Principal/PGT/CRT/PET in any residential school) |
| PGT | 30 (10 marks each for SSC, Intermediate and Graduation) | 20 (10 marks each for Post Graduation and B.Ed.) | 50 (10 marks each for one year of experience as PGT/CRT/PET in any residential school) |
| CRT | 30 (10 marks each for SSC, Intermediate and Graduation) | 20 (10 marks each for B.Ed.) | 50 (10 marks each for one year of experience as CRT/PET in any residential school) |
| PET | 30 (10 marks each for SSC, Intermediate and D.P.Ed./B.P.Ed./M.P.Ed.) | 20 (10 marks each for Graduation) | 50 (10 marks each for one year of experience as PET in any residential school) |
-
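To make the weightage above concrete, here is a small illustrative calculation. The exact scaling and rounding rules are defined in the official notification, so treat the helper below as a sketch of how the 30/20/50 split combines rather than the official formula.

```python
def merit_score(academic: float, professional: float, years_experience: int) -> float:
    """Illustrative merit score based on the 30/20/50 weightage shown above.

    `academic` and `professional` are assumed to already be expressed out of
    their 30- and 20-mark caps; experience earns 10 marks per completed year
    in a residential school, capped at 50 marks.
    """
    return min(academic, 30) + min(professional, 20) + min(10 * years_experience, 50)

# Example: a CRT applicant with 26/30 academic marks, 18/20 professional marks
# and 4 years of experience scores 26 + 18 + 40 = 84 out of 100.
print(merit_score(26, 18, 4))
```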

The candidates who are shortlisted in the merit list will be called for certificate verification and demo/interview at the district level. The final selection will be based on the performance of the candidates in the demo/interview and the availability of vacancies.

-

Vacancy details and salary structure

-

The vacancy details for the teaching staff recruitment are as follows:

- - - - - - -
| Post | No. of Vacancies | Salary per month |
| --- | --- | --- |
| Principal | 44 | Rs. 40,000/- |
| PGT | 313 | Rs. 31,000/- |
| CRT | 897 | Rs. 21,000/- |
| PET | 104 | Rs. 12,000/- |
-

The salary structure for the teaching staff is subject to revision as per the norms of the Samagra Shiksha.
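Combining the 1:3 shortlisting ratio mentioned earlier with these vacancy figures gives a rough idea of how many candidates may be called for certificate verification. The sketch below assumes the ratio is applied uniformly to every post, which is an assumption rather than something stated in the notification.

```python
vacancies = {"Principal": 44, "PGT": 313, "CRT": 897, "PET": 104}

# Merit lists are prepared at a 1:3 ratio for each post, i.e. roughly three
# shortlisted candidates per vacancy (assuming the ratio is applied per post).
shortlist = {post: 3 * count for post, count in vacancies.items()}

print(shortlist)                # {'Principal': 132, 'PGT': 939, 'CRT': 2691, 'PET': 312}
print(sum(shortlist.values()))  # 4074 candidates for the 1,358 posts
```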

-

The admission process for students in KGBVs

-

The Samagra Shiksha also invites online applications from eligible girl students for admission into Class VI to X in all the KGBVs across the state. The admission is done on a merit-cum-reservation basis for a total of 36,720 seats available in 918 KGBVs.

-

Eligibility criteria and application fee

-

The girl students who wish to apply for the admission in KGBVs must fulfill the following eligibility criteria:

- -

The girl students who are eligible can apply online through the official website apkgbv.apcfss.in without paying any application fee.

-

Timeline and selection procedure

-

The girl students who are interested and eligible can apply online through the official website apkgbv.apcfss.in from June 10, 2023 to June 20, 2023. The girl students have to upload their scanned copies of photograph, signature and relevant documents while applying online.

-

The selection of the girl students will be done on the basis of merit list prepared by the District Project Office at the ratio of 1:2 for each seat. The merit list will be based on the marks obtained by the girl students in their previous class. The merit list will be displayed on the notice board of the concerned KGBV and on the official website apkgbv.apcfss.in by June 25, 2023.

-

The girl students who are shortlisted in the merit list will be called for certificate verification and counseling at the district level. The final selection will be based on the verification of documents and the availability of seats.

-

Reservation policy and seat allotment

-

The reservation policy for the admission of girl students in KGBVs is as follows:

- -

How to apply for APKGVB notification 2023?

-

If you are interested in applying for the APKGVB notification 2023, either as a teaching staff or as a student, you have to follow the steps given below:

-

The steps to apply online for teaching staff recruitment

-
    -
  1. Visit the official website apkgbv.apcfss.in and click on the link "Online Application for Teaching Staff Recruitment 2023".
  2. Read the instructions carefully and click on the "Proceed" button.
  3. Fill in the basic details such as name, date of birth, gender, mobile number, email id, etc. and click on the "Submit" button.
  4. You will receive an OTP on your registered mobile number and email id. Enter the OTP and click on the "Verify" button.
  5. You will get a registration number and password. Note them down for future reference.
  6. Login with your registration number and password, fill in the personal details, educational details, experience details, etc. and click on the "Save" button.
  7. Upload your scanned photograph, signature and relevant documents in the prescribed format and size and click on the "Upload" button.
  8. Pay the application fee through online mode using debit card/credit card/net banking etc. and click on the "Pay" button.
  9. Take a printout of the application form and fee receipt for future reference.
-

The steps to apply online for student admission

-
    -
  1. Visit the official website apkgbv.apcfss.in and click on the link "Online Application for Student Admission 2023".
  2. Read the instructions carefully and click on the "Proceed" button.
  3. Fill in the basic details such as name, date of birth, gender, caste, community, BPL status, etc. and click on the "Submit" button.
  4. You will receive an OTP on your registered mobile number. Enter the OTP and click on the "Verify" button.
  5. You will get a registration number and password. Note them down for future reference.
  6. Login with your registration number and password, fill in the personal details, educational details, preferences of schools, etc. and click on the "Save" button.
  7. Upload your scanned photograph, signature and relevant documents in the prescribed format and size and click on the "Upload" button.
  8. Take a printout of the application form for future reference.
-

Conclusion

-

The APKGVB notification 2023 is a great opportunity for women candidates who want to pursue a career as a teacher and for girl students who want to get quality education in a residential school. The notification provides all the details about the eligibility criteria, application process, selection process, vacancy details, reservation policy, etc. for both teaching staff recruitment and student admission. The candidates who are interested and eligible can apply online through the official website apkgbv.apcfss.in before the last date. The candidates who are selected will be able to work or study in one of the best KGBVs in Andhra Pradesh.

-

Frequently Asked Questions

-

Here are some of the frequently asked questions about the APKGVB notification 2023:

-

Q: When will the APKGVB notification 2023 be released?

-

A: The APKGVB notification 2023 is expected to be released by May 2023 on the official website apkgbv.apcfss.in.

-

Q: How many vacancies are there for teaching staff recruitment in KGBVs?

-

A: There are 1358 vacancies for teaching staff recruitment in KGBVs, including 44 for Principal, 313 for PGT, 897 for CRT and 104 for PET.

-

Q: How many seats are there for student admission in KGBVs?

-

A: There are 36,720 seats for student admission in KGBVs, including 9180 seats for Class VI, 9180 seats for Class VII, 9180 seats for Class VIII, 9180 seats for Class IX and 9180 seats for Class X.

-

Q: What is the application fee for teaching staff recruitment in KGBVs?

-

A: The application fee for teaching staff recruitment in KGBVs is Rs. 500/- for Principal, PGT and CRT posts and Rs. 250/- for PET post.

-

Q: What is the application fee for student admission in KGBVs?

-

A: There is no application fee for student admission in KGBVs. The girl students can apply online for free.

-

Q: How can I contact the Samagra Shiksha for any queries or grievances regarding the APKGVB notification 2023?

-

A: You can contact the Samagra Shiksha through the following modes:

-

-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Candy Crush Soda Saga A Free and Fun Game for PC Windows 7 Users.md b/spaces/1phancelerku/anime-remove-background/Candy Crush Soda Saga A Free and Fun Game for PC Windows 7 Users.md
deleted file mode 100644
index ff5bb2b31664f01be7cf23b6e04a1427915a4521..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Candy Crush Soda Saga A Free and Fun Game for PC Windows 7 Users.md
+++ /dev/null
@@ -1,129 +0,0 @@
-

Candy Crush Soda Saga Download for PC Windows 7 Free

-

If you are looking for a fun and addictive puzzle game that will keep you entertained for hours, you might want to try Candy Crush Soda Saga. This game is a sequel to the popular Candy Crush Saga, and it offers more divine matching combinations, challenging game modes, and fizzy fun. In this article, we will show you how to download and install Candy Crush Soda Saga on your PC Windows 7 for free. We will also share some of the benefits of playing this game, as well as some tips and tricks to help you master it.

-

What is Candy Crush Soda Saga?

-

Candy Crush Soda Saga is a match-3 puzzle game developed by King, a leading company in casual gaming. The game was released in 2014 as a spin-off of Candy Crush Saga, one of the most successful mobile games of all time. The game has over 100 million downloads on Google Play Store alone, and it has received positive reviews from critics and players alike.

-

candy crush soda saga download for pc windows 7 free


Download Zip ✯✯✯ https://jinyurl.com/2uNRqC



-

A fun and addictive match-3 puzzle game

-

The gameplay of Candy Crush Soda Saga is similar to that of Candy Crush Saga. You have to match three or more candies of the same color to clear them from the board. You can also create special candies by matching four or more candies in different shapes, such as striped, wrapped, or fish candies. These special candies can have various effects, such as clearing a whole row, column, or area of candies.

-

The game has different objectives depending on the level type. For example, in soda levels, you have to switch the bottles and match candies to release purple soda and save the candy bears. In frosting levels, you have to match candies to smash the ice and set the candy bears free. In honey levels, you have to match candies next to honeycomb to release the trapped candy bears. In jam levels, you have to spread the jam across the board.

-

-

The game has over 10,000 levels to play, each with different layouts, obstacles, and challenges. You have a limited number of moves or time to complete each level. If you run out of moves or time before reaching the goal, you will lose a life. You can earn up to five lives at a time, which regenerate over time or can be purchased with real money. You can also use boosters, such as lollipop hammers or color bombs, to help

you clear the board or make special moves. You can also earn stars and gold bars by completing levels, which can be used to unlock new episodes, buy boosters, or access special features.

-

A sequel to the popular Candy Crush Saga

-

Candy Crush Soda Saga is a sequel to Candy Crush Saga, which was launched in 2012 and became a global phenomenon. Candy Crush Saga is based on the classic game Candy Crush, which was created by King in 2011. The game has been downloaded over 2.7 billion times and has more than 270 million monthly active users. The game has also inspired several spin-offs, such as Candy Crush Jelly Saga, Candy Crush Friends Saga, and Candy Crush All Stars.

-

Candy Crush Soda Saga follows the adventures of Kimmy, the sister of Tiffi, the main character of Candy Crush Saga. Kimmy is looking for her lost sister and travels through the Candy Kingdom, meeting new friends and foes along the way. The game introduces new characters, such as Mr. Toffee, Yeti, Bubblegum Troll, and Percy the Penguin. The game also features new graphics, animations, sound effects, and music that enhance the candy-themed experience.

-

A game with different modes, levels, and challenges

-

Candy Crush Soda Saga is a game that offers a lot of variety and fun for players of all ages and skill levels. The game has different modes that test your abilities and creativity. For example, in Live Events mode, you can compete with other players in real-time for prizes and glory. In Quests mode, you can complete daily tasks and earn rewards. In Team mode, you can join or create a team with other players and chat, share lives, and play together.

-

The game also has different levels that challenge your strategy and logic. For example, in Boss levels, you have to face off against powerful enemies that have special abilities and tricks. In Super Hard levels, you have to overcome extra difficult obstacles and puzzles. In Treasure Hunt levels, you have to find hidden treasures and collect them.

-

The game also has different challenges that add more excitement and fun to the gameplay. For example, in Bubblegum Hill challenge, you have to climb a mountain of bubblegum and collect as many gold crowns as possible. In Soda Squad challenge, you have to work with your team to fill a soda meter and win rewards. In Rainbow Rapids challenge, you have to match candies on rainbow-colored tiles and create rainbow streaks.

-

How to download and install Candy Crush Soda Saga on PC Windows 7?

-

If you want to enjoy Candy Crush Soda Saga on a bigger screen and with better performance, you can download and install it on your PC Windows 7 for free. There are two main options for doing this: downloading from the Microsoft Store or downloading from a third-party platform.

-

Option 1: Download from the Microsoft Store

-

The Microsoft Store is the official app store for Windows devices. It offers a wide range of apps and games that are compatible with Windows 7 or later versions. You can download Candy Crush Soda Saga from the Microsoft Store by following these steps:

-

Step 1: Open the Microsoft Store app

-

To open the Microsoft Store app, you can click on the Start button on the bottom left corner of your screen and type "Microsoft Store" in the search box. Alternatively, you can press the Windows key + S on your keyboard and type "Microsoft Store" in the search box.

-

Step 2: Search for Candy Crush Soda Saga

-

To search for Candy Crush Soda Saga in the Microsoft Store app, you can click on the magnifying glass icon on the top right corner of the app window and type "Candy Crush Soda Saga" in the search box. Alternatively, you can press Ctrl + F on your keyboard and type "Candy Crush Soda Saga" in the search box.

-

Step 3: Click on Get or Install to download the game

-

To download Candy Crush Soda Saga from the Microsoft Store app, you can click on the Get or Install button next to the game's name and icon. This will start downloading the game to your PC Windows 7. You may need to sign in with your Microsoft account or create one if you don't have one already.

-

Step 4: Launch the game from the Start menu or the desktop shortcut

-

To launch Candy Crush Soda Saga from your PC Windows 7, you can click on the Start button on the bottom left corner of your screen and scroll down to find the game's name and icon under "C". Alternatively, you can press the Windows key + Q on your keyboard and type "Candy Crush Soda Saga" in the search box. You can also find a desktop shortcut for the game on your desktop and double-click on it to launch the game.

-

Option 2: Download from a third-party platform

-

If you prefer to download Candy Crush Soda Saga from a different source than the Microsoft Store, you can use a third-party platform that offers PC games. Some of the most popular platforms are Steam, Epic Games, GOG, and itch.io. You can download Candy Crush Soda Saga from any of these platforms by following these steps:

-

Step 1: Choose a platform such as Steam, Epic Games, GOG, or itch.io

-

To choose a platform to download Candy Crush Soda Saga from, you can visit their official websites and compare their features, prices, and reviews. You can also check if they have any discounts, deals, or free games available. Some of the factors to consider when choosing a platform are:

- -

Step 2: Create an account and log in to the platform

-

To create an account and log in to the platform of your choice, you can follow the instructions on their website or app. You may need to provide some personal information, such as your name, email address, password, and payment details. You may also need to verify your account through email or phone. Once you have created an account and logged in to the platform, you can access its features and browse its games.

-

Step 3: Search for Candy Crush Soda Saga and purchase or download the game

-

To search for Candy Crush Soda Saga on the platform of your choice, you can use the search bar or filter options to find the game's name and icon. You can also check the game's description, screenshots, videos, ratings, reviews, and system requirements. Depending on the platform, you may need to purchase or download the game before playing it. Some platforms may offer free trials or demos of the game. You can also check if there are any updates or patches available for the game.

-

Step 4: Launch the game from the platform's library or launcher

-

To launch Candy Crush Soda Saga from the platform of your choice, you can go to your library or launcher and find the game's name and icon. You can also create a desktop shortcut for the game if you want. You can then click on Play or Launch to start playing the game. You may need to log in to your account or connect to the internet to play the game.

-

What are the benefits of playing Candy Crush Soda Saga?

-

Candy Crush Soda Saga is not only a fun and addictive game, but also a beneficial one. Playing this game can have positive effects on your mental and emotional well-being. Here are some of the benefits of playing Candy Crush Soda Saga:

-

It can improve your cognitive skills and memory

-

Playing Candy Crush Soda Saga can stimulate your brain and enhance your cognitive skills, such as attention, concentration, problem-solving, logic, spatial awareness, pattern recognition, and memory. These skills are essential for learning, working, and everyday life. By matching candies and creating special combinations, you can train your brain to process information faster and more efficiently. By completing levels and advancing through episodes, you can challenge your brain to remember details and strategies.

-

It can reduce stress and boredom

-

Playing Candy Crush Soda Saga can also help you relax and unwind from stress and boredom. The game has colorful graphics, cheerful music, cute characters, and satisfying sound effects that can create a positive mood and atmosphere. The game also has simple rules and easy controls that can make you feel comfortable and confident. The game also has different modes and levels that can keep you entertained and engaged for hours. The game also has a rewarding system that can make you feel accomplished and motivated. By playing Candy Crush Soda Saga, you can escape from the worries and pressures of reality and enjoy a sweet and refreshing adventure.

-

It can provide social interaction and entertainment

-

Playing Candy Crush Soda Saga can also help you connect and interact with other people who share your passion for the game. The game has a social feature that allows you to link your Facebook account and see your friends' progress and scores. You can also send and receive lives, boosters, and messages from your friends. You can also join or create a team with other players and chat, share lives, and play together. You can also compete with other players in live events or leaderboards and show off your skills and achievements. By playing Candy Crush Soda Saga, you can have fun and make new friends at the same time.

-

What are some tips and tricks for playing Candy Crush Soda Saga?

-

Candy Crush Soda Saga is a game that requires strategy and skill to master. If you want to improve your performance and progress faster in the game, you might want to follow some tips and tricks that can help you beat the levels and challenges. Here are some of them:

-

Focus on clearing soda bottles and raising the soda level

-

In soda levels, the main objective is to switch the bottles and match candies to release purple soda and save the candy bears. The more soda you release, the higher the soda level will rise. The higher the soda level, the easier it will be to match candies and clear the board. Therefore, you should focus on clearing soda bottles as soon as possible and raising the soda level as high as possible. You should also try to match candies near the bottom of the board, as this will create more cascades and opportunities to clear more bottles.

-

Use special candies and combos to clear obstacles and ice

-

In frosting levels, the main objective is to match candies to smash the ice and set the candy bears free. The ice can be thick or thin, depending on the level. The thicker the ice, the more times you have to match candies next to it to break it. Therefore, you should use special candies and combos to clear obstacles and ice faster and more efficiently. Special candies are created by matching four or more candies in different shapes, such as striped, wrapped, or fish candies. Combos are created by matching two or more special candies together, such as striped + striped, striped + wrapped, or wrapped + wrapped. These special candies and combos can have various effects, such as clearing a whole row, column, or area of candies.

-

Keep an eye on the bubble bears and don't let them float away

-

In honey levels, the main objective is to match candies next to honeycomb to release the trapped candy bears. The honeycomb can be thick or thin, depending on the level. The thicker the honeycomb, the more times you have to match candies next to it to break it. However, there is another challenge in these levels: the bubble bears. These are candy bears that are surrounded by bubbles and float up when you match candies below them. If they reach the top of the board, they will disappear and you will lose them. Therefore, you should keep an eye on the bubble bears and don't let them float away. You should try to match candies next to them or use special candies or combos to pop their bubbles.

-

Plan your moves ahead and save your boosters for hard levels

-

In jam levels, the main objective is to spread the jam across the board. The jam is a sticky substance that covers some of the candies or tiles on the board. To spread the jam, you have to match candies on top of it or use special candies or combos to splash it. However, you have a limited number of moves or time to spread the jam to all the tiles on the board. Therefore, you should plan your moves ahead and save your boosters for hard levels. Boosters are items that can help you clear the board or make special moves. You can earn boosters by completing levels, quests, events, or challenges. You can also buy boosters with real money. Some of the boosters are lollipop hammers, color bombs, striped brushes, free switches, and extra moves.

-

Conclusion

-

Candy Crush Soda Saga is a game that can offer you hours of fun and enjoyment. It is a game that can improve your cognitive skills and memory, reduce your stress and boredom, and provide you with social interaction and entertainment. It is also a game that can challenge your strategy and logic with different modes, levels, and obstacles. If you want to play this game on your PC Windows 7 for free, you can download it from the Microsoft Store or from a third-party platform. You can also follow some tips and tricks to help you master the game and beat the levels. So what are you waiting for? Download Candy Crush Soda Saga today and join Kimmy on her sweet and fizzy adventure!

-

FAQs

-

Here are some of the frequently asked questions about Candy Crush Soda Saga:

-

Q: How do I sync my progress across different devices?

-

A: To sync your progress across different devices, you need to link your game to your Facebook account or your King account. You can do this by tapping on the settings icon on the main screen and choosing "Connect" or "Log in". Once you have linked your game to your account, you can access your progress on any device that has the game installed.

-

Q: How do I get more lives?

-

A: To get more lives, you have several options. You can wait for your lives to regenerate over time, which takes about 30 minutes per life. You can ask your friends for lives, which they can send you through Facebook or the game's app. You can join or create a team and share lives with your teammates. You can also buy lives with real money or gold bars.
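Since lives regenerate at roughly 30 minutes each up to the cap of five, you can estimate the waiting time yourself. A tiny sketch of that arithmetic, using the figures mentioned above:

```python
MINUTES_PER_LIFE = 30  # approximate regeneration time stated above
MAX_LIVES = 5

def minutes_until_full(current_lives: int) -> int:
    """Estimated wait (in minutes) until the life counter is back at the cap."""
    return MINUTES_PER_LIFE * max(MAX_LIVES - current_lives, 0)

print(minutes_until_full(0))  # 150 minutes, i.e. about 2.5 hours from zero lives
```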

-

Q: How do I get more gold bars?

-

A: To get more gold bars, you have several options. You can earn gold bars by completing levels, quests, events, or challenges. You can also buy gold bars with real money or redeem them with gift cards or coupons. You can also get gold bars from your friends or teammates as gifts.

-

Q: How do I unlock new episodes?

-

A: To unlock new episodes, you need to complete all the levels in the previous episode. You may also need to pay a certain amount of gold bars or ask your friends for tickets to unlock the next episode. Some episodes may also have special requirements or conditions to unlock them.

-

Q: How do I contact customer support?

-

A: To contact customer support, you can visit the official website of King and go to the "Help Center" section. There you can find answers to common questions, report a problem, give feedback, or chat with an agent.

-
-
\ No newline at end of file
diff --git a/spaces/1phancelerku/anime-remove-background/Download Guardian Tales JP and Experience a Classic Adventure with Pixel Art and Puzzles.md b/spaces/1phancelerku/anime-remove-background/Download Guardian Tales JP and Experience a Classic Adventure with Pixel Art and Puzzles.md
deleted file mode 100644
index efbc62c64102b589a4657a006873f9c81dd97368..0000000000000000000000000000000000000000
--- a/spaces/1phancelerku/anime-remove-background/Download Guardian Tales JP and Experience a Classic Adventure with Pixel Art and Puzzles.md
+++ /dev/null
@@ -1,176 +0,0 @@
-

How to Download and Play Guardian Tales JP: Tips and Tricks for Beginners

-

Guardian Tales is a pixel RPG game that combines gacha elements, puzzle-solving, and action combat. You can collect over 50 heroes and 100 weapons, each with their own unique abilities and skills. You can also explore various worlds, dungeons, and bosses, as well as challenge other players in real-time PvP battles.

-

guardian tales jp download


Download ->>->>->> https://jinyurl.com/2uNMRn



-

But did you know that there is a Japanese version of Guardian Tales that has some exclusive features and content? For example, the Japanese version has different voice actors, collab events, costumes, and banners than the global version. If you are a fan of Japanese culture and anime, you might want to try out Guardian Tales JP.

-

In this article, we will show you how to download and play Guardian Tales JP on your Android devices or PC using an emulator. We will also share some tips and tricks for beginners who want to start their adventure in Kanterbury, the world of Guardian Tales.

-

How to Download Guardian Tales JP on Android Devices

-

If you have an Android device, you can download Guardian Tales JP from the Google Play Store. However, you will need to change your region settings to Japan first. Here are the steps to do so:

-
    -
  1. Open the Google Play Store app on your device.
  2. -
  3. Tap on the menu icon (three horizontal lines) on the top left corner.
  4. -
  5. Tap on Account.
  6. -
  7. Tap on Country and profiles.
  8. -
  9. Select Japan as your country. You might need to add a payment method from Japan to do this.
  10. -
  11. Accept the Terms of Service and wait for the changes to take effect.
  12. -
  13. Search for "Guardian Tales" in the Play Store. You should see the Japanese version of the game with the title "ガーディアンテイルズ".
  14. -
  15. Tap on Install and wait for the game to download.
  16. -
-

Congratulations! You have successfully downloaded Guardian Tales JP on your Android device. You can now launch the game and enjoy its features.
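As an alternative to switching your Play Store region, some players sideload the Japanese APK obtained from an APK storefront such as QooApp. This is not part of the Play Store method described above, so treat the following as an optional sketch: it assumes adb (from the Android platform-tools) is on your PATH, USB debugging is enabled on your device, and the APK file name matches whatever you actually downloaded.

```python
import subprocess

# Hypothetical file name of a Guardian Tales JP APK you have already downloaded.
apk_path = "guardian_tales_jp.apk"

# Install (or update, thanks to -r) the APK on the connected Android device.
subprocess.run(["adb", "install", "-r", apk_path], check=True)
```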

-

How to Download Guardian Tales JP on PC Using an Emulator

-

If you want to play Guardian Tales JP on your PC, you will need to use an Android emulator. An emulator is a software that allows you to run Android apps on your computer. There are many emulators available online, but we recommend using BlueStacks or MuMu Player as they are popular and reliable options.

-

Here are the steps to download Guardian Tales JP on PC using an emulator:

-


-
    -
  1. Download and install an Android emulator on your PC. You can choose from BlueStacks or MuMu Player.
  2. Open the emulator and complete Google sign-in to access the Play Store.
  3. Change your region settings to Japan following the same steps as above.
  4. Search for "Guardian Tales" in the Play Store. You should see the Japanese version of the game with the title "ガーディアンテイルズ".
  5. Click to install Guardian Tales JP from the search results.
  6. Once installation completes, click the game icon to start the game.
  7. Enjoy playing Guardian Tales JP on your PC with the emulator.
-

You can also customize your keyboard and mouse controls using the Advanced Keymapping feature in BlueStacks or MuMu Player. This will allow you to play Guardian Tales JP more comfortably and efficiently on your PC.

-

Tips and Tricks for Beginners

-

Now that you have downloaded Guardian Tales JP, you might be wondering how to play it well. Don't worry, we have some tips and tricks for beginners who want to have a smooth start in the game. Here are some of them:

-

Choose a Good Starter Hero and Reroll if Needed

-

When you start the game, you will be able to choose one of four starter heroes: Knight, Warrior, Mage, or Archer. Each hero has their own strengths and weaknesses, as well as different roles and playstyles. You can check their stats and skills before making your choice.

-

However, if you are not satisfied with your starter hero, you can reroll for a better one. Rerolling means resetting your game data and starting over until you get the hero you want. To reroll in Guardian Tales JP, you need to do the following:

-
    -
  1. Complete the tutorial and the first chapter of the story mode.
  2. Collect the free gems and tickets from the mailbox and events.
  3. Go to the summon menu and use your gems and tickets to pull for heroes and weapons.
  4. If you get a good hero or weapon, keep playing. If not, go to the settings menu and tap on "Delete Account".
  5. Confirm your decision and restart the game.
  6. Repeat the process until you get your desired hero or weapon.
-

The best heroes to aim for are those with a rarity of 3 stars, as they have higher stats and skills than lower rarity heroes. Some of the most popular 3-star heroes are Marina, Bari, Nari, Oghma, Bianca, and Eugene. You can also check the tier list for more information on the best heroes and weapons in the game.

-

Complete the Story Mode and Side Quests for Rewards

-

One of the main features of Guardian Tales is its story mode, which consists of 10 chapters with different themes and settings. The story mode is not only fun and engaging, but also rewarding. You can earn gems, gold, experience, items, and even new heroes by completing the story mode.

-

However, don't just rush through the main quests. You should also pay attention to the side quests, which are marked with a yellow exclamation point on the map. Side quests are optional missions that give you more insight into the characters and the world of Guardian Tales. They also reward you with more gems, gold, experience, items, and sometimes even costumes for your heroes.

-

Therefore, try to complete as many side quests as possible while progressing through the story mode. You can also replay the story mode stages on higher difficulties for more rewards and challenges.

-

Join a Guild and Participate in Raids and Events

-

Another way to enjoy Guardian Tales is to join a guild and participate in raids and events. A guild is a group of players who can chat, cooperate, and compete with each other. You can join an existing guild or create your own guild with your friends.

-

By joining a guild, you can access various benefits such as guild buffs, guild shop, guild attendance rewards, and guild missions. You can also participate in guild raids, which are special battles that require teamwork and strategy. Guild raids reward you with raid coins, which you can use to buy exclusive items from the raid shop.

-

Besides guild raids, you can also participate in various events that are held regularly in Guardian Tales. Events are limited-time missions that offer unique rewards such as gems, gold, items, costumes, heroes, and weapons. Some events are also collab events that feature characters from other popular games or anime series. For example, there was a collab event with Re:Zero in 2021 that allowed players to obtain Rem, Ram, Emilia, Subaru, Beatrice, and Roswaal as playable heroes.

-

Therefore, try to join a guild and participate in raids and events as much as possible. They will not only make your game more fun and social but also help you progress faster and easier.

-

Upgrade Your Heroes, Weapons, and Accessories

-

As you play Guardian Tales JP, you will need to upgrade your heroes, weapons, and accessories to make them stronger and more effective. There are several ways to do this, such as leveling up, awakening, evolution, limit breaking, and enhancement.

-

Leveling up is the simplest way to increase your heroes' and weapons' stats. You can level up your heroes by using experience points (XP) that you earn from battles or items. You can level up your weapons by using weapon XP that you earn from dismantling other weapons or items.

-

Awakening is a process that unlocks new skills and abilities for your heroes and weapons. You can awaken your heroes by using awakening stones that you obtain from the awakening dungeon or events. You can awaken your weapons by using magic metal that you obtain from dismantling other weapons or events.

-

Evolution is a process that increases the rarity and potential of your heroes and weapons. You can evolve your heroes by using hero crystals that you obtain from summoning or events. You can evolve your weapons by using weapon hammers that you obtain from the evolution dungeon or events.

-

Limit breaking is a process that increases the maximum level and stats of your heroes and weapons. You can limit break your heroes by using hero shards that you obtain from summoning or events. You can limit break your weapons by using weapon shards that you obtain from summoning or events.

-

Enhancement is a process that adds extra effects and bonuses to your accessories. You can enhance your accessories by using enhancement stones that you obtain from the enhancement dungeon or events.

-

Therefore, try to upgrade your heroes, weapons, and accessories as much as possible. They will make a huge difference in your performance and results in the game.

-

Explore the Floating Island and Customize Your Base

-

The last tip we have for beginners is to explore the floating island and customize your base. The floating island is a feature that allows you to create and decorate your own base with various buildings, facilities, and items. You can also invite your heroes and friends to visit your base and interact with them.

-

The floating island is not only a place to relax and have fun but also a source of income and resources. You can collect gold, gems, items, and energy from the buildings and facilities in your base. You can also complete quests and missions related to the floating island for more rewards.

-

To access the floating island, you need to tap on the island icon on the top right corner of the screen. You can then use the edit mode to place and move buildings, facilities, and items on your base. You can also use the visit mode to see what your base looks like and to interact with your heroes and friends.

-

Some of the buildings and facilities you can build on your base are:

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NameDescriptionBenefits
InnA place where your heroes can rest and recover.Increases hero XP over time.
TowerA place where you can store and display your weapons.Increases weapon XP over time.
ShopA place where you can buy and sell items.Generates gold over time.
CafeA place where you can serve drinks and snacks to your heroes.Increases hero affection over time.
MineA place where you can dig for minerals and gems.Generates gems over time.
FactoryA place where you can produce items and materials.Generates items over time.
BatteryA place where you can store and recharge energy.Generates energy over time.

Therefore, try to explore the floating island and customize your base as much as possible. They will not only make your game more enjoyable but also help you progress faster and more easily.

Conclusion: Summary of the Main Points and a Call to Action

In conclusion, Guardian Tales JP is a pixel RPG with exclusive features and content that differ from the global version. If you want to try it out, you can download it on your Android device or play it on PC using an emulator. You can also follow our tips and tricks for beginners who want a smooth start in the game. We hope this article has been helpful and informative for you.

If you liked this article, please share it with your friends who are also interested in Guardian Tales JP. You can also leave a comment below and let us know what you think about the game. And if you want to learn more about Guardian Tales JP, you can visit the official website or follow the social media accounts of the game. Thank you for reading and have a great day!


FAQs: Five Common Questions and Answers About Guardian Tales JP


Here are some of the most frequently asked questions and answers about Guardian Tales JP. If you have any other questions, feel free to ask them in the comments section.


Q: Is Guardian Tales JP free to play?


A: Yes, Guardian Tales JP is free to play. You can download and play the game without spending any money. However, there are some optional in-game purchases that can enhance your gaming experience, such as gems, costumes, and packages. You can buy these with real money or earn them through various methods in the game.


Q: Can I play Guardian Tales JP with my friends?


A: Yes, you can play Guardian Tales JP with your friends. You can add them as friends in the game and chat with them, visit their bases, and send them gifts. You can also invite them to join your guild or team up with them in co-op mode, arena mode, or colosseum mode. Playing with your friends can make the game more fun and rewarding.


Q: How can I change the language of Guardian Tales JP?


A: Unfortunately, you cannot change the language of Guardian Tales JP. The game is only available in Japanese, and there is no option to switch to other languages. If you want to play the game in English or other languages, you will have to download the global version of Guardian Tales instead.


Q: How can I transfer my data from the global version to the Japanese version of Guardian Tales?


A: Unfortunately, you cannot transfer your data from the global version to the Japanese version of Guardian Tales. The two versions are separate and have different servers, accounts, and data. If you want to play the Japanese version of Guardian Tales, you will have to start from scratch.


Q: How can I contact the customer service of Guardian Tales JP?


A: If you have any problems or issues with Guardian Tales JP, you can contact the customer service of the game by following these steps:

1. Go to the settings menu and tap on "Customer Service".
2. Tap on "Contact Us" and fill out the form with your details and inquiry.
3. Tap on "Send" and wait for a reply from the customer service team.

You can also check the FAQ section for more information and solutions to common problems.

\ No newline at end of file diff --git a/spaces/AIConsultant/MusicGen/audiocraft/metrics/__init__.py b/spaces/AIConsultant/MusicGen/audiocraft/metrics/__init__.py deleted file mode 100644 index 3474bdc4f1c88b21904d2a21ba077c93a8a70c8b..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/metrics/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. -"""Metrics like CLAP score, FAD, KLD, Visqol, Chroma similarity, etc. -""" -# flake8: noqa -from .clap_consistency import CLAPTextConsistencyMetric, TextConsistencyMetric -from .chroma_cosinesim import ChromaCosineSimilarityMetric -from .fad import FrechetAudioDistanceMetric -from .kld import KLDivergenceMetric, PasstKLDivergenceMetric -from .rvm import RelativeVolumeMel -from .visqol import ViSQOL diff --git a/spaces/AIWaves/Debate/src/agents/Action/base_action.py b/spaces/AIWaves/Debate/src/agents/Action/base_action.py deleted file mode 100644 index 7beeac9ac748e15229c2c0a609a07f5408fd0b3d..0000000000000000000000000000000000000000 --- a/spaces/AIWaves/Debate/src/agents/Action/base_action.py +++ /dev/null @@ -1,48 +0,0 @@ -from Memory import Memory -class Action: - """ - The basic action unit of agent - """ - def __init__(self,**kwargs): - self.response = None - self.is_user = False - self.res_dict = {} - self.name = "" - self.role = "" - for key,value in kwargs.items(): - setattr(self,key,value) - - - def process(self): - """ - processing action - Rerutn : memory(Memory) - """ - response = self.response - send_name = self.name - send_role = self.role - all = "" - for res in response: - all += res - parse = f"{send_name}:" - - # 将里面对话的第三人称删了 - # The third person in the dialogue was deleted. 
- while parse in all: - index = all.index(parse) + len(parse) - all = all[index:] - - if not self.is_user: - print(f"{send_name}({send_role}):{all}") - # for software - if "" in all: - title = extract(all,"title") - python = extract(all,"python") - os.makedirs("output_code", exist_ok=True) - file_name = "output_code/" + title - with open(file_name, "w", encoding="utf-8") as f: - f.write(python) - memory = Memory(send_role, send_name, all) - return memory - - diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192.py deleted file mode 100644 index 30c4d02557f3167ad0d265c6a00e6deb6b9d6d1b..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_1_ClothesKeyPoint/mmpose_1_x/configs/fashion_2d_keypoint/topdown_heatmap/deepfashion2/td_hm_res50_4xb16-120e_deepfashion2_long_sleeved_outwear_256x192.py +++ /dev/null @@ -1,172 +0,0 @@ -_base_ = [ - '../../../_base_/default_runtime.py', - '../../../_base_/datasets/deepfashion2.py' -] - -default_hooks = dict(checkpoint=dict(save_best='PCK', rule='greater')) - -resume = False # 断点恢复 -load_from = None # 模型权重加载 -train_cfg = dict(by_epoch=True, max_epochs=120, val_interval=10) # 训练轮数,测试间隔 -param_scheduler = [ - dict( # warmup策略 - type='LinearLR', - begin=0, - end=500, - start_factor=0.001, - by_epoch=False), - dict( # scheduler - type='MultiStepLR', - begin=0, - end=60, - milestones=[20, 40], - gamma=0.1, - by_epoch=True) -] -optim_wrapper = dict(optimizer=dict(type='Adam', lr=0.0005)) # 优化器和学习率 -auto_scale_lr = dict(base_batch_size=512) # 根据batch_size自动缩放学习率 - -backend_args = dict(backend='local') # 数据加载后端设置,默认从本地硬盘加载 -dataset_type = 'DeepFashion2Dataset' # 数据集类名 DeepFashionDataset -data_mode = 'topdown' # 算法结构类型,用于指定标注信息加载策略 -data_root = 'data/deepfashion2/' # 数据存放路径 -# 定义数据编解码器,用于生成target和对pred进行解码,同时包含了输入图片和输出heatmap尺寸等信息 -codec = dict( - type='MSRAHeatmap', input_size=(192, 256), heatmap_size=(48, 64), sigma=2) - -train_pipeline = [ - dict(type='LoadImage'), - dict(type='GetBBoxCenterScale'), - dict(type='RandomFlip', direction='horizontal'), - dict( - type='RandomBBoxTransform', - shift_prob=0, - rotate_factor=60, - scale_factor=(0.75, 1.25)), - dict(type='TopdownAffine', input_size=codec['input_size']), - dict(type='GenerateTarget', encoder=codec), - dict(type='PackPoseInputs') -] -val_pipeline = [ # 测试时数据增强 - dict(type='LoadImage', backend_args=backend_args), # 加载图片 - dict(type='GetBBoxCenterScale'), # 根据bbox获取center和scale - dict(type='TopdownAffine', input_size=codec['input_size']), # 根据变换矩阵更新目标数据 - dict(type='PackPoseInputs') # 对target进行打包用于训练 -] -train_dataloader = dict( # 训练数据加载 - batch_size=16, # 批次大小 - num_workers=6, # 数据加载进程数 - persistent_workers=True, # 在不活跃时维持进程不终止,避免反复启动进程的开销 - sampler=dict(type='DefaultSampler', shuffle=True), # 采样策略,打乱数据 - dataset=dict( - type=dataset_type, # 数据集类名 - data_root=data_root, # 数据集路径 - data_mode=data_mode, # 算法类型 - ann_file='train/deepfashion2_long_sleeved_outwear.json', # 标注文件路径 - data_prefix=dict(img='train/image/'), # 图像路径 - pipeline=train_pipeline # 数据流水线 - )) -val_dataloader = dict( - batch_size=16, - num_workers=6, - persistent_workers=True, # 在不活跃时维持进程不终止,避免反复启动进程的开销 - drop_last=False, - 
sampler=dict(type='DefaultSampler', shuffle=False), # 采样策略,不进行打乱 - dataset=dict( - type=dataset_type, # 数据集类名 - data_root=data_root, # 数据集路径 - data_mode=data_mode, # 算法类型 - ann_file='validation/deepfashion2_long_sleeved_outwear.json', # 标注文件路径 - data_prefix=dict(img='validation/image/'), # 图像路径 - test_mode=True, # 测试模式开关 - pipeline=val_pipeline # 数据流水线 - )) -test_dataloader = val_dataloader # 默认情况下不区分验证集和测试集,用户根据需要来自行定义 - -channel_cfg = dict( - num_output_channels=294, - dataset_joints=294, - dataset_channel=[ - [ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, - 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, - 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, - 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, - 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, - 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, - 103, 104, 105, 106, 107, 108, 109, 110, 111, 112, 113, 114, 115, - 116, 117, 118, 119, 120, 121, 122, 123, 124, 125, 126, 127, 128, - 129, 130, 131, 132, 133, 134, 135, 136, 137, 138, 139, 140, 141, - 142, 143, 144, 145, 146, 147, 148, 149, 150, 151, 152, 153, 154, - 155, 156, 157, 158, 159, 160, 161, 162, 163, 164, 165, 166, 167, - 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, 178, 179, 180, - 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, 192, 193, - 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, 206, - 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, - 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, - 233, 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, - 246, 247, 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, - 259, 260, 261, 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, - 272, 273, 274, 275, 276, 277, 278, 279, 280, 281, 282, 283, 284, - 285, 286, 287, 288, 289, 290, 291, 292, 293 - ], - ], - inference_channel=[ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, - 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, - 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, - 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, - 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, - 92, 93, 94, 95, 96, 97, 98, 99, 100, 101, 102, 103, 104, 105, 106, 107, - 108, 109, 110, 111, 112, 113, 114, 115, 116, 117, 118, 119, 120, 121, - 122, 123, 124, 125, 126, 127, 128, 129, 130, 131, 132, 133, 134, 135, - 136, 137, 138, 139, 140, 141, 142, 143, 144, 145, 146, 147, 148, 149, - 150, 151, 152, 153, 154, 155, 156, 157, 158, 159, 160, 161, 162, 163, - 164, 165, 166, 167, 168, 169, 170, 171, 172, 173, 174, 175, 176, 177, - 178, 179, 180, 181, 182, 183, 184, 185, 186, 187, 188, 189, 190, 191, - 192, 193, 194, 195, 196, 197, 198, 199, 200, 201, 202, 203, 204, 205, - 206, 207, 208, 209, 210, 211, 212, 213, 214, 215, 216, 217, 218, 219, - 220, 221, 222, 223, 224, 225, 226, 227, 228, 229, 230, 231, 232, 233, - 234, 235, 236, 237, 238, 239, 240, 241, 242, 243, 244, 245, 246, 247, - 248, 249, 250, 251, 252, 253, 254, 255, 256, 257, 258, 259, 260, 261, - 262, 263, 264, 265, 266, 267, 268, 269, 270, 271, 272, 273, 274, 275, - 276, 277, 278, 279, 280, 281, 282, 283, 284, 285, 286, 287, 288, 289, - 290, 291, 292, 293 - ]) - -model = dict( - type='TopdownPoseEstimator', # 模型结构决定了算法流程 - data_preprocessor=dict( # 数据归一化和通道顺序调整,作为模型的一部分 - type='PoseDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 
57.12, 57.375], - bgr_to_rgb=True), - backbone=dict( - type='ResNet', - depth=50, - init_cfg=dict( - type='Pretrained', # 预训练参数,只加载backbone权重用于迁移学习 - checkpoint='torchvision://resnet50')), - head=dict( # 模型头部 - type='HeatmapHead', - in_channels=2048, - out_channels=channel_cfg['num_output_channels'], - # deconv_out_channels=None, - loss=dict(type='KeypointMSELoss', use_target_weight=True), # 损失函数 - decoder=codec), # 解码器,将heatmap解码成坐标值 - test_cfg=dict( - flip_test=True, # 开启测试时水平翻转集成 - flip_mode='heatmap', # 对heatmap进行翻转 - shift_heatmap=True, # 对翻转后的结果进行平移提高精度 - )) - -val_evaluator = [ - dict(type='PCKAccuracy', thr=0.2), - dict(type='AUC'), - dict(type='EPE'), -] -test_evaluator = val_evaluator # 默认情况下不区分验证集和测试集,用户根据需要来自行定义 - -visualizer = dict( - vis_backends=[dict(type='LocalVisBackend'), - dict(type='WandbVisBackend')]) diff --git a/spaces/Abhilashvj/planogram-compliance/data/scripts/get_imagenet.sh b/spaces/Abhilashvj/planogram-compliance/data/scripts/get_imagenet.sh deleted file mode 100644 index 6026d502e8f3cce457d7f48cefe19cf55d60c0fc..0000000000000000000000000000000000000000 --- a/spaces/Abhilashvj/planogram-compliance/data/scripts/get_imagenet.sh +++ /dev/null @@ -1,51 +0,0 @@ -#!/bin/bash -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -# Download ILSVRC2012 ImageNet dataset https://image-net.org -# Example usage: bash data/scripts/get_imagenet.sh -# parent -# ├── yolov5 -# └── datasets -# └── imagenet ← downloads here - -# Arguments (optional) Usage: bash data/scripts/get_imagenet.sh --train --val -if [ "$#" -gt 0 ]; then - for opt in "$@"; do - case "${opt}" in - --train) train=true ;; - --val) val=true ;; - esac - done -else - train=true - val=true -fi - -# Make dir -d='../datasets/imagenet' # unzip directory -mkdir -p $d && cd $d - -# Download/unzip train -if [ "$train" == "true" ]; then - wget https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_train.tar # download 138G, 1281167 images - mkdir train && mv ILSVRC2012_img_train.tar train/ && cd train - tar -xf ILSVRC2012_img_train.tar && rm -f ILSVRC2012_img_train.tar - find . -name "*.tar" | while read NAME; do - mkdir -p "${NAME%.tar}" - tar -xf "${NAME}" -C "${NAME%.tar}" - rm -f "${NAME}" - done - cd .. 
-fi - -# Download/unzip val -if [ "$val" == "true" ]; then - wget https://image-net.org/data/ILSVRC/2012/ILSVRC2012_img_val.tar # download 6.3G, 50000 images - mkdir val && mv ILSVRC2012_img_val.tar val/ && cd val && tar -xf ILSVRC2012_img_val.tar - wget -qO- https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh | bash # move into subdirs -fi - -# Delete corrupted image (optional: PNG under JPEG name that may cause dataloaders to fail) -# rm train/n04266014/n04266014_10835.JPEG - -# TFRecords (optional) -# wget https://raw.githubusercontent.com/tensorflow/models/master/research/slim/datasets/imagenet_lsvrc_2015_synsets.txt diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetAllChildrenSizers.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetAllChildrenSizers.js deleted file mode 100644 index 0a5b3ddca672d265130ede86881c6ef97bb5d67b..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/GetAllChildrenSizers.js +++ /dev/null @@ -1,14 +0,0 @@ -var GetAllChildrenSizers = function (out) { - if (out === undefined) { - out = []; - } - var startIdx = out.length; - var children = this.getChildrenSizers(out); - var endIdx = out.length; - for (var i = startIdx; i < endIdx; i++) { - children[i].getAllChildrenSizers(out); - } - - return out; -} -export default GetAllChildrenSizers; \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/RemoveChildMethods.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/RemoveChildMethods.js deleted file mode 100644 index 8f1d88272679d2e4e94c689c9b4323073aec0af4..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/basesizer/RemoveChildMethods.js +++ /dev/null @@ -1,39 +0,0 @@ -import RemoveChild from './utils/RemoveChild.js'; -import GetParentSizerMethods from './GetParentSizerMethods.js'; - -const RemoveItem = Phaser.Utils.Array.Remove; - -export default { - removeFromParentSizer() { - var parent = GetParentSizerMethods.getParentSizer(gameObject); - if (parent) { - parent.remove(this); - } - return this; - }, - - removeBackground(gameObject, destroyChild) { - if (this.backgroundChildren === undefined) { - return this; - } - - if (this.getParentSizer(gameObject) !== this) { - return this; - } - - RemoveItem(this.backgroundChildren, gameObject); - RemoveChild.call(this, gameObject, destroyChild); - return this; - }, - - removeAllBackgrounds(destroyChild) { - if (this.backgroundChildren === undefined) { - return this; - } - - for (var i = this.backgroundChildren.length - 1; i >= 0; i--) { - this.remove(this.backgroundChildren[i], destroyChild); - } - return this; - }, -} \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/intouching/InTouching.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/intouching/InTouching.js deleted file mode 100644 index 7434aeebcc68b323468f86aa4199b8fdd302d904..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/intouching/InTouching.js +++ /dev/null @@ -1,2 +0,0 @@ -import InTouching from '../../../plugins/intouching.js' -export default InTouching; \ No newline at end of file diff --git a/spaces/Aki004/herta-so-vits/vdecoder/__init__.py 
b/spaces/Aki004/herta-so-vits/vdecoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AlekseyKorshuk/rugpt3/README.md b/spaces/AlekseyKorshuk/rugpt3/README.md deleted file mode 100644 index 2ffac9f083ca45382e3b30e3dbefd83f68ea6b33..0000000000000000000000000000000000000000 --- a/spaces/AlekseyKorshuk/rugpt3/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Rugpt3 -emoji: 📚 -colorFrom: green -colorTo: purple -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/models/facial_recognition/model_irse.py b/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/models/facial_recognition/model_irse.py deleted file mode 100644 index b1c79e0366e4a6fd92011e86df80f8b31ec671ae..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/models/StyleCLIP/models/facial_recognition/model_irse.py +++ /dev/null @@ -1,84 +0,0 @@ -from torch.nn import Linear, Conv2d, BatchNorm1d, BatchNorm2d, PReLU, Dropout, Sequential, Module -from models.facial_recognition.helpers import get_blocks, Flatten, bottleneck_IR, bottleneck_IR_SE, l2_norm - -""" -Modified Backbone implementation from [TreB1eN](https://github.com/TreB1eN/InsightFace_Pytorch) -""" - - -class Backbone(Module): - def __init__(self, input_size, num_layers, mode='ir', drop_ratio=0.4, affine=True): - super(Backbone, self).__init__() - assert input_size in [112, 224], "input_size should be 112 or 224" - assert num_layers in [50, 100, 152], "num_layers should be 50, 100 or 152" - assert mode in ['ir', 'ir_se'], "mode should be ir or ir_se" - blocks = get_blocks(num_layers) - if mode == 'ir': - unit_module = bottleneck_IR - elif mode == 'ir_se': - unit_module = bottleneck_IR_SE - self.input_layer = Sequential(Conv2d(3, 64, (3, 3), 1, 1, bias=False), - BatchNorm2d(64), - PReLU(64)) - if input_size == 112: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 7 * 7, 512), - BatchNorm1d(512, affine=affine)) - else: - self.output_layer = Sequential(BatchNorm2d(512), - Dropout(drop_ratio), - Flatten(), - Linear(512 * 14 * 14, 512), - BatchNorm1d(512, affine=affine)) - - modules = [] - for block in blocks: - for bottleneck in block: - modules.append(unit_module(bottleneck.in_channel, - bottleneck.depth, - bottleneck.stride)) - self.body = Sequential(*modules) - - def forward(self, x): - x = self.input_layer(x) - x = self.body(x) - x = self.output_layer(x) - return l2_norm(x) - - -def IR_50(input_size): - """Constructs a ir-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir', drop_ratio=0.4, affine=False) - 
return model - - -def IR_101(input_size): - """Constructs a ir-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_152(input_size): - """Constructs a ir-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_50(input_size): - """Constructs a ir_se-50 model.""" - model = Backbone(input_size, num_layers=50, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_101(input_size): - """Constructs a ir_se-101 model.""" - model = Backbone(input_size, num_layers=100, mode='ir_se', drop_ratio=0.4, affine=False) - return model - - -def IR_SE_152(input_size): - """Constructs a ir_se-152 model.""" - model = Backbone(input_size, num_layers=152, mode='ir_se', drop_ratio=0.4, affine=False) - return model diff --git a/spaces/Amrrs/DragGan-Inversion/PTI/torch_utils/ops/upfirdn2d.cpp b/spaces/Amrrs/DragGan-Inversion/PTI/torch_utils/ops/upfirdn2d.cpp deleted file mode 100644 index 2d7177fc60040751d20e9a8da0301fa3ab64968a..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/PTI/torch_utils/ops/upfirdn2d.cpp +++ /dev/null @@ -1,103 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include <torch/extension.h> -#include <ATen/cuda/CUDAContext.h> -#include <c10/cuda/CUDAGuard.h> -#include "upfirdn2d.h" - -//------------------------------------------------------------------------ - -static torch::Tensor upfirdn2d(torch::Tensor x, torch::Tensor f, int upx, int upy, int downx, int downy, int padx0, int padx1, int pady0, int pady1, bool flip, float gain) -{ - // Validate arguments. - TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(f.device() == x.device(), "f must reside on the same device as x"); - TORCH_CHECK(f.dtype() == torch::kFloat, "f must be float32"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(f.numel() <= INT_MAX, "f is too large"); - TORCH_CHECK(x.dim() == 4, "x must be rank 4"); - TORCH_CHECK(f.dim() == 2, "f must be rank 2"); - TORCH_CHECK(f.size(0) >= 1 && f.size(1) >= 1, "f must be at least 1x1"); - TORCH_CHECK(upx >= 1 && upy >= 1, "upsampling factor must be at least 1"); - TORCH_CHECK(downx >= 1 && downy >= 1, "downsampling factor must be at least 1"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - int outW = ((int)x.size(3) * upx + padx0 + padx1 - (int)f.size(1) + downx) / downx; - int outH = ((int)x.size(2) * upy + pady0 + pady1 - (int)f.size(0) + downy) / downy; - TORCH_CHECK(outW >= 1 && outH >= 1, "output must be at least 1x1"); - torch::Tensor y = torch::empty({x.size(0), x.size(1), outH, outW}, x.options(), x.suggest_memory_format()); - TORCH_CHECK(y.numel() <= INT_MAX, "output is too large"); - - // Initialize CUDA kernel parameters. - upfirdn2d_kernel_params p; - p.x = x.data_ptr(); - p.f = f.data_ptr<float>(); - p.y = y.data_ptr(); - p.up = make_int2(upx, upy); - p.down = make_int2(downx, downy); - p.pad0 = make_int2(padx0, pady0); - p.flip = (flip) ? 
1 : 0; - p.gain = gain; - p.inSize = make_int4((int)x.size(3), (int)x.size(2), (int)x.size(1), (int)x.size(0)); - p.inStride = make_int4((int)x.stride(3), (int)x.stride(2), (int)x.stride(1), (int)x.stride(0)); - p.filterSize = make_int2((int)f.size(1), (int)f.size(0)); - p.filterStride = make_int2((int)f.stride(1), (int)f.stride(0)); - p.outSize = make_int4((int)y.size(3), (int)y.size(2), (int)y.size(1), (int)y.size(0)); - p.outStride = make_int4((int)y.stride(3), (int)y.stride(2), (int)y.stride(1), (int)y.stride(0)); - p.sizeMajor = (p.inStride.z == 1) ? p.inSize.w : p.inSize.w * p.inSize.z; - p.sizeMinor = (p.inStride.z == 1) ? p.inSize.z : 1; - - // Choose CUDA kernel. - upfirdn2d_kernel_spec spec; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - spec = choose_upfirdn2d_kernel<scalar_t>(p); - }); - - // Set looping options. - p.loopMajor = (p.sizeMajor - 1) / 16384 + 1; - p.loopMinor = spec.loopMinor; - p.loopX = spec.loopX; - p.launchMinor = (p.sizeMinor - 1) / p.loopMinor + 1; - p.launchMajor = (p.sizeMajor - 1) / p.loopMajor + 1; - - // Compute grid size. - dim3 blockSize, gridSize; - if (spec.tileOutW < 0) // large - { - blockSize = dim3(4, 32, 1); - gridSize = dim3( - ((p.outSize.y - 1) / blockSize.x + 1) * p.launchMinor, - (p.outSize.x - 1) / (blockSize.y * p.loopX) + 1, - p.launchMajor); - } - else // small - { - blockSize = dim3(256, 1, 1); - gridSize = dim3( - ((p.outSize.y - 1) / spec.tileOutH + 1) * p.launchMinor, - (p.outSize.x - 1) / (spec.tileOutW * p.loopX) + 1, - p.launchMajor); - } - - // Launch CUDA kernel. - void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(spec.kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("upfirdn2d", &upfirdn2d); -} - -//------------------------------------------------------------------------ diff --git a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_configs/global_config.py b/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_configs/global_config.py deleted file mode 100644 index bda8d2d08828aace7551db94847e2a1e039876df..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/DragGan-Inversion/stylegan_human/pti/pti_configs/global_config.py +++ /dev/null @@ -1,12 +0,0 @@ -# Device -cuda_visible_devices = '0' -device = 'cuda:0' - -# Logs -training_step = 1 -image_rec_result_log_snapshot = 100 -pivotal_training_steps = 0 -model_snapshot_interval = 400 - -# Run name to be updated during PTI -run_name = 'exp' diff --git a/spaces/Anar0140/6.AI.Dashboard.Wiki.Chat.Cognitive.HTML5/style.css b/spaces/Anar0140/6.AI.Dashboard.Wiki.Chat.Cognitive.HTML5/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/Anar0140/6.AI.Dashboard.Wiki.Chat.Cognitive.HTML5/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/text2image.md 
b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/text2image.md deleted file mode 100644 index eb8a120c02110ceba95e9f7bddac8d9f30b97dd5..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/training/text2image.md +++ /dev/null @@ -1,277 +0,0 @@ -<!--Copyright 2023 The HuggingFace Team. All rights reserved. - -Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with -the License. You may obtain a copy of the License at - -http://www.apache.org/licenses/LICENSE-2.0 - -Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on -an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the -specific language governing permissions and limitations under the License. ---> - - -# Text-to-image - -<Tip warning={true}> - -The text-to-image fine-tuning script is experimental. It's easy to overfit and run into issues like catastrophic forgetting. We recommend you explore different hyperparameters to get the best results on your dataset. - -</Tip> - -Text-to-image models like Stable Diffusion generate an image from a text prompt. This guide will show you how to finetune the [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) model on your own dataset with PyTorch and Flax. All the training scripts for text-to-image finetuning used in this guide can be found in this [repository](https://github.com/huggingface/diffusers/tree/main/examples/text_to_image) if you're interested in taking a closer look. - -Before running the scripts, make sure to install the library's training dependencies: - -```bash -pip install git+https://github.com/huggingface/diffusers.git -pip install -U -r requirements.txt -``` - -And initialize an [🤗 Accelerate](https://github.com/huggingface/accelerate/) environment with: - -```bash -accelerate config -``` - -If you have already cloned the repo, then you won't need to go through these steps. Instead, you can pass the path to your local checkout to the training script and it will be loaded from there. - -## Hardware requirements - -Using `gradient_checkpointing` and `mixed_precision`, it should be possible to finetune the model on a single 24GB GPU. For higher `batch_size`'s and faster training, it's better to use GPUs with more than 30GB of GPU memory. You can also use JAX/Flax for fine-tuning on TPUs or GPUs, which will be covered [below](#flax-jax-finetuning). - -You can reduce your memory footprint even more by enabling memory efficient attention with xFormers. Make sure you have [xFormers installed](./optimization/xformers) and pass the `--enable_xformers_memory_efficient_attention` flag to the training script. - -xFormers is not available for Flax. - -## Upload model to Hub - -Store your model on the Hub by adding the following argument to the training script: - -```bash - --push_to_hub -``` - -## Save and load checkpoints - -It is a good idea to regularly save checkpoints in case anything happens during training. To save a checkpoint, pass the following argument to the training script: - -```bash - --checkpointing_steps=500 -``` - -Every 500 steps, the full training state is saved in a subfolder in the `output_dir`. The checkpoint has the format `checkpoint-` followed by the number of steps trained so far. For example, `checkpoint-1500` is a checkpoint saved after 1500 training steps. 
- -To load a checkpoint to resume training, pass the argument `--resume_from_checkpoint` to the training script and specify the checkpoint you want to resume from. For example, the following argument resumes training from the checkpoint saved after 1500 training steps: - -```bash - --resume_from_checkpoint="checkpoint-1500" -``` - -## Fine-tuning - -<frameworkcontent> -<pt> -Launch the [PyTorch training script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) for a fine-tuning run on the [Pokémon BLIP captions](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions) dataset like this. - -Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`pretrained_model_name_or_path`](https://huggingface.co/docs/diffusers/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path) argument. - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export dataset_name="lambdalabs/pokemon-blip-captions" - -accelerate launch --mixed_precision="fp16" train_text_to_image.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --dataset_name=$dataset_name \ - --use_ema \ - --resolution=512 --center_crop --random_flip \ - --train_batch_size=1 \ - --gradient_accumulation_steps=4 \ - --gradient_checkpointing \ - --max_train_steps=15000 \ - --learning_rate=1e-05 \ - --max_grad_norm=1 \ - --lr_scheduler="constant" --lr_warmup_steps=0 \ - --output_dir="sd-pokemon-model" \ - --push_to_hub -``` - -To finetune on your own dataset, prepare the dataset according to the format required by 🤗 [Datasets](https://huggingface.co/docs/datasets/index). You can [upload your dataset to the Hub](https://huggingface.co/docs/datasets/image_dataset#upload-dataset-to-the-hub), or you can [prepare a local folder with your files](https://huggingface.co/docs/datasets/image_dataset#imagefolder). - -Modify the script if you want to use custom loading logic. We left pointers in the code in the appropriate places to help you. 🤗 The example script below shows how to finetune on a local dataset in `TRAIN_DIR` and where to save the model to in `OUTPUT_DIR`: - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export TRAIN_DIR="path_to_your_dataset" -export OUTPUT_DIR="path_to_save_model" - -accelerate launch train_text_to_image.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --train_data_dir=$TRAIN_DIR \ - --use_ema \ - --resolution=512 --center_crop --random_flip \ - --train_batch_size=1 \ - --gradient_accumulation_steps=4 \ - --gradient_checkpointing \ - --mixed_precision="fp16" \ - --max_train_steps=15000 \ - --learning_rate=1e-05 \ - --max_grad_norm=1 \ - --lr_scheduler="constant" - --lr_warmup_steps=0 \ - --output_dir=${OUTPUT_DIR} \ - --push_to_hub -``` - -#### Training with multiple GPUs - -`accelerate` allows for seamless multi-GPU training. Follow the instructions [here](https://huggingface.co/docs/accelerate/basic_tutorials/launch) -for running distributed training with `accelerate`. 
Here is an example command: - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export dataset_name="lambdalabs/pokemon-blip-captions" - -accelerate launch --mixed_precision="fp16" --multi_gpu train_text_to_image.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --dataset_name=$dataset_name \ - --use_ema \ - --resolution=512 --center_crop --random_flip \ - --train_batch_size=1 \ - --gradient_accumulation_steps=4 \ - --gradient_checkpointing \ - --max_train_steps=15000 \ - --learning_rate=1e-05 \ - --max_grad_norm=1 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --output_dir="sd-pokemon-model" \ - --push_to_hub -``` - -</pt> -<jax> -With Flax, it's possible to train a Stable Diffusion model faster on TPUs and GPUs thanks to [@duongna211](https://github.com/duongna21). This is very efficient on TPU hardware but works great on GPUs too. The Flax training script doesn't support features like gradient checkpointing or gradient accumulation yet, so you'll need a GPU with at least 30GB of memory or a TPU v3. - -Before running the script, make sure you have the requirements installed: - -```bash -pip install -U -r requirements_flax.txt -``` - -Specify the `MODEL_NAME` environment variable (either a Hub model repository id or a path to the directory containing the model weights) and pass it to the [`pretrained_model_name_or_path`](https://huggingface.co/docs/diffusers/en/api/diffusion_pipeline#diffusers.DiffusionPipeline.from_pretrained.pretrained_model_name_or_path) argument. - -Now you can launch the [Flax training script](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_flax.py) like this: - -```bash -export MODEL_NAME="runwayml/stable-diffusion-v1-5" -export dataset_name="lambdalabs/pokemon-blip-captions" - -python train_text_to_image_flax.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --dataset_name=$dataset_name \ - --resolution=512 --center_crop --random_flip \ - --train_batch_size=1 \ - --max_train_steps=15000 \ - --learning_rate=1e-05 \ - --max_grad_norm=1 \ - --output_dir="sd-pokemon-model" \ - --push_to_hub -``` - -To finetune on your own dataset, prepare the dataset according to the format required by 🤗 [Datasets](https://huggingface.co/docs/datasets/index). You can [upload your dataset to the Hub](https://huggingface.co/docs/datasets/image_dataset#upload-dataset-to-the-hub), or you can [prepare a local folder with your files](https://huggingface.co/docs/datasets/image_dataset#imagefolder). - -Modify the script if you want to use custom loading logic. We left pointers in the code in the appropriate places to help you. 🤗 The example script below shows how to finetune on a local dataset in `TRAIN_DIR`: - -```bash -export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" -export TRAIN_DIR="path_to_your_dataset" - -python train_text_to_image_flax.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --train_data_dir=$TRAIN_DIR \ - --resolution=512 --center_crop --random_flip \ - --train_batch_size=1 \ - --mixed_precision="fp16" \ - --max_train_steps=15000 \ - --learning_rate=1e-05 \ - --max_grad_norm=1 \ - --output_dir="sd-pokemon-model" \ - --push_to_hub -``` -</jax> -</frameworkcontent> - -## Training with Min-SNR weighting - -We support training with the Min-SNR weighting strategy proposed in [Efficient Diffusion Training via Min-SNR Weighting Strategy](https://arxiv.org/abs/2303.09556) which helps to achieve faster convergence -by rebalancing the loss. 
In order to use it, one needs to set the `--snr_gamma` argument. The recommended -value when using it is 5.0. - -You can find [this project on Weights and Biases](https://wandb.ai/sayakpaul/text2image-finetune-minsnr) that compares the loss surfaces of the following setups: - -* Training without the Min-SNR weighting strategy -* Training with the Min-SNR weighting strategy (`snr_gamma` set to 5.0) -* Training with the Min-SNR weighting strategy (`snr_gamma` set to 1.0) - -For our small Pokemons dataset, the effects of Min-SNR weighting strategy might not appear to be pronounced, but for larger datasets, we believe the effects will be more pronounced. - -Also, note that in this example, we either predict `epsilon` (i.e., the noise) or the `v_prediction`. For both of these cases, the formulation of the Min-SNR weighting strategy that we have used holds. - -<Tip warning={true}> - -Training with Min-SNR weighting strategy is only supported in PyTorch. - -</Tip> - -## LoRA - -You can also use Low-Rank Adaptation of Large Language Models (LoRA), a fine-tuning technique for accelerating training large models, for fine-tuning text-to-image models. For more details, take a look at the [LoRA training](lora#text-to-image) guide. - -## Inference - -Now you can load the fine-tuned model for inference by passing the model path or model name on the Hub to the [`StableDiffusionPipeline`]: - -<frameworkcontent> -<pt> -```python -from diffusers import StableDiffusionPipeline - -model_path = "path_to_saved_model" -pipe = StableDiffusionPipeline.from_pretrained(model_path, torch_dtype=torch.float16) -pipe.to("cuda") - -image = pipe(prompt="yoda").images[0] -image.save("yoda-pokemon.png") -``` -</pt> -<jax> -```python -import jax -import numpy as np -from flax.jax_utils import replicate -from flax.training.common_utils import shard -from diffusers import FlaxStableDiffusionPipeline - -model_path = "path_to_saved_model" -pipe, params = FlaxStableDiffusionPipeline.from_pretrained(model_path, dtype=jax.numpy.bfloat16) - -prompt = "yoda pokemon" -prng_seed = jax.random.PRNGKey(0) -num_inference_steps = 50 - -num_samples = jax.device_count() -prompt = num_samples * [prompt] -prompt_ids = pipeline.prepare_inputs(prompt) - -# shard inputs and rng -params = replicate(params) -prng_seed = jax.random.split(prng_seed, jax.device_count()) -prompt_ids = shard(prompt_ids) - -images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images -images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:]))) -image.save("yoda-pokemon.png") -``` -</jax> -</frameworkcontent> diff --git a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/hrfpn.py b/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/hrfpn.py deleted file mode 100644 index ed4f194832fc4b6ea77ce54262fb8ffa8675fc4e..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_detection/mmdet/models/necks/hrfpn.py +++ /dev/null @@ -1,102 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F -from mmcv.cnn import ConvModule, caffe2_xavier_init -from torch.utils.checkpoint import checkpoint - -from ..builder import NECKS - - -@NECKS.register_module() -class HRFPN(nn.Module): - """HRFPN (High Resolution Feature Pyramids) - - paper: `High-Resolution Representations for Labeling Pixels and Regions - <https://arxiv.org/abs/1904.04514>`_. - - Args: - in_channels (list): number of channels for each branch. - out_channels (int): output channels of feature pyramids. 
- num_outs (int): number of output stages. - pooling_type (str): pooling for generating feature pyramids - from {MAX, AVG}. - conv_cfg (dict): dictionary to construct and config conv layer. - norm_cfg (dict): dictionary to construct and config norm layer. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. - stride (int): stride of 3x3 convolutional layers - """ - - def __init__(self, - in_channels, - out_channels, - num_outs=5, - pooling_type='AVG', - conv_cfg=None, - norm_cfg=None, - with_cp=False, - stride=1): - super(HRFPN, self).__init__() - assert isinstance(in_channels, list) - self.in_channels = in_channels - self.out_channels = out_channels - self.num_ins = len(in_channels) - self.num_outs = num_outs - self.with_cp = with_cp - self.conv_cfg = conv_cfg - self.norm_cfg = norm_cfg - - self.reduction_conv = ConvModule( - sum(in_channels), - out_channels, - kernel_size=1, - conv_cfg=self.conv_cfg, - act_cfg=None) - - self.fpn_convs = nn.ModuleList() - for i in range(self.num_outs): - self.fpn_convs.append( - ConvModule( - out_channels, - out_channels, - kernel_size=3, - padding=1, - stride=stride, - conv_cfg=self.conv_cfg, - act_cfg=None)) - - if pooling_type == 'MAX': - self.pooling = F.max_pool2d - else: - self.pooling = F.avg_pool2d - - def init_weights(self): - """Initialize the weights of module.""" - for m in self.modules(): - if isinstance(m, nn.Conv2d): - caffe2_xavier_init(m) - - def forward(self, inputs): - """Forward function.""" - assert len(inputs) == self.num_ins - outs = [inputs[0]] - for i in range(1, self.num_ins): - outs.append( - F.interpolate(inputs[i], scale_factor=2**i, mode='bilinear')) - out = torch.cat(outs, dim=1) - if out.requires_grad and self.with_cp: - out = checkpoint(self.reduction_conv, out) - else: - out = self.reduction_conv(out) - outs = [out] - for i in range(1, self.num_outs): - outs.append(self.pooling(out, kernel_size=2**i, stride=2**i)) - outputs = [] - - for i in range(self.num_outs): - if outs[i].requires_grad and self.with_cp: - tmp_out = checkpoint(self.fpn_convs[i], outs[i]) - else: - tmp_out = self.fpn_convs[i](outs[i]) - outputs.append(tmp_out) - return tuple(outputs) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x512_80k_ade20k.py b/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x512_80k_ade20k.py deleted file mode 100644 index 36e77219ac2d7ee6795db7c40ad7341749a3b1c7..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18s_512x512_80k_ade20k.py +++ /dev/null @@ -1,9 +0,0 @@ -_base_ = './ocrnet_hr18_512x512_80k_ade20k.py' -model = dict( - pretrained='open-mmlab://msra/hrnetv2_w18_small', - backbone=dict( - extra=dict( - stage1=dict(num_blocks=(2, )), - stage2=dict(num_blocks=(2, 2)), - stage3=dict(num_modules=3, num_blocks=(2, 2, 2)), - stage4=dict(num_modules=2, num_blocks=(2, 2, 2, 2))))) diff --git a/spaces/AngoHF/ANGO-Leaderboard/assets/__init__.py b/spaces/AngoHF/ANGO-Leaderboard/assets/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AnishKumbhar/ChatBot/README.md b/spaces/AnishKumbhar/ChatBot/README.md deleted file mode 100644 index 0f52ffcc28daa2424d6fab2411daa0b22ba42331..0000000000000000000000000000000000000000 --- a/spaces/AnishKumbhar/ChatBot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatBot -emoji: 📚 -colorFrom: gray 
-colorTo: blue -sdk: gradio -sdk_version: 3.47.1 -app_file: app.py -pinned: false -license: llama2 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Ankush05/Newcode/README.md b/spaces/Ankush05/Newcode/README.md deleted file mode 100644 index b3889d0d1fff163ba538ca106c2a18f24b7d27c7..0000000000000000000000000000000000000000 --- a/spaces/Ankush05/Newcode/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Newcode -emoji: 📚 -colorFrom: gray -colorTo: yellow -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/diffusionmodules/util.py b/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/diffusionmodules/util.py deleted file mode 100644 index 637363dfe34799e70cfdbcd11445212df9d9ca1f..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-sub/Rerender/ControlNet/ldm/modules/diffusionmodules/util.py +++ /dev/null @@ -1,270 +0,0 @@ -# adopted from -# https://github.com/openai/improved-diffusion/blob/main/improved_diffusion/gaussian_diffusion.py -# and -# https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -# and -# https://github.com/openai/guided-diffusion/blob/0ba878e517b276c45d1195eb29f6f5f72659a05b/guided_diffusion/nn.py -# -# thanks! - - -import os -import math -import torch -import torch.nn as nn -import numpy as np -from einops import repeat - -from ldm.util import instantiate_from_config - - -def make_beta_schedule(schedule, n_timestep, linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if schedule == "linear": - betas = ( - torch.linspace(linear_start ** 0.5, linear_end ** 0.5, n_timestep, dtype=torch.float64) ** 2 - ) - - elif schedule == "cosine": - timesteps = ( - torch.arange(n_timestep + 1, dtype=torch.float64) / n_timestep + cosine_s - ) - alphas = timesteps / (1 + cosine_s) * np.pi / 2 - alphas = torch.cos(alphas).pow(2) - alphas = alphas / alphas[0] - betas = 1 - alphas[1:] / alphas[:-1] - betas = np.clip(betas, a_min=0, a_max=0.999) - - elif schedule == "sqrt_linear": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) - elif schedule == "sqrt": - betas = torch.linspace(linear_start, linear_end, n_timestep, dtype=torch.float64) ** 0.5 - else: - raise ValueError(f"schedule '{schedule}' unknown.") - return betas.numpy() - - -def make_ddim_timesteps(ddim_discr_method, num_ddim_timesteps, num_ddpm_timesteps, verbose=True): - if ddim_discr_method == 'uniform': - c = num_ddpm_timesteps // num_ddim_timesteps - ddim_timesteps = np.asarray(list(range(0, num_ddpm_timesteps, c))) - elif ddim_discr_method == 'quad': - ddim_timesteps = ((np.linspace(0, np.sqrt(num_ddpm_timesteps * .8), num_ddim_timesteps)) ** 2).astype(int) - else: - raise NotImplementedError(f'There is no ddim discretization method called "{ddim_discr_method}"') - - # assert ddim_timesteps.shape[0] == num_ddim_timesteps - # add one to get the final alpha values right (the ones from first scale to data during sampling) - steps_out = ddim_timesteps + 1 - if verbose: - print(f'Selected timesteps for ddim sampler: {steps_out}') - return steps_out - - -def make_ddim_sampling_parameters(alphacums, ddim_timesteps, eta, verbose=True): - # select alphas for computing the variance schedule - alphas = 
alphacums[ddim_timesteps] - alphas_prev = np.asarray([alphacums[0]] + alphacums[ddim_timesteps[:-1]].tolist()) - - # according the the formula provided in https://arxiv.org/abs/2010.02502 - sigmas = eta * np.sqrt((1 - alphas_prev) / (1 - alphas) * (1 - alphas / alphas_prev)) - if verbose: - print(f'Selected alphas for ddim sampler: a_t: {alphas}; a_(t-1): {alphas_prev}') - print(f'For the chosen value of eta, which is {eta}, ' - f'this results in the following sigma_t schedule for ddim sampler {sigmas}') - return sigmas, alphas, alphas_prev - - -def betas_for_alpha_bar(num_diffusion_timesteps, alpha_bar, max_beta=0.999): - """ - Create a beta schedule that discretizes the given alpha_t_bar function, - which defines the cumulative product of (1-beta) over time from t = [0,1]. - :param num_diffusion_timesteps: the number of betas to produce. - :param alpha_bar: a lambda that takes an argument t from 0 to 1 and - produces the cumulative product of (1-beta) up to that - part of the diffusion process. - :param max_beta: the maximum beta to use; use values lower than 1 to - prevent singularities. - """ - betas = [] - for i in range(num_diffusion_timesteps): - t1 = i / num_diffusion_timesteps - t2 = (i + 1) / num_diffusion_timesteps - betas.append(min(1 - alpha_bar(t2) / alpha_bar(t1), max_beta)) - return np.array(betas) - - -def extract_into_tensor(a, t, x_shape): - b, *_ = t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - -def checkpoint(func, inputs, params, flag): - """ - Evaluate a function without caching intermediate activations, allowing for - reduced memory at the expense of extra compute in the backward pass. - :param func: the function to evaluate. - :param inputs: the argument sequence to pass to `func`. - :param params: a sequence of parameters `func` depends on but does not - explicitly take as arguments. - :param flag: if False, disable gradient checkpointing. - """ - if flag: - args = tuple(inputs) + tuple(params) - return CheckpointFunction.apply(func, len(inputs), *args) - else: - return func(*inputs) - - -class CheckpointFunction(torch.autograd.Function): - @staticmethod - def forward(ctx, run_function, length, *args): - ctx.run_function = run_function - ctx.input_tensors = list(args[:length]) - ctx.input_params = list(args[length:]) - ctx.gpu_autocast_kwargs = {"enabled": torch.is_autocast_enabled(), - "dtype": torch.get_autocast_gpu_dtype(), - "cache_enabled": torch.is_autocast_cache_enabled()} - with torch.no_grad(): - output_tensors = ctx.run_function(*ctx.input_tensors) - return output_tensors - - @staticmethod - def backward(ctx, *output_grads): - ctx.input_tensors = [x.detach().requires_grad_(True) for x in ctx.input_tensors] - with torch.enable_grad(), \ - torch.cuda.amp.autocast(**ctx.gpu_autocast_kwargs): - # Fixes a bug where the first op in run_function modifies the - # Tensor storage in place, which is not allowed for detach()'d - # Tensors. - shallow_copies = [x.view_as(x) for x in ctx.input_tensors] - output_tensors = ctx.run_function(*shallow_copies) - input_grads = torch.autograd.grad( - output_tensors, - ctx.input_tensors + ctx.input_params, - output_grads, - allow_unused=True, - ) - del ctx.input_tensors - del ctx.input_params - del output_tensors - return (None, None) + input_grads - - -def timestep_embedding(timesteps, dim, max_period=10000, repeat_only=False): - """ - Create sinusoidal timestep embeddings. - :param timesteps: a 1-D Tensor of N indices, one per batch element. - These may be fractional. 
- :param dim: the dimension of the output. - :param max_period: controls the minimum frequency of the embeddings. - :return: an [N x dim] Tensor of positional embeddings. - """ - if not repeat_only: - half = dim // 2 - freqs = torch.exp( - -math.log(max_period) * torch.arange(start=0, end=half, dtype=torch.float32) / half - ).to(device=timesteps.device) - args = timesteps[:, None].float() * freqs[None] - embedding = torch.cat([torch.cos(args), torch.sin(args)], dim=-1) - if dim % 2: - embedding = torch.cat([embedding, torch.zeros_like(embedding[:, :1])], dim=-1) - else: - embedding = repeat(timesteps, 'b -> b d', d=dim) - return embedding - - -def zero_module(module): - """ - Zero out the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().zero_() - return module - - -def scale_module(module, scale): - """ - Scale the parameters of a module and return it. - """ - for p in module.parameters(): - p.detach().mul_(scale) - return module - - -def mean_flat(tensor): - """ - Take the mean over all non-batch dimensions. - """ - return tensor.mean(dim=list(range(1, len(tensor.shape)))) - - -def normalization(channels): - """ - Make a standard normalization layer. - :param channels: number of input channels. - :return: an nn.Module for normalization. - """ - return GroupNorm32(32, channels) - - -# PyTorch 1.7 has SiLU, but we support PyTorch 1.5. -class SiLU(nn.Module): - def forward(self, x): - return x * torch.sigmoid(x) - - -class GroupNorm32(nn.GroupNorm): - def forward(self, x): - return super().forward(x.float()).type(x.dtype) - -def conv_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D convolution module. - """ - if dims == 1: - return nn.Conv1d(*args, **kwargs) - elif dims == 2: - return nn.Conv2d(*args, **kwargs) - elif dims == 3: - return nn.Conv3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -def linear(*args, **kwargs): - """ - Create a linear module. - """ - return nn.Linear(*args, **kwargs) - - -def avg_pool_nd(dims, *args, **kwargs): - """ - Create a 1D, 2D, or 3D average pooling module. 
- """ - if dims == 1: - return nn.AvgPool1d(*args, **kwargs) - elif dims == 2: - return nn.AvgPool2d(*args, **kwargs) - elif dims == 3: - return nn.AvgPool3d(*args, **kwargs) - raise ValueError(f"unsupported dimensions: {dims}") - - -class HybridConditioner(nn.Module): - - def __init__(self, c_concat_config, c_crossattn_config): - super().__init__() - self.concat_conditioner = instantiate_from_config(c_concat_config) - self.crossattn_conditioner = instantiate_from_config(c_crossattn_config) - - def forward(self, c_concat, c_crossattn): - c_concat = self.concat_conditioner(c_concat) - c_crossattn = self.crossattn_conditioner(c_crossattn) - return {'c_concat': [c_concat], 'c_crossattn': [c_crossattn]} - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() \ No newline at end of file diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/__init__.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/__init__.py deleted file mode 100644 index f631ae6df4747b808cac7c03b38e3e1d48bea00b..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/cachecontrol/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -# SPDX-FileCopyrightText: 2015 Eric Larson -# -# SPDX-License-Identifier: Apache-2.0 - -"""CacheControl import Interface. - -Make it easy to import from cachecontrol without long namespaces. -""" -__author__ = "Eric Larson" -__email__ = "eric@ionrock.org" -__version__ = "0.12.11" - -from .wrapper import CacheControl -from .adapter import CacheControlAdapter -from .controller import CacheController - -import logging -logging.getLogger(__name__).addHandler(logging.NullHandler()) diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/url.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/url.py deleted file mode 100644 index a960b2f3c5f3d11fc9ae43638da9877d635e8d91..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/pip/_vendor/urllib3/util/url.py +++ /dev/null @@ -1,435 +0,0 @@ -from __future__ import absolute_import - -import re -from collections import namedtuple - -from ..exceptions import LocationParseError -from ..packages import six - -url_attrs = ["scheme", "auth", "host", "port", "path", "query", "fragment"] - -# We only want to normalize urls with an HTTP(S) scheme. -# urllib3 infers URLs without a scheme (None) to be http. -NORMALIZABLE_SCHEMES = ("http", "https", None) - -# Almost all of these patterns were derived from the -# 'rfc3986' module: https://github.com/python-hyper/rfc3986 -PERCENT_RE = re.compile(r"%[a-fA-F0-9]{2}") -SCHEME_RE = re.compile(r"^(?:[a-zA-Z][a-zA-Z0-9+-]*:|/)") -URI_RE = re.compile( - r"^(?:([a-zA-Z][a-zA-Z0-9+.-]*):)?" - r"(?://([^\\/?#]*))?" - r"([^?#]*)" - r"(?:\?([^#]*))?" 
- r"(?:#(.*))?$", - re.UNICODE | re.DOTALL, -) - -IPV4_PAT = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}" -HEX_PAT = "[0-9A-Fa-f]{1,4}" -LS32_PAT = "(?:{hex}:{hex}|{ipv4})".format(hex=HEX_PAT, ipv4=IPV4_PAT) -_subs = {"hex": HEX_PAT, "ls32": LS32_PAT} -_variations = [ - # 6( h16 ":" ) ls32 - "(?:%(hex)s:){6}%(ls32)s", - # "::" 5( h16 ":" ) ls32 - "::(?:%(hex)s:){5}%(ls32)s", - # [ h16 ] "::" 4( h16 ":" ) ls32 - "(?:%(hex)s)?::(?:%(hex)s:){4}%(ls32)s", - # [ *1( h16 ":" ) h16 ] "::" 3( h16 ":" ) ls32 - "(?:(?:%(hex)s:)?%(hex)s)?::(?:%(hex)s:){3}%(ls32)s", - # [ *2( h16 ":" ) h16 ] "::" 2( h16 ":" ) ls32 - "(?:(?:%(hex)s:){0,2}%(hex)s)?::(?:%(hex)s:){2}%(ls32)s", - # [ *3( h16 ":" ) h16 ] "::" h16 ":" ls32 - "(?:(?:%(hex)s:){0,3}%(hex)s)?::%(hex)s:%(ls32)s", - # [ *4( h16 ":" ) h16 ] "::" ls32 - "(?:(?:%(hex)s:){0,4}%(hex)s)?::%(ls32)s", - # [ *5( h16 ":" ) h16 ] "::" h16 - "(?:(?:%(hex)s:){0,5}%(hex)s)?::%(hex)s", - # [ *6( h16 ":" ) h16 ] "::" - "(?:(?:%(hex)s:){0,6}%(hex)s)?::", -] - -UNRESERVED_PAT = r"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._\-~" -IPV6_PAT = "(?:" + "|".join([x % _subs for x in _variations]) + ")" -ZONE_ID_PAT = "(?:%25|%)(?:[" + UNRESERVED_PAT + "]|%[a-fA-F0-9]{2})+" -IPV6_ADDRZ_PAT = r"\[" + IPV6_PAT + r"(?:" + ZONE_ID_PAT + r")?\]" -REG_NAME_PAT = r"(?:[^\[\]%:/?#]|%[a-fA-F0-9]{2})*" -TARGET_RE = re.compile(r"^(/[^?#]*)(?:\?([^#]*))?(?:#.*)?$") - -IPV4_RE = re.compile("^" + IPV4_PAT + "$") -IPV6_RE = re.compile("^" + IPV6_PAT + "$") -IPV6_ADDRZ_RE = re.compile("^" + IPV6_ADDRZ_PAT + "$") -BRACELESS_IPV6_ADDRZ_RE = re.compile("^" + IPV6_ADDRZ_PAT[2:-2] + "$") -ZONE_ID_RE = re.compile("(" + ZONE_ID_PAT + r")\]$") - -_HOST_PORT_PAT = ("^(%s|%s|%s)(?::0*?(|0|[1-9][0-9]{0,4}))?$") % ( - REG_NAME_PAT, - IPV4_PAT, - IPV6_ADDRZ_PAT, -) -_HOST_PORT_RE = re.compile(_HOST_PORT_PAT, re.UNICODE | re.DOTALL) - -UNRESERVED_CHARS = set( - "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._-~" -) -SUB_DELIM_CHARS = set("!$&'()*+,;=") -USERINFO_CHARS = UNRESERVED_CHARS | SUB_DELIM_CHARS | {":"} -PATH_CHARS = USERINFO_CHARS | {"@", "/"} -QUERY_CHARS = FRAGMENT_CHARS = PATH_CHARS | {"?"} - - -class Url(namedtuple("Url", url_attrs)): - """ - Data structure for representing an HTTP URL. Used as a return value for - :func:`parse_url`. Both the scheme and host are normalized as they are - both case-insensitive according to RFC 3986. - """ - - __slots__ = () - - def __new__( - cls, - scheme=None, - auth=None, - host=None, - port=None, - path=None, - query=None, - fragment=None, - ): - if path and not path.startswith("/"): - path = "/" + path - if scheme is not None: - scheme = scheme.lower() - return super(Url, cls).__new__( - cls, scheme, auth, host, port, path, query, fragment - ) - - @property - def hostname(self): - """For backwards-compatibility with urlparse. We're nice like that.""" - return self.host - - @property - def request_uri(self): - """Absolute path including the query string.""" - uri = self.path or "/" - - if self.query is not None: - uri += "?" + self.query - - return uri - - @property - def netloc(self): - """Network location including host and port""" - if self.port: - return "%s:%d" % (self.host, self.port) - return self.host - - @property - def url(self): - """ - Convert self into a url - - This function should more or less round-trip with :func:`.parse_url`. The - returned url may not be exactly the same as the url inputted to - :func:`.parse_url`, but it should be equivalent by the RFC (e.g., urls - with a blank port will have : removed). 
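A small sketch of the Url data structure and the host patterns assembled above, assuming the module-level names are in scope; the host, port, and addresses are illustrative.

u = Url(scheme="HTTPS", host="example.com", port=8443, path="search", query="q=1")
print(u.url)          # 'https://example.com:8443/search?q=1' (scheme lowercased, '/' prepended to path)
print(u.request_uri)  # '/search?q=1'
print(u.netloc)       # 'example.com:8443'
print(bool(IPV6_ADDRZ_RE.match("[fe80::1%25eth0]")))  # True: bracketed IPv6 with an RFC 6874 zone ID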
- - Example: :: - - >>> U = parse_url('http://google.com/mail/') - >>> U.url - 'http://google.com/mail/' - >>> Url('http', 'username:password', 'host.com', 80, - ... '/path', 'query', 'fragment').url - 'http://username:password@host.com:80/path?query#fragment' - """ - scheme, auth, host, port, path, query, fragment = self - url = u"" - - # We use "is not None" we want things to happen with empty strings (or 0 port) - if scheme is not None: - url += scheme + u"://" - if auth is not None: - url += auth + u"@" - if host is not None: - url += host - if port is not None: - url += u":" + str(port) - if path is not None: - url += path - if query is not None: - url += u"?" + query - if fragment is not None: - url += u"#" + fragment - - return url - - def __str__(self): - return self.url - - -def split_first(s, delims): - """ - .. deprecated:: 1.25 - - Given a string and an iterable of delimiters, split on the first found - delimiter. Return two split parts and the matched delimiter. - - If not found, then the first part is the full input string. - - Example:: - - >>> split_first('foo/bar?baz', '?/=') - ('foo', 'bar?baz', '/') - >>> split_first('foo/bar?baz', '123') - ('foo/bar?baz', '', None) - - Scales linearly with number of delims. Not ideal for large number of delims. - """ - min_idx = None - min_delim = None - for d in delims: - idx = s.find(d) - if idx < 0: - continue - - if min_idx is None or idx < min_idx: - min_idx = idx - min_delim = d - - if min_idx is None or min_idx < 0: - return s, "", None - - return s[:min_idx], s[min_idx + 1 :], min_delim - - -def _encode_invalid_chars(component, allowed_chars, encoding="utf-8"): - """Percent-encodes a URI component without reapplying - onto an already percent-encoded component. - """ - if component is None: - return component - - component = six.ensure_text(component) - - # Normalize existing percent-encoded bytes. - # Try to see if the component we're encoding is already percent-encoded - # so we can skip all '%' characters but still encode all others. - component, percent_encodings = PERCENT_RE.subn( - lambda match: match.group(0).upper(), component - ) - - uri_bytes = component.encode("utf-8", "surrogatepass") - is_percent_encoded = percent_encodings == uri_bytes.count(b"%") - encoded_component = bytearray() - - for i in range(0, len(uri_bytes)): - # Will return a single character bytestring on both Python 2 & 3 - byte = uri_bytes[i : i + 1] - byte_ord = ord(byte) - if (is_percent_encoded and byte == b"%") or ( - byte_ord < 128 and byte.decode() in allowed_chars - ): - encoded_component += byte - continue - encoded_component.extend(b"%" + (hex(byte_ord)[2:].encode().zfill(2).upper())) - - return encoded_component.decode(encoding) - - -def _remove_path_dot_segments(path): - # See http://tools.ietf.org/html/rfc3986#section-5.2.4 for pseudo-code - segments = path.split("/") # Turn the path into a list of segments - output = [] # Initialize the variable to use to store output - - for segment in segments: - # '.' is the current directory, so ignore it, it is superfluous - if segment == ".": - continue - # Anything other than '..', should be appended to the output - elif segment != "..": - output.append(segment) - # In this case segment == '..', if we can, we should pop the last - # element - elif output: - output.pop() - - # If the path starts with '/' and the output is empty or the first string - # is non-empty - if path.startswith("/") and (not output or output[0]): - output.insert(0, "") - - # If the path starts with '/.' or '/..' 
ensure we add one more empty - # string to add a trailing '/' - if path.endswith(("/.", "/..")): - output.append("") - - return "/".join(output) - - -def _normalize_host(host, scheme): - if host: - if isinstance(host, six.binary_type): - host = six.ensure_str(host) - - if scheme in NORMALIZABLE_SCHEMES: - is_ipv6 = IPV6_ADDRZ_RE.match(host) - if is_ipv6: - # IPv6 hosts of the form 'a::b%zone' are encoded in a URL as - # such per RFC 6874: 'a::b%25zone'. Unquote the ZoneID - # separator as necessary to return a valid RFC 4007 scoped IP. - match = ZONE_ID_RE.search(host) - if match: - start, end = match.span(1) - zone_id = host[start:end] - - if zone_id.startswith("%25") and zone_id != "%25": - zone_id = zone_id[3:] - else: - zone_id = zone_id[1:] - zone_id = "%" + _encode_invalid_chars(zone_id, UNRESERVED_CHARS) - return host[:start].lower() + zone_id + host[end:] - else: - return host.lower() - elif not IPV4_RE.match(host): - return six.ensure_str( - b".".join([_idna_encode(label) for label in host.split(".")]) - ) - return host - - -def _idna_encode(name): - if name and any(ord(x) >= 128 for x in name): - try: - from pip._vendor import idna - except ImportError: - six.raise_from( - LocationParseError("Unable to parse URL without the 'idna' module"), - None, - ) - try: - return idna.encode(name.lower(), strict=True, std3_rules=True) - except idna.IDNAError: - six.raise_from( - LocationParseError(u"Name '%s' is not a valid IDNA label" % name), None - ) - return name.lower().encode("ascii") - - -def _encode_target(target): - """Percent-encodes a request target so that there are no invalid characters""" - path, query = TARGET_RE.match(target).groups() - target = _encode_invalid_chars(path, PATH_CHARS) - query = _encode_invalid_chars(query, QUERY_CHARS) - if query is not None: - target += "?" + query - return target - - -def parse_url(url): - """ - Given a url, return a parsed :class:`.Url` namedtuple. Best-effort is - performed to parse incomplete urls. Fields not provided will be None. - This parser is RFC 3986 and RFC 6874 compliant. - - The parser logic and helper functions are based heavily on - work done in the ``rfc3986`` module. - - :param str url: URL to parse into a :class:`.Url` namedtuple. - - Partly backwards-compatible with :mod:`urlparse`. - - Example:: - - >>> parse_url('http://google.com/mail/') - Url(scheme='http', host='google.com', port=None, path='/mail/', ...) - >>> parse_url('google.com:80') - Url(scheme=None, host='google.com', port=80, path=None, ...) - >>> parse_url('/foo?bar') - Url(scheme=None, host=None, port=None, path='/foo', query='bar', ...) 
- """ - if not url: - # Empty - return Url() - - source_url = url - if not SCHEME_RE.search(url): - url = "//" + url - - try: - scheme, authority, path, query, fragment = URI_RE.match(url).groups() - normalize_uri = scheme is None or scheme.lower() in NORMALIZABLE_SCHEMES - - if scheme: - scheme = scheme.lower() - - if authority: - auth, _, host_port = authority.rpartition("@") - auth = auth or None - host, port = _HOST_PORT_RE.match(host_port).groups() - if auth and normalize_uri: - auth = _encode_invalid_chars(auth, USERINFO_CHARS) - if port == "": - port = None - else: - auth, host, port = None, None, None - - if port is not None: - port = int(port) - if not (0 <= port <= 65535): - raise LocationParseError(url) - - host = _normalize_host(host, scheme) - - if normalize_uri and path: - path = _remove_path_dot_segments(path) - path = _encode_invalid_chars(path, PATH_CHARS) - if normalize_uri and query: - query = _encode_invalid_chars(query, QUERY_CHARS) - if normalize_uri and fragment: - fragment = _encode_invalid_chars(fragment, FRAGMENT_CHARS) - - except (ValueError, AttributeError): - return six.raise_from(LocationParseError(source_url), None) - - # For the sake of backwards compatibility we put empty - # string values for path if there are any defined values - # beyond the path in the URL. - # TODO: Remove this when we break backwards compatibility. - if not path: - if query is not None or fragment is not None: - path = "" - else: - path = None - - # Ensure that each part of the URL is a `str` for - # backwards compatibility. - if isinstance(url, six.text_type): - ensure_func = six.ensure_text - else: - ensure_func = six.ensure_str - - def ensure_type(x): - return x if x is None else ensure_func(x) - - return Url( - scheme=ensure_type(scheme), - auth=ensure_type(auth), - host=ensure_type(host), - port=port, - path=ensure_type(path), - query=ensure_type(query), - fragment=ensure_type(fragment), - ) - - -def get_host(url): - """ - Deprecated. Use :func:`parse_url` instead. - """ - p = parse_url(url) - return p.scheme or "http", p.hostname, p.port diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/cmd.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/cmd.py deleted file mode 100644 index 68a9267c65babd799cec04213c20ad4f3289e109..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/_distutils/cmd.py +++ /dev/null @@ -1,436 +0,0 @@ -"""distutils.cmd - -Provides the Command class, the base class for the command classes -in the distutils.command package. -""" - -import sys -import os -import re -from distutils.errors import DistutilsOptionError -from distutils import util, dir_util, file_util, archive_util, dep_util -from distutils import log - - -class Command: - """Abstract base class for defining command classes, the "worker bees" - of the Distutils. A useful analogy for command classes is to think of - them as subroutines with local variables called "options". The options - are "declared" in 'initialize_options()' and "defined" (given their - final values, aka "finalized") in 'finalize_options()', both of which - must be defined by every command class. 
The distinction between the - two is necessary because option values might come from the outside - world (command line, config file, ...), and any options dependent on - other options must be computed *after* these outside influences have - been processed -- hence 'finalize_options()'. The "body" of the - subroutine, where it does all its work based on the values of its - options, is the 'run()' method, which must also be implemented by every - command class. - """ - - # 'sub_commands' formalizes the notion of a "family" of commands, - # eg. "install" as the parent with sub-commands "install_lib", - # "install_headers", etc. The parent of a family of commands - # defines 'sub_commands' as a class attribute; it's a list of - # (command_name : string, predicate : unbound_method | string | None) - # tuples, where 'predicate' is a method of the parent command that - # determines whether the corresponding command is applicable in the - # current situation. (Eg. we "install_headers" is only applicable if - # we have any C header files to install.) If 'predicate' is None, - # that command is always applicable. - # - # 'sub_commands' is usually defined at the *end* of a class, because - # predicates can be unbound methods, so they must already have been - # defined. The canonical example is the "install" command. - sub_commands = [] - - # -- Creation/initialization methods ------------------------------- - - def __init__(self, dist): - """Create and initialize a new Command object. Most importantly, - invokes the 'initialize_options()' method, which is the real - initializer and depends on the actual command being - instantiated. - """ - # late import because of mutual dependence between these classes - from distutils.dist import Distribution - - if not isinstance(dist, Distribution): - raise TypeError("dist must be a Distribution instance") - if self.__class__ is Command: - raise RuntimeError("Command is an abstract class") - - self.distribution = dist - self.initialize_options() - - # Per-command versions of the global flags, so that the user can - # customize Distutils' behaviour command-by-command and let some - # commands fall back on the Distribution's behaviour. None means - # "not defined, check self.distribution's copy", while 0 or 1 mean - # false and true (duh). Note that this means figuring out the real - # value of each flag is a touch complicated -- hence "self._dry_run" - # will be handled by __getattr__, below. - # XXX This needs to be fixed. - self._dry_run = None - - # verbose is largely ignored, but needs to be set for - # backwards compatibility (I think)? - self.verbose = dist.verbose - - # Some commands define a 'self.force' option to ignore file - # timestamps, but methods defined *here* assume that - # 'self.force' exists for all commands. So define it here - # just to be safe. - self.force = None - - # The 'help' flag is just used for command-line parsing, so - # none of that complicated bureaucracy is needed. - self.help = 0 - - # 'finalized' records whether or not 'finalize_options()' has been - # called. 'finalize_options()' itself should not pay attention to - # this flag: it is the business of 'ensure_finalized()', which - # always calls 'finalize_options()', to respect/update it. - self.finalized = 0 - - # XXX A more explicit way to customize dry_run would be better. 
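A minimal sketch of a concrete command following the lifecycle described in the class docstring: declare defaults in initialize_options, resolve them in finalize_options, and do the actual work in run. The command name and its option are hypothetical.

from distutils.cmd import Command

class clean_logs(Command):
    description = "remove generated log files (illustrative example)"
    user_options = [('log-dir=', None, "directory that holds the log files")]

    def initialize_options(self):
        # Declare every option with a default; real values arrive later from
        # the command line, config files, or other commands.
        self.log_dir = None

    def finalize_options(self):
        # Resolve inter-option dependencies once all outside input has been applied.
        if self.log_dir is None:
            self.log_dir = 'logs'

    def run(self):
        # All terminal output and filesystem work belongs here.
        self.announce("cleaning %s" % self.log_dir)

Such a command is typically exposed by passing cmdclass={'clean_logs': clean_logs} to setup().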
- def __getattr__(self, attr): - if attr == 'dry_run': - myval = getattr(self, "_" + attr) - if myval is None: - return getattr(self.distribution, attr) - else: - return myval - else: - raise AttributeError(attr) - - def ensure_finalized(self): - if not self.finalized: - self.finalize_options() - self.finalized = 1 - - # Subclasses must define: - # initialize_options() - # provide default values for all options; may be customized by - # setup script, by options from config file(s), or by command-line - # options - # finalize_options() - # decide on the final values for all options; this is called - # after all possible intervention from the outside world - # (command-line, option file, etc.) has been processed - # run() - # run the command: do whatever it is we're here to do, - # controlled by the command's various option values - - def initialize_options(self): - """Set default values for all the options that this command - supports. Note that these defaults may be overridden by other - commands, by the setup script, by config files, or by the - command-line. Thus, this is not the place to code dependencies - between options; generally, 'initialize_options()' implementations - are just a bunch of "self.foo = None" assignments. - - This method must be implemented by all command classes. - """ - raise RuntimeError( - "abstract method -- subclass %s must override" % self.__class__ - ) - - def finalize_options(self): - """Set final values for all the options that this command supports. - This is always called as late as possible, ie. after any option - assignments from the command-line or from other commands have been - done. Thus, this is the place to code option dependencies: if - 'foo' depends on 'bar', then it is safe to set 'foo' from 'bar' as - long as 'foo' still has the same value it was assigned in - 'initialize_options()'. - - This method must be implemented by all command classes. - """ - raise RuntimeError( - "abstract method -- subclass %s must override" % self.__class__ - ) - - def dump_options(self, header=None, indent=""): - from distutils.fancy_getopt import longopt_xlate - - if header is None: - header = "command options for '%s':" % self.get_command_name() - self.announce(indent + header, level=log.INFO) - indent = indent + " " - for (option, _, _) in self.user_options: - option = option.translate(longopt_xlate) - if option[-1] == "=": - option = option[:-1] - value = getattr(self, option) - self.announce(indent + "{} = {}".format(option, value), level=log.INFO) - - def run(self): - """A command's raison d'etre: carry out the action it exists to - perform, controlled by the options initialized in - 'initialize_options()', customized by other commands, the setup - script, the command-line, and config files, and finalized in - 'finalize_options()'. All terminal output and filesystem - interaction should be done by 'run()'. - - This method must be implemented by all command classes. - """ - raise RuntimeError( - "abstract method -- subclass %s must override" % self.__class__ - ) - - def announce(self, msg, level=1): - """If the current verbosity level is of greater than or equal to - 'level' print 'msg' to stdout. - """ - log.log(level, msg) - - def debug_print(self, msg): - """Print 'msg' to stdout if the global DEBUG (taken from the - DISTUTILS_DEBUG environment variable) flag is true. 
- """ - from distutils.debug import DEBUG - - if DEBUG: - print(msg) - sys.stdout.flush() - - # -- Option validation methods ------------------------------------- - # (these are very handy in writing the 'finalize_options()' method) - # - # NB. the general philosophy here is to ensure that a particular option - # value meets certain type and value constraints. If not, we try to - # force it into conformance (eg. if we expect a list but have a string, - # split the string on comma and/or whitespace). If we can't force the - # option into conformance, raise DistutilsOptionError. Thus, command - # classes need do nothing more than (eg.) - # self.ensure_string_list('foo') - # and they can be guaranteed that thereafter, self.foo will be - # a list of strings. - - def _ensure_stringlike(self, option, what, default=None): - val = getattr(self, option) - if val is None: - setattr(self, option, default) - return default - elif not isinstance(val, str): - raise DistutilsOptionError( - "'{}' must be a {} (got `{}`)".format(option, what, val) - ) - return val - - def ensure_string(self, option, default=None): - """Ensure that 'option' is a string; if not defined, set it to - 'default'. - """ - self._ensure_stringlike(option, "string", default) - - def ensure_string_list(self, option): - r"""Ensure that 'option' is a list of strings. If 'option' is - currently a string, we split it either on /,\s*/ or /\s+/, so - "foo bar baz", "foo,bar,baz", and "foo, bar baz" all become - ["foo", "bar", "baz"]. - """ - val = getattr(self, option) - if val is None: - return - elif isinstance(val, str): - setattr(self, option, re.split(r',\s*|\s+', val)) - else: - if isinstance(val, list): - ok = all(isinstance(v, str) for v in val) - else: - ok = False - if not ok: - raise DistutilsOptionError( - "'{}' must be a list of strings (got {!r})".format(option, val) - ) - - def _ensure_tested_string(self, option, tester, what, error_fmt, default=None): - val = self._ensure_stringlike(option, what, default) - if val is not None and not tester(val): - raise DistutilsOptionError( - ("error in '%s' option: " + error_fmt) % (option, val) - ) - - def ensure_filename(self, option): - """Ensure that 'option' is the name of an existing file.""" - self._ensure_tested_string( - option, os.path.isfile, "filename", "'%s' does not exist or is not a file" - ) - - def ensure_dirname(self, option): - self._ensure_tested_string( - option, - os.path.isdir, - "directory name", - "'%s' does not exist or is not a directory", - ) - - # -- Convenience methods for commands ------------------------------ - - def get_command_name(self): - if hasattr(self, 'command_name'): - return self.command_name - else: - return self.__class__.__name__ - - def set_undefined_options(self, src_cmd, *option_pairs): - """Set the values of any "undefined" options from corresponding - option values in some other command object. "Undefined" here means - "is None", which is the convention used to indicate that an option - has not been changed between 'initialize_options()' and - 'finalize_options()'. Usually called from 'finalize_options()' for - options that depend on some other command rather than another - option of the same command. 'src_cmd' is the other command from - which option values will be taken (a command object will be created - for it if necessary); the remaining arguments are - '(src_option,dst_option)' tuples which mean "take the value of - 'src_option' in the 'src_cmd' command object, and copy it to - 'dst_option' in the current command object". 
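A small, self-contained sketch of the option-coercion helpers above, using the stock sdist command; the comma-separated value stands in for what a user might pass on the command line.

from distutils.dist import Distribution
from distutils.command.sdist import sdist

cmd = sdist(Distribution())
cmd.formats = "gztar, zip"          # string as it might arrive from the CLI
cmd.ensure_string_list('formats')   # splits on commas and/or whitespace
print(cmd.formats)                  # ['gztar', 'zip']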
- """ - # Option_pairs: list of (src_option, dst_option) tuples - src_cmd_obj = self.distribution.get_command_obj(src_cmd) - src_cmd_obj.ensure_finalized() - for (src_option, dst_option) in option_pairs: - if getattr(self, dst_option) is None: - setattr(self, dst_option, getattr(src_cmd_obj, src_option)) - - def get_finalized_command(self, command, create=1): - """Wrapper around Distribution's 'get_command_obj()' method: find - (create if necessary and 'create' is true) the command object for - 'command', call its 'ensure_finalized()' method, and return the - finalized command object. - """ - cmd_obj = self.distribution.get_command_obj(command, create) - cmd_obj.ensure_finalized() - return cmd_obj - - # XXX rename to 'get_reinitialized_command()'? (should do the - # same in dist.py, if so) - def reinitialize_command(self, command, reinit_subcommands=0): - return self.distribution.reinitialize_command(command, reinit_subcommands) - - def run_command(self, command): - """Run some other command: uses the 'run_command()' method of - Distribution, which creates and finalizes the command object if - necessary and then invokes its 'run()' method. - """ - self.distribution.run_command(command) - - def get_sub_commands(self): - """Determine the sub-commands that are relevant in the current - distribution (ie., that need to be run). This is based on the - 'sub_commands' class attribute: each tuple in that list may include - a method that we call to determine if the subcommand needs to be - run for the current distribution. Return a list of command names. - """ - commands = [] - for (cmd_name, method) in self.sub_commands: - if method is None or method(self): - commands.append(cmd_name) - return commands - - # -- External world manipulation ----------------------------------- - - def warn(self, msg): - log.warn("warning: %s: %s\n", self.get_command_name(), msg) - - def execute(self, func, args, msg=None, level=1): - util.execute(func, args, msg, dry_run=self.dry_run) - - def mkpath(self, name, mode=0o777): - dir_util.mkpath(name, mode, dry_run=self.dry_run) - - def copy_file( - self, infile, outfile, preserve_mode=1, preserve_times=1, link=None, level=1 - ): - """Copy a file respecting verbose, dry-run and force flags. (The - former two default to whatever is in the Distribution object, and - the latter defaults to false for commands that don't define it.)""" - return file_util.copy_file( - infile, - outfile, - preserve_mode, - preserve_times, - not self.force, - link, - dry_run=self.dry_run, - ) - - def copy_tree( - self, - infile, - outfile, - preserve_mode=1, - preserve_times=1, - preserve_symlinks=0, - level=1, - ): - """Copy an entire directory tree respecting verbose, dry-run, - and force flags. 
- """ - return dir_util.copy_tree( - infile, - outfile, - preserve_mode, - preserve_times, - preserve_symlinks, - not self.force, - dry_run=self.dry_run, - ) - - def move_file(self, src, dst, level=1): - """Move a file respecting dry-run flag.""" - return file_util.move_file(src, dst, dry_run=self.dry_run) - - def spawn(self, cmd, search_path=1, level=1): - """Spawn an external command respecting dry-run flag.""" - from distutils.spawn import spawn - - spawn(cmd, search_path, dry_run=self.dry_run) - - def make_archive( - self, base_name, format, root_dir=None, base_dir=None, owner=None, group=None - ): - return archive_util.make_archive( - base_name, - format, - root_dir, - base_dir, - dry_run=self.dry_run, - owner=owner, - group=group, - ) - - def make_file( - self, infiles, outfile, func, args, exec_msg=None, skip_msg=None, level=1 - ): - """Special case of 'execute()' for operations that process one or - more input files and generate one output file. Works just like - 'execute()', except the operation is skipped and a different - message printed if 'outfile' already exists and is newer than all - files listed in 'infiles'. If the command defined 'self.force', - and it is true, then the command is unconditionally run -- does no - timestamp checks. - """ - if skip_msg is None: - skip_msg = "skipping %s (inputs unchanged)" % outfile - - # Allow 'infiles' to be a single string - if isinstance(infiles, str): - infiles = (infiles,) - elif not isinstance(infiles, (list, tuple)): - raise TypeError("'infiles' must be a string, or a list or tuple of strings") - - if exec_msg is None: - exec_msg = "generating {} from {}".format(outfile, ', '.join(infiles)) - - # If 'outfile' must be regenerated (either because it doesn't - # exist, is out-of-date, or the 'force' flag is true) then - # perform the action that presumably regenerates it - if self.force or dep_util.newer_group(infiles, outfile): - self.execute(func, args, exec_msg, level) - # Otherwise, print the "skip" message - else: - log.debug(skip_msg) diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/dataset_mapper.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/dataset_mapper.py deleted file mode 100644 index a8714f7990f11e146a01e03d108518e0356b50c4..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/data/dataset_mapper.py +++ /dev/null @@ -1,191 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import copy -import logging -import numpy as np -from typing import List, Optional, Union -import torch - -from detectron2.config import configurable - -from . import detection_utils as utils -from . import transforms as T - -""" -This file contains the default mapping that's applied to "dataset dicts". -""" - -__all__ = ["DatasetMapper"] - - -class DatasetMapper: - """ - A callable which takes a dataset dict in Detectron2 Dataset format, - and map it into a format used by the model. - - This is the default callable to be used to map your dataset dict into training data. - You may need to follow it to implement your own one for customized logic, - such as a different way to read or transform images. - See :doc:`/tutorials/data_loading` for details. - - The callable currently does the following: - - 1. Read the image from "file_name" - 2. Applies cropping/geometric transforms to the image and annotations - 3. 
Prepare data and annotations to Tensor and :class:`Instances` - """ - - @configurable - def __init__( - self, - is_train: bool, - *, - augmentations: List[Union[T.Augmentation, T.Transform]], - image_format: str, - use_instance_mask: bool = False, - use_keypoint: bool = False, - instance_mask_format: str = "polygon", - keypoint_hflip_indices: Optional[np.ndarray] = None, - precomputed_proposal_topk: Optional[int] = None, - recompute_boxes: bool = False, - ): - """ - NOTE: this interface is experimental. - - Args: - is_train: whether it's used in training or inference - augmentations: a list of augmentations or deterministic transforms to apply - image_format: an image format supported by :func:`detection_utils.read_image`. - use_instance_mask: whether to process instance segmentation annotations, if available - use_keypoint: whether to process keypoint annotations if available - instance_mask_format: one of "polygon" or "bitmask". Process instance segmentation - masks into this format. - keypoint_hflip_indices: see :func:`detection_utils.create_keypoint_hflip_indices` - precomputed_proposal_topk: if given, will load pre-computed - proposals from dataset_dict and keep the top k proposals for each image. - recompute_boxes: whether to overwrite bounding box annotations - by computing tight bounding boxes from instance mask annotations. - """ - if recompute_boxes: - assert use_instance_mask, "recompute_boxes requires instance masks" - # fmt: off - self.is_train = is_train - self.augmentations = T.AugmentationList(augmentations) - self.image_format = image_format - self.use_instance_mask = use_instance_mask - self.instance_mask_format = instance_mask_format - self.use_keypoint = use_keypoint - self.keypoint_hflip_indices = keypoint_hflip_indices - self.proposal_topk = precomputed_proposal_topk - self.recompute_boxes = recompute_boxes - # fmt: on - logger = logging.getLogger(__name__) - mode = "training" if is_train else "inference" - logger.info(f"[DatasetMapper] Augmentations used in {mode}: {augmentations}") - - @classmethod - def from_config(cls, cfg, is_train: bool = True): - augs = utils.build_augmentation(cfg, is_train) - if cfg.INPUT.CROP.ENABLED and is_train: - augs.insert(0, T.RandomCrop(cfg.INPUT.CROP.TYPE, cfg.INPUT.CROP.SIZE)) - recompute_boxes = cfg.MODEL.MASK_ON - else: - recompute_boxes = False - - ret = { - "is_train": is_train, - "augmentations": augs, - "image_format": cfg.INPUT.FORMAT, - "use_instance_mask": cfg.MODEL.MASK_ON, - "instance_mask_format": cfg.INPUT.MASK_FORMAT, - "use_keypoint": cfg.MODEL.KEYPOINT_ON, - "recompute_boxes": recompute_boxes, - } - - if cfg.MODEL.KEYPOINT_ON: - ret["keypoint_hflip_indices"] = utils.create_keypoint_hflip_indices(cfg.DATASETS.TRAIN) - - if cfg.MODEL.LOAD_PROPOSALS: - ret["precomputed_proposal_topk"] = ( - cfg.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TRAIN - if is_train - else cfg.DATASETS.PRECOMPUTED_PROPOSAL_TOPK_TEST - ) - return ret - - def _transform_annotations(self, dataset_dict, transforms, image_shape): - # USER: Modify this if you want to keep them for some reason. 
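A minimal sketch of wiring this mapper into a training dataloader, assuming a populated config; the dataset name refers to a hypothetical registered dataset.

from detectron2.config import get_cfg
from detectron2.data import DatasetMapper, build_detection_train_loader

cfg = get_cfg()
cfg.DATASETS.TRAIN = ("my_dataset_train",)   # hypothetical registered dataset
mapper = DatasetMapper(cfg, is_train=True)   # built through from_config(cfg, is_train)
loader = build_detection_train_loader(cfg, mapper=mapper)

Passing mapper=None would make the loader build this same default mapper from the config by itself.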
- for anno in dataset_dict["annotations"]: - if not self.use_instance_mask: - anno.pop("segmentation", None) - if not self.use_keypoint: - anno.pop("keypoints", None) - - # USER: Implement additional transformations if you have other types of data - annos = [ - utils.transform_instance_annotations( - obj, transforms, image_shape, keypoint_hflip_indices=self.keypoint_hflip_indices - ) - for obj in dataset_dict.pop("annotations") - if obj.get("iscrowd", 0) == 0 - ] - instances = utils.annotations_to_instances( - annos, image_shape, mask_format=self.instance_mask_format - ) - - # After transforms such as cropping are applied, the bounding box may no longer - # tightly bound the object. As an example, imagine a triangle object - # [(0,0), (2,0), (0,2)] cropped by a box [(1,0),(2,2)] (XYXY format). The tight - # bounding box of the cropped triangle should be [(1,0),(2,1)], which is not equal to - # the intersection of original bounding box and the cropping box. - if self.recompute_boxes: - instances.gt_boxes = instances.gt_masks.get_bounding_boxes() - dataset_dict["instances"] = utils.filter_empty_instances(instances) - - def __call__(self, dataset_dict): - """ - Args: - dataset_dict (dict): Metadata of one image, in Detectron2 Dataset format. - - Returns: - dict: a format that builtin models in detectron2 accept - """ - dataset_dict = copy.deepcopy(dataset_dict) # it will be modified by code below - # USER: Write your own image loading if it's not from a file - image = utils.read_image(dataset_dict["file_name"], format=self.image_format) - utils.check_image_size(dataset_dict, image) - - # USER: Remove if you don't do semantic/panoptic segmentation. - if "sem_seg_file_name" in dataset_dict: - sem_seg_gt = utils.read_image(dataset_dict.pop("sem_seg_file_name"), "L").squeeze(2) - else: - sem_seg_gt = None - - aug_input = T.AugInput(image, sem_seg=sem_seg_gt) - transforms = self.augmentations(aug_input) - image, sem_seg_gt = aug_input.image, aug_input.sem_seg - - image_shape = image.shape[:2] # h, w - # Pytorch's dataloader is efficient on torch.Tensor due to shared-memory, - # but not efficient on large generic data structures due to the use of pickle & mp.Queue. - # Therefore it's important to use torch.Tensor. - dataset_dict["image"] = torch.as_tensor(np.ascontiguousarray(image.transpose(2, 0, 1))) - if sem_seg_gt is not None: - dataset_dict["sem_seg"] = torch.as_tensor(sem_seg_gt.astype("long")) - - # USER: Remove if you don't use pre-computed proposals. - # Most users would not need this feature. - if self.proposal_topk is not None: - utils.transform_proposals( - dataset_dict, image_shape, transforms, proposal_topk=self.proposal_topk - ) - - if not self.is_train: - # USER: Modify this if you want to keep them for some reason. 
- dataset_dict.pop("annotations", None) - dataset_dict.pop("sem_seg_file_name", None) - return dataset_dict - - if "annotations" in dataset_dict: - self._transform_annotations(dataset_dict, transforms, image_shape) - - return dataset_dict diff --git a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/build.py b/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/build.py deleted file mode 100644 index 34eb12d00d94ff905b796e75e2c4c5845257c8e9..0000000000000000000000000000000000000000 --- a/spaces/Awiny/Image2Paragraph/models/grit_src/third_party/CenterNet2/detectron2/modeling/proposal_generator/build.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -from detectron2.utils.registry import Registry - -PROPOSAL_GENERATOR_REGISTRY = Registry("PROPOSAL_GENERATOR") -PROPOSAL_GENERATOR_REGISTRY.__doc__ = """ -Registry for proposal generator, which produces object proposals from feature maps. - -The registered object will be called with `obj(cfg, input_shape)`. -The call should return a `nn.Module` object. -""" - -from . import rpn, rrpn # noqa F401 isort:skip - - -def build_proposal_generator(cfg, input_shape): - """ - Build a proposal generator from `cfg.MODEL.PROPOSAL_GENERATOR.NAME`. - The name can be "PrecomputedProposals" to use no proposal generator. - """ - name = cfg.MODEL.PROPOSAL_GENERATOR.NAME - if name == "PrecomputedProposals": - return None - - return PROPOSAL_GENERATOR_REGISTRY.get(name)(cfg, input_shape) diff --git a/spaces/BetterAPI/BetterChat/src/lib/utils/sum.ts b/spaces/BetterAPI/BetterChat/src/lib/utils/sum.ts deleted file mode 100644 index 289b70584ef9f7795b1f4b1bf0151237dc2c55ff..0000000000000000000000000000000000000000 --- a/spaces/BetterAPI/BetterChat/src/lib/utils/sum.ts +++ /dev/null @@ -1,3 +0,0 @@ -export function sum(nums: number[]): number { - return nums.reduce((a, b) => a + b, 0); -} diff --git a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/compat.py b/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/compat.py deleted file mode 100644 index c9d5821ce04c48245b6ad39488476cdcf9495be7..0000000000000000000000000000000000000000 --- a/spaces/Big-Web/MMSD/env/Lib/site-packages/botocore/compat.py +++ /dev/null @@ -1,350 +0,0 @@ -# Copyright 2012-2014 Amazon.com, Inc. or its affiliates. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"). You -# may not use this file except in compliance with the License. A copy of -# the License is located at -# -# http://aws.amazon.com/apache2.0/ -# -# or in the "license" file accompanying this file. This file is -# distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF -# ANY KIND, either express or implied. See the License for the specific -# language governing permissions and limitations under the License. 
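A small sketch of plugging a custom generator into the PROPOSAL_GENERATOR_REGISTRY defined in build.py above; the class is hypothetical and only illustrates the expected obj(cfg, input_shape) construction interface.

import torch.nn as nn
from detectron2.modeling import PROPOSAL_GENERATOR_REGISTRY

@PROPOSAL_GENERATOR_REGISTRY.register()
class TrivialProposals(nn.Module):
    def __init__(self, cfg, input_shape):
        super().__init__()

    def forward(self, images, features, gt_instances=None):
        # A real generator returns (list of proposal Instances, dict of losses),
        # as the built-in RPN does; this stub returns nothing.
        return [], {}

# Selected at runtime via: cfg.MODEL.PROPOSAL_GENERATOR.NAME = "TrivialProposals"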
- -import copy -import datetime -import sys -import inspect -import warnings -import hashlib -from http.client import HTTPMessage -import logging -import shlex -import re -import os -from collections import OrderedDict -from collections.abc import MutableMapping -from math import floor - -from botocore.vendored import six -from botocore.exceptions import MD5UnavailableError -from dateutil.tz import tzlocal -from urllib3 import exceptions - -logger = logging.getLogger(__name__) - - -class HTTPHeaders(HTTPMessage): - pass - -from urllib.parse import ( - quote, - urlencode, - unquote, - unquote_plus, - urlparse, - urlsplit, - urlunsplit, - urljoin, - parse_qsl, - parse_qs, -) -from http.client import HTTPResponse -from io import IOBase as _IOBase -from base64 import encodebytes -from email.utils import formatdate -from itertools import zip_longest -file_type = _IOBase -zip = zip - -# In python3, unquote takes a str() object, url decodes it, -# then takes the bytestring and decodes it to utf-8. -unquote_str = unquote_plus - -def set_socket_timeout(http_response, timeout): - """Set the timeout of the socket from an HTTPResponse. - - :param http_response: An instance of ``httplib.HTTPResponse`` - - """ - http_response._fp.fp.raw._sock.settimeout(timeout) - -def accepts_kwargs(func): - # In python3.4.1, there's backwards incompatible - # changes when using getargspec with functools.partials. - return inspect.getfullargspec(func)[2] - -def ensure_unicode(s, encoding=None, errors=None): - # NOOP in Python 3, because every string is already unicode - return s - -def ensure_bytes(s, encoding='utf-8', errors='strict'): - if isinstance(s, str): - return s.encode(encoding, errors) - if isinstance(s, bytes): - return s - raise ValueError(f"Expected str or bytes, received {type(s)}.") - - -try: - import xml.etree.cElementTree as ETree -except ImportError: - # cElementTree does not exist from Python3.9+ - import xml.etree.ElementTree as ETree -XMLParseError = ETree.ParseError -import json - - -def filter_ssl_warnings(): - # Ignore warnings related to SNI as it is not being used in validations. - warnings.filterwarnings( - 'ignore', - message="A true SSLContext object is not available.*", - category=exceptions.InsecurePlatformWarning, - module=r".*urllib3\.util\.ssl_", - ) - - -@classmethod -def from_dict(cls, d): - new_instance = cls() - for key, value in d.items(): - new_instance[key] = value - return new_instance - - -@classmethod -def from_pairs(cls, pairs): - new_instance = cls() - for key, value in pairs: - new_instance[key] = value - return new_instance - - -HTTPHeaders.from_dict = from_dict -HTTPHeaders.from_pairs = from_pairs - - -def copy_kwargs(kwargs): - """ - This used to be a compat shim for 2.6 but is now just an alias. - """ - copy_kwargs = copy.copy(kwargs) - return copy_kwargs - - -def total_seconds(delta): - """ - Returns the total seconds in a ``datetime.timedelta``. - - This used to be a compat shim for 2.6 but is now just an alias. - - :param delta: The timedelta object - :type delta: ``datetime.timedelta`` - """ - return delta.total_seconds() - - -# Checks to see if md5 is available on this system. A given system might not -# have access to it for various reasons, such as FIPS mode being enabled. -try: - hashlib.md5() - MD5_AVAILABLE = True -except ValueError: - MD5_AVAILABLE = False - - -def get_md5(*args, **kwargs): - """ - Attempts to get an md5 hashing object. - - :param raise_error_if_unavailable: raise an error if md5 is unavailable on - this system. 
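A small sketch of the byte and hash compat helpers above, assuming the module-level names are in scope; on systems where MD5 is disabled (for example FIPS builds), get_md5 raises MD5UnavailableError instead.

payload = ensure_bytes("hello")              # b'hello'
if MD5_AVAILABLE:
    print(get_md5(payload).hexdigest())      # 5d41402abc4b2a76b9719d911017c592
else:
    print("md5 is unavailable on this system")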
If False, None will be returned if it is unavailable. - :type raise_error_if_unavailable: bool - :param args: Args to pass to the MD5 constructor - :param kwargs: Key word arguments to pass to the MD5 constructor - :return: An MD5 hashing object if available. If it is unavailable, None - is returned if raise_error_if_unavailable is set to False. - """ - if MD5_AVAILABLE: - return hashlib.md5(*args, **kwargs) - else: - raise MD5UnavailableError() - - -def compat_shell_split(s, platform=None): - if platform is None: - platform = sys.platform - - if platform == "win32": - return _windows_shell_split(s) - else: - return shlex.split(s) - - -def _windows_shell_split(s): - """Splits up a windows command as the built-in command parser would. - - Windows has potentially bizarre rules depending on where you look. When - spawning a process via the Windows C runtime (which is what python does - when you call popen) the rules are as follows: - - https://docs.microsoft.com/en-us/cpp/cpp/parsing-cpp-command-line-arguments - - To summarize: - - * Only space and tab are valid delimiters - * Double quotes are the only valid quotes - * Backslash is interpreted literally unless it is part of a chain that - leads up to a double quote. Then the backslashes escape the backslashes, - and if there is an odd number the final backslash escapes the quote. - - :param s: The command string to split up into parts. - :return: A list of command components. - """ - if not s: - return [] - - components = [] - buff = [] - is_quoted = False - num_backslashes = 0 - for character in s: - if character == '\\': - # We can't simply append backslashes because we don't know if - # they are being used as escape characters or not. Instead we - # keep track of how many we've encountered and handle them when - # we encounter a different character. - num_backslashes += 1 - elif character == '"': - if num_backslashes > 0: - # The backslashes are in a chain leading up to a double - # quote, so they are escaping each other. - buff.append('\\' * int(floor(num_backslashes / 2))) - remainder = num_backslashes % 2 - num_backslashes = 0 - if remainder == 1: - # The number of backslashes is uneven, so they are also - # escaping the double quote, so it needs to be added to - # the current component buffer. - buff.append('"') - continue - - # We've encountered a double quote that is not escaped, - # so we toggle is_quoted. - is_quoted = not is_quoted - - # If there are quotes, then we may want an empty string. To be - # safe, we add an empty string to the buffer so that we make - # sure it sticks around if there's nothing else between quotes. - # If there is other stuff between quotes, the empty string will - # disappear during the joining process. - buff.append('') - elif character in [' ', '\t'] and not is_quoted: - # Since the backslashes aren't leading up to a quote, we put in - # the exact number of backslashes. - if num_backslashes > 0: - buff.append('\\' * num_backslashes) - num_backslashes = 0 - - # Excess whitespace is ignored, so only add the components list - # if there is anything in the buffer. - if buff: - components.append(''.join(buff)) - buff = [] - else: - # Since the backslashes aren't leading up to a quote, we put in - # the exact number of backslashes. - if num_backslashes > 0: - buff.append('\\' * num_backslashes) - num_backslashes = 0 - buff.append(character) - - # Quotes must be terminated. 
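A short sketch of the splitting behavior described in the docstring above; the command strings are illustrative and assume compat_shell_split is in scope.

print(compat_shell_split(r'"C:\Program Files\app.exe" --log debug', platform="win32"))
# ['C:\\Program Files\\app.exe', '--log', 'debug']
print(compat_shell_split("echo 'hello world'", platform="linux"))
# ['echo', 'hello world']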
- if is_quoted: - raise ValueError(f"No closing quotation in string: {s}") - - # There may be some leftover backslashes, so we need to add them in. - # There's no quote so we add the exact number. - if num_backslashes > 0: - buff.append('\\' * num_backslashes) - - # Add the final component in if there is anything in the buffer. - if buff: - components.append(''.join(buff)) - - return components - - -def get_tzinfo_options(): - # Due to dateutil/dateutil#197, Windows may fail to parse times in the past - # with the system clock. We can alternatively fallback to tzwininfo when - # this happens, which will get time info from the Windows registry. - if sys.platform == 'win32': - from dateutil.tz import tzwinlocal - - return (tzlocal, tzwinlocal) - else: - return (tzlocal,) - - -# Detect if CRT is available for use -try: - import awscrt.auth - - # Allow user opt-out if needed - disabled = os.environ.get('BOTO_DISABLE_CRT', "false") - HAS_CRT = not disabled.lower() == 'true' -except ImportError: - HAS_CRT = False - - -######################################################## -# urllib3 compat backports # -######################################################## - -# Vendoring IPv6 validation regex patterns from urllib3 -# https://github.com/urllib3/urllib3/blob/7e856c0/src/urllib3/util/url.py -IPV4_PAT = r"(?:[0-9]{1,3}\.){3}[0-9]{1,3}" -IPV4_RE = re.compile("^" + IPV4_PAT + "$") -HEX_PAT = "[0-9A-Fa-f]{1,4}" -LS32_PAT = "(?:{hex}:{hex}|{ipv4})".format(hex=HEX_PAT, ipv4=IPV4_PAT) -_subs = {"hex": HEX_PAT, "ls32": LS32_PAT} -_variations = [ - # 6( h16 ":" ) ls32 - "(?:%(hex)s:){6}%(ls32)s", - # "::" 5( h16 ":" ) ls32 - "::(?:%(hex)s:){5}%(ls32)s", - # [ h16 ] "::" 4( h16 ":" ) ls32 - "(?:%(hex)s)?::(?:%(hex)s:){4}%(ls32)s", - # [ *1( h16 ":" ) h16 ] "::" 3( h16 ":" ) ls32 - "(?:(?:%(hex)s:)?%(hex)s)?::(?:%(hex)s:){3}%(ls32)s", - # [ *2( h16 ":" ) h16 ] "::" 2( h16 ":" ) ls32 - "(?:(?:%(hex)s:){0,2}%(hex)s)?::(?:%(hex)s:){2}%(ls32)s", - # [ *3( h16 ":" ) h16 ] "::" h16 ":" ls32 - "(?:(?:%(hex)s:){0,3}%(hex)s)?::%(hex)s:%(ls32)s", - # [ *4( h16 ":" ) h16 ] "::" ls32 - "(?:(?:%(hex)s:){0,4}%(hex)s)?::%(ls32)s", - # [ *5( h16 ":" ) h16 ] "::" h16 - "(?:(?:%(hex)s:){0,5}%(hex)s)?::%(hex)s", - # [ *6( h16 ":" ) h16 ] "::" - "(?:(?:%(hex)s:){0,6}%(hex)s)?::", -] - -UNRESERVED_PAT = ( - r"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789._!\-~" -) -IPV6_PAT = "(?:" + "|".join([x % _subs for x in _variations]) + ")" -ZONE_ID_PAT = "(?:%25|%)(?:[" + UNRESERVED_PAT + "]|%[a-fA-F0-9]{2})+" -IPV6_ADDRZ_PAT = r"\[" + IPV6_PAT + r"(?:" + ZONE_ID_PAT + r")?\]" -IPV6_ADDRZ_RE = re.compile("^" + IPV6_ADDRZ_PAT + "$") - -# These are the characters that are stripped by post-bpo-43882 urlparse(). 
-UNSAFE_URL_CHARS = frozenset('\t\r\n') - -# Detect if gzip is available for use -try: - import gzip - HAS_GZIP = True -except ImportError: - HAS_GZIP = False diff --git a/spaces/BigSalmon/BackTranslation/app.py b/spaces/BigSalmon/BackTranslation/app.py deleted file mode 100644 index cee442ba16e5d2422568a90298992781931c5179..0000000000000000000000000000000000000000 --- a/spaces/BigSalmon/BackTranslation/app.py +++ /dev/null @@ -1,117 +0,0 @@ -from deep_translator import GoogleTranslator -import streamlit as st - -st.set_page_config(page_title='Language Translator (Adaptation of https://github.com/Ompramod9921/Language_translator)') - -hide_streamlit_style = """ - <style> - #MainMenu {visibility: hidden;} - footer {visibility: hidden;} - footer:after { - content: 'Adaptation of https://github.com/Ompramod9921/Language_translator (om pram)' - visibility: visible; - } - </style> - """ -st.markdown(hide_streamlit_style, unsafe_allow_html=True) - -st.markdown("<h1 style='text-align: center; font-size: 24px; color: voilet;font-family: Droid Sans'>Language Translator (Adaptation of https://github.com/Ompramod9921/Language_translator)</h1>", unsafe_allow_html=True) -st.write("****") - -text = st.text_area("Enter text:",height=None,max_chars=None,key=None,help="Enter your text here -") -st.write("****") - -option1 = st.selectbox('Input language',('english','hindi','afrikaans', 'albanian', 'amharic', 'arabic', 'armenian', 'azerbaijani', 'basque', 'belarusian', 'bengali', 'bosnian', 'bulgarian', 'catalan', 'cebuano', 'chichewa', 'chinese', 'chinese (simplified)', 'chinese (traditional)', 'corsican', 'croatian', 'czech', 'danish', 'dutch', 'esperanto', 'estonian', 'filipino', 'finnish', 'french', 'frisian', 'galician', 'georgian', 'german', 'greek', 'gujarati', 'haitian creole', 'hausa', 'hawaiian', 'hebrew', 'hmong', 'hungarian', 'icelandic', 'igbo', 'indonesian', 'irish', 'italian', 'japanese', 'javanese', 'kannada', 'kazakh', 'khmer', 'korean', 'kurdish (kurmanji)', 'kyrgyz', 'lao', 'latin', 'latvian', 'lithuanian', 'luxembourgish', 'macedonian', 'malagasy', 'malay', 'malayalam', 'maltese', 'maori', 'marathi', 'mongolian', 'myanmar (burmese)', 'nepali', 'norwegian', 'pashto', 'persian', 'polish', 'portuguese', 'punjabi', 'romanian', 'russian', 'samoan', 'scots gaelic', 'serbian', 'sesotho', 'shona', 'sindhi', 'sinhala', 'slovak', 'slovenian', 'somali', 'spanish', 'sundanese', 'swahili', 'swedish', 'tajik', 'tamil', 'telugu', 'thai', 'turkish', 'ukrainian', 'urdu', 'uzbek', 'vietnamese', 'welsh', 'xhosa', 'yiddish', 'yoruba', 'zulu', 'Filipino')) -option2 = st.selectbox('Output language',('english','hindi','afrikaans', 'albanian', 'amharic', 'arabic', 'armenian', 'azerbaijani', 'basque', 'belarusian', 'bengali', 'bosnian', 'bulgarian', 'catalan', 'cebuano', 'chichewa', 'chinese', 'chinese (simplified)', 'chinese (traditional)', 'corsican', 'croatian', 'czech', 'danish', 'dutch', 'esperanto', 'estonian', 'filipino', 'finnish', 'french', 'frisian', 'galician', 'georgian', 'german', 'greek', 'gujarati', 'haitian creole', 'hausa', 'hawaiian', 'hebrew', 'hmong', 'hungarian', 'icelandic', 'igbo', 'indonesian', 'irish', 'italian', 'japanese', 'javanese', 'kannada', 'kazakh', 'khmer', 'korean', 'kurdish (kurmanji)', 'kyrgyz', 'lao', 'latin', 'latvian', 'lithuanian', 'luxembourgish', 'macedonian', 'malagasy', 'malay', 'malayalam', 'maltese', 'maori', 'marathi', 'mongolian', 'myanmar (burmese)', 'nepali', 'norwegian', 'pashto', 'persian', 'polish', 'portuguese', 'punjabi', 'romanian', 'russian', 'samoan', 
'scots gaelic', 'serbian', 'sesotho', 'shona', 'sindhi', 'sinhala', 'slovak', 'slovenian', 'somali', 'spanish', 'sundanese', 'swahili', 'swedish', 'tajik', 'tamil', 'telugu', 'thai', 'turkish', 'ukrainian', 'urdu', 'uzbek', 'vietnamese', 'welsh', 'xhosa', 'yiddish', 'yoruba', 'zulu', 'Filipino')) -st.write("****") - -if st.button('Translate Sentence'): - st.write(" ") - st.write(" ") - if text == "": - st.warning('Please **enter text** for translation') - - else: - if option1 == option2 : - st.error("source and target language can't be the same") - else : - translated = GoogleTranslator(source=option1,target=option2).translate(text=text) - st.write("Translated text -") - st.info(str(translated)) - translated_text = str(translated) - back_translated = GoogleTranslator(source=option2,target=option1).translate(text=translated_text) - st.write("Back Translated text -") - st.info(str(back_translated)) - -if st.button('Back Translate: Multiple Languages'): - st.write(" ") - st.write(" ") - if text == "": - st.warning('Please **enter text** for translation') - else: - if option1 == option2 : - st.error("source and target language can't be the same") - else: - translated = GoogleTranslator(source=option1,target=option2).translate(text=text) - st.write("Translated text -") - st.info(str(translated)) - translated_text = str(translated) - back_translated = GoogleTranslator(source=option2,target=option1).translate(text=translated_text) - st.write("Back Translated text -") - st.info(str(back_translated)) - - translated = GoogleTranslator(source=option1,target="albanian").translate(text=text) - st.write("Translated text -") - st.info(str(translated)) - translated_text = str(translated) - back_translated = GoogleTranslator(source="albanian",target=option1).translate(text=translated_text) - st.write("Back Translated text -") - st.info(str(back_translated)) - - translated = GoogleTranslator(source=option1,target="greek").translate(text=text) - st.write("Translated text -") - st.info(str(translated)) - translated_text = str(translated) - back_translated = GoogleTranslator(source="greek",target=option1).translate(text=translated_text) - st.write("Back Translated text -") - st.info(str(back_translated)) - - translated = GoogleTranslator(source=option1,target="italian").translate(text=text) - st.write("Translated text -") - st.info(str(translated)) - translated_text = str(translated) - back_translated = GoogleTranslator(source="italian",target=option1).translate(text=translated_text) - st.write("Back Translated text -") - st.info(str(back_translated)) - - translated = GoogleTranslator(source=option1,target="polish").translate(text=text) - st.write("Translated text -") - st.info(str(translated)) - translated_text = str(translated) - back_translated = GoogleTranslator(source="polish",target=option1).translate(text=translated_text) - st.write("Back Translated text -") - st.info(str(back_translated)) - - translated = GoogleTranslator(source=option1,target="spanish").translate(text=text) - st.write("Translated text -") - st.info(str(translated)) - translated_text = str(translated) - back_translated = GoogleTranslator(source="spanish",target=option1).translate(text=translated_text) - st.write("Back Translated text -") - st.info(str(back_translated)) - - translated = GoogleTranslator(source=option1,target="galician").translate(text=text) - st.write("Translated text -") - st.info(str(translated)) - translated_text = str(translated) - back_translated = 
GoogleTranslator(source="galician",target=option1).translate(text=translated_text) - st.write("Back Translated text -") - st.info(str(back_translated)) - - translated = GoogleTranslator(source=option1,target="dutch").translate(text=text) - st.write("Translated text -") - st.info(str(translated)) - translated_text = str(translated) - back_translated = GoogleTranslator(source="dutch",target=option1).translate(text=translated_text) - st.write("Back Translated text -") - st.info(str(back_translated)) \ No newline at end of file diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/static_assert.h b/spaces/CVPR/LIVE/thrust/thrust/detail/static_assert.h deleted file mode 100644 index 52674dcaf18ef6459b6ef826a524623162ce0f23..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/static_assert.h +++ /dev/null @@ -1,92 +0,0 @@ -/* - * Copyright 2008-2018 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -/* - * (C) Copyright John Maddock 2000. - * - * Distributed under the Boost Software License, Version 1.0. - * (See accompanying NOTICE file for the complete license) - * - * For more information, see http://www.boost.org - */ - -#pragma once - -#include <thrust/detail/config.h> -#include <thrust/detail/type_traits.h> -#include <thrust/detail/preprocessor.h> - -namespace thrust -{ - -namespace detail -{ - -template <typename, bool x> -struct depend_on_instantiation -{ - THRUST_INLINE_INTEGRAL_MEMBER_CONSTANT bool value = x; -}; - -#if THRUST_CPP_DIALECT >= 2011 - -# if THRUST_CPP_DIALECT >= 2017 -# define THRUST_STATIC_ASSERT(B) static_assert(B) -# else -# define THRUST_STATIC_ASSERT(B) static_assert(B, "static assertion failed") -# endif -# define THRUST_STATIC_ASSERT_MSG(B, msg) static_assert(B, msg) - -#else // Older than C++11. - -// HP aCC cannot deal with missing names for template value parameters. -template <bool x> struct STATIC_ASSERTION_FAILURE; - -template <> struct STATIC_ASSERTION_FAILURE<true> {}; - -// HP aCC cannot deal with missing names for template value parameters. -template <int x> struct static_assert_test {}; - -#if ( (THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_GCC) \ - && (THRUST_GCC_VERSION >= 40800)) \ - || (THRUST_HOST_COMPILER == THRUST_HOST_COMPILER_CLANG) - // Clang and GCC 4.8+ will complain about this typedef being unused unless we - // annotate it as such. 
-# define THRUST_STATIC_ASSERT(B) \ - typedef ::thrust::detail::static_assert_test< \ - sizeof(::thrust::detail::STATIC_ASSERTION_FAILURE<(bool)(B)>) \ - > \ - THRUST_PP_CAT2(thrust_static_assert_typedef_, __LINE__) \ - __attribute__((unused)) \ - /**/ -#else -# define THRUST_STATIC_ASSERT(B) \ - typedef ::thrust::detail::static_assert_test< \ - sizeof(::thrust::detail::STATIC_ASSERTION_FAILURE<(bool)(B)>) \ - > \ - THRUST_PP_CAT2(thrust_static_assert_typedef_, __LINE__) \ - /**/ -#endif - -#define THRUST_STATIC_ASSERT_MSG(B, msg) THRUST_STATIC_ASSERT(B) - -#endif // THRUST_CPP_DIALECT >= 2011 - -} // namespace detail - -} // end namespace thrust - - diff --git a/spaces/CVPR/LIVE/thrust/thrust/detail/type_traits/iterator/is_discard_iterator.h b/spaces/CVPR/LIVE/thrust/thrust/detail/type_traits/iterator/is_discard_iterator.h deleted file mode 100644 index 0a5900de2b4e8b62cc6ff8b9ca57f11ee419602d..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/detail/type_traits/iterator/is_discard_iterator.h +++ /dev/null @@ -1,40 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include <thrust/detail/config.h> -#include <thrust/detail/type_traits.h> -#include <thrust/iterator/discard_iterator.h> - -namespace thrust -{ -namespace detail -{ - -template <typename Iterator> -struct is_discard_iterator - : public thrust::detail::false_type -{}; - -template <typename System> -struct is_discard_iterator< thrust::discard_iterator<System> > - : public thrust::detail::true_type -{}; - -} // end namespace detail -} // end namespace thrust - diff --git a/spaces/CVPR/Text2Human/app.py b/spaces/CVPR/Text2Human/app.py deleted file mode 100644 index b7441a306dede5f2decf81e861591d4cee7a7c45..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Text2Human/app.py +++ /dev/null @@ -1,158 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import argparse -import os -import pathlib -import subprocess - -import gradio as gr - -if os.getenv('SYSTEM') == 'spaces': - import mim - - mim.uninstall('mmcv-full', confirm_yes=True) - mim.install('mmcv-full==1.5.2', is_yes=True) - - with open('patch') as f: - subprocess.run('patch -p1'.split(), cwd='Text2Human', stdin=f) - -from model import Model - -DESCRIPTION = '''# Text2Human - -This is an unofficial demo for <a href="https://github.com/yumingj/Text2Human">https://github.com/yumingj/Text2Human</a> made by <a href="https://huggingface.co/spaces/hysts/Text2Human">@hysts</a>. -You can modify sample steps and seeds. By varying seeds, you can sample different human images under the same pose, shape description, and texture description. The larger the sample steps, the better quality of the generated images. (The default value of sample steps is 256 in the original repo.) - -Label image generation step can be skipped. However, in that case, the input label image must be 512x256 in size and must contain only the specified colors. 
-''' -FOOTER = '<img id="visitor-badge" alt="visitor badge" src="https://visitor-badge.glitch.me/badge?page_id=hysts.text2human" />' - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--device', type=str, default='cpu') - parser.add_argument('--theme', type=str) - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - return parser.parse_args() - - -def set_example_image(example: list) -> dict: - return gr.Image.update(value=example[0]) - - -def set_example_text(example: list) -> dict: - return gr.Textbox.update(value=example[0]) - - -def main(): - args = parse_args() - model = Model(args.device) - - with gr.Blocks(theme=args.theme, css='style.css') as demo: - gr.Markdown(DESCRIPTION) - - with gr.Row(): - with gr.Column(): - with gr.Row(): - input_image = gr.Image(label='Input Pose Image', - type='pil', - elem_id='input-image') - pose_data = gr.Variable() - with gr.Row(): - paths = sorted(pathlib.Path('pose_images').glob('*.png')) - example_images = gr.Dataset(components=[input_image], - samples=[[path.as_posix()] - for path in paths]) - - with gr.Row(): - shape_text = gr.Textbox( - label='Shape Description', - placeholder= - '''<gender>, <sleeve length>, <length of lower clothing>, <outer clothing type>, <other accessories1>, ... -Note: The outer clothing type and accessories can be omitted.''') - with gr.Row(): - shape_example_texts = gr.Dataset( - components=[shape_text], - samples=[['man, sleeveless T-shirt, long pants'], - ['woman, short-sleeve T-shirt, short jeans']]) - with gr.Row(): - generate_label_button = gr.Button('Generate Label Image') - - with gr.Column(): - with gr.Row(): - label_image = gr.Image(label='Label Image', - type='numpy', - elem_id='label-image') - - with gr.Row(): - texture_text = gr.Textbox( - label='Texture Description', - placeholder= - '''<upper clothing texture>, <lower clothing texture>, <outer clothing texture> -Note: Currently, only 5 types of textures are supported, i.e., pure color, stripe/spline, plaid/lattice, floral, denim.''' - ) - with gr.Row(): - texture_example_texts = gr.Dataset( - components=[texture_text], - samples=[['pure color, denim'], ['floral, stripe']]) - with gr.Row(): - sample_steps = gr.Slider(10, - 300, - value=10, - step=10, - label='Sample Steps') - with gr.Row(): - seed = gr.Slider(0, 1000000, value=0, step=1, label='Seed') - with gr.Row(): - generate_human_button = gr.Button('Generate Human') - - with gr.Column(): - with gr.Row(): - result = gr.Image(label='Result', - type='numpy', - elem_id='result-image') - - gr.Markdown(FOOTER) - - input_image.change(fn=model.process_pose_image, - inputs=input_image, - outputs=pose_data) - generate_label_button.click(fn=model.generate_label_image, - inputs=[ - pose_data, - shape_text, - ], - outputs=label_image) - generate_human_button.click(fn=model.generate_human, - inputs=[ - label_image, - texture_text, - sample_steps, - seed, - ], - outputs=result) - example_images.click(fn=set_example_image, - inputs=example_images, - outputs=example_images.components) - shape_example_texts.click(fn=set_example_text, - inputs=shape_example_texts, - outputs=shape_example_texts.components) - texture_example_texts.click(fn=set_example_text, - inputs=texture_example_texts, - outputs=texture_example_texts.components) - - demo.launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if 
__name__ == '__main__': - main() diff --git a/spaces/CVPR/WALT/mmdet/core/bbox/match_costs/builder.py b/spaces/CVPR/WALT/mmdet/core/bbox/match_costs/builder.py deleted file mode 100644 index 6894017d42eb16ee4a8ae3ed660a71cda3ad9940..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/core/bbox/match_costs/builder.py +++ /dev/null @@ -1,8 +0,0 @@ -from mmcv.utils import Registry, build_from_cfg - -MATCH_COST = Registry('Match Cost') - - -def build_match_cost(cfg, default_args=None): - """Builder of IoU calculator.""" - return build_from_cfg(cfg, MATCH_COST, default_args) diff --git a/spaces/CVPR/WALT/mmdet/models/dense_heads/dense_test_mixins.py b/spaces/CVPR/WALT/mmdet/models/dense_heads/dense_test_mixins.py deleted file mode 100644 index dd81364dec90e97c30a6e2220a5e0fe96373c5bd..0000000000000000000000000000000000000000 --- a/spaces/CVPR/WALT/mmdet/models/dense_heads/dense_test_mixins.py +++ /dev/null @@ -1,100 +0,0 @@ -from inspect import signature - -import torch - -from mmdet.core import bbox2result, bbox_mapping_back, multiclass_nms - - -class BBoxTestMixin(object): - """Mixin class for test time augmentation of bboxes.""" - - def merge_aug_bboxes(self, aug_bboxes, aug_scores, img_metas): - """Merge augmented detection bboxes and scores. - - Args: - aug_bboxes (list[Tensor]): shape (n, 4*#class) - aug_scores (list[Tensor] or None): shape (n, #class) - img_shapes (list[Tensor]): shape (3, ). - - Returns: - tuple: (bboxes, scores) - """ - recovered_bboxes = [] - for bboxes, img_info in zip(aug_bboxes, img_metas): - img_shape = img_info[0]['img_shape'] - scale_factor = img_info[0]['scale_factor'] - flip = img_info[0]['flip'] - flip_direction = img_info[0]['flip_direction'] - bboxes = bbox_mapping_back(bboxes, img_shape, scale_factor, flip, - flip_direction) - recovered_bboxes.append(bboxes) - bboxes = torch.cat(recovered_bboxes, dim=0) - if aug_scores is None: - return bboxes - else: - scores = torch.cat(aug_scores, dim=0) - return bboxes, scores - - def aug_test_bboxes(self, feats, img_metas, rescale=False): - """Test det bboxes with test time augmentation. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. 
- - Returns: - list[ndarray]: bbox results of each class - """ - # check with_nms argument - gb_sig = signature(self.get_bboxes) - gb_args = [p.name for p in gb_sig.parameters.values()] - if hasattr(self, '_get_bboxes'): - gbs_sig = signature(self._get_bboxes) - else: - gbs_sig = signature(self._get_bboxes_single) - gbs_args = [p.name for p in gbs_sig.parameters.values()] - assert ('with_nms' in gb_args) and ('with_nms' in gbs_args), \ - f'{self.__class__.__name__}' \ - ' does not support test-time augmentation' - - aug_bboxes = [] - aug_scores = [] - aug_factors = [] # score_factors for NMS - for x, img_meta in zip(feats, img_metas): - # only one image in the batch - outs = self.forward(x) - bbox_inputs = outs + (img_meta, self.test_cfg, False, False) - bbox_outputs = self.get_bboxes(*bbox_inputs)[0] - aug_bboxes.append(bbox_outputs[0]) - aug_scores.append(bbox_outputs[1]) - # bbox_outputs of some detectors (e.g., ATSS, FCOS, YOLOv3) - # contains additional element to adjust scores before NMS - if len(bbox_outputs) >= 3: - aug_factors.append(bbox_outputs[2]) - - # after merging, bboxes will be rescaled to the original image size - merged_bboxes, merged_scores = self.merge_aug_bboxes( - aug_bboxes, aug_scores, img_metas) - merged_factors = torch.cat(aug_factors, dim=0) if aug_factors else None - det_bboxes, det_labels = multiclass_nms( - merged_bboxes, - merged_scores, - self.test_cfg.score_thr, - self.test_cfg.nms, - self.test_cfg.max_per_img, - score_factors=merged_factors) - - if rescale: - _det_bboxes = det_bboxes - else: - _det_bboxes = det_bboxes.clone() - _det_bboxes[:, :4] *= det_bboxes.new_tensor( - img_metas[0][0]['scale_factor']) - bbox_results = bbox2result(_det_bboxes, det_labels, self.num_classes) - return bbox_results diff --git a/spaces/CVPR/regionclip-demo/detectron2/structures/rotated_boxes.py b/spaces/CVPR/regionclip-demo/detectron2/structures/rotated_boxes.py deleted file mode 100644 index 8f48b40560f2f409b20d87bb1ff448bf44e090d2..0000000000000000000000000000000000000000 --- a/spaces/CVPR/regionclip-demo/detectron2/structures/rotated_boxes.py +++ /dev/null @@ -1,505 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import math -from typing import List, Tuple -import torch - -from detectron2.layers.rotated_boxes import pairwise_iou_rotated - -from .boxes import Boxes, _maybe_jit_unused - - -class RotatedBoxes(Boxes): - """ - This structure stores a list of rotated boxes as a Nx5 torch.Tensor. - It supports some common methods about boxes - (`area`, `clip`, `nonempty`, etc), - and also behaves like a Tensor - (support indexing, `to(device)`, `.device`, and iteration over all boxes) - """ - - def __init__(self, tensor: torch.Tensor): - """ - Args: - tensor (Tensor[float]): a Nx5 matrix. Each row is - (x_center, y_center, width, height, angle), - in which angle is represented in degrees. - While there's no strict range restriction for it, - the recommended principal range is between [-180, 180) degrees. - - Assume we have a horizontal box B = (x_center, y_center, width, height), - where width is along the x-axis and height is along the y-axis. - The rotated box B_rot (x_center, y_center, width, height, angle) - can be seen as: - - 1. When angle == 0: - B_rot == B - 2. When angle > 0: - B_rot is obtained by rotating B w.r.t its center by :math:`|angle|` degrees CCW; - 3. When angle < 0: - B_rot is obtained by rotating B w.r.t its center by :math:`|angle|` degrees CW. 
- - Mathematically, since the right-handed coordinate system for image space - is (y, x), where y is top->down and x is left->right, the 4 vertices of the - rotated rectangle :math:`(yr_i, xr_i)` (i = 1, 2, 3, 4) can be obtained from - the vertices of the horizontal rectangle :math:`(y_i, x_i)` (i = 1, 2, 3, 4) - in the following way (:math:`\\theta = angle*\\pi/180` is the angle in radians, - :math:`(y_c, x_c)` is the center of the rectangle): - - .. math:: - - yr_i = \\cos(\\theta) (y_i - y_c) - \\sin(\\theta) (x_i - x_c) + y_c, - - xr_i = \\sin(\\theta) (y_i - y_c) + \\cos(\\theta) (x_i - x_c) + x_c, - - which is the standard rigid-body rotation transformation. - - Intuitively, the angle is - (1) the rotation angle from y-axis in image space - to the height vector (top->down in the box's local coordinate system) - of the box in CCW, and - (2) the rotation angle from x-axis in image space - to the width vector (left->right in the box's local coordinate system) - of the box in CCW. - - More intuitively, consider the following horizontal box ABCD represented - in (x1, y1, x2, y2): (3, 2, 7, 4), - covering the [3, 7] x [2, 4] region of the continuous coordinate system - which looks like this: - - .. code:: none - - O--------> x - | - | A---B - | | | - | D---C - | - v y - - Note that each capital letter represents one 0-dimensional geometric point - instead of a 'square pixel' here. - - In the example above, using (x, y) to represent a point we have: - - .. math:: - - O = (0, 0), A = (3, 2), B = (7, 2), C = (7, 4), D = (3, 4) - - We name vector AB = vector DC as the width vector in box's local coordinate system, and - vector AD = vector BC as the height vector in box's local coordinate system. Initially, - when angle = 0 degree, they're aligned with the positive directions of x-axis and y-axis - in the image space, respectively. - - For better illustration, we denote the center of the box as E, - - .. code:: none - - O--------> x - | - | A---B - | | E | - | D---C - | - v y - - where the center E = ((3+7)/2, (2+4)/2) = (5, 3). - - Also, - - .. math:: - - width = |AB| = |CD| = 7 - 3 = 4, - height = |AD| = |BC| = 4 - 2 = 2. - - Therefore, the corresponding representation for the same shape in rotated box in - (x_center, y_center, width, height, angle) format is: - - (5, 3, 4, 2, 0), - - Now, let's consider (5, 3, 4, 2, 90), which is rotated by 90 degrees - CCW (counter-clockwise) by definition. It looks like this: - - .. code:: none - - O--------> x - | B-C - | | | - | |E| - | | | - | A-D - v y - - The center E is still located at the same point (5, 3), while the vertices - ABCD are rotated by 90 degrees CCW with regard to E: - A = (4, 5), B = (4, 1), C = (6, 1), D = (6, 5) - - Here, 90 degrees can be seen as the CCW angle to rotate from y-axis to - vector AD or vector BC (the top->down height vector in box's local coordinate system), - or the CCW angle to rotate from x-axis to vector AB or vector DC (the left->right - width vector in box's local coordinate system). - - .. math:: - - width = |AB| = |CD| = 5 - 1 = 4, - height = |AD| = |BC| = 6 - 4 = 2. - - Next, how about (5, 3, 4, 2, -90), which is rotated by 90 degrees CW (clockwise) - by definition? It looks like this: - - .. code:: none - - O--------> x - | D-A - | | | - | |E| - | | | - | C-B - v y - - The center E is still located at the same point (5, 3), while the vertices - ABCD are rotated by 90 degrees CW with regard to E: - A = (6, 1), B = (6, 5), C = (4, 5), D = (4, 1) - - .. 
math:: - - width = |AB| = |CD| = 5 - 1 = 4, - height = |AD| = |BC| = 6 - 4 = 2. - - This covers exactly the same region as (5, 3, 4, 2, 90) does, and their IoU - will be 1. However, these two will generate different RoI Pooling results and - should not be treated as an identical box. - - On the other hand, it's easy to see that (X, Y, W, H, A) is identical to - (X, Y, W, H, A+360N), for any integer N. For example (5, 3, 4, 2, 270) would be - identical to (5, 3, 4, 2, -90), because rotating the shape 270 degrees CCW is - equivalent to rotating the same shape 90 degrees CW. - - We could rotate further to get (5, 3, 4, 2, 180), or (5, 3, 4, 2, -180): - - .. code:: none - - O--------> x - | - | C---D - | | E | - | B---A - | - v y - - .. math:: - - A = (7, 4), B = (3, 4), C = (3, 2), D = (7, 2), - - width = |AB| = |CD| = 7 - 3 = 4, - height = |AD| = |BC| = 4 - 2 = 2. - - Finally, this is a very inaccurate (heavily quantized) illustration of - how (5, 3, 4, 2, 60) looks like in case anyone wonders: - - .. code:: none - - O--------> x - | B\ - | / C - | /E / - | A / - | `D - v y - - It's still a rectangle with center of (5, 3), width of 4 and height of 2, - but its angle (and thus orientation) is somewhere between - (5, 3, 4, 2, 0) and (5, 3, 4, 2, 90). - """ - device = tensor.device if isinstance(tensor, torch.Tensor) else torch.device("cpu") - tensor = torch.as_tensor(tensor, dtype=torch.float32, device=device) - if tensor.numel() == 0: - # Use reshape, so we don't end up creating a new tensor that does not depend on - # the inputs (and consequently confuses jit) - tensor = tensor.reshape((0, 5)).to(dtype=torch.float32, device=device) - assert tensor.dim() == 2 and tensor.size(-1) == 5, tensor.size() - - self.tensor = tensor - - def clone(self) -> "RotatedBoxes": - """ - Clone the RotatedBoxes. - - Returns: - RotatedBoxes - """ - return RotatedBoxes(self.tensor.clone()) - - @_maybe_jit_unused - def to(self, device: torch.device): - # Boxes are assumed float32 and does not support to(dtype) - return RotatedBoxes(self.tensor.to(device=device)) - - def area(self) -> torch.Tensor: - """ - Computes the area of all the boxes. - - Returns: - torch.Tensor: a vector with areas of each box. - """ - box = self.tensor - area = box[:, 2] * box[:, 3] - return area - - def normalize_angles(self) -> None: - """ - Restrict angles to the range of [-180, 180) degrees - """ - self.tensor[:, 4] = (self.tensor[:, 4] + 180.0) % 360.0 - 180.0 - - def clip(self, box_size: Tuple[int, int], clip_angle_threshold: float = 1.0) -> None: - """ - Clip (in place) the boxes by limiting x coordinates to the range [0, width] - and y coordinates to the range [0, height]. - - For RRPN: - Only clip boxes that are almost horizontal with a tolerance of - clip_angle_threshold to maintain backward compatibility. - - Rotated boxes beyond this threshold are not clipped for two reasons: - - 1. There are potentially multiple ways to clip a rotated box to make it - fit within the image. - 2. It's tricky to make the entire rectangular box fit within the image - and still be able to not leave out pixels of interest. - - Therefore we rely on ops like RoIAlignRotated to safely handle this. - - Args: - box_size (height, width): The clipping box's size. - clip_angle_threshold: - Iff. abs(normalized(angle)) <= clip_angle_threshold (in degrees), - we do the clipping as horizontal boxes. 
- """ - h, w = box_size - - # normalize angles to be within (-180, 180] degrees - self.normalize_angles() - - idx = torch.where(torch.abs(self.tensor[:, 4]) <= clip_angle_threshold)[0] - - # convert to (x1, y1, x2, y2) - x1 = self.tensor[idx, 0] - self.tensor[idx, 2] / 2.0 - y1 = self.tensor[idx, 1] - self.tensor[idx, 3] / 2.0 - x2 = self.tensor[idx, 0] + self.tensor[idx, 2] / 2.0 - y2 = self.tensor[idx, 1] + self.tensor[idx, 3] / 2.0 - - # clip - x1.clamp_(min=0, max=w) - y1.clamp_(min=0, max=h) - x2.clamp_(min=0, max=w) - y2.clamp_(min=0, max=h) - - # convert back to (xc, yc, w, h) - self.tensor[idx, 0] = (x1 + x2) / 2.0 - self.tensor[idx, 1] = (y1 + y2) / 2.0 - # make sure widths and heights do not increase due to numerical errors - self.tensor[idx, 2] = torch.min(self.tensor[idx, 2], x2 - x1) - self.tensor[idx, 3] = torch.min(self.tensor[idx, 3], y2 - y1) - - def nonempty(self, threshold: float = 0.0) -> torch.Tensor: - """ - Find boxes that are non-empty. - A box is considered empty, if either of its side is no larger than threshold. - - Returns: - Tensor: a binary vector which represents - whether each box is empty (False) or non-empty (True). - """ - box = self.tensor - widths = box[:, 2] - heights = box[:, 3] - keep = (widths > threshold) & (heights > threshold) - return keep - - def __getitem__(self, item) -> "RotatedBoxes": - """ - Returns: - RotatedBoxes: Create a new :class:`RotatedBoxes` by indexing. - - The following usage are allowed: - - 1. `new_boxes = boxes[3]`: return a `RotatedBoxes` which contains only one box. - 2. `new_boxes = boxes[2:10]`: return a slice of boxes. - 3. `new_boxes = boxes[vector]`, where vector is a torch.ByteTensor - with `length = len(boxes)`. Nonzero elements in the vector will be selected. - - Note that the returned RotatedBoxes might share storage with this RotatedBoxes, - subject to Pytorch's indexing semantics. - """ - if isinstance(item, int): - return RotatedBoxes(self.tensor[item].view(1, -1)) - b = self.tensor[item] - assert b.dim() == 2, "Indexing on RotatedBoxes with {} failed to return a matrix!".format( - item - ) - return RotatedBoxes(b) - - def __len__(self) -> int: - return self.tensor.shape[0] - - def __repr__(self) -> str: - return "RotatedBoxes(" + str(self.tensor) + ")" - - def inside_box(self, box_size: Tuple[int, int], boundary_threshold: int = 0) -> torch.Tensor: - """ - Args: - box_size (height, width): Size of the reference box covering - [0, width] x [0, height] - boundary_threshold (int): Boxes that extend beyond the reference box - boundary by more than boundary_threshold are considered "outside". - - For RRPN, it might not be necessary to call this function since it's common - for rotated box to extend to outside of the image boundaries - (the clip function only clips the near-horizontal boxes) - - Returns: - a binary vector, indicating whether each box is inside the reference box. 
- """ - height, width = box_size - - cnt_x = self.tensor[..., 0] - cnt_y = self.tensor[..., 1] - half_w = self.tensor[..., 2] / 2.0 - half_h = self.tensor[..., 3] / 2.0 - a = self.tensor[..., 4] - c = torch.abs(torch.cos(a * math.pi / 180.0)) - s = torch.abs(torch.sin(a * math.pi / 180.0)) - # This basically computes the horizontal bounding rectangle of the rotated box - max_rect_dx = c * half_w + s * half_h - max_rect_dy = c * half_h + s * half_w - - inds_inside = ( - (cnt_x - max_rect_dx >= -boundary_threshold) - & (cnt_y - max_rect_dy >= -boundary_threshold) - & (cnt_x + max_rect_dx < width + boundary_threshold) - & (cnt_y + max_rect_dy < height + boundary_threshold) - ) - - return inds_inside - - def get_centers(self) -> torch.Tensor: - """ - Returns: - The box centers in a Nx2 array of (x, y). - """ - return self.tensor[:, :2] - - def scale(self, scale_x: float, scale_y: float) -> None: - """ - Scale the rotated box with horizontal and vertical scaling factors - Note: when scale_factor_x != scale_factor_y, - the rotated box does not preserve the rectangular shape when the angle - is not a multiple of 90 degrees under resize transformation. - Instead, the shape is a parallelogram (that has skew) - Here we make an approximation by fitting a rotated rectangle to the parallelogram. - """ - self.tensor[:, 0] *= scale_x - self.tensor[:, 1] *= scale_y - theta = self.tensor[:, 4] * math.pi / 180.0 - c = torch.cos(theta) - s = torch.sin(theta) - - # In image space, y is top->down and x is left->right - # Consider the local coordintate system for the rotated box, - # where the box center is located at (0, 0), and the four vertices ABCD are - # A(-w / 2, -h / 2), B(w / 2, -h / 2), C(w / 2, h / 2), D(-w / 2, h / 2) - # the midpoint of the left edge AD of the rotated box E is: - # E = (A+D)/2 = (-w / 2, 0) - # the midpoint of the top edge AB of the rotated box F is: - # F(0, -h / 2) - # To get the old coordinates in the global system, apply the rotation transformation - # (Note: the right-handed coordinate system for image space is yOx): - # (old_x, old_y) = (s * y + c * x, c * y - s * x) - # E(old) = (s * 0 + c * (-w/2), c * 0 - s * (-w/2)) = (-c * w / 2, s * w / 2) - # F(old) = (s * (-h / 2) + c * 0, c * (-h / 2) - s * 0) = (-s * h / 2, -c * h / 2) - # After applying the scaling factor (sfx, sfy): - # E(new) = (-sfx * c * w / 2, sfy * s * w / 2) - # F(new) = (-sfx * s * h / 2, -sfy * c * h / 2) - # The new width after scaling tranformation becomes: - - # w(new) = |E(new) - O| * 2 - # = sqrt[(sfx * c * w / 2)^2 + (sfy * s * w / 2)^2] * 2 - # = sqrt[(sfx * c)^2 + (sfy * s)^2] * w - # i.e., scale_factor_w = sqrt[(sfx * c)^2 + (sfy * s)^2] - # - # For example, - # when angle = 0 or 180, |c| = 1, s = 0, scale_factor_w == scale_factor_x; - # when |angle| = 90, c = 0, |s| = 1, scale_factor_w == scale_factor_y - self.tensor[:, 2] *= torch.sqrt((scale_x * c) ** 2 + (scale_y * s) ** 2) - - # h(new) = |F(new) - O| * 2 - # = sqrt[(sfx * s * h / 2)^2 + (sfy * c * h / 2)^2] * 2 - # = sqrt[(sfx * s)^2 + (sfy * c)^2] * h - # i.e., scale_factor_h = sqrt[(sfx * s)^2 + (sfy * c)^2] - # - # For example, - # when angle = 0 or 180, |c| = 1, s = 0, scale_factor_h == scale_factor_y; - # when |angle| = 90, c = 0, |s| = 1, scale_factor_h == scale_factor_x - self.tensor[:, 3] *= torch.sqrt((scale_x * s) ** 2 + (scale_y * c) ** 2) - - # The angle is the rotation angle from y-axis in image space to the height - # vector (top->down in the box's local coordinate system) of the box in CCW. 
- # - # angle(new) = angle_yOx(O - F(new)) - # = angle_yOx( (sfx * s * h / 2, sfy * c * h / 2) ) - # = atan2(sfx * s * h / 2, sfy * c * h / 2) - # = atan2(sfx * s, sfy * c) - # - # For example, - # when sfx == sfy, angle(new) == atan2(s, c) == angle(old) - self.tensor[:, 4] = torch.atan2(scale_x * s, scale_y * c) * 180 / math.pi - - @classmethod - @_maybe_jit_unused - def cat(cls, boxes_list: List["RotatedBoxes"]) -> "RotatedBoxes": - """ - Concatenates a list of RotatedBoxes into a single RotatedBoxes - - Arguments: - boxes_list (list[RotatedBoxes]) - - Returns: - RotatedBoxes: the concatenated RotatedBoxes - """ - assert isinstance(boxes_list, (list, tuple)) - if len(boxes_list) == 0: - return cls(torch.empty(0)) - assert all([isinstance(box, RotatedBoxes) for box in boxes_list]) - - # use torch.cat (v.s. layers.cat) so the returned boxes never share storage with input - cat_boxes = cls(torch.cat([b.tensor for b in boxes_list], dim=0)) - return cat_boxes - - @property - def device(self) -> torch.device: - return self.tensor.device - - @torch.jit.unused - def __iter__(self): - """ - Yield a box as a Tensor of shape (5,) at a time. - """ - yield from self.tensor - - -def pairwise_iou(boxes1: RotatedBoxes, boxes2: RotatedBoxes) -> None: - """ - Given two lists of rotated boxes of size N and M, - compute the IoU (intersection over union) - between **all** N x M pairs of boxes. - The box order must be (x_center, y_center, width, height, angle). - - Args: - boxes1, boxes2 (RotatedBoxes): - two `RotatedBoxes`. Contains N & M rotated boxes, respectively. - - Returns: - Tensor: IoU, sized [N,M]. - """ - - return pairwise_iou_rotated(boxes1.tensor, boxes2.tensor) diff --git a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/backbone/__init__.py b/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/backbone/__init__.py deleted file mode 100644 index 76e4b272b479a26c63d120c818c140870cd8c287..0000000000000000000000000000000000000000 --- a/spaces/Caoyunkang/Segment-Any-Anomaly/GroundingDINO/groundingdino/models/GroundingDINO/backbone/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .backbone import build_backbone diff --git a/spaces/CjangCjengh/Shanghainese-TTS/monotonic_align/core.py b/spaces/CjangCjengh/Shanghainese-TTS/monotonic_align/core.py deleted file mode 100644 index dddc688d76172b880054e544b7a217acd013f14f..0000000000000000000000000000000000000000 --- a/spaces/CjangCjengh/Shanghainese-TTS/monotonic_align/core.py +++ /dev/null @@ -1,35 +0,0 @@ -import numba - - -@numba.jit(numba.void(numba.int32[:,:,::1], numba.float32[:,:,::1], numba.int32[::1], numba.int32[::1]), nopython=True, nogil=True) -def maximum_path_jit(paths, values, t_ys, t_xs): - b = paths.shape[0] - max_neg_val=-1e9 - for i in range(int(b)): - path = paths[i] - value = values[i] - t_y = t_ys[i] - t_x = t_xs[i] - - v_prev = v_cur = 0.0 - index = t_x - 1 - - for y in range(t_y): - for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - if x == y: - v_cur = max_neg_val - else: - v_cur = value[y-1, x] - if x == 0: - if y == 0: - v_prev = 0. 
- else: - v_prev = max_neg_val - else: - v_prev = value[y-1, x-1] - value[y, x] += max(v_prev, v_cur) - - for y in range(t_y - 1, -1, -1): - path[y, index] = 1 - if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - index = index - 1 diff --git a/spaces/CodingBillionaire/bark-voice-cloning/hubert/__init__.py b/spaces/CodingBillionaire/bark-voice-cloning/hubert/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/midas/midas/blocks.py b/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/midas/midas/blocks.py deleted file mode 100644 index 2145d18fa98060a618536d9a64fe6589e9be4f78..0000000000000000000000000000000000000000 --- a/spaces/CrucibleAI/ControlNetMediaPipeFaceSD21/ldm/modules/midas/midas/blocks.py +++ /dev/null @@ -1,342 +0,0 @@ -import torch -import torch.nn as nn - -from .vit import ( - _make_pretrained_vitb_rn50_384, - _make_pretrained_vitl16_384, - _make_pretrained_vitb16_384, - forward_vit, -) - -def _make_encoder(backbone, features, use_pretrained, groups=1, expand=False, exportable=True, hooks=None, use_vit_only=False, use_readout="ignore",): - if backbone == "vitl16_384": - pretrained = _make_pretrained_vitl16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [256, 512, 1024, 1024], features, groups=groups, expand=expand - ) # ViT-L/16 - 85.0% Top1 (backbone) - elif backbone == "vitb_rn50_384": - pretrained = _make_pretrained_vitb_rn50_384( - use_pretrained, - hooks=hooks, - use_vit_only=use_vit_only, - use_readout=use_readout, - ) - scratch = _make_scratch( - [256, 512, 768, 768], features, groups=groups, expand=expand - ) # ViT-H/16 - 85.0% Top1 (backbone) - elif backbone == "vitb16_384": - pretrained = _make_pretrained_vitb16_384( - use_pretrained, hooks=hooks, use_readout=use_readout - ) - scratch = _make_scratch( - [96, 192, 384, 768], features, groups=groups, expand=expand - ) # ViT-B/16 - 84.6% Top1 (backbone) - elif backbone == "resnext101_wsl": - pretrained = _make_pretrained_resnext101_wsl(use_pretrained) - scratch = _make_scratch([256, 512, 1024, 2048], features, groups=groups, expand=expand) # efficientnet_lite3 - elif backbone == "efficientnet_lite3": - pretrained = _make_pretrained_efficientnet_lite3(use_pretrained, exportable=exportable) - scratch = _make_scratch([32, 48, 136, 384], features, groups=groups, expand=expand) # efficientnet_lite3 - else: - print(f"Backbone '{backbone}' not implemented") - assert False - - return pretrained, scratch - - -def _make_scratch(in_shape, out_shape, groups=1, expand=False): - scratch = nn.Module() - - out_shape1 = out_shape - out_shape2 = out_shape - out_shape3 = out_shape - out_shape4 = out_shape - if expand==True: - out_shape1 = out_shape - out_shape2 = out_shape*2 - out_shape3 = out_shape*4 - out_shape4 = out_shape*8 - - scratch.layer1_rn = nn.Conv2d( - in_shape[0], out_shape1, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer2_rn = nn.Conv2d( - in_shape[1], out_shape2, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer3_rn = nn.Conv2d( - in_shape[2], out_shape3, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - scratch.layer4_rn = nn.Conv2d( - in_shape[3], out_shape4, kernel_size=3, stride=1, padding=1, bias=False, groups=groups - ) - - return scratch - - -def _make_pretrained_efficientnet_lite3(use_pretrained, 
exportable=False): - efficientnet = torch.hub.load( - "rwightman/gen-efficientnet-pytorch", - "tf_efficientnet_lite3", - pretrained=use_pretrained, - exportable=exportable - ) - return _make_efficientnet_backbone(efficientnet) - - -def _make_efficientnet_backbone(effnet): - pretrained = nn.Module() - - pretrained.layer1 = nn.Sequential( - effnet.conv_stem, effnet.bn1, effnet.act1, *effnet.blocks[0:2] - ) - pretrained.layer2 = nn.Sequential(*effnet.blocks[2:3]) - pretrained.layer3 = nn.Sequential(*effnet.blocks[3:5]) - pretrained.layer4 = nn.Sequential(*effnet.blocks[5:9]) - - return pretrained - - -def _make_resnet_backbone(resnet): - pretrained = nn.Module() - pretrained.layer1 = nn.Sequential( - resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool, resnet.layer1 - ) - - pretrained.layer2 = resnet.layer2 - pretrained.layer3 = resnet.layer3 - pretrained.layer4 = resnet.layer4 - - return pretrained - - -def _make_pretrained_resnext101_wsl(use_pretrained): - resnet = torch.hub.load("facebookresearch/WSL-Images", "resnext101_32x8d_wsl") - return _make_resnet_backbone(resnet) - - - -class Interpolate(nn.Module): - """Interpolation module. - """ - - def __init__(self, scale_factor, mode, align_corners=False): - """Init. - - Args: - scale_factor (float): scaling - mode (str): interpolation mode - """ - super(Interpolate, self).__init__() - - self.interp = nn.functional.interpolate - self.scale_factor = scale_factor - self.mode = mode - self.align_corners = align_corners - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: interpolated data - """ - - x = self.interp( - x, scale_factor=self.scale_factor, mode=self.mode, align_corners=self.align_corners - ) - - return x - - -class ResidualConvUnit(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super().__init__() - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True - ) - - self.relu = nn.ReLU(inplace=True) - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - out = self.relu(x) - out = self.conv1(out) - out = self.relu(out) - out = self.conv2(out) - - return out + x - - -class FeatureFusionBlock(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock, self).__init__() - - self.resConfUnit1 = ResidualConvUnit(features) - self.resConfUnit2 = ResidualConvUnit(features) - - def forward(self, *xs): - """Forward pass. - - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - output += self.resConfUnit1(xs[1]) - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=True - ) - - return output - - - - -class ResidualConvUnit_custom(nn.Module): - """Residual convolution module. - """ - - def __init__(self, features, activation, bn): - """Init. 
- - Args: - features (int): number of features - """ - super().__init__() - - self.bn = bn - - self.groups=1 - - self.conv1 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - self.conv2 = nn.Conv2d( - features, features, kernel_size=3, stride=1, padding=1, bias=True, groups=self.groups - ) - - if self.bn==True: - self.bn1 = nn.BatchNorm2d(features) - self.bn2 = nn.BatchNorm2d(features) - - self.activation = activation - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, x): - """Forward pass. - - Args: - x (tensor): input - - Returns: - tensor: output - """ - - out = self.activation(x) - out = self.conv1(out) - if self.bn==True: - out = self.bn1(out) - - out = self.activation(out) - out = self.conv2(out) - if self.bn==True: - out = self.bn2(out) - - if self.groups > 1: - out = self.conv_merge(out) - - return self.skip_add.add(out, x) - - # return out + x - - -class FeatureFusionBlock_custom(nn.Module): - """Feature fusion block. - """ - - def __init__(self, features, activation, deconv=False, bn=False, expand=False, align_corners=True): - """Init. - - Args: - features (int): number of features - """ - super(FeatureFusionBlock_custom, self).__init__() - - self.deconv = deconv - self.align_corners = align_corners - - self.groups=1 - - self.expand = expand - out_features = features - if self.expand==True: - out_features = features//2 - - self.out_conv = nn.Conv2d(features, out_features, kernel_size=1, stride=1, padding=0, bias=True, groups=1) - - self.resConfUnit1 = ResidualConvUnit_custom(features, activation, bn) - self.resConfUnit2 = ResidualConvUnit_custom(features, activation, bn) - - self.skip_add = nn.quantized.FloatFunctional() - - def forward(self, *xs): - """Forward pass. - - Returns: - tensor: output - """ - output = xs[0] - - if len(xs) == 2: - res = self.resConfUnit1(xs[1]) - output = self.skip_add.add(output, res) - # output += res - - output = self.resConfUnit2(output) - - output = nn.functional.interpolate( - output, scale_factor=2, mode="bilinear", align_corners=self.align_corners - ) - - output = self.out_conv(output) - - return output - diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/io_.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/io_.py deleted file mode 100644 index 0976223422731574789f5ed7fc30c167a2db03fc..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/io_.py +++ /dev/null @@ -1,216 +0,0 @@ -#coding=utf-8 -''' -Created on 2016年9月27日 - -@author: dengdan - -Tool functions for file system operation and I/O. -In the style of linux shell commands -''' -import os -import pickle as pkl -# import commands -import logging - -# import util - -def mkdir(path): - """ - If the target directory does not exists, it and its parent directories will created. 
- """ - path = get_absolute_path(path) - if not exists(path): - os.makedirs(path) - return path - -def make_parent_dir(path): - """make the parent directories for a file.""" - parent_dir = get_dir(path) - mkdir(parent_dir) - - -def pwd(): - return os.getcwd() - -def dump(path, obj): - path = get_absolute_path(path) - parent_path = get_dir(path) - mkdir(parent_path) - with open(path, 'w') as f: - logging.info('dumping file:' + path); - pkl.dump(obj, f) - -def load(path): - path = get_absolute_path(path) - with open(path, 'r') as f: - data = pkl.load(f) - return data - -def join_path(a, *p): - return os.path.join(a, *p) - -def is_dir(path): - path = get_absolute_path(path) - return os.path.isdir(path) - - -def is_path(path): - path = get_absolute_path(path) - return os.path.ispath(path) - -def get_dir(path): - ''' - return the directory it belongs to. - if path is a directory itself, itself will be return - ''' - path = get_absolute_path(path) - if is_dir(path): - return path; - return os.path.split(path)[0] - -def get_filename(path): - return os.path.split(path)[1] - -def get_absolute_path(p): - if p.startswith('~'): - p = os.path.expanduser(p) - return os.path.abspath(p) - -def cd(p): - p = get_absolute_path(p) - os.chdir(p) - -# def ls(path = '.', suffix = None): -# """ -# list files in a directory. -# return file names in a list -# """ -# path = get_absolute_path(path) -# files = os.listdir(path) -# -# if suffix is None: -# return files -# -# filtered = [] -# for f in files: -# if util.str.ends_with(f, suffix, ignore_case = True): -# filtered.append(f) -# -# return filtered - -def find_files(pattern): - import glob - return glob.glob(pattern) - -def read_lines(p): - """return the text in a file in lines as a list """ - p = get_absolute_path(p) - f = open(p,'r') - return f.readlines() - -def write_lines(p, lines): - p = get_absolute_path(p) - make_parent_dir(p) - with open(p, 'w') as f: - for line in lines: - f.write(line) - - -# def cat(p): -# """return the text in a file as a whole""" -# cmd = 'cat ' + p -# return commands.getoutput(cmd) - -def exists(path): - path = get_absolute_path(path) - return os.path.exists(path) - -def load_mat(path): - import scipy.io as sio - path = get_absolute_path(path) - return sio.loadmat(path) - -def dump_mat(path, dict_obj, append = True): - import scipy.io as sio - path = get_absolute_path(path) - make_parent_dir(path) - sio.savemat(file_name = path, mdict = dict_obj, appendmat = append) - -def dir_mat(path): - ''' - list the variables in mat file. - return a list: [(name, shape, dtype), ...] - ''' - import scipy.io as sio - path = get_absolute_path(path) - return sio.whosmat(path) - -SIZE_UNIT_K = 1024 -SIZE_UNIT_M = SIZE_UNIT_K ** 2 -SIZE_UNIT_G = SIZE_UNIT_K ** 3 -def get_file_size(path, unit = SIZE_UNIT_K): - size = os.path.getsize(get_absolute_path(path)) - return size * 1.0 / unit - - -def create_h5(path): - import h5py - path = get_absolute_path(path) - make_parent_dir(path) - return h5py.File(path, 'w'); - -def open_h5(path, mode = 'r'): - import h5py - path = get_absolute_path(path) - return h5py.File(path, mode); - -def read_h5(h5, key): - return h5[key][:] -def read_h5_attrs(h5, key, attrs): - return h5[key].attrs[attrs] - -def copy(src, dest): - import shutil - shutil.copy(get_absolute_path(src), get_absolute_path(dest)) - -cp = copy - -def remove(p): - import os - os.remove(get_absolute_path(p)) -rm = remove - -# def search(pattern, path, file_only = True): -# """ -# Search files whose name matches the give pattern. 
The search scope -# is the directory and sub-directories of 'path'. -# """ -# path = get_absolute_path(path) -# pattern_here = util.io.join_path(path, pattern) -# targets = [] -# -# # find matchings in current directory -# candidates = find_files(pattern_here) -# for can in candidates: -# if util.io.is_dir(can) and file_only: -# continue -# else: -# targets.append(can) -# -# # find matching in sub-dirs -# files = ls(path) -# for f in files: -# fpath = util.io.join_path(path, f) -# if is_dir(fpath): -# targets_in_sub_dir = search(pattern, fpath, file_only) -# targets.extend(targets_in_sub_dir) -# return targets - -def dump_json(path, data): - import json - path = get_absolute_path(path) - make_parent_dir(path) - - with open(path, 'w') as f: - json.dump(data, f) - return path \ No newline at end of file diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-4ccfb72c.css b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-4ccfb72c.css deleted file mode 100644 index a528c508c9856f09311ecdc208c5d65121782769..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/frontend/assets/index-4ccfb72c.css +++ /dev/null @@ -1 +0,0 @@ -.wrap.svelte-1sc8eck{display:flex;flex-direction:column;flex-flow:column;margin:0;padding:0;height:100%}.codemirror-wrapper.svelte-1sc8eck{height:100%;overflow:auto}.cm-editor{height:100%}.cm-selectionBackground{background-color:#b9d2ff30!important}.cm-focused{outline:none!important}button.svelte-qi7jcw{position:relative;cursor:pointer;padding:5px;width:22px;height:22px}.check.svelte-qi7jcw{position:absolute;top:0;right:0;z-index:var(--layer-top);background:var(--background-fill-primary);padding:var(--size-1);width:100%;height:100%;color:var(--body-text-color)}a.svelte-14d303a{position:relative;cursor:pointer;padding:5px;width:22px;height:22px}.copied.svelte-14d303a{color:var(--color-green-500)}.check.svelte-14d303a{position:absolute;top:0;right:0;z-index:var(--layer-top);background:var(--background-fill-primary);padding:var(--size-1);width:100%;height:100%;color:var(--body-text-color)}div.svelte-1yin446{display:flex;position:absolute;top:var(--block-label-margin);right:var(--block-label-margin);align-items:center;z-index:var(--layer-2);transition:.15s;box-shadow:var(--shadow-drop);border:1px solid var(--border-color-primary);border-top:none;border-right:none;border-radius:var(--block-label-right-radius);background:var(--block-label-background-fill);overflow:hidden;color:var(--block-label-text-color);font:var(--font);font-size:var(--button-small-text-size)} diff --git a/spaces/Dagfinn1962/stablediffusion-articlera/theme.css b/spaces/Dagfinn1962/stablediffusion-articlera/theme.css deleted file mode 100644 index a4ed4c30008a88731ce406110152855c4dbfba1e..0000000000000000000000000000000000000000 --- a/spaces/Dagfinn1962/stablediffusion-articlera/theme.css +++ /dev/null @@ -1 +0,0 @@ -{"theme": {"background_accent": "*primary_500", "background_accent_soft": "#919cbf", "background_accent_soft_dark": "*neutral_700", "background_primary": "#586794", "background_primary_dark": "*neutral_950", "background_secondary": "#586794", "background_secondary_dark": "*neutral_900", "block_background": "#7280ad", "block_background_dark": "#31395294", "block_border_color": "*border_color_primary", "block_border_color_dark": "*border_color_primary", "block_border_width": "1px", "block_info_color": "#f8f8f2", "block_info_color_dark": 
"#f8f8f2", "block_info_text_size": "*text_sm", "block_info_text_weight": "400", "block_label_background": "*background_primary", "block_label_background_dark": "*background_secondary", "block_label_border_color": "*border_color_primary", "block_label_border_color_dark": "*border_color_primary", "block_label_border_width": "1px", "block_label_icon_color": "*block_label_text_color", "block_label_margin": "0", "block_label_padding": "*spacing_sm *spacing_lg", "block_label_radius": "calc(*radius_lg - 1px) 0 calc(*radius_lg - 1px) 0", "block_label_right_radius": "0 calc(*radius_lg - 1px) 0 calc(*radius_lg - 1px)", "block_label_text_color": "#f8f8f2", "block_label_text_color_dark": "#f8f8f2", "block_label_text_size": "*text_sm", "block_label_text_weight": "400", "block_padding": "*spacing_xl calc(*spacing_xl + 2px)", "block_radius": "*radius_lg", "block_shadow": "none", "block_title_background": "none", "block_title_border_color": "none", "block_title_border_width": "0px", "block_title_padding": "0", "block_title_radius": "none", "block_title_text_color": "#f8f8f2", "block_title_text_color_dark": "#f8f8f2", "block_title_text_size": "*text_md", "block_title_text_weight": "400", "body_background": "#586794", "body_background_dark": "*background_primary", "body_text_color": "#f8f8f2", "body_text_color_dark": "#f8f8f2", "body_text_color_subdued": "#f8f8f2", "body_text_color_subdued_dark": "*neutral_400", "body_text_size": "*text_md", "body_text_weight": "400", "border_color_accent": "#818eb6", "border_color_accent_dark": "*neutral_600", "border_color_primary": "*neutral_200", "border_color_primary_dark": "*neutral_700", "button_border_width": "*input_border_width", "button_cancel_background": "*button_secondary_background", "button_cancel_background_dark": "*button_secondary_background", "button_cancel_background_hover": "*button_cancel_background", "button_cancel_background_hover_dark": "*button_cancel_background", "button_cancel_border_color": "*button_secondary_border_color", "button_cancel_border_color_dark": "*button_secondary_border_color", "button_cancel_border_color_hover": "*button_cancel_border_color", "button_cancel_border_color_hover_dark": "*button_cancel_border_color", "button_cancel_text_color": "*button_secondary_text_color", "button_cancel_text_color_dark": "*button_secondary_text_color", "button_cancel_text_color_hover": "*button_cancel_text_color", "button_cancel_text_color_hover_dark": "*button_cancel_text_color", "button_large_padding": "*spacing_lg calc(2 * *spacing_lg)", "button_large_radius": "*radius_lg", "button_large_text_size": "*text_lg", "button_large_text_weight": "600", "button_primary_background": "#ffa1d7", "button_primary_background_dark": "#ff79c6", "button_primary_background_hover": "*button_primary_background", "button_primary_background_hover_dark": "*button_primary_background", "button_primary_border_color": "*primary_200", "button_primary_border_color_dark": "*primary_600", "button_primary_border_color_hover": "*button_primary_border_color", "button_primary_border_color_hover_dark": "*button_primary_border_color", "button_primary_text_color": "*primary_600", "button_primary_text_color_dark": "white", "button_primary_text_color_hover": "*button_primary_text_color", "button_primary_text_color_hover_dark": "*button_primary_text_color", "button_secondary_background": "*neutral_200", "button_secondary_background_dark": "*neutral_600", "button_secondary_background_hover": "*button_secondary_background", "button_secondary_background_hover_dark": 
"*button_secondary_background", "button_secondary_border_color": "*neutral_200", "button_secondary_border_color_dark": "*neutral_600", "button_secondary_border_color_hover": "*button_secondary_border_color", "button_secondary_border_color_hover_dark": "*button_secondary_border_color", "button_secondary_text_color": "#f8f8f2", "button_secondary_text_color_dark": "white", "button_secondary_text_color_hover": "*button_secondary_text_color", "button_secondary_text_color_hover_dark": "*button_secondary_text_color", "button_shadow": "none", "button_shadow_active": "none", "button_shadow_hover": "none", "button_small_padding": "*spacing_sm calc(2 * *spacing_sm)", "button_small_radius": "*radius_lg", "button_small_text_size": "*text_md", "button_small_text_weight": "400", "button_transition": "background-color 0.2s ease", "checkbox_background": "*background_primary", "checkbox_background_dark": "*neutral_800", "checkbox_background_focus": "*checkbox_background", "checkbox_background_focus_dark": "*checkbox_background", "checkbox_background_hover": "*checkbox_background", "checkbox_background_hover_dark": "*checkbox_background", "checkbox_background_selected": "#ff79c6", "checkbox_background_selected_dark": "#ff79c6", "checkbox_border_color": "*neutral_300", "checkbox_border_color_dark": "*neutral_700", "checkbox_border_color_focus": "*secondary_500", "checkbox_border_color_focus_dark": "*secondary_500", "checkbox_border_color_hover": "*neutral_300", "checkbox_border_color_hover_dark": "*neutral_600", "checkbox_border_color_selected": "*secondary_600", "checkbox_border_color_selected_dark": "*secondary_600", "checkbox_border_radius": "*radius_sm", "checkbox_border_width": "*input_border_width", "checkbox_label_background": "*button_secondary_background", "checkbox_label_background_dark": "*button_secondary_background", "checkbox_label_background_hover": "*button_secondary_background_hover", "checkbox_label_background_hover_dark": "*button_secondary_background_hover", "checkbox_label_background_selected": "*checkbox_label_background", "checkbox_label_background_selected_dark": "*checkbox_label_background", "checkbox_label_border_color": "*border_color_primary", "checkbox_label_border_color_dark": "*border_color_primary", "checkbox_label_border_color_hover": "*checkbox_label_border_color", "checkbox_label_border_color_hover_dark": "*checkbox_label_border_color", "checkbox_label_border_width": "*input_border_width", "checkbox_label_gap": "*spacing_lg", "checkbox_label_padding": "*spacing_md calc(2 * *spacing_md)", "checkbox_label_shadow": "none", "checkbox_label_text_size": "*text_md", "checkbox_label_text_weight": "400", "checkbox_shadow": "*input_shadow", "checkbox_text_color": "*body_text_color", "checkbox_text_color_dark": "*body_text_color", "checkbox_text_color_selected": "*checkbox_text_color", "checkbox_text_color_selected_dark": "*checkbox_text_color", "container_radius": "*radius_lg", "embed_radius": "*radius_lg", "error_background": "#fee2e2", "error_background_dark": "*background_primary", "error_border_color": "#fecaca", "error_border_color_dark": "*border_color_primary", "error_border_width": "1px", "error_color": "#ef4444", "error_color_dark": "#ef4444", "font": "'Poppins'", "font_mono": "'IBM Plex Mono', 'ui-monospace', 'Consolas', monospace", "form_gap_width": "0px", "header_text_weight": "600", "input_background": "*neutral_100", "input_background_dark": "*neutral_700", "input_background_focus": "*secondary_500", "input_background_focus_dark": "*secondary_600", 
"input_background_hover": "*input_background", "input_background_hover_dark": "*input_background", "input_border_color": "*border_color_primary", "input_border_color_dark": "*border_color_primary", "input_border_color_focus": "*secondary_300", "input_border_color_focus_dark": "*neutral_700", "input_border_color_hover": "*input_border_color", "input_border_color_hover_dark": "*input_border_color", "input_border_width": "0px", "input_padding": "*spacing_xl", "input_placeholder_color": "*neutral_400", "input_placeholder_color_dark": "*neutral_500", "input_radius": "*radius_lg", "input_shadow": "none", "input_shadow_focus": "*input_shadow", "input_text_size": "*text_md", "input_text_weight": "400", "layout_gap": "*spacing_xxl", "link_text_color": "*secondary_600", "link_text_color_active": "*secondary_600", "link_text_color_active_dark": "*secondary_500", "link_text_color_dark": "*secondary_500", "link_text_color_hover": "*secondary_700", "link_text_color_hover_dark": "*secondary_400", "link_text_color_visited": "*secondary_500", "link_text_color_visited_dark": "*secondary_600", "loader_color": "*background_accent", "neutral_100": "#919cbf", "neutral_200": "#818eb6", "neutral_300": "#7280ad", "neutral_400": "#6272a4", "neutral_50": "#a1aac8", "neutral_500": "#586794", "neutral_600": "#4e5b83", "neutral_700": "#455073", "neutral_800": "#3b4462", "neutral_900": "#313952", "neutral_950": "#272e42", "panel_background": "*background_secondary", "panel_background_dark": "#31395294", "panel_border_color": "*border_color_primary", "panel_border_color_dark": "*border_color_primary", "panel_border_width": "0", "primary_100": "#fce7f3", "primary_200": "#fbcfe8", "primary_300": "#f9a8d4", "primary_400": "#f472b6", "primary_50": "#fdf2f8", "primary_500": "#ec4899", "primary_600": "#db2777", "primary_700": "#be185d", "primary_800": "#9d174d", "primary_900": "#831843", "primary_950": "#6e1a3d", "prose_text_size": "*text_md", "prose_text_weight": "400", "radius_lg": "8px", "radius_md": "6px", "radius_sm": "4px", "radius_xl": "12px", "radius_xs": "2px", "radius_xxl": "22px", "radius_xxs": "1px", "secondary_100": "#dbeafe", "secondary_200": "#bfdbfe", "secondary_300": "#93c5fd", "secondary_400": "#60a5fa", "secondary_50": "#eff6ff", "secondary_500": "#3b82f6", "secondary_600": "#2563eb", "secondary_700": "#1d4ed8", "secondary_800": "#1e40af", "secondary_900": "#1e3a8a", "secondary_950": "#1d3660", "section_header_text_size": "*text_md", "section_header_text_weight": "400", "shadow_drop": "rgba(0,0,0,0.05) 0px 1px 2px 0px", "shadow_drop_lg": "0 1px 3px 0 rgb(0 0 0 / 0.1), 0 1px 2px -1px rgb(0 0 0 / 0.1)", "shadow_inset": "rgba(0,0,0,0.05) 0px 2px 4px 0px inset", "shadow_spread": "3px", "shadow_spread_dark": "1px", "slider_color": "#ffa1d7", "slider_color_dark": "#ff79c6", "spacing_lg": "8px", "spacing_md": "6px", "spacing_sm": "4px", "spacing_xl": "10px", "spacing_xs": "2px", "spacing_xxl": "16px", "spacing_xxs": "1px", "stat_color_background": "*primary_300", "stat_color_background_dark": "*primary_500", "table_border_color": "*neutral_300", "table_border_color_dark": "*neutral_700", "table_even_background": "#7280ad", "table_even_background_dark": "*neutral_950", "table_odd_background": "*neutral_50", "table_odd_background_dark": "*neutral_900", "table_radius": "*radius_lg", "table_row_focus": "*background_accent_soft", "table_row_focus_dark": "*background_accent_soft", "text_lg": "16px", "text_md": "14px", "text_sm": "12px", "text_xl": "22px", "text_xs": "10px", "text_xxl": "26px", "text_xxs": "9px"}, 
"version": "0.3.2"} \ No newline at end of file diff --git a/spaces/Djacon/emotion_detection/files/js/summarizer.js b/spaces/Djacon/emotion_detection/files/js/summarizer.js deleted file mode 100644 index c9fbec8ff8617baa9cb74472fdf97ab8366cb01d..0000000000000000000000000000000000000000 --- a/spaces/Djacon/emotion_detection/files/js/summarizer.js +++ /dev/null @@ -1,213 +0,0 @@ -// Form Divs -const sumText = document.getElementById('sum-text-div'); -const sumFile = document.getElementById('sum-file-div') -const sumVideo = document.getElementById('sum-video-div'); - -// Form Data -const selectOption = document.getElementById('sum-type'); -const sumTextInput = document.getElementById('sum-text-input'); -const sumFileInput = document.getElementById('sum-file-input'); -const sumVideoInput = document.getElementById('sum-video-input'); - -// Error Output Section -const sumError = document.getElementById('sum-err'); - -// Result Section -const extractText = document.getElementById('extracted-text'); -const summaryText = document.getElementById('summarized-text'); - -// Word Counter -const wordsCount = document.getElementById('word-counter'); - -// Tabs -const original = document.getElementById('sum-original'); -const summary = document.getElementById('sum-summary'); -const showOriginal = document.getElementById('show-original'); -const showSummary = document.getElementById('show-summary'); - -const MAX_SIZE = 20000; - - -function _summarize() { - var xhr = new XMLHttpRequest(); - xhr.open('POST', '/predict_summarization', true); - xhr.setRequestHeader('Content-Type', 'application/json'); - - var data = JSON.stringify({ 'sum_type': selectOption.value, 'text': extractText.value }); - - xhr.onreadystatechange = function () { - if (xhr.readyState === 4 && xhr.status === 200) { - result = xhr.responseText.split('\\n').join('\n'); - summaryText.value = result.slice(1, -1); - _show_summary(); - } - }; - - xhr.send(data); - return; -} - -function _extractFile() { - const file = sumFileInput.files[0]; - if (file.type === 'text/plain') { - const reader = new FileReader(); - reader.onload = function() { - sumTextInput.value = reader.result.slice(0, MAX_SIZE); - }; - reader.readAsText(file, 'CP1251'); - return; - } else if (file.type === 'application/pdf') { - sumTextInput.value = ''; - const reader = new FileReader(); - reader.onload = function (e) { - const pdfData = e.target.result; - pdfjsLib.getDocument(pdfData).promise.then(function (pdfDocument) { - for (let pageNum = 1; pageNum <= pdfDocument.numPages; pageNum++) { - pdfDocument.getPage(pageNum).then(function (pdfPage) { - pdfPage.getTextContent().then(function (textContent) { - let size = sumTextInput.value.length; - let pageText = []; - for (const textItem of textContent.items) { - pageText.push(textItem.str); - size += textItem.str.length; - if (size > MAX_SIZE) break; - } - sumTextInput.value += pageText.join(' '); - }); - }); - } - }); - }; - reader.readAsDataURL(file); - } - return; -} - - -async function summarize(event) { - event.preventDefault(); - - switch (selectOption.value) { - case 'sum-text': - len = sumTextInput.value.trim().length - if (len < 250) { - sumError.innerText = `The text size should be at least 250 characters (${len} < 250)`; - sumError.classList.remove('hidden'); - return; - } - break; - case 'sum-video': - regex = /^((((http)s?:\/\/)?((www\.)|(m\.))?youtube.com\/watch\?([^\?]*&)?v=.+)|(((http)s?:\/\/)?youtu.be\/([^\?=]+)(\?[^?]+)?))$/ - if (!sumVideoInput.value.match(regex)) { - sumError.innerText = 'Invalid youtube 
link'; - sumError.classList.remove('hidden'); - return; - } - break; - } - - sumError.classList.add('hidden'); - - _show_summary(); - - // Here we can finally summarize data - summaryText.value = 'Please wait...'; - switch (selectOption.value) { - case 'sum-text': - extractText.value = sumTextInput.value.trim().slice(0, MAX_SIZE); - break; - case 'sum-video': - extractText.value = sumVideoInput.value.slice(0, MAX_SIZE); - break; - } - _summarize(); -} - - -function _update_option() { - switch (selectOption.value) { - case 'sum-text': - sumText.classList.remove('hidden'); - sumVideo.classList.add('hidden'); - - sumTextInput.setAttribute('required', ''); - sumVideoInput.removeAttribute('required'); - break; - case 'sum-video': - sumText.classList.add('hidden'); - sumVideo.classList.remove('hidden'); - - sumTextInput.removeAttribute('required'); - sumVideoInput.setAttribute('required', ''); - break; - } - sumError.classList.add('hidden'); -} - -function _update_counter() { - let text = sumTextInput.value.trim() - if (text === '') { - sumFile.classList.remove('hidden'); - wordsCount.classList.add('hidden'); - return; - } - - sumFile.classList.add('hidden'); - wordsCount.classList.remove('hidden'); - wordsCount.innerHTML = `Words: ${text.split(/\s+/).length} | Chars: ${text.length}` -} - - -function _show_summary() { - showOriginal.classList.remove('bg-gray-100'); - showSummary.classList.add('bg-gray-100'); - - summary.classList.remove('hidden'); - original.classList.add('hidden'); -} - -function _show_original() { - showOriginal.classList.add('bg-gray-100'); - showSummary.classList.remove('bg-gray-100'); - - original.classList.remove('hidden'); - summary.classList.add('hidden'); -} - - -document.addEventListener('DOMContentLoaded', function () { - selectOption.addEventListener('change', _update_option); - - var submitButton = document.getElementById('submit'); - submitButton.addEventListener('click', summarize); - - sumFileInput.addEventListener('change', async function() { - const allowedTypes = ['application/pdf', 'text/plain']; - const file = sumFileInput.files[0]; - - if (!file) { - sumError.classList.remove('hidden'); - return; - } - - if (!allowedTypes.includes(file.type)) { - sumError.innerText = 'Not supported type (Only `.pdf` or `.txt`)'; - sumError.classList.remove('hidden'); - return; - } - - // Back to main option - selectOption.options[0].selected = true; - _update_option(); - _extractFile(); - - await (new Promise(resolve => setTimeout(resolve, 1000))); - _update_counter(); - sumError.classList.add('hidden'); - }); - - sumTextInput.addEventListener('input', _update_counter); - - showSummary.addEventListener('click', _show_summary); - showOriginal.addEventListener('click', _show_original); -}); \ No newline at end of file diff --git a/spaces/DrSong/ChatGLM-6B-ChatBot/README.md b/spaces/DrSong/ChatGLM-6B-ChatBot/README.md deleted file mode 100644 index 77b42a5d73f4a1327e60713d688c515bc49ff41b..0000000000000000000000000000000000000000 --- a/spaces/DrSong/ChatGLM-6B-ChatBot/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ChatGLM 6B ChatBot -emoji: 🐨 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/dnnlib/tflib/optimizer.py b/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/dnnlib/tflib/optimizer.py 
deleted file mode 100644 index cae5ffff3d11aaccd705d6936e080175ab97dd0e..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/PTI/models/StyleCLIP/global_directions/dnnlib/tflib/optimizer.py +++ /dev/null @@ -1,372 +0,0 @@ -# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Helper wrapper for a Tensorflow optimizer.""" - -import platform -import numpy as np -import tensorflow as tf - -from collections import OrderedDict -from typing import List, Union - -from . import autosummary -from . import tfutil -from .. import util - -from .tfutil import TfExpression, TfExpressionEx - -_collective_ops_warning_printed = False -_collective_ops_group_key = 831766147 -_collective_ops_instance_key = 436340067 - -class Optimizer: - """A Wrapper for tf.train.Optimizer. - - Automatically takes care of: - - Gradient averaging for multi-GPU training. - - Gradient accumulation for arbitrarily large minibatches. - - Dynamic loss scaling and typecasts for FP16 training. - - Ignoring corrupted gradients that contain NaNs/Infs. - - Reporting statistics. - - Well-chosen default settings. - """ - - def __init__(self, - name: str = "Train", # Name string that will appear in TensorFlow graph. - tf_optimizer: str = "tf.train.AdamOptimizer", # Underlying optimizer class. - learning_rate: TfExpressionEx = 0.001, # Learning rate. Can vary over time. - minibatch_multiplier: TfExpressionEx = None, # Treat N consecutive minibatches as one by accumulating gradients. - share: "Optimizer" = None, # Share internal state with a previously created optimizer? - use_loss_scaling: bool = False, # Enable dynamic loss scaling for robust mixed-precision training? - loss_scaling_init: float = 64.0, # Log2 of initial loss scaling factor. - loss_scaling_inc: float = 0.0005, # Log2 of per-minibatch loss scaling increment when there is no overflow. - loss_scaling_dec: float = 1.0, # Log2 of per-minibatch loss scaling decrement when there is an overflow. - report_mem_usage: bool = False, # Report fine-grained memory usage statistics in TensorBoard? - **kwargs): - - # Public fields. - self.name = name - self.learning_rate = learning_rate - self.minibatch_multiplier = minibatch_multiplier - self.id = self.name.replace("/", ".") - self.scope = tf.get_default_graph().unique_name(self.id) - self.optimizer_class = util.get_obj_by_name(tf_optimizer) - self.optimizer_kwargs = dict(kwargs) - self.use_loss_scaling = use_loss_scaling - self.loss_scaling_init = loss_scaling_init - self.loss_scaling_inc = loss_scaling_inc - self.loss_scaling_dec = loss_scaling_dec - - # Private fields. - self._updates_applied = False - self._devices = OrderedDict() # device_name => EasyDict() - self._shared_optimizers = OrderedDict() # device_name => optimizer_class - self._gradient_shapes = None # [shape, ...] - self._report_mem_usage = report_mem_usage - - # Validate arguments. - assert callable(self.optimizer_class) - - # Share internal state if requested. 
- if share is not None: - assert isinstance(share, Optimizer) - assert self.optimizer_class is share.optimizer_class - assert self.learning_rate is share.learning_rate - assert self.optimizer_kwargs == share.optimizer_kwargs - self._shared_optimizers = share._shared_optimizers # pylint: disable=protected-access - - def _get_device(self, device_name: str): - """Get internal state for the given TensorFlow device.""" - tfutil.assert_tf_initialized() - if device_name in self._devices: - return self._devices[device_name] - - # Initialize fields. - device = util.EasyDict() - device.name = device_name - device.optimizer = None # Underlying optimizer: optimizer_class - device.loss_scaling_var = None # Log2 of loss scaling: tf.Variable - device.grad_raw = OrderedDict() # Raw gradients: var => [grad, ...] - device.grad_clean = OrderedDict() # Clean gradients: var => grad - device.grad_acc_vars = OrderedDict() # Accumulation sums: var => tf.Variable - device.grad_acc_count = None # Accumulation counter: tf.Variable - device.grad_acc = OrderedDict() # Accumulated gradients: var => grad - - # Setup TensorFlow objects. - with tfutil.absolute_name_scope(self.scope + "/Devices"), tf.device(device_name), tf.control_dependencies(None): - if device_name not in self._shared_optimizers: - optimizer_name = self.scope.replace("/", "_") + "_opt%d" % len(self._shared_optimizers) - self._shared_optimizers[device_name] = self.optimizer_class(name=optimizer_name, learning_rate=self.learning_rate, **self.optimizer_kwargs) - device.optimizer = self._shared_optimizers[device_name] - if self.use_loss_scaling: - device.loss_scaling_var = tf.Variable(np.float32(self.loss_scaling_init), trainable=False, name="loss_scaling_var") - - # Register device. - self._devices[device_name] = device - return device - - def register_gradients(self, loss: TfExpression, trainable_vars: Union[List, dict]) -> None: - """Register the gradients of the given loss function with respect to the given variables. - Intended to be called once per GPU.""" - tfutil.assert_tf_initialized() - assert not self._updates_applied - device = self._get_device(loss.device) - - # Validate trainables. - if isinstance(trainable_vars, dict): - trainable_vars = list(trainable_vars.values()) # allow passing in Network.trainables as vars - assert isinstance(trainable_vars, list) and len(trainable_vars) >= 1 - assert all(tfutil.is_tf_expression(expr) for expr in trainable_vars + [loss]) - assert all(var.device == device.name for var in trainable_vars) - - # Validate shapes. - if self._gradient_shapes is None: - self._gradient_shapes = [var.shape.as_list() for var in trainable_vars] - assert len(trainable_vars) == len(self._gradient_shapes) - assert all(var.shape.as_list() == var_shape for var, var_shape in zip(trainable_vars, self._gradient_shapes)) - - # Report memory usage if requested. - deps = [loss] - if self._report_mem_usage: - self._report_mem_usage = False - try: - with tf.name_scope(self.id + '_mem'), tf.device(device.name), tf.control_dependencies([loss]): - deps.append(autosummary.autosummary(self.id + "/mem_usage_gb", tf.contrib.memory_stats.BytesInUse() / 2**30)) - except tf.errors.NotFoundError: - pass - - # Compute gradients. 
- with tf.name_scope(self.id + "_grad"), tf.device(device.name), tf.control_dependencies(deps): - loss = self.apply_loss_scaling(tf.cast(loss, tf.float32)) - gate = tf.train.Optimizer.GATE_NONE # disable gating to reduce memory usage - grad_list = device.optimizer.compute_gradients(loss=loss, var_list=trainable_vars, gate_gradients=gate) - - # Register gradients. - for grad, var in grad_list: - if var not in device.grad_raw: - device.grad_raw[var] = [] - device.grad_raw[var].append(grad) - - def apply_updates(self, allow_no_op: bool = False) -> tf.Operation: - """Construct training op to update the registered variables based on their gradients.""" - tfutil.assert_tf_initialized() - assert not self._updates_applied - self._updates_applied = True - all_ops = [] - - # Check for no-op. - if allow_no_op and len(self._devices) == 0: - with tfutil.absolute_name_scope(self.scope): - return tf.no_op(name='TrainingOp') - - # Clean up gradients. - for device_idx, device in enumerate(self._devices.values()): - with tfutil.absolute_name_scope(self.scope + "/Clean%d" % device_idx), tf.device(device.name): - for var, grad in device.grad_raw.items(): - - # Filter out disconnected gradients and convert to float32. - grad = [g for g in grad if g is not None] - grad = [tf.cast(g, tf.float32) for g in grad] - - # Sum within the device. - if len(grad) == 0: - grad = tf.zeros(var.shape) # No gradients => zero. - elif len(grad) == 1: - grad = grad[0] # Single gradient => use as is. - else: - grad = tf.add_n(grad) # Multiple gradients => sum. - - # Scale as needed. - scale = 1.0 / len(device.grad_raw[var]) / len(self._devices) - scale = tf.constant(scale, dtype=tf.float32, name="scale") - if self.minibatch_multiplier is not None: - scale /= tf.cast(self.minibatch_multiplier, tf.float32) - scale = self.undo_loss_scaling(scale) - device.grad_clean[var] = grad * scale - - # Sum gradients across devices. - if len(self._devices) > 1: - with tfutil.absolute_name_scope(self.scope + "/Broadcast"), tf.device(None): - if platform.system() == "Windows": # Windows => NCCL ops are not available. - self._broadcast_fallback() - elif tf.VERSION.startswith("1.15."): # TF 1.15 => NCCL ops are broken: https://github.com/tensorflow/tensorflow/issues/41539 - self._broadcast_fallback() - else: # Otherwise => NCCL ops are safe to use. - self._broadcast_nccl() - - # Apply updates separately on each device. - for device_idx, device in enumerate(self._devices.values()): - with tfutil.absolute_name_scope(self.scope + "/Apply%d" % device_idx), tf.device(device.name): - # pylint: disable=cell-var-from-loop - - # Accumulate gradients over time. - if self.minibatch_multiplier is None: - acc_ok = tf.constant(True, name='acc_ok') - device.grad_acc = OrderedDict(device.grad_clean) - else: - # Create variables. - with tf.control_dependencies(None): - for var in device.grad_clean.keys(): - device.grad_acc_vars[var] = tf.Variable(tf.zeros(var.shape), trainable=False, name="grad_acc_var") - device.grad_acc_count = tf.Variable(tf.zeros([]), trainable=False, name="grad_acc_count") - - # Track counter. - count_cur = device.grad_acc_count + 1.0 - count_inc_op = lambda: tf.assign(device.grad_acc_count, count_cur) - count_reset_op = lambda: tf.assign(device.grad_acc_count, tf.zeros([])) - acc_ok = (count_cur >= tf.cast(self.minibatch_multiplier, tf.float32)) - all_ops.append(tf.cond(acc_ok, count_reset_op, count_inc_op)) - - # Track gradients. 
- for var, grad in device.grad_clean.items(): - acc_var = device.grad_acc_vars[var] - acc_cur = acc_var + grad - device.grad_acc[var] = acc_cur - with tf.control_dependencies([acc_cur]): - acc_inc_op = lambda: tf.assign(acc_var, acc_cur) - acc_reset_op = lambda: tf.assign(acc_var, tf.zeros(var.shape)) - all_ops.append(tf.cond(acc_ok, acc_reset_op, acc_inc_op)) - - # No overflow => apply gradients. - all_ok = tf.reduce_all(tf.stack([acc_ok] + [tf.reduce_all(tf.is_finite(g)) for g in device.grad_acc.values()])) - apply_op = lambda: device.optimizer.apply_gradients([(tf.cast(grad, var.dtype), var) for var, grad in device.grad_acc.items()]) - all_ops.append(tf.cond(all_ok, apply_op, tf.no_op)) - - # Adjust loss scaling. - if self.use_loss_scaling: - ls_inc_op = lambda: tf.assign_add(device.loss_scaling_var, self.loss_scaling_inc) - ls_dec_op = lambda: tf.assign_sub(device.loss_scaling_var, self.loss_scaling_dec) - ls_update_op = lambda: tf.group(tf.cond(all_ok, ls_inc_op, ls_dec_op)) - all_ops.append(tf.cond(acc_ok, ls_update_op, tf.no_op)) - - # Last device => report statistics. - if device_idx == len(self._devices) - 1: - all_ops.append(autosummary.autosummary(self.id + "/learning_rate", tf.convert_to_tensor(self.learning_rate))) - all_ops.append(autosummary.autosummary(self.id + "/overflow_frequency", tf.where(all_ok, 0, 1), condition=acc_ok)) - if self.use_loss_scaling: - all_ops.append(autosummary.autosummary(self.id + "/loss_scaling_log2", device.loss_scaling_var)) - - # Initialize variables. - self.reset_optimizer_state() - if self.use_loss_scaling: - tfutil.init_uninitialized_vars([device.loss_scaling_var for device in self._devices.values()]) - if self.minibatch_multiplier is not None: - tfutil.run([var.initializer for device in self._devices.values() for var in list(device.grad_acc_vars.values()) + [device.grad_acc_count]]) - - # Group everything into a single op. 
- with tfutil.absolute_name_scope(self.scope): - return tf.group(*all_ops, name="TrainingOp") - - def reset_optimizer_state(self) -> None: - """Reset internal state of the underlying optimizer.""" - tfutil.assert_tf_initialized() - tfutil.run([var.initializer for device in self._devices.values() for var in device.optimizer.variables()]) - - def get_loss_scaling_var(self, device: str) -> Union[tf.Variable, None]: - """Get or create variable representing log2 of the current dynamic loss scaling factor.""" - return self._get_device(device).loss_scaling_var - - def apply_loss_scaling(self, value: TfExpression) -> TfExpression: - """Apply dynamic loss scaling for the given expression.""" - assert tfutil.is_tf_expression(value) - if not self.use_loss_scaling: - return value - return value * tfutil.exp2(self.get_loss_scaling_var(value.device)) - - def undo_loss_scaling(self, value: TfExpression) -> TfExpression: - """Undo the effect of dynamic loss scaling for the given expression.""" - assert tfutil.is_tf_expression(value) - if not self.use_loss_scaling: - return value - return value * tfutil.exp2(-self.get_loss_scaling_var(value.device)) # pylint: disable=invalid-unary-operand-type - - def _broadcast_nccl(self): - """Sum gradients across devices using NCCL ops (fast path).""" - from tensorflow.python.ops import nccl_ops # pylint: disable=no-name-in-module - for all_vars in zip(*[device.grad_clean.keys() for device in self._devices.values()]): - if any(x.shape.num_elements() > 0 for x in all_vars): - all_grads = [device.grad_clean[var] for device, var in zip(self._devices.values(), all_vars)] - all_grads = nccl_ops.all_sum(all_grads) - for device, var, grad in zip(self._devices.values(), all_vars, all_grads): - device.grad_clean[var] = grad - - def _broadcast_fallback(self): - """Sum gradients across devices using TensorFlow collective ops (slow fallback path).""" - from tensorflow.python.ops import collective_ops # pylint: disable=no-name-in-module - global _collective_ops_warning_printed, _collective_ops_group_key, _collective_ops_instance_key - if all(x.shape.num_elements() == 0 for device in self._devices.values() for x in device.grad_clean.values()): - return - if not _collective_ops_warning_printed: - print("------------------------------------------------------------------------") - print("WARNING: Using slow fallback implementation for inter-GPU communication.") - print("Please use TensorFlow 1.14 on Linux for optimal training performance.") - print("------------------------------------------------------------------------") - _collective_ops_warning_printed = True - for device in self._devices.values(): - with tf.device(device.name): - combo = [tf.reshape(x, [x.shape.num_elements()]) for x in device.grad_clean.values()] - combo = tf.concat(combo, axis=0) - combo = collective_ops.all_reduce(combo, merge_op='Add', final_op='Id', - group_size=len(self._devices), group_key=_collective_ops_group_key, - instance_key=_collective_ops_instance_key) - cur_ofs = 0 - for var, grad_old in device.grad_clean.items(): - grad_new = tf.reshape(combo[cur_ofs : cur_ofs + grad_old.shape.num_elements()], grad_old.shape) - cur_ofs += grad_old.shape.num_elements() - device.grad_clean[var] = grad_new - _collective_ops_instance_key += 1 - - -class SimpleAdam: - """Simplified version of tf.train.AdamOptimizer that behaves identically when used with dnnlib.tflib.Optimizer.""" - - def __init__(self, name="Adam", learning_rate=0.001, beta1=0.9, beta2=0.999, epsilon=1e-8): - self.name = name - self.learning_rate = 
learning_rate - self.beta1 = beta1 - self.beta2 = beta2 - self.epsilon = epsilon - self.all_state_vars = [] - - def variables(self): - return self.all_state_vars - - def compute_gradients(self, loss, var_list, gate_gradients=tf.train.Optimizer.GATE_NONE): - assert gate_gradients == tf.train.Optimizer.GATE_NONE - return list(zip(tf.gradients(loss, var_list), var_list)) - - def apply_gradients(self, grads_and_vars): - with tf.name_scope(self.name): - state_vars = [] - update_ops = [] - - # Adjust learning rate to deal with startup bias. - with tf.control_dependencies(None): - b1pow_var = tf.Variable(dtype=tf.float32, initial_value=1, trainable=False) - b2pow_var = tf.Variable(dtype=tf.float32, initial_value=1, trainable=False) - state_vars += [b1pow_var, b2pow_var] - b1pow_new = b1pow_var * self.beta1 - b2pow_new = b2pow_var * self.beta2 - update_ops += [tf.assign(b1pow_var, b1pow_new), tf.assign(b2pow_var, b2pow_new)] - lr_new = self.learning_rate * tf.sqrt(1 - b2pow_new) / (1 - b1pow_new) - - # Construct ops to update each variable. - for grad, var in grads_and_vars: - with tf.control_dependencies(None): - m_var = tf.Variable(dtype=tf.float32, initial_value=tf.zeros_like(var), trainable=False) - v_var = tf.Variable(dtype=tf.float32, initial_value=tf.zeros_like(var), trainable=False) - state_vars += [m_var, v_var] - m_new = self.beta1 * m_var + (1 - self.beta1) * grad - v_new = self.beta2 * v_var + (1 - self.beta2) * tf.square(grad) - var_delta = lr_new * m_new / (tf.sqrt(v_new) + self.epsilon) - update_ops += [tf.assign(m_var, m_new), tf.assign(v_var, v_new), tf.assign_sub(var, var_delta)] - - # Group everything together. - self.all_state_vars += state_vars - return tf.group(*update_ops) diff --git a/spaces/Eddycrack864/Applio-Inference/infer/lib/audio.py b/spaces/Eddycrack864/Applio-Inference/infer/lib/audio.py deleted file mode 100644 index 9ad4ff74218957cf18782fa71add40a734b47e78..0000000000000000000000000000000000000000 --- a/spaces/Eddycrack864/Applio-Inference/infer/lib/audio.py +++ /dev/null @@ -1,197 +0,0 @@ -import librosa -import numpy as np -import av -from io import BytesIO -import ffmpeg -import os -import sys - -import random -from infer.lib.csvutil import CSVutil -#import csv - -platform_stft_mapping = { - 'linux': 'stftpitchshift', - 'darwin': 'stftpitchshift', - 'win32': 'stftpitchshift.exe', -} - -stft = platform_stft_mapping.get(sys.platform) - -def wav2(i, o, format): - inp = av.open(i, 'rb') - if format == "m4a": format = "mp4" - out = av.open(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - if format == "mp4": format = "aac" - - ostream = out.add_stream(format) - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - for p in ostream.encode(None): out.mux(p) - - out.close() - inp.close() - -def audio2(i, o, format, sr): - inp = av.open(i, 'rb') - out = av.open(o, 'wb', format=format) - if format == "ogg": format = "libvorbis" - if format == "f32le": format = "pcm_f32le" - - ostream = out.add_stream(format, channels=1) - ostream.sample_rate = sr - - for frame in inp.decode(audio=0): - for p in ostream.encode(frame): out.mux(p) - - out.close() - inp.close() - -def load_audion(file, sr): - try: - file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - with open(file, "rb") as f: - with BytesIO() as out: - audio2(f, out, "f32le", sr) - return np.frombuffer(out.getvalue(), np.float32).flatten() - - except AttributeError: - audio = file[1] / 32768.0 - if len(audio.shape) == 2: - 
audio = np.mean(audio, -1) - return librosa.resample(audio, orig_sr=file[0], target_sr=16000) - - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - - - -def load_audio(file, sr, DoFormant=False, Quefrency=1.0, Timbre=1.0): - converted = False - DoFormant, Quefrency, Timbre = CSVutil("csvdb/formanting.csv", "r", "formanting") - try: - # https://github.com/openai/whisper/blob/main/whisper/audio.py#L26 - # This launches a subprocess to decode audio while down-mixing and resampling as necessary. - # Requires the ffmpeg CLI and `ffmpeg-python` package to be installed. - file = ( - file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - ) # 防止小白拷路径头尾带了空格和"和回车 - file_formanted = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - - # print(f"dofor={bool(DoFormant)} timbr={Timbre} quef={Quefrency}\n") - - if ( - lambda DoFormant: True - if DoFormant.lower() == "true" - else (False if DoFormant.lower() == "false" else DoFormant) - )(DoFormant): - numerator = round(random.uniform(1, 4), 4) - # os.system(f"stftpitchshift -i {file} -q {Quefrency} -t {Timbre} -o {file_formanted}") - # print('stftpitchshift -i "%s" -p 1.0 --rms -w 128 -v 8 -q %s -t %s -o "%s"' % (file, Quefrency, Timbre, file_formanted)) - - if not file.endswith(".wav"): - if not os.path.isfile(f"{file_formanted}.wav"): - converted = True - # print(f"\nfile = {file}\n") - # print(f"\nfile_formanted = {file_formanted}\n") - converting = ( - ffmpeg.input(file_formanted, threads=0) - .output(f"{file_formanted}.wav") - .run( - cmd=["ffmpeg", "-nostdin"], - capture_stdout=True, - capture_stderr=True, - ) - ) - else: - pass - - file_formanted = ( - f"{file_formanted}.wav" - if not file_formanted.endswith(".wav") - else file_formanted - ) - - print(f" · Formanting {file_formanted}...\n") - - os.system( - '%s -i "%s" -q "%s" -t "%s" -o "%sFORMANTED_%s.wav"' - % ( - stft, - file_formanted, - Quefrency, - Timbre, - file_formanted, - str(numerator), - ) - ) - - print(f" · Formanted {file_formanted}!\n") - - # filepraat = (os.path.abspath(os.getcwd()) + '\\' + file).replace('/','\\') - # file_formantedpraat = ('"' + os.path.abspath(os.getcwd()) + '/' + 'formanted'.join(file_formanted) + '"').replace('/','\\') - # print("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - - out, _ = ( - ffmpeg.input( - "%sFORMANTED_%s.wav" % (file_formanted, str(numerator)), threads=0 - ) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - - try: - os.remove("%sFORMANTED_%s.wav" % (file_formanted, str(numerator))) - except Exception: - pass - print("couldn't remove formanted type of file") - - else: - out, _ = ( - ffmpeg.input(file, threads=0) - .output("-", format="f32le", acodec="pcm_f32le", ac=1, ar=sr) - .run( - cmd=["ffmpeg", "-nostdin"], capture_stdout=True, capture_stderr=True - ) - ) - except Exception as e: - raise RuntimeError(f"Failed to load audio: {e}") - - if converted: - try: - os.remove(file_formanted) - except Exception: - pass - print("couldn't remove converted type of file") - converted = False - - return np.frombuffer(out, np.float32).flatten() - - -def check_audio_duration(file): - try: - file = file.strip(" ").strip('"').strip("\n").strip('"').strip(" ") - - probe = ffmpeg.probe(file) - - duration = float(probe['streams'][0]['duration']) - - if duration < 0.76: - print( - f"\n------------\n" - f"Audio file, {file.split('/')[-1]}, under ~0.76s detected - file is too short. 
Target at least 1-2s for best results." - f"\n------------\n\n" - ) - return False - - return True - except Exception as e: - raise RuntimeError(f"Failed to check audio duration: {e}") \ No newline at end of file diff --git a/spaces/ElainaFanBoy/MusicGen/tests/modules/test_rope.py b/spaces/ElainaFanBoy/MusicGen/tests/modules/test_rope.py deleted file mode 100644 index 067c6f067acbf27fb0fef5c2b812c22474c4fcd0..0000000000000000000000000000000000000000 --- a/spaces/ElainaFanBoy/MusicGen/tests/modules/test_rope.py +++ /dev/null @@ -1,168 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import torch - -from audiocraft.modules.rope import RotaryEmbedding -from audiocraft.modules.transformer import StreamingTransformer, set_efficient_attention_backend - - -def test_rope(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_rope_io_dtypes(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope_32 = RotaryEmbedding(dim=C, dtype=torch.float32) - rope_64 = RotaryEmbedding(dim=C, dtype=torch.float64) - - # Test bfloat16 inputs w/ both 32 and 64 precision rope. - xq_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xk_16 = torch.rand((B, T, H, C)).to(torch.bfloat16) - xq_out, xk_out = rope_32.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - xq_out, xk_out = rope_64.rotate_qk(xq_16, xk_16) - assert xq_out.dtype == torch.bfloat16 - - # Test float32 inputs w/ both 32 and 64 precision rope. 
- xq_32 = torch.rand((B, T, H, C)).to(torch.float32) - xk_32 = torch.rand((B, T, H, C)).to(torch.float32) - xq_out, xk_out = rope_32.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - xq_out, xk_out = rope_64.rotate_qk(xq_32, xk_32) - assert xq_out.dtype == torch.float32 - - -def test_transformer_with_rope(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - for pos in ['rope', 'sin_rope']: - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding=pos) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - out = tr(x) - assert list(out.shape) == list(x.shape) - - -@torch.no_grad() -def test_rope_streaming(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, causal=True, dropout=0., - custom=True, positional_embedding='rope') - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -@torch.no_grad() -def test_rope_streaming_past_context(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - - for context in [None, 10]: - tr = StreamingTransformer( - 16, 4, 1 if context else 2, - causal=True, past_context=context, custom=True, - dropout=0., positional_embedding='rope') - tr.eval() - - steps = 20 - x = torch.randn(3, steps, 16) - ref = tr(x) - - with tr.streaming(): - outs = [] - frame_sizes = [1] * steps - - for frame_size in frame_sizes: - frame = x[:, :frame_size] - x = x[:, frame_size:] - outs.append(tr(frame)) - - out = torch.cat(outs, dim=1) - assert list(out.shape) == [3, steps, 16] - delta = torch.norm(out - ref) / torch.norm(out) - assert delta < 1e-6, delta - - -def test_rope_memory_efficient(): - set_efficient_attention_backend('xformers') - torch.manual_seed(1234) - tr = StreamingTransformer( - 16, 4, 2, custom=True, dropout=0., layer_scale=0.1, - positional_embedding='rope') - tr_mem_efficient = StreamingTransformer( - 16, 4, 2, dropout=0., memory_efficient=True, layer_scale=0.1, - positional_embedding='rope') - tr_mem_efficient.load_state_dict(tr.state_dict()) - tr.eval() - steps = 12 - x = torch.randn(3, steps, 16) - - with torch.no_grad(): - y = tr(x) - y2 = tr_mem_efficient(x) - # Check at float precision b/c this is the rope default. 
- assert torch.allclose(y, y2, atol=1e-7), (y - y2).norm() - - -def test_rope_with_xpos(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert list(xq_out.shape) == [B, T, H, C] - assert list(xk_out.shape) == [B, T, H, C] - - -def test_positional_scale(): - set_efficient_attention_backend('xformers') - B, T, H, C = 8, 75, 16, 128 - - rope = RotaryEmbedding(dim=C, xpos=True, scale=0.0) - xq = torch.rand((B, T, H, C)) - xk = torch.rand((B, T, H, C)) - xq_out, xk_out = rope.rotate_qk(xq, xk, start=7) - - assert torch.allclose(xq, xq_out) - assert torch.allclose(xk, xk_out) diff --git a/spaces/EuroPython2022/viciu/README.md b/spaces/EuroPython2022/viciu/README.md deleted file mode 100644 index 86a60497e5b58ffb3a62af0c0f0aa02414b3a160..0000000000000000000000000000000000000000 --- a/spaces/EuroPython2022/viciu/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Viciu -emoji: 💩 -colorFrom: gray -colorTo: red -sdk: gradio -sdk_version: 3.0.26 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/FathomNet/MBARI_Monterey_Bay_Benthic/tator_inference.py b/spaces/FathomNet/MBARI_Monterey_Bay_Benthic/tator_inference.py deleted file mode 100644 index 5593e9ccccb6bbd222b50220b5ffe65ab486c300..0000000000000000000000000000000000000000 --- a/spaces/FathomNet/MBARI_Monterey_Bay_Benthic/tator_inference.py +++ /dev/null @@ -1,100 +0,0 @@ -import os -import logging -from tempfile import TemporaryFile - -import cv2 -import numpy as np -from PIL import Image - -import tator -import inference - - -logger = logging.getLogger(__name__) -logger.setLevel(logging.INFO) - -# Read environment variables that are provided from TATOR -host = os.getenv('HOST') -token = os.getenv('TOKEN') -project_id = int(os.getenv('PROJECT_ID')) -media_ids = [int(id_) for id_ in os.getenv('MEDIA_IDS').split(',')] -frames_per_inference = int(os.getenv('FRAMES_PER_INFERENCE', 30)) - -# Set up the TATOR API. -api = tator.get_api(host, token) - -# Iterate through each video. -for media_id in media_ids: - - # Download video. - media = api.get_media(media_id) - logger.info(f"Downloading {media.name}...") - out_path = f"/tmp/{media.name}" - for progress in tator.util.download_media(api, media, out_path): - logger.info(f"Download progress: {progress}%") - - # Do inference on each video. - logger.info(f"Doing inference on {media.name}...") - localizations = [] - vid = cv2.VideoCapture(out_path) - frame_number = 0 - - # Read *every* frame from the video, break when at the end. - while True: - ret, frame = vid.read() - if not ret: - break - - # Create a temporary file, access the image data, save data to file. - framefile = TemporaryFile(suffix='.jpg') - im = Image.fromarray(frame) - im.save(framefile) - - # For every N frames, make a prediction; append prediction results - # to a list, increase the frame count. 
- if frame_number % frames_per_inference == 0: - - # Predictions contains all information inside pandas dataframe - predictions = inference.run_inference(framefile) - - for i, r in predictions.pandas().xyxy[0].iterrows(): - - # Build a fresh localization spec for each detection so entries are not overwritten. - spec = {} - spec['media_id'] = media_id - spec['type'] = None # Unsure, docs not specific - spec['frame'] = frame_number - - x, y, x2, y2 = r['xmin'], r['ymin'], r['xmax'], r['ymax'] - w, h = x2 - x, y2 - y - - spec['x'] = x - spec['y'] = y - spec['width'] = w - spec['height'] = h - spec['class_category'] = r['name'] - spec['confidence'] = r['confidence'] - - localizations.append(spec) - - frame_number += 1 - - # End interaction with video properly. - vid.release() - - logger.info(f"Uploading object detections on {media.name}...") - - # Create the localizations in the video. - num_created = 0 - for response in tator.util.chunked_create(api.create_localization_list, - project_id, - localization_spec=localizations): - num_created += len(response.id) - - # Output pretty logging information. - logger.info(f"Successfully created {num_created} localizations on " - f"{media.name}!") - - logger.info("-------------------------------------------------") - -logger.info(f"Completed inference on {len(media_ids)} files.") \ No newline at end of file diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/train.py b/spaces/Gen-Sim/Gen-Sim/cliport/train.py deleted file mode 100644 index 27494fe1639322bac086980b833ca0b1932fe02b..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/train.py +++ /dev/null @@ -1,133 +0,0 @@ -"""Main training script.""" - -import os -from pathlib import Path - -import torch -from cliport import agents -from cliport.dataset import RavensDataset, RavensMultiTaskDataset, RavenMultiTaskDatasetBalance - -import hydra -from pytorch_lightning import Trainer -from pytorch_lightning.callbacks import ModelCheckpoint -from pytorch_lightning.loggers import WandbLogger -import numpy as np -from torch.utils.data import DataLoader -from torch.utils.data.dataloader import default_collate -import IPython -import pytorch_lightning as pl -from pytorch_lightning.utilities import rank_zero_only -import datetime -import time -import random - - -def set_seed_everywhere(seed): - torch.manual_seed(seed) - if torch.cuda.is_available(): - torch.cuda.manual_seed_all(seed) - np.random.seed(seed) - random.seed(seed) - -@hydra.main(config_path="./cfg", config_name='train', version_base="1.2") -def main(cfg): - # Logger - set_seed_everywhere(1) - wandb_logger = None - - if cfg['train']['log']: - try: - wandb_logger = WandbLogger(name=cfg['tag']) - except: - pass - - # Checkpoint saver - hydra_dir = Path(os.getcwd()) - checkpoint_path = os.path.join(cfg['train']['train_dir'], 'checkpoints') - last_checkpoint_path = os.path.join(checkpoint_path, 'last.ckpt') - last_checkpoint = last_checkpoint_path if os.path.exists(last_checkpoint_path) and cfg['train']['load_from_last_ckpt'] else None - checkpoint_callback = [ModelCheckpoint( - # monitor=cfg['wandb']['saver']['monitor'], - dirpath=os.path.join(checkpoint_path, 'best'), - save_top_k=1, - every_n_epochs=3, - save_last=True, - # every_n_train_steps=100 - )] - - # Trainer - max_epochs = cfg['train']['n_steps'] * cfg['train']['batch_size'] // cfg['train']['n_demos'] - if cfg['train']['training_step_scale'] > 0: - # scale training time depending on the tasks to ensure coverage.
- max_epochs = cfg['train']['training_step_scale'] # // cfg['train']['batch_size'] - - trainer = Trainer( - accelerator='gpu', - devices=cfg['train']['gpu'], - fast_dev_run=cfg['debug'], - logger=wandb_logger, - callbacks=checkpoint_callback, - max_epochs=max_epochs, - # check_val_every_n_epoch=max_epochs // 50, - # resume_from_checkpoint=last_checkpoint, - sync_batchnorm=True, - log_every_n_steps=30, - ) - - print(f"max epochs: {max_epochs}!") - - # Resume epoch and global_steps - if last_checkpoint: - print(f"Resuming: {last_checkpoint}") - - # Config - data_dir = cfg['train']['data_dir'] - task = cfg['train']['task'] - agent_type = cfg['train']['agent'] - n_demos = cfg['train']['n_demos'] - - if agent_type == 'mdetr': - print('======import torch.multiprocessing to avioid shared memory issue======') - import torch.multiprocessing - torch.multiprocessing.set_sharing_strategy('file_system') - - # n_demos = cfg['train']['n_demos'] - # n_demos = cfg['train']['n_demos'] - n_val = cfg['train']['n_val'] - name = '{}-{}-{}'.format(task, agent_type, n_demos) - - # Datasets - dataset_type = cfg['dataset']['type'] - if 'multi' in dataset_type: - train_ds = RavensMultiTaskDataset(data_dir, cfg, group=task, mode='train', - n_demos=n_demos, augment=True) - val_ds = RavensMultiTaskDataset(data_dir, cfg, group=task, mode='val', n_demos=n_val, augment=False) - elif 'weighted' in dataset_type: - train_ds = RavenMultiTaskDatasetBalance(data_dir, cfg, group=task, mode='train', n_demos=n_demos, augment=True) - val_ds = RavenMultiTaskDatasetBalance(data_dir, cfg, group=task, mode='val', n_demos=n_val, augment=False) - else: - train_ds = RavensDataset(os.path.join(data_dir, '{}-train'.format(task)), cfg, n_demos=n_demos, augment=True) - val_ds = RavensDataset(os.path.join(data_dir, '{}-val'.format(task)), cfg, n_demos=n_val, augment=False) - - # Initialize agent - train_loader = DataLoader(train_ds, shuffle=True, - pin_memory=True, - batch_size=cfg['train']['batch_size'], - num_workers=1 ) - test_loader = DataLoader(val_ds, shuffle=False, - num_workers=1, - batch_size=cfg['train']['batch_size'], - pin_memory=True) - - agent = agents.names[agent_type](name, cfg, train_loader, test_loader) - dt_string = datetime.datetime.now().strftime("%d_%m_%Y_%H:%M:%S") - print("current time:", dt_string) - - start_time = time.time() - # Main training loop - trainer.fit(agent, ckpt_path=last_checkpoint) - - print("current time:", time.time() - start_time) - -if __name__ == '__main__': - main() diff --git a/spaces/GitHunter0/100_prisoners_problem_app/pages/03_Strategies_Simulations.py b/spaces/GitHunter0/100_prisoners_problem_app/pages/03_Strategies_Simulations.py deleted file mode 100644 index 18029206efaccfe35e986b748f3e5d0211781f54..0000000000000000000000000000000000000000 --- a/spaces/GitHunter0/100_prisoners_problem_app/pages/03_Strategies_Simulations.py +++ /dev/null @@ -1,218 +0,0 @@ - -import streamlit as st -import os -import tempfile - -from functions.streamlit_basic import get_binary_file_downloader_html - -from functions.module_project_specific_functions import ( - f_streamlit_hide_menu_and_marks, - f_streamlit_customize_page, - f_100_prisoners_game_simulate_random_strategy, - f_100_prisoners_game_simulate_cf_strategy -) - -exec(open("./functions/module_project_specific_functions.py").read()) - -if False: - @st.experimental_memo - def f_100_prisoners_game_simulate_cf_strategy_cached(**kwargs): - - success_rate = \ - f_100_prisoners_game_simulate_cf_strategy(**kwargs) - - # return is necessary - return success_rate 
- - -#%%% Page Configuration - -# set_page_config() can only be called once per app, and must be called as -# the first Streamlit command in your script. -st.set_page_config( - page_title = "100 Prisoners Game Riddle", - page_icon='www/100_prisoners_problem_favicon_1.jpg', # None ":memo:", ... - layout='wide', # centered, wide - initial_sidebar_state='auto' # auto, expanded, collapsed -) - -# Hide Hamburger Menu and Streamlit logo 'Made with Streamlit' -f_streamlit_hide_menu_and_marks() - -# -f_streamlit_customize_page( - padding_top="10px", margin_top="10px" -) - -#%% strategy - -# NOTE: "delete=False" is necessary to make it work on Windows -# https://stackoverflow.com/questions/15169101/how-to-create-a-temporary-file-that-can-be-read-by-a-subprocess - -# This makes sure to use the same temporary file -# (in case of a cached function, this would prevent it to rerun) -if 'log_path1' not in st.session_state: - - tempf = tempfile.NamedTemporaryFile(suffix=".txt", delete=False) - st.session_state["log_path1"] = tempf.name - -log_path1 = st.session_state["log_path1"] -# print(log_path1.replace("\\","/")) -# os.remove(log_path1) - -if 'log_path2' not in st.session_state: - tempf = tempfile.NamedTemporaryFile(suffix=".txt", delete=False) - st.session_state["log_path2"] = tempf.name - -log_path2 = st.session_state["log_path2"] - - -with st.form(key="cf_strategy_simulation"): - - st.markdown(f'''<h3 style='text-align: center; font-weight: bold;' > - GAMES SIMULATIONS <br> - </h3>''', unsafe_allow_html=True) - - cols = st.columns(2) - - with cols[0]: - n_prisoners = st.number_input("Number of Prisoners", value=4, - min_value=2, max_value=180, step=2) - - with cols[1]: - n_games = st.number_input("Games Played (number of simulations)", - value=5, min_value=2, max_value=200, step=1) - - cols = st.columns(2) - with cols[0]: - submit = st.form_submit_button("Generate") - - with cols[1]: - st.markdown('''<p style='text-align: left; font-style: italic; - font-size: 15px;' > - Note: Each game draws random numbers for the boxes.</p>''', unsafe_allow_html=True) - -# BUG: Workaround: https://github.com/streamlit/streamlit/issues/3832#issuecomment-1138994421 -if submit: - st.session_state["submitted"] = True - -if 'submitted' in st.session_state: - - cf_success_rate = \ - f_100_prisoners_game_simulate_cf_strategy( - # f_100_prisoners_game_simulate_cf_strategy_cached( - n_prisoners = n_prisoners, - n_games = n_games, - log_path = log_path1, - display_level = ["ALL", "SHORT", None][0] - ) - - random_success_rate = \ - f_100_prisoners_game_simulate_random_strategy( - n_prisoners = n_prisoners, - n_games = n_games, - log_path = log_path2, - display_level = ["ALL", "SHORT", None][0] - ) - - cols = st.columns(2) - - with cols[0]: - st.markdown('''<h4 style='text-align: center; font-weight: bold;' > - Cycle-Following (Optimal) Strategy </h4>''', unsafe_allow_html=True) - # Simulated, Empirical / Experimental Probability - st.markdown(f'''<h5 style='text-align: center; font-weight: bold; - padding-bottom: 0px;' > - Simulated Probability of Winning the Game (Success Rate) </h5> ''', unsafe_allow_html=True - ) - # - st.markdown(f'''<h3 style='text-align: center; font-weight: bold; - padding-top: 2px;' > - % {cf_success_rate} </h3>''', unsafe_allow_html=True) - - with open(log_path1, "r") as f: - log_str1 = f.read() - # - # log_str - # print(log_str) - # - log_print1 = log_str1.replace('\n', '<br>') - # os.startfile(log_path) - # st.write(log_print) - - # REVIEW: https://github.com/streamlit/streamlit/issues/4382 - # 
with open(log_path1, 'r') as f: - # st.download_button( - # "Download Games Simulations", - # data=f, - # file_name="100_Prisoners_Game_-_Optimal_Strategy.txt", - # mime='text/plain' - # ) - html_link = get_binary_file_downloader_html( - file_path = log_path1, - file_name = "100_Prisoners_Game_-_Optimal_Strategy.txt", - file_label = 'Download Games Simulations', - button_bgcolor = "rgb(72, 47, 142)", - button_bordercolor = "rgba(250, 250, 250, 0.2)" - ) - # Display Link - st.markdown(html_link, unsafe_allow_html=True) - - if n_prisoners > 10 or n_games > 20: - st.warning("Large Games will not be displayed here, " + - "download the file instead.") - else: - st.markdown(log_print1, unsafe_allow_html=True) - - with cols[1]: - - st.markdown('''<h4 style='text-align: center; font-weight: bold;' > - Random strategy </h4>''', unsafe_allow_html=True) - # - st.markdown(f'''<h5 style='text-align: center; font-weight: bold; - padding-bottom: 0px;' > - Simulated Probability of Winning the Game (Success Rate) </h5> ''', unsafe_allow_html=True - ) - # - st.markdown(f'''<h3 style='text-align: center; font-weight: bold; - padding-top: 2px;' > - % {random_success_rate} </h3>''', unsafe_allow_html=True) - - with open(log_path2, "r") as f: - log_str2 = f.read() - # - # log_str - # print(log_str) - # - log_print2 = log_str2.replace('\n', '<br>') - # os.startfile(log_path) - # st.write(log_print) - - # with open(log_path2, 'r') as f: - # st.download_button( - # "Download Games Simulations", - # data=f, - # file_name="100_Prisoners_Game_-_Random_Strategy.txt", - # mime='text/plain' - # ) - html_link = get_binary_file_downloader_html( - file_path = log_path2, - file_name = "100_Prisoners_Game_-_Random_Strategy.txt", - file_label = 'Download Games Simulations', - button_bgcolor = "rgb(72, 47, 142)", - button_bordercolor = "rgba(250, 250, 250, 0.2)" - ) - # Display Download button - st.markdown(html_link, unsafe_allow_html=True) - - if n_prisoners > 10 or n_games > 20: - st.warning("Large Games will not be displayed here, " + - "download the file instead.") - else: - st.markdown(log_print2, unsafe_allow_html=True) - - - -#%% ___________________________________________ - - diff --git a/spaces/Gokul14/impira-layoutlm-document-qa/app.py b/spaces/Gokul14/impira-layoutlm-document-qa/app.py deleted file mode 100644 index c80208650f94f0a6bd291fdf0a78afaf1fcf318b..0000000000000000000000000000000000000000 --- a/spaces/Gokul14/impira-layoutlm-document-qa/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/impira/layoutlm-document-qa").launch() \ No newline at end of file diff --git a/spaces/GordenGhost/Gorden/Dockerfile b/spaces/GordenGhost/Gorden/Dockerfile deleted file mode 100644 index c677b05b75f7e4b2beee8c97fb47957a0861a83e..0000000000000000000000000000000000000000 --- a/spaces/GordenGhost/Gorden/Dockerfile +++ /dev/null @@ -1,7 +0,0 @@ -FROM weaigc/bingo:latest - -ARG DEBIAN_FRONTEND=noninteractive - -ENV BING_HEADER "" - -CMD npm start diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/retinanet_regnetx-1.6GF_fpn_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/retinanet_regnetx-1.6GF_fpn_1x_coco.py deleted file mode 100644 index 4f2beb850ded95402d6b44c80553f224e15fb557..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/regnet/retinanet_regnetx-1.6GF_fpn_1x_coco.py +++ /dev/null @@ -1,16 +0,0 @@ -_base_ = './retinanet_regnetx-3.2GF_fpn_1x_coco.py' -model = dict( - 
pretrained='open-mmlab://regnetx_1.6gf', - backbone=dict( - type='RegNet', - arch='regnetx_1.6gf', - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch'), - neck=dict( - type='FPN', - in_channels=[72, 168, 408, 912], - out_channels=256, - num_outs=5)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_r2_101_fpn_mstrain_2x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_r2_101_fpn_mstrain_2x_coco.py deleted file mode 100644 index 2bcf779db008dbbf0c8f3b1fdc84a9940967f78a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/vfnet/vfnet_r2_101_fpn_mstrain_2x_coco.py +++ /dev/null @@ -1,14 +0,0 @@ -_base_ = './vfnet_r50_fpn_mstrain_2x_coco.py' -model = dict( - pretrained='open-mmlab://res2net101_v1d_26w_4s', - backbone=dict( - type='Res2Net', - depth=101, - scales=4, - base_width=26, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch')) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x512_80k_ade20k.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x512_80k_ade20k.py deleted file mode 100644 index 6ad67722a50c2b2ece5fcb7f0dd1819061ff6b3e..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/ocrnet/ocrnet_hr18_512x512_80k_ade20k.py +++ /dev/null @@ -1,35 +0,0 @@ -_base_ = [ - '../_base_/models/ocrnet_hr18.py', '../_base_/datasets/ade20k.py', - '../_base_/default_runtime.py', '../_base_/schedules/schedule_80k.py' -] -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict(decode_head=[ - dict( - type='FCNHead', - in_channels=[18, 36, 72, 144], - channels=sum([18, 36, 72, 144]), - in_index=(0, 1, 2, 3), - input_transform='resize_concat', - kernel_size=1, - num_convs=1, - concat_input=False, - dropout_ratio=-1, - num_classes=150, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - dict( - type='OCRHead', - in_channels=[18, 36, 72, 144], - in_index=(0, 1, 2, 3), - input_transform='resize_concat', - channels=512, - ocr_channels=256, - dropout_ratio=-1, - num_classes=150, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), -]) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/decode_heads/fcn_head.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/decode_heads/fcn_head.py deleted file mode 100644 index 4ea3742f0b685ffc891d70db86cd6ea058283fd6..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/mmseg/models/decode_heads/fcn_head.py +++ /dev/null @@ -1,81 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import ConvModule - -from ..builder import HEADS -from .decode_head import BaseDecodeHead - - -@HEADS.register_module() -class FCNHead(BaseDecodeHead): - """Fully Convolution Networks for Semantic Segmentation. - - This head is implemented of `FCNNet <https://arxiv.org/abs/1411.4038>`_. - - Args: - num_convs (int): Number of convs in the head. Default: 2. - kernel_size (int): The kernel size for convs in the head. Default: 3. - concat_input (bool): Whether concat the input and output of convs - before classification layer. 
- dilation (int): The dilation rate for convs in the head. Default: 1. - """ - - def __init__(self, - num_convs=2, - kernel_size=3, - concat_input=True, - dilation=1, - **kwargs): - assert num_convs >= 0 and dilation > 0 and isinstance(dilation, int) - self.num_convs = num_convs - self.concat_input = concat_input - self.kernel_size = kernel_size - super(FCNHead, self).__init__(**kwargs) - if num_convs == 0: - assert self.in_channels == self.channels - - conv_padding = (kernel_size // 2) * dilation - convs = [] - convs.append( - ConvModule( - self.in_channels, - self.channels, - kernel_size=kernel_size, - padding=conv_padding, - dilation=dilation, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - for i in range(num_convs - 1): - convs.append( - ConvModule( - self.channels, - self.channels, - kernel_size=kernel_size, - padding=conv_padding, - dilation=dilation, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg)) - if num_convs == 0: - self.convs = nn.Identity() - else: - self.convs = nn.Sequential(*convs) - if self.concat_input: - self.conv_cat = ConvModule( - self.in_channels + self.channels, - self.channels, - kernel_size=kernel_size, - padding=kernel_size // 2, - conv_cfg=self.conv_cfg, - norm_cfg=self.norm_cfg, - act_cfg=self.act_cfg) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs(x) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/modules/test_seanet.py b/spaces/GrandaddyShmax/AudioCraft_Plus/tests/modules/test_seanet.py deleted file mode 100644 index e5c51b340a2f94fb2828b14daf83d5fad645073d..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/modules/test_seanet.py +++ /dev/null @@ -1,115 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -from itertools import product - -import pytest -import torch - -from audiocraft.modules.seanet import SEANetEncoder, SEANetDecoder, SEANetResnetBlock -from audiocraft.modules import StreamableConv1d, StreamableConvTranspose1d - - -class TestSEANetModel: - - def test_base(self): - encoder = SEANetEncoder() - decoder = SEANetDecoder() - - x = torch.randn(1, 1, 24000) - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def test_causal(self): - encoder = SEANetEncoder(causal=True) - decoder = SEANetDecoder(causal=True) - x = torch.randn(1, 1, 24000) - - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def test_conv_skip_connection(self): - encoder = SEANetEncoder(true_skip=False) - decoder = SEANetDecoder(true_skip=False) - - x = torch.randn(1, 1, 24000) - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def test_seanet_encoder_decoder_final_act(self): - encoder = SEANetEncoder(true_skip=False) - decoder = SEANetDecoder(true_skip=False, final_activation='Tanh') - - x = torch.randn(1, 1, 24000) - z = encoder(x) - assert list(z.shape) == [1, 128, 75], z.shape - y = decoder(z) - assert y.shape == x.shape, (x.shape, y.shape) - - def _check_encoder_blocks_norm(self, encoder: SEANetEncoder, n_disable_blocks: int, norm: str): - n_blocks = 0 - for layer in encoder.model: - if isinstance(layer, StreamableConv1d): - n_blocks += 1 - assert layer.conv.norm_type == 'none' if n_blocks <= n_disable_blocks else norm - elif isinstance(layer, SEANetResnetBlock): - for resnet_layer in layer.block: - if isinstance(resnet_layer, StreamableConv1d): - # here we add + 1 to n_blocks as we increment n_blocks just after the block - assert resnet_layer.conv.norm_type == 'none' if (n_blocks + 1) <= n_disable_blocks else norm - - def test_encoder_disable_norm(self): - n_residuals = [0, 1, 3] - disable_blocks = [0, 1, 2, 3, 4, 5, 6] - norms = ['weight_norm', 'none'] - for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms): - encoder = SEANetEncoder(n_residual_layers=n_res, norm=norm, - disable_norm_outer_blocks=disable_blocks) - self._check_encoder_blocks_norm(encoder, disable_blocks, norm) - - def _check_decoder_blocks_norm(self, decoder: SEANetDecoder, n_disable_blocks: int, norm: str): - n_blocks = 0 - for layer in decoder.model: - if isinstance(layer, StreamableConv1d): - n_blocks += 1 - assert layer.conv.norm_type == 'none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm - elif isinstance(layer, StreamableConvTranspose1d): - n_blocks += 1 - assert layer.convtr.norm_type == 'none' if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm - elif isinstance(layer, SEANetResnetBlock): - for resnet_layer in layer.block: - if isinstance(resnet_layer, StreamableConv1d): - assert resnet_layer.conv.norm_type == 'none' \ - if (decoder.n_blocks - n_blocks) < n_disable_blocks else norm - - def test_decoder_disable_norm(self): - n_residuals = [0, 1, 3] - disable_blocks = [0, 1, 2, 3, 4, 5, 6] - norms = ['weight_norm', 'none'] - for n_res, disable_blocks, norm in product(n_residuals, disable_blocks, norms): - decoder = SEANetDecoder(n_residual_layers=n_res, norm=norm, - disable_norm_outer_blocks=disable_blocks) - self._check_decoder_blocks_norm(decoder, disable_blocks, norm) - - def test_disable_norm_raises_exception(self): - # Invalid 
disable_norm_outer_blocks values raise exceptions - with pytest.raises(AssertionError): - SEANetEncoder(disable_norm_outer_blocks=-1) - - with pytest.raises(AssertionError): - SEANetEncoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7) - - with pytest.raises(AssertionError): - SEANetDecoder(disable_norm_outer_blocks=-1) - - with pytest.raises(AssertionError): - SEANetDecoder(ratios=[1, 1, 2, 2], disable_norm_outer_blocks=7) diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/dataset/dataset_TM_eval.py b/spaces/Grezz/generate_human_motion/VQ-Trans/dataset/dataset_TM_eval.py deleted file mode 100644 index 576a53b7dabd8095bed59dcc86199e30f2798838..0000000000000000000000000000000000000000 --- a/spaces/Grezz/generate_human_motion/VQ-Trans/dataset/dataset_TM_eval.py +++ /dev/null @@ -1,217 +0,0 @@ -import torch -from torch.utils import data -import numpy as np -from os.path import join as pjoin -import random -import codecs as cs -from tqdm import tqdm - -import utils.paramUtil as paramUtil -from torch.utils.data._utils.collate import default_collate - - -def collate_fn(batch): - batch.sort(key=lambda x: x[3], reverse=True) - return default_collate(batch) - - -'''For use of training text-2-motion generative model''' -class Text2MotionDataset(data.Dataset): - def __init__(self, dataset_name, is_test, w_vectorizer, feat_bias = 5, max_text_len = 20, unit_length = 4): - - self.max_length = 20 - self.pointer = 0 - self.dataset_name = dataset_name - self.is_test = is_test - self.max_text_len = max_text_len - self.unit_length = unit_length - self.w_vectorizer = w_vectorizer - if dataset_name == 't2m': - self.data_root = './dataset/HumanML3D' - self.motion_dir = pjoin(self.data_root, 'new_joint_vecs') - self.text_dir = pjoin(self.data_root, 'texts') - self.joints_num = 22 - radius = 4 - fps = 20 - self.max_motion_length = 196 - dim_pose = 263 - kinematic_chain = paramUtil.t2m_kinematic_chain - self.meta_dir = 'checkpoints/t2m/VQVAEV3_CB1024_CMT_H1024_NRES3/meta' - elif dataset_name == 'kit': - self.data_root = './dataset/KIT-ML' - self.motion_dir = pjoin(self.data_root, 'new_joint_vecs') - self.text_dir = pjoin(self.data_root, 'texts') - self.joints_num = 21 - radius = 240 * 8 - fps = 12.5 - dim_pose = 251 - self.max_motion_length = 196 - kinematic_chain = paramUtil.kit_kinematic_chain - self.meta_dir = 'checkpoints/kit/VQVAEV3_CB1024_CMT_H1024_NRES3/meta' - - mean = np.load(pjoin(self.meta_dir, 'mean.npy')) - std = np.load(pjoin(self.meta_dir, 'std.npy')) - - if is_test: - split_file = pjoin(self.data_root, 'test.txt') - else: - split_file = pjoin(self.data_root, 'val.txt') - - min_motion_len = 40 if self.dataset_name =='t2m' else 24 - # min_motion_len = 64 - - joints_num = self.joints_num - - data_dict = {} - id_list = [] - with cs.open(split_file, 'r') as f: - for line in f.readlines(): - id_list.append(line.strip()) - - new_name_list = [] - length_list = [] - for name in tqdm(id_list): - try: - motion = np.load(pjoin(self.motion_dir, name + '.npy')) - if (len(motion)) < min_motion_len or (len(motion) >= 200): - continue - text_data = [] - flag = False - with cs.open(pjoin(self.text_dir, name + '.txt')) as f: - for line in f.readlines(): - text_dict = {} - line_split = line.strip().split('#') - caption = line_split[0] - tokens = line_split[1].split(' ') - f_tag = float(line_split[2]) - to_tag = float(line_split[3]) - f_tag = 0.0 if np.isnan(f_tag) else f_tag - to_tag = 0.0 if np.isnan(to_tag) else to_tag - - text_dict['caption'] = caption - text_dict['tokens'] = tokens - if f_tag == 0.0 and 
to_tag == 0.0: - flag = True - text_data.append(text_dict) - else: - try: - n_motion = motion[int(f_tag*fps) : int(to_tag*fps)] - if (len(n_motion)) < min_motion_len or (len(n_motion) >= 200): - continue - new_name = random.choice('ABCDEFGHIJKLMNOPQRSTUVW') + '_' + name - while new_name in data_dict: - new_name = random.choice('ABCDEFGHIJKLMNOPQRSTUVW') + '_' + name - data_dict[new_name] = {'motion': n_motion, - 'length': len(n_motion), - 'text':[text_dict]} - new_name_list.append(new_name) - length_list.append(len(n_motion)) - except: - print(line_split) - print(line_split[2], line_split[3], f_tag, to_tag, name) - # break - - if flag: - data_dict[name] = {'motion': motion, - 'length': len(motion), - 'text': text_data} - new_name_list.append(name) - length_list.append(len(motion)) - except Exception as e: - # print(e) - pass - - name_list, length_list = zip(*sorted(zip(new_name_list, length_list), key=lambda x: x[1])) - self.mean = mean - self.std = std - self.length_arr = np.array(length_list) - self.data_dict = data_dict - self.name_list = name_list - self.reset_max_len(self.max_length) - - def reset_max_len(self, length): - assert length <= self.max_motion_length - self.pointer = np.searchsorted(self.length_arr, length) - print("Pointer Pointing at %d"%self.pointer) - self.max_length = length - - def inv_transform(self, data): - return data * self.std + self.mean - - def forward_transform(self, data): - return (data - self.mean) / self.std - - def __len__(self): - return len(self.data_dict) - self.pointer - - def __getitem__(self, item): - idx = self.pointer + item - name = self.name_list[idx] - data = self.data_dict[name] - # data = self.data_dict[self.name_list[idx]] - motion, m_length, text_list = data['motion'], data['length'], data['text'] - # Randomly select a caption - text_data = random.choice(text_list) - caption, tokens = text_data['caption'], text_data['tokens'] - - if len(tokens) < self.max_text_len: - # pad with "unk" - tokens = ['sos/OTHER'] + tokens + ['eos/OTHER'] - sent_len = len(tokens) - tokens = tokens + ['unk/OTHER'] * (self.max_text_len + 2 - sent_len) - else: - # crop - tokens = tokens[:self.max_text_len] - tokens = ['sos/OTHER'] + tokens + ['eos/OTHER'] - sent_len = len(tokens) - pos_one_hots = [] - word_embeddings = [] - for token in tokens: - word_emb, pos_oh = self.w_vectorizer[token] - pos_one_hots.append(pos_oh[None, :]) - word_embeddings.append(word_emb[None, :]) - pos_one_hots = np.concatenate(pos_one_hots, axis=0) - word_embeddings = np.concatenate(word_embeddings, axis=0) - - if self.unit_length < 10: - coin2 = np.random.choice(['single', 'single', 'double']) - else: - coin2 = 'single' - - if coin2 == 'double': - m_length = (m_length // self.unit_length - 1) * self.unit_length - elif coin2 == 'single': - m_length = (m_length // self.unit_length) * self.unit_length - idx = random.randint(0, len(motion) - m_length) - motion = motion[idx:idx+m_length] - - "Z Normalization" - motion = (motion - self.mean) / self.std - - if m_length < self.max_motion_length: - motion = np.concatenate([motion, - np.zeros((self.max_motion_length - m_length, motion.shape[1])) - ], axis=0) - - return word_embeddings, pos_one_hots, caption, sent_len, motion, m_length, '_'.join(tokens), name - - - - -def DATALoader(dataset_name, is_test, - batch_size, w_vectorizer, - num_workers = 8, unit_length = 4) : - - val_loader = torch.utils.data.DataLoader(Text2MotionDataset(dataset_name, is_test, w_vectorizer, unit_length=unit_length), - batch_size, - shuffle = True, - 
num_workers=num_workers, - collate_fn=collate_fn, - drop_last = True) - return val_loader - - -def cycle(iterable): - while True: - for x in iterable: - yield x diff --git a/spaces/Grezz/generate_human_motion/VQ-Trans/visualize/vis_utils.py b/spaces/Grezz/generate_human_motion/VQ-Trans/visualize/vis_utils.py deleted file mode 100644 index 05728b38e3d6be4bfd83324907e3fa7a3f358071..0000000000000000000000000000000000000000 --- a/spaces/Grezz/generate_human_motion/VQ-Trans/visualize/vis_utils.py +++ /dev/null @@ -1,66 +0,0 @@ -from model.rotation2xyz import Rotation2xyz -import numpy as np -from trimesh import Trimesh -import os -import torch -from visualize.simplify_loc2rot import joints2smpl - -class npy2obj: - def __init__(self, npy_path, sample_idx, rep_idx, device=0, cuda=True): - self.npy_path = npy_path - self.motions = np.load(self.npy_path, allow_pickle=True) - if self.npy_path.endswith('.npz'): - self.motions = self.motions['arr_0'] - self.motions = self.motions[None][0] - self.rot2xyz = Rotation2xyz(device='cpu') - self.faces = self.rot2xyz.smpl_model.faces - self.bs, self.njoints, self.nfeats, self.nframes = self.motions['motion'].shape - self.opt_cache = {} - self.sample_idx = sample_idx - self.total_num_samples = self.motions['num_samples'] - self.rep_idx = rep_idx - self.absl_idx = self.rep_idx*self.total_num_samples + self.sample_idx - self.num_frames = self.motions['motion'][self.absl_idx].shape[-1] - self.j2s = joints2smpl(num_frames=self.num_frames, device_id=device, cuda=cuda) - - if self.nfeats == 3: - print(f'Running SMPLify For sample [{sample_idx}], repetition [{rep_idx}], it may take a few minutes.') - motion_tensor, opt_dict = self.j2s.joint2smpl(self.motions['motion'][self.absl_idx].transpose(2, 0, 1)) # [nframes, njoints, 3] - self.motions['motion'] = motion_tensor.cpu().numpy() - elif self.nfeats == 6: - self.motions['motion'] = self.motions['motion'][[self.absl_idx]] - self.bs, self.njoints, self.nfeats, self.nframes = self.motions['motion'].shape - self.real_num_frames = self.motions['lengths'][self.absl_idx] - - self.vertices = self.rot2xyz(torch.tensor(self.motions['motion']), mask=None, - pose_rep='rot6d', translation=True, glob=True, - jointstype='vertices', - # jointstype='smpl', # for joint locations - vertstrans=True) - self.root_loc = self.motions['motion'][:, -1, :3, :].reshape(1, 1, 3, -1) - self.vertices += self.root_loc - - def get_vertices(self, sample_i, frame_i): - return self.vertices[sample_i, :, :, frame_i].squeeze().tolist() - - def get_trimesh(self, sample_i, frame_i): - return Trimesh(vertices=self.get_vertices(sample_i, frame_i), - faces=self.faces) - - def save_obj(self, save_path, frame_i): - mesh = self.get_trimesh(0, frame_i) - with open(save_path, 'w') as fw: - mesh.export(fw, 'obj') - return save_path - - def save_npy(self, save_path): - data_dict = { - 'motion': self.motions['motion'][0, :, :, :self.real_num_frames], - 'thetas': self.motions['motion'][0, :-1, :, :self.real_num_frames], - 'root_translation': self.motions['motion'][0, -1, :3, :self.real_num_frames], - 'faces': self.faces, - 'vertices': self.vertices[0, :, :, :self.real_num_frames], - 'text': self.motions['text'][0], - 'length': self.real_num_frames, - } - np.save(save_path, data_dict) diff --git a/spaces/Groenewaldt/stabilityai-stable-diffusion-xl-refiner-1.0/app.py b/spaces/Groenewaldt/stabilityai-stable-diffusion-xl-refiner-1.0/app.py deleted file mode 100644 index f0854c17140255ac783638680bc5dce595cc9fd0..0000000000000000000000000000000000000000 --- 
a/spaces/Groenewaldt/stabilityai-stable-diffusion-xl-refiner-1.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/stabilityai/stable-diffusion-xl-refiner-1.0").launch() \ No newline at end of file diff --git a/spaces/Hallucinate/demo/AdaBins-main/infer.py b/spaces/Hallucinate/demo/AdaBins-main/infer.py deleted file mode 100644 index 16a1b88a6d94c57649d36cf7ee5ff1b9547677b1..0000000000000000000000000000000000000000 --- a/spaces/Hallucinate/demo/AdaBins-main/infer.py +++ /dev/null @@ -1,161 +0,0 @@ -import glob -import os - -import numpy as np -import torch -import torch.nn as nn -from PIL import Image -from torchvision import transforms -from tqdm import tqdm - -import model_io -import utils -from models import UnetAdaptiveBins - - -def _is_pil_image(img): - return isinstance(img, Image.Image) - - -def _is_numpy_image(img): - return isinstance(img, np.ndarray) and (img.ndim in {2, 3}) - - -class ToTensor(object): - def __init__(self): - self.normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]) - - def __call__(self, image, target_size=(640, 480)): - # image = image.resize(target_size) - image = self.to_tensor(image) - image = self.normalize(image) - return image - - def to_tensor(self, pic): - if not (_is_pil_image(pic) or _is_numpy_image(pic)): - raise TypeError( - 'pic should be PIL Image or ndarray. Got {}'.format(type(pic))) - - if isinstance(pic, np.ndarray): - img = torch.from_numpy(pic.transpose((2, 0, 1))) - return img - - # handle PIL Image - if pic.mode == 'I': - img = torch.from_numpy(np.array(pic, np.int32, copy=False)) - elif pic.mode == 'I;16': - img = torch.from_numpy(np.array(pic, np.int16, copy=False)) - else: - img = torch.ByteTensor(torch.ByteStorage.from_buffer(pic.tobytes())) - # PIL image mode: 1, L, P, I, F, RGB, YCbCr, RGBA, CMYK - if pic.mode == 'YCbCr': - nchannel = 3 - elif pic.mode == 'I;16': - nchannel = 1 - else: - nchannel = len(pic.mode) - img = img.view(pic.size[1], pic.size[0], nchannel) - - img = img.transpose(0, 1).transpose(0, 2).contiguous() - if isinstance(img, torch.ByteTensor): - return img.float() - else: - return img - - -class InferenceHelper: - def __init__(self, dataset='nyu', device='cuda:0'): - self.toTensor = ToTensor() - self.device = device - if dataset == 'nyu': - self.min_depth = 1e-3 - self.max_depth = 10 - self.saving_factor = 1000 # used to save in 16 bit - model = UnetAdaptiveBins.build(n_bins=256, min_val=self.min_depth, max_val=self.max_depth) - pretrained_path = "./pretrained/AdaBins_nyu.pt" - elif dataset == 'kitti': - self.min_depth = 1e-3 - self.max_depth = 80 - self.saving_factor = 256 - model = UnetAdaptiveBins.build(n_bins=256, min_val=self.min_depth, max_val=self.max_depth) - pretrained_path = "./pretrained/AdaBins_kitti.pt" - else: - raise ValueError("dataset can be either 'nyu' or 'kitti' but got {}".format(dataset)) - - model, _, _ = model_io.load_checkpoint(pretrained_path, model) - model.eval() - self.model = model.to(self.device) - - @torch.no_grad() - def predict_pil(self, pil_image, visualized=False): - # pil_image = pil_image.resize((640, 480)) - img = np.asarray(pil_image) / 255. 
- - img = self.toTensor(img).unsqueeze(0).float().to(self.device) - bin_centers, pred = self.predict(img) - - if visualized: - viz = utils.colorize(torch.from_numpy(pred).unsqueeze(0), vmin=None, vmax=None, cmap='magma') - # pred = np.asarray(pred*1000, dtype='uint16') - viz = Image.fromarray(viz) - return bin_centers, pred, viz - return bin_centers, pred - - @torch.no_grad() - def predict(self, image): - bins, pred = self.model(image) - pred = np.clip(pred.cpu().numpy(), self.min_depth, self.max_depth) - - # Flip - image = torch.Tensor(np.array(image.cpu().numpy())[..., ::-1].copy()).to(self.device) - pred_lr = self.model(image)[-1] - pred_lr = np.clip(pred_lr.cpu().numpy()[..., ::-1], self.min_depth, self.max_depth) - - # Take average of original and mirror - final = 0.5 * (pred + pred_lr) - final = nn.functional.interpolate(torch.Tensor(final), image.shape[-2:], - mode='bilinear', align_corners=True).cpu().numpy() - - final[final < self.min_depth] = self.min_depth - final[final > self.max_depth] = self.max_depth - final[np.isinf(final)] = self.max_depth - final[np.isnan(final)] = self.min_depth - - centers = 0.5 * (bins[:, 1:] + bins[:, :-1]) - centers = centers.cpu().squeeze().numpy() - centers = centers[centers > self.min_depth] - centers = centers[centers < self.max_depth] - - return centers, final - - @torch.no_grad() - def predict_dir(self, test_dir, out_dir): - os.makedirs(out_dir, exist_ok=True) - transform = ToTensor() - all_files = glob.glob(os.path.join(test_dir, "*")) - self.model.eval() - for f in tqdm(all_files): - image = np.asarray(Image.open(f), dtype='float32') / 255. - image = transform(image).unsqueeze(0).to(self.device) - - centers, final = self.predict(image) - # final = final.squeeze().cpu().numpy() - - final = (final * self.saving_factor).astype('uint16') - basename = os.path.basename(f).split('.')[0] - save_path = os.path.join(out_dir, basename + ".png") - - Image.fromarray(final.squeeze()).save(save_path) - - -if __name__ == '__main__': - import matplotlib.pyplot as plt - from time import time - - img = Image.open("test_imgs/classroom__rgb_00283.jpg") - start = time() - inferHelper = InferenceHelper() - centers, pred = inferHelper.predict_pil(img) - print(f"took :{time() - start}s") - plt.imshow(pred.squeeze(), cmap='magma_r') - plt.show() diff --git a/spaces/HaoFeng2019/DocGeoNet/README.md b/spaces/HaoFeng2019/DocGeoNet/README.md deleted file mode 100644 index 4f27c3092c442e70a03d193b6345ed532ae33ec1..0000000000000000000000000000000000000000 --- a/spaces/HaoFeng2019/DocGeoNet/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: DocGeoNet -emoji: 📈 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py deleted file mode 100644 index 44f7989bd863329f763aa62b78df2eb42b3084ea..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/examples/linformer/linformer_src/modules/linformer_sentence_encoder.py +++ /dev/null @@ -1,54 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import math - -import torch.nn as nn -from fairseq.models.transformer import TransformerEncoder - -from .linformer_sentence_encoder_layer import LinformerTransformerEncoderLayer - - -class LinformerTransformerEncoder(TransformerEncoder): - """ - Implementation for a Bi-directional Linformer based Sentence Encoder used - in BERT/XLM style pre-trained models. - - This first computes the token embedding using the token embedding matrix, - position embeddings (if specified) and segment embeddings - (if specified). After applying the specified number of - LinformerEncoderLayers, it outputs all the internal states of the - encoder as well as the final representation associated with the first - token (usually CLS token). - - Input: - - tokens: B x T matrix representing sentences - - segment_labels: B x T matrix representing segment label for tokens - - Output: - - a tuple of the following: - - a list of internal model states used to compute the - predictions where each tensor has shape T x B x C - - sentence representation associated with first input token - in format B x C. - """ - - def __init__(self, args, dictionary, embed_tokens): - self.compress_layer = None - super().__init__(args, dictionary, embed_tokens) - - def build_encoder_layer(self, args): - if self.args.shared_layer_kv_compressed == 1 and self.compress_layer is None: - compress_layer = nn.Linear( - self.args.max_positions, - self.args.max_positions // self.args.compressed, - ) - # intialize parameters for compressed layer - nn.init.xavier_uniform_(compress_layer.weight, gain=1 / math.sqrt(2)) - if self.args.freeze_compress == 1: - compress_layer.weight.requires_grad = False - self.compress_layer = compress_layer - - return LinformerTransformerEncoderLayer(args, self.compress_layer) diff --git a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/scripts/glow/train_glow.sh b/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/scripts/glow/train_glow.sh deleted file mode 100644 index f12939d5d4563de555bf49408fa7a27397e0dae3..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/Vakyansh-Odia-TTS/ttsv/scripts/glow/train_glow.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash - -gender='male' - -config='../../config/glow/'$gender'.json' -modeldir='../../checkpoints/glow/'$gender -logdir='../../logs/glow/'$gender -init=1 # 1 if start from scratch. 0 if start from last checkpoint - - -#################################################### - -if [[ $init -eq 1 ]] -then - python ../../src/glow_tts/init.py -c $config -m $modeldir -l $logdir -fi -python ../../src/glow_tts/train.py -c $config -m $modeldir -l $logdir diff --git a/spaces/Harveenchadha/oiTrans/subword-nmt/get_vocab.py b/spaces/Harveenchadha/oiTrans/subword-nmt/get_vocab.py deleted file mode 100644 index 76eb55904a0bf46c32d140848bda384dad584ca6..0000000000000000000000000000000000000000 --- a/spaces/Harveenchadha/oiTrans/subword-nmt/get_vocab.py +++ /dev/null @@ -1,82 +0,0 @@ -#! 
/usr/bin/env python -from __future__ import print_function - -import os -import sys -import inspect -import warnings -import argparse -import codecs - -from collections import Counter - -# hack for python2/3 compatibility -from io import open -argparse.open = open - -def create_parser(subparsers=None): - - if subparsers: - parser = subparsers.add_parser('get-vocab', - formatter_class=argparse.RawDescriptionHelpFormatter, - description="Generates vocabulary") - else: - parser = argparse.ArgumentParser( - formatter_class=argparse.RawDescriptionHelpFormatter, - description="Generates vocabulary") - - parser.add_argument( - '--input', '-i', type=argparse.FileType('r'), default=sys.stdin, - metavar='PATH', - help="Input file (default: standard input).") - - parser.add_argument( - '--output', '-o', type=argparse.FileType('w'), default=sys.stdout, - metavar='PATH', - help="Output file (default: standard output)") - - return parser - -def get_vocab(train_file, vocab_file): - - c = Counter() - - for line in train_file: - for word in line.strip('\r\n ').split(' '): - if word: - c[word] += 1 - - for key,f in sorted(c.items(), key=lambda x: x[1], reverse=True): - vocab_file.write(key+" "+ str(f) + "\n") - -if __name__ == "__main__": - - currentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe()))) - newdir = os.path.join(currentdir, 'subword_nmt') - if os.path.isdir(newdir): - warnings.simplefilter('default') - warnings.warn( - "this script's location has moved to {0}. This symbolic link will be removed in a future version. Please point to the new location, or install the package and use the command 'subword-nmt'".format(newdir), - DeprecationWarning - ) - - # python 2/3 compatibility - if sys.version_info < (3, 0): - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin) - else: - sys.stderr = codecs.getwriter('UTF-8')(sys.stderr.buffer) - sys.stdout = codecs.getwriter('UTF-8')(sys.stdout.buffer) - sys.stdin = codecs.getreader('UTF-8')(sys.stdin.buffer) - - parser = create_parser() - args = parser.parse_args() - - # read/write files as UTF-8 - if args.input.name != '<stdin>': - args.input = codecs.open(args.input.name, encoding='utf-8') - if args.output.name != '<stdout>': - args.output = codecs.open(args.output.name, 'w', encoding='utf-8') - - get_vocab(args.input, args.output) \ No newline at end of file diff --git a/spaces/Hexamind/swarms/utils.py b/spaces/Hexamind/swarms/utils.py deleted file mode 100644 index 0d127d3578d37df5ceb9e56f85eb352f97481ddb..0000000000000000000000000000000000000000 --- a/spaces/Hexamind/swarms/utils.py +++ /dev/null @@ -1,65 +0,0 @@ -""" -works with all the transformations and calculation associated to position, speed, acceleration -""" - -import numpy as np - - -def position_to_xyz(position: [float]) -> [float]: - """ - allows to get the 3D xyz coordinates from a polar representation - :param position: array (3,) with rho in meter, theta in rad, zed in meter - :return: float array (3,) with x, y, z in meter - """ - pos = position[0] * np.exp(1j * position[1]) - return [np.real(pos), np.imag(pos), position[2]] - - -""" -def _test_position_to_norm(): - assert position_to_norm([param.PERIMETER, 0, 100]) == [1, 0, 1] - assert position_to_norm([0, -np.pi / 2, 0]) == [0, 0.75, 0] - assert position_to_norm([0, np.pi / 2, 0]) == [0, 0.25, 0] -""" - - -def is_in_the_cone(position1: [float], position2: [float], vector2: [float], angle: float) -> bool: - 
""" - checks if the point @ position 2 is in the cone from position 1 with an angle of angle - :param position1: in x, y, z - :param position2: in x, y, z - :param vector2: in x, y, z - :param angle: in rad - :return: - """ - vector1 = np.array(position2, dtype=float) - np.array(position1) - vector1 /= np.linalg.norm(vector1) - vector2 = np.array(vector2, dtype=float) - vector2 /= np.linalg.norm(vector2) - cos_theta = np.dot(vector1, vector2) - if 0 < cos_theta: - theta = np.arcsin(np.sqrt(1 - cos_theta ** 2)) - return theta < angle - return False - - -def _test_is_in_the_cone(): - assert is_in_the_cone([0, 0, 0], [1, 0.1, 0], [1, 0, 0], np.pi / 5) - assert is_in_the_cone([0, 0, 0], [1, 0.1, 0], [0, 1, 0], np.pi / 5) - pass - - -def rhotheta_to_latlon(rho: float, theta: float, lat_tg: float, lon_tg: float) -> [float, float]: - """ - transforms polar coordinates into lat, lon - :param rho: - :param theta: - :param lat_tg: latitude de la target (0,0) - :param lon_tg: longitude de la target (0,0) - :return: - """ - z = rho * np.exp(1j * theta) - lat = np.imag(z) * 360 / (40075 * 1000) + lat_tg - lon = np.real(z) * 360 / (40075 * 1000 * np.cos(np.pi / 180 * lat)) + lon_tg - return lat, lon - diff --git a/spaces/Hushh/Generative_QNA/app.py b/spaces/Hushh/Generative_QNA/app.py deleted file mode 100644 index 24a5a583c3f00280153db8594aa7d3bc8435130e..0000000000000000000000000000000000000000 --- a/spaces/Hushh/Generative_QNA/app.py +++ /dev/null @@ -1,107 +0,0 @@ -import streamlit as st -from run_llama import load_models -import variables as vr -from load_documents import load_documents_fn -import torch -from langchain.vectorstores import Chroma -from langchain.embeddings.sentence_transformer import SentenceTransformerEmbeddings -from langchain.chains import RetrievalQA -from langchain.prompts import PromptTemplate -from langchain.memory import ConversationBufferMemory -from variables import EMBEDDING_MODEL_NAME,MODEL_ID,MODEL_BASENAME -from chromadb.utils import embedding_functions - -print(f"Is CUDA available: {torch.cuda.is_available()}") -# True -print(f"CUDA device: {torch.cuda.get_device_name(torch.cuda.current_device())}") -# Tesla T4 - -def model_memory(): - # Adding history to the model. - template = """Use the following pieces of context to answer the question at the end. If you don't know the answer,\ - just say that you don't know, don't try to make up an answer. - - {context} - - - Question: {question} - Helpful Answer:""" - -# template = """Use the following pieces of context to answer the question at the end. If you don't know the answer,\ -# try to make up an short and sweet answer. -# if the context does not exists or has no relevancy then try answer it on your previous knowledge base.But keep the answer short and precise. 
-# {context} -# -# -# Question: {question} -# Helpful Answer:""" - - prompt = PromptTemplate(input_variables=["context", "question"], template=template) - # memory = ConversationBufferMemory(input_key="question") - return prompt - -with st.sidebar: - st.subheader("Your documents") - global docs - docs = st.file_uploader("Upload your PDFs here and click on 'Process'", accept_multiple_files=True,type=["pdf","docx","csv","xlsx","html"]) - - if st.button("Process"): - with st.spinner("Processing"): - # raw_text = extract_text_from_pdfs(docs) - # all_loaders =classify_and_load_files_into_respective_loaders(docs) - # chroma_vectorstore = index_initializing_upserting_chroma_db(all_loaders) - if docs: - loaded_documents = load_documents_fn(docs) - else: - st.error("Error While loading the documents!!!Try Again!!!") - if "EMBEDDINGS" not in st.session_state: - EMBEDDINGS = SentenceTransformerEmbeddings(model_name=EMBEDDING_MODEL_NAME) - # EMBEDDINGS = embedding_functions.SentenceTransformerEmbeddingFunction(model_name="all-MiniLM-L6-v2") - st.session_state.EMBEDDINGS = EMBEDDINGS - # if "DB" not in st.session_state: - # DB = Chroma( - # persist_directory=loaded_documents, - # embedding_function=st.session_state.EMBEDDINGS, - # client_settings=CHROMA_SETTINGS, - # ) - DB = Chroma.from_documents(loaded_documents, st.session_state.EMBEDDINGS,persist_directory="db") - - st.session_state.DB = DB - - if "RETRIEVER" not in st.session_state: - RETRIEVER = DB.as_retriever() - st.session_state.RETRIEVER = RETRIEVER - - if "LLM" not in st.session_state: - LLM = load_models(model_id=MODEL_ID, model_basename=MODEL_BASENAME) - # st.session_state["LLM"] = LLM - - - - - if "QA" not in st.session_state: - - prompt = model_memory() - - QA = RetrievalQA.from_chain_type( - llm=LLM, - chain_type="stuff", - retriever=RETRIEVER, - return_source_documents=True, - chain_type_kwargs={"prompt": prompt}, - ) - st.session_state["QA"] = QA - st.success("LLM Initialized !!! You Chat with your documents!!") - st.title('Chat With Your Documents') - -prompt = st.text_input('Input your prompt here') -# while True: -if docs is None: - prompt = "" - # If the user hits enter -if prompt: - # Then pass the prompt to the LLM - response = st.session_state["QA"](prompt) - answer, docs = response["result"], response["source_documents"] - # ...and write it out to the screen - st.write(answer) diff --git a/spaces/ICML2022/OFA/fairseq/examples/nonautoregressive_translation/README.md b/spaces/ICML2022/OFA/fairseq/examples/nonautoregressive_translation/README.md deleted file mode 100644 index 8793e225c99732c42c9c19e22075cde37c73341d..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/examples/nonautoregressive_translation/README.md +++ /dev/null @@ -1,146 +0,0 @@ -# Non-autoregressive Neural Machine Translation (NAT) - -This page mainly includes instructions for reproducing results from the following papers -* [Levenshtein Transformer (Gu et al., 2019)](https://arxiv.org/abs/1905.11006). -* [Understanding Knowledge Distillation in Non-autoregressive Machine Translation (Zhou et al., 2019)](https://arxiv.org/abs/1911.02727). 
- -We also provided our own implementations for several popular non-autoregressive-based models as reference:<br> -* [Non-Autoregressive Neural Machine Translation (Gu et al., 2017)](https://arxiv.org/abs/1711.02281)<br> -* [Deterministic Non-Autoregressive Neural Sequence Modeling by Iterative Refinement (Lee et al., 2018)](https://arxiv.org/abs/1802.06901)<br> -* [Insertion Transformer: Flexible Sequence Generation via Insertion Operations (Stern et al., 2019)](https://arxiv.org/abs/1902.03249)<br> -* [Mask-Predict: Parallel Decoding of Conditional Masked Language Models (Ghazvininejad et al., 2019)](https://arxiv.org/abs/1904.09324v2)<br> -* [Fast Structured Decoding for Sequence Models (Sun et al., 2019)](https://arxiv.org/abs/1910.11555) - -## Dataset - -First, follow the [instructions to download and preprocess the WMT'14 En-De dataset](../translation#wmt14-english-to-german-convolutional). -Make sure to learn a joint vocabulary by passing the `--joined-dictionary` option to `fairseq-preprocess`. - -### Knowledge Distillation -Following [Gu et al. 2019](https://arxiv.org/abs/1905.11006), [knowledge distillation](https://arxiv.org/abs/1606.07947) from an autoregressive model can effectively simplify the training data distribution, which is sometimes essential for NAT-based models to learn good translations. -The easiest way of performing distillation is to follow the [instructions of training a standard transformer model](../translation) on the same data, and then decode the training set to produce a distillation dataset for NAT. - -### Download -We also provided the preprocessed [original](http://dl.fbaipublicfiles.com/nat/original_dataset.zip) and [distillation](http://dl.fbaipublicfiles.com/nat/distill_dataset.zip) datasets. Please build the binarized dataset on your own. - - -## Train a model - -Then we can train a nonautoregressive model using the `translation_lev` task and a new criterion `nat_loss`. -Use the `--noise` flag to specify the input noise used on the target sentences. -In default, we run the task for *Levenshtein Transformer*, with `--noise='random_delete'`. Full scripts to run other models can also be found [here](./scripts.md). - -The following command will train a *Levenshtein Transformer* on the binarized dataset. - -```bash -fairseq-train \ - data-bin/wmt14_en_de_distill \ - --save-dir checkpoints \ - --ddp-backend=legacy_ddp \ - --task translation_lev \ - --criterion nat_loss \ - --arch levenshtein_transformer \ - --noise random_delete \ - --share-all-embeddings \ - --optimizer adam --adam-betas '(0.9,0.98)' \ - --lr 0.0005 --lr-scheduler inverse_sqrt \ - --stop-min-lr '1e-09' --warmup-updates 10000 \ - --warmup-init-lr '1e-07' --label-smoothing 0.1 \ - --dropout 0.3 --weight-decay 0.01 \ - --decoder-learned-pos \ - --encoder-learned-pos \ - --apply-bert-init \ - --log-format 'simple' --log-interval 100 \ - --fixed-validation-seed 7 \ - --max-tokens 8000 \ - --save-interval-updates 10000 \ - --max-update 300000 -``` - -## Translate - -Once a model is trained, we can generate translations using an `iterative_refinement_generator` which will based on the model's initial output and iteratively read and greedily refine the translation until (1) the model predicts the same translations for two consecutive iterations; or (2) the generator reaches the maximum iterations (`--iter-decode-max-iter`). Use `--print-step` to check the actual # of iteration for each sentence. 
- -For *Levenshtein Transformer*, it sometimes helps to apply a `--iter-decode-eos-penalty` (typically, 0~3) to penalize the model finishing generation too early and generating too short translations. - -For example, to generate with `--iter-decode-max-iter=9`: -```bash -fairseq-generate \ - data-bin/wmt14_en_de_distill \ - --gen-subset test \ - --task translation_lev \ - --path checkpoints/checkpoint_best.pt \ - --iter-decode-max-iter 9 \ - --iter-decode-eos-penalty 0 \ - --beam 1 --remove-bpe \ - --print-step \ - --batch-size 400 -``` -In the end of the generation, we can see the tokenized BLEU score for the translation. - -## Advanced Decoding Methods -### Ensemble -The NAT models use special implementations of [ensembling](https://github.com/fairinternal/fairseq-py/blob/b98d88da52f2f21f1b169bab8c70c1c4ca19a768/fairseq/sequence_generator.py#L522) to support iterative refinement and a variety of parallel operations in different models, while it shares the same API as standard autoregressive models as follows: -```bash -fairseq-generate \ - data-bin/wmt14_en_de_distill \ - --gen-subset test \ - --task translation_lev \ - --path checkpoint_1.pt:checkpoint_2.pt:checkpoint_3.pt \ - --iter-decode-max-iter 9 \ - --iter-decode-eos-penalty 0 \ - --beam 1 --remove-bpe \ - --print-step \ - --batch-size 400 -``` -We use ``:`` to split multiple models. Note that, not all NAT models support ensembling for now. - - -### Length-beam -For models that predict lengths before decoding (e.g. the vanilla NAT, Mask-Predict, etc), it is possible to improve the translation quality by varying the target lengths around the predicted value, and translating the same example multiple times in parallel. We can select the best translation with the highest scores defined by your model's output. - -Note that, not all models support length beams. For models which dynamically change the lengths (e.g. *Insertion Transformer*, *Levenshtein Transformer*), the same trick does not apply. - -### Re-ranking -If the model generates multiple translations with length beam, we can also introduce an autoregressive model to rerank the translations considering scoring from an autoregressive model is much faster than decoding from that. - -For example, to generate translations with length beam and reranking, -```bash -fairseq-generate \ - data-bin/wmt14_en_de_distill \ - --gen-subset test \ - --task translation_lev \ - --path checkpoints/checkpoint_best.pt:at_checkpoints/checkpoint_best.pt \ - --iter-decode-max-iter 9 \ - --iter-decode-eos-penalty 0 \ - --iter-decode-with-beam 9 \ - --iter-decode-with-external-reranker \ - --beam 1 --remove-bpe \ - --print-step \ - --batch-size 100 -``` -Note that we need to make sure the autoregressive model shares the same vocabulary as our target non-autoregressive model. - - -## Citation - -```bibtex -@incollection{NIPS2019_9297, - title = {Levenshtein Transformer}, - author = {Gu, Jiatao and Wang, Changhan and Zhao, Junbo}, - booktitle = {Advances in Neural Information Processing Systems 32}, - editor = {H. Wallach and H. Larochelle and A. Beygelzimer and F. d\textquotesingle Alch\'{e}-Buc and E. Fox and R. 
Garnett}, - pages = {11179--11189}, - year = {2019}, - publisher = {Curran Associates, Inc.}, - url = {http://papers.nips.cc/paper/9297-levenshtein-transformer.pdf} -} -``` -```bibtex -@article{zhou2019understanding, - title={Understanding Knowledge Distillation in Non-autoregressive Machine Translation}, - author={Zhou, Chunting and Neubig, Graham and Gu, Jiatao}, - journal={arXiv preprint arXiv:1911.02727}, - year={2019} -} -``` diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/models/hubert/__init__.py b/spaces/ICML2022/OFA/fairseq/fairseq/models/hubert/__init__.py deleted file mode 100644 index a1b0eabbdbcaf12b15bb96b329ab1e276256f79a..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/models/hubert/__init__.py +++ /dev/null @@ -1,7 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .hubert import * # noqa -from .hubert_asr import * # noqa diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/optim/fused_adam.py b/spaces/ICML2022/OFA/fairseq/fairseq/optim/fused_adam.py deleted file mode 100644 index 7a6d1f73d53cae24ff94bb0bbc42bcc1de75548a..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/optim/fused_adam.py +++ /dev/null @@ -1,384 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import types - -import torch - - -def get_fused_adam_class(): - """ - Look for the FusedAdam optimizer from apex. We first try to load the - "contrib" interface, which is a bit faster than the main interface, - but is technically deprecated. - """ - try: - # The "deprecated" interface in recent versions of apex is a bit - # faster than the main interface, since we don't use the apex - # optimizer. This can be installed by passing the - # `--deprecated_fused_adam` option when building apex. - global fused_adam_cuda - import importlib - - fused_adam_cuda = importlib.import_module("fused_adam_cuda") - return FusedAdamV1 - except ImportError: - try: - # fallback to the newer interface - from apex.optimizers import FusedAdam as _FusedAdam # noqa - from apex.multi_tensor_apply import multi_tensor_applier - - if multi_tensor_applier.available: - return FusedAdamV2 - except ImportError: - pass - return None - - -class FusedAdamV1(torch.optim.Optimizer): - """ - Implements Adam algorithm. Currently GPU-only. Requires Apex to be installed via - ``python setup.py install --cuda_ext --cpp_ext``. - - It has been proposed in `Adam: A Method for Stochastic Optimization`_. - - Compared to the original version in Apex, the fairseq version casts grads - and params to FP32 internally to support ``--memory-efficient-fp16``. - - Args: - params (iterable): iterable of parameters to optimize or dicts defining - parameter groups. - lr (float, optional): learning rate. (default: 1e-3) - betas (Tuple[float, float], optional): coefficients used for computing - running averages of gradient and its square. (default: (0.9, 0.999)) - eps (float, optional): term added to the denominator to improve - numerical stability. (default: 1e-8) - weight_decay (float, optional): weight decay (L2 penalty) (default: 0) - amsgrad (boolean, optional): whether to use the AMSGrad variant of this - algorithm from the paper `On the Convergence of Adam and Beyond`_ - (default: False) NOT SUPPORTED in FusedAdam! 
- eps_inside_sqrt (boolean, optional): in the 'update parameters' step, - adds eps to the bias-corrected second moment estimate before - evaluating square root instead of adding it to the square root of - second moment estimate as in the original paper. (default: False) - .. _Adam: A Method for Stochastic Optimization: - https://arxiv.org/abs/1412.6980 - .. _On the Convergence of Adam and Beyond: - https://openreview.net/forum?id=ryQu7f-RZ - """ - - def __init__( - self, - params, - lr=1e-3, - bias_correction=True, - betas=(0.9, 0.999), - eps=1e-8, - eps_inside_sqrt=False, - weight_decay=0.0, - max_grad_norm=0.0, - amsgrad=False, - use_fp16_stats=False, - ): - global fused_adam_cuda - import importlib - - fused_adam_cuda = importlib.import_module("fused_adam_cuda") - - if amsgrad: - raise RuntimeError("FusedAdam does not support the AMSGrad variant.") - defaults = { - "lr": lr, - "bias_correction": bias_correction, - "betas": betas, - "eps": eps, - "weight_decay": weight_decay, - "max_grad_norm": max_grad_norm, - } - super().__init__(params, defaults) - self.eps_mode = 0 if eps_inside_sqrt else 1 - - self.use_fp16_stats = use_fp16_stats - self.FLOAT16_MAX = 65504.0 - - @property - def supports_memory_efficient_fp16(self): - return True - - @property - def supports_flat_params(self): - return True - - @property - def supports_step_with_scale(self): - return True - - def step(self, closure=None, grads=None, scale=1.0, grad_norms=None): - """Performs a single optimization step. - Args: - closure (callable, optional): A closure that reevaluates the model - and returns the loss. - grads (list of tensors, optional): weight gradient to use for the - optimizer update. If gradients have type torch.half, parameters - are expected to be in type torch.float. (default: None) - output params (list of tensors, optional): A reduced precision copy - of the updated weights written out in addition to the regular - updated weights. Have to be of same type as gradients. (default: None) - scale (float, optional): factor to divide gradient tensor values - by before applying to weights. 
(default: 1) - """ - loss = None - if closure is not None: - loss = closure() - - if grads is None: - grads_group = [None] * len(self.param_groups) - # backward compatibility - # assuming a list/generator of parameter means single group - elif isinstance(grads, types.GeneratorType): - grads_group = [grads] - elif type(grads[0]) != list: - grads_group = [grads] - else: - grads_group = grads - - if grad_norms is None: - grad_norms = [None] * len(self.param_groups) - - for group, grads_this_group, grad_norm in zip( - self.param_groups, grads_group, grad_norms - ): - if grads_this_group is None: - grads_this_group = [None] * len(group["params"]) - - # compute combined scale factor for this group - combined_scale = scale - if group.get("max_grad_norm", 0) > 0: - # norm is in fact norm*scale - clip = ((grad_norm / scale) + 1e-6) / group["max_grad_norm"] - if clip > 1: - combined_scale = clip * scale - - bias_correction = 1 if group.get("bias_correction", 1) else 0 - - for p, grad in zip(group["params"], grads_this_group): - # note: p.grad should not ever be set for correct - # operation of mixed precision optimizer that sometimes - # sends None gradients - if p.grad is None and grad is None: - continue - if grad is None: - grad = p.grad.data - if grad.is_sparse: - raise RuntimeError( - "FusedAdam does not support sparse gradients, " - "please consider SparseAdam instead" - ) - - if p.device.type == "cpu": - p_data_fp32 = p.data.cuda(non_blocking=True).float() - out_p = torch.tensor([], dtype = torch.float) - else: - p_data_fp32 = p.data.float() - out_p = p.data - - state = self.state[p] - - # State initialization - dtype = torch.float16 if self.use_fp16_stats else p_data_fp32.dtype - if len(state) == 0: - state["step"] = 0 - # Exponential moving average of gradient values - state["exp_avg"] = torch.zeros_like(p_data_fp32, dtype=dtype) - # Exponential moving average of squared gradient values - state["exp_avg_sq"] = torch.zeros_like(p_data_fp32, dtype=dtype) - if self.use_fp16_stats: - state["exp_avg_scale"] = 1.0 - state["exp_avg_sq_scale"] = 1.0 - else: - device = p_data_fp32.device - state["exp_avg"] = state["exp_avg"].to(device, dtype) - state["exp_avg_sq"] = state["exp_avg_sq"].to(device, dtype) - - exp_avg = state["exp_avg"] - exp_avg_sq = state["exp_avg_sq"] - if self.use_fp16_stats: - assert exp_avg.dtype == torch.float16 - exp_avg = exp_avg.float() * state["exp_avg_scale"] - exp_avg_sq = exp_avg_sq.float() * state["exp_avg_sq_scale"] - beta1, beta2 = group["betas"] - - state["step"] += 1 - - with torch.cuda.device(p_data_fp32.device): - fused_adam_cuda.adam( - p_data_fp32, - out_p, - exp_avg, - exp_avg_sq, - grad, - group["lr"], - beta1, - beta2, - group["eps"], - combined_scale, - state["step"], - self.eps_mode, - bias_correction, - group["weight_decay"], - ) - - if p.device.type == "cpu": - p.data.copy_(p_data_fp32, non_blocking=True) - - if self.use_fp16_stats: - def inf_norm(t): - return torch.norm(t, float("inf")) - - # from github.com/openai/jukebox/blob/master/jukebox/utils/fp16.py - state["exp_avg_scale"], state["exp_avg_sq_scale"] = ( - 1e-8 + inf_norm(exp_avg) / self.FLOAT16_MAX, - 1e-8 + inf_norm(exp_avg_sq) / self.FLOAT16_MAX, - ) - state["exp_avg"], state["exp_avg_sq"] = ( - (exp_avg / state["exp_avg_scale"]).half(), - (exp_avg_sq / state["exp_avg_sq_scale"]).half(), - ) - - return loss - - -try: - from apex.optimizers import FusedAdam - from apex.multi_tensor_apply import multi_tensor_applier - - class FusedAdamV2(FusedAdam): - """ - Compared to the original version in 
Apex, the fairseq version casts grads - and params to FP32 internally to support ``--memory-efficient-fp16``. - """ - - def __init__(self, *args, use_fp16_stats=False, **kwargs): - if use_fp16_stats: - raise NotImplementedError("--fp16-adam-stats is only supported with FusedAdamV1") - super().__init__(*args, **kwargs) - if not hasattr(self, "multi_tensor_adam"): - raise Exception( - "Apex installation is outdated. Please install an updated version of apex." - ) - - @property - def supports_memory_efficient_fp16(self): - return True - - @property - def supports_flat_params(self): - return True - - def step( - self, - closure=None, - grads=None, - output_params=None, - scale=None, - grad_norms=None, - ): - """Performs a single optimization step.""" - loss = None - if closure is not None: - loss = closure() - - for group in self.param_groups: - bias_correction = 1 if group["bias_correction"] else 0 - beta1, beta2 = group["betas"] - - # assume same step across group now to simplify things - # per parameter step can be easily support by making it tensor, or pass list into kernel - if "step" in group: - group["step"] += 1 - else: - group["step"] = 1 - - # create lists for multi-tensor apply - g_16, p_16, orig_p_16, m_16, v_16 = [], [], [], [], [] - g_32, p_32, m_32, v_32 = [], [], [], [] - - for p in group["params"]: - if p.grad is None: - continue - if p.grad.data.is_sparse: - raise RuntimeError( - "FusedAdam does not support sparse gradients, " - "please consider SparseAdam instead" - ) - - state = self.state[p] - # State initialization - if len(state) == 0: - # Exponential moving average of gradient values - state["exp_avg"] = torch.zeros_like(p.data, dtype=torch.float) - # Exponential moving average of squared gradient values - state["exp_avg_sq"] = torch.zeros_like( - p.data, dtype=torch.float - ) - else: - state["exp_avg"] = state["exp_avg"].to( - device=p.data.device, dtype=torch.float - ) - state["exp_avg_sq"] = state["exp_avg_sq"].to( - device=p.data.device, dtype=torch.float - ) - - if p.dtype == torch.float16: - g_16.append(p.grad.data.float()) - p_16.append(p.data.float()) - orig_p_16.append(p.data) - m_16.append(state["exp_avg"]) - v_16.append(state["exp_avg_sq"]) - elif p.dtype == torch.float32: - g_32.append(p.grad.data) - p_32.append(p.data) - m_32.append(state["exp_avg"]) - v_32.append(state["exp_avg_sq"]) - else: - raise RuntimeError("FusedAdam only support fp16 and fp32.") - - with torch.cuda.device(p.device): - if len(g_16) > 0: - multi_tensor_applier( - self.multi_tensor_adam, - self._dummy_overflow_buf, - [g_16, p_16, m_16, v_16], - group["lr"], - beta1, - beta2, - group["eps"], - group["step"], - self.adam_w_mode, - bias_correction, - group["weight_decay"], - ) - for orig_p, p in zip(orig_p_16, p_16): - orig_p.copy_(p.data) - if len(g_32) > 0: - multi_tensor_applier( - self.multi_tensor_adam, - self._dummy_overflow_buf, - [g_32, p_32, m_32, v_32], - group["lr"], - beta1, - beta2, - group["eps"], - group["step"], - self.adam_w_mode, - bias_correction, - group["weight_decay"], - ) - - return loss - - -except ImportError: - pass diff --git a/spaces/Iceclear/StableSR/StableSR/basicsr/models/realesrgan_model.py b/spaces/Iceclear/StableSR/StableSR/basicsr/models/realesrgan_model.py deleted file mode 100644 index c74b28fb1dc6a7f5c5ad3f7d8bb96c19c52ee92b..0000000000000000000000000000000000000000 --- a/spaces/Iceclear/StableSR/StableSR/basicsr/models/realesrgan_model.py +++ /dev/null @@ -1,267 +0,0 @@ -import numpy as np -import random -import torch -from collections import 
OrderedDict -from torch.nn import functional as F - -from basicsr.data.degradations import random_add_gaussian_noise_pt, random_add_poisson_noise_pt -from basicsr.data.transforms import paired_random_crop -from basicsr.losses.loss_util import get_refined_artifact_map -from basicsr.models.srgan_model import SRGANModel -from basicsr.utils import DiffJPEG, USMSharp -from basicsr.utils.img_process_util import filter2D -from basicsr.utils.registry import MODEL_REGISTRY - - -@MODEL_REGISTRY.register(suffix='basicsr') -class RealESRGANModel(SRGANModel): - """RealESRGAN Model for Real-ESRGAN: Training Real-World Blind Super-Resolution with Pure Synthetic Data. - - It mainly performs: - 1. randomly synthesize LQ images in GPU tensors - 2. optimize the networks with GAN training. - """ - - def __init__(self, opt): - super(RealESRGANModel, self).__init__(opt) - self.jpeger = DiffJPEG(differentiable=False).cuda() # simulate JPEG compression artifacts - self.usm_sharpener = USMSharp().cuda() # do usm sharpening - self.queue_size = opt.get('queue_size', 180) - - @torch.no_grad() - def _dequeue_and_enqueue(self): - """It is the training pair pool for increasing the diversity in a batch. - - Batch processing limits the diversity of synthetic degradations in a batch. For example, samples in a - batch could not have different resize scaling factors. Therefore, we employ this training pair pool - to increase the degradation diversity in a batch. - """ - # initialize - b, c, h, w = self.lq.size() - if not hasattr(self, 'queue_lr'): - assert self.queue_size % b == 0, f'queue size {self.queue_size} should be divisible by batch size {b}' - self.queue_lr = torch.zeros(self.queue_size, c, h, w).cuda() - _, c, h, w = self.gt.size() - self.queue_gt = torch.zeros(self.queue_size, c, h, w).cuda() - self.queue_ptr = 0 - if self.queue_ptr == self.queue_size: # the pool is full - # do dequeue and enqueue - # shuffle - idx = torch.randperm(self.queue_size) - self.queue_lr = self.queue_lr[idx] - self.queue_gt = self.queue_gt[idx] - # get first b samples - lq_dequeue = self.queue_lr[0:b, :, :, :].clone() - gt_dequeue = self.queue_gt[0:b, :, :, :].clone() - # update the queue - self.queue_lr[0:b, :, :, :] = self.lq.clone() - self.queue_gt[0:b, :, :, :] = self.gt.clone() - - self.lq = lq_dequeue - self.gt = gt_dequeue - else: - # only do enqueue - self.queue_lr[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.lq.clone() - self.queue_gt[self.queue_ptr:self.queue_ptr + b, :, :, :] = self.gt.clone() - self.queue_ptr = self.queue_ptr + b - - @torch.no_grad() - def feed_data(self, data): - """Accept data from dataloader, and then add two-order degradations to obtain LQ images. 
- """ - if self.is_train and self.opt.get('high_order_degradation', True): - # training data synthesis - self.gt = data['gt'].to(self.device) - self.gt_usm = self.usm_sharpener(self.gt) - - self.kernel1 = data['kernel1'].to(self.device) - self.kernel2 = data['kernel2'].to(self.device) - self.sinc_kernel = data['sinc_kernel'].to(self.device) - - ori_h, ori_w = self.gt.size()[2:4] - - # ----------------------- The first degradation process ----------------------- # - # blur - out = filter2D(self.gt_usm, self.kernel1) - # random resize - updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob'])[0] - if updown_type == 'up': - scale = np.random.uniform(1, self.opt['resize_range'][1]) - elif updown_type == 'down': - scale = np.random.uniform(self.opt['resize_range'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, scale_factor=scale, mode=mode) - # add noise - gray_noise_prob = self.opt['gray_noise_prob'] - if np.random.uniform() < self.opt['gaussian_noise_prob']: - out = random_add_gaussian_noise_pt( - out, sigma_range=self.opt['noise_range'], clip=True, rounds=False, gray_prob=gray_noise_prob) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.opt['poisson_scale_range'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range']) - out = torch.clamp(out, 0, 1) # clamp to [0, 1], otherwise JPEGer will result in unpleasant artifacts - out = self.jpeger(out, quality=jpeg_p) - - # ----------------------- The second degradation process ----------------------- # - # blur - if np.random.uniform() < self.opt['second_blur_prob']: - out = filter2D(out, self.kernel2) - # random resize - updown_type = random.choices(['up', 'down', 'keep'], self.opt['resize_prob2'])[0] - if updown_type == 'up': - scale = np.random.uniform(1, self.opt['resize_range2'][1]) - elif updown_type == 'down': - scale = np.random.uniform(self.opt['resize_range2'][0], 1) - else: - scale = 1 - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate( - out, size=(int(ori_h / self.opt['scale'] * scale), int(ori_w / self.opt['scale'] * scale)), mode=mode) - # add noise - gray_noise_prob = self.opt['gray_noise_prob2'] - if np.random.uniform() < self.opt['gaussian_noise_prob2']: - out = random_add_gaussian_noise_pt( - out, sigma_range=self.opt['noise_range2'], clip=True, rounds=False, gray_prob=gray_noise_prob) - else: - out = random_add_poisson_noise_pt( - out, - scale_range=self.opt['poisson_scale_range2'], - gray_prob=gray_noise_prob, - clip=True, - rounds=False) - - # JPEG compression + the final sinc filter - # We also need to resize images to desired sizes. We group [resize back + sinc filter] together - # as one operation. - # We consider two orders: - # 1. [resize back + sinc filter] + JPEG compression - # 2. JPEG compression + [resize back + sinc filter] - # Empirically, we find other combinations (sinc + JPEG + Resize) will introduce twisted lines. 
- if np.random.uniform() < 0.5: - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode) - out = filter2D(out, self.sinc_kernel) - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = self.jpeger(out, quality=jpeg_p) - else: - # JPEG compression - jpeg_p = out.new_zeros(out.size(0)).uniform_(*self.opt['jpeg_range2']) - out = torch.clamp(out, 0, 1) - out = self.jpeger(out, quality=jpeg_p) - # resize back + the final sinc filter - mode = random.choice(['area', 'bilinear', 'bicubic']) - out = F.interpolate(out, size=(ori_h // self.opt['scale'], ori_w // self.opt['scale']), mode=mode) - out = filter2D(out, self.sinc_kernel) - - # clamp and round - self.lq = torch.clamp((out * 255.0).round(), 0, 255) / 255. - - # random crop - gt_size = self.opt['gt_size'] - (self.gt, self.gt_usm), self.lq = paired_random_crop([self.gt, self.gt_usm], self.lq, gt_size, - self.opt['scale']) - - # training pair pool - self._dequeue_and_enqueue() - # sharpen self.gt again, as we have changed the self.gt with self._dequeue_and_enqueue - self.gt_usm = self.usm_sharpener(self.gt) - self.lq = self.lq.contiguous() # for the warning: grad and param do not obey the gradient layout contract - else: - # for paired training or validation - self.lq = data['lq'].to(self.device) - if 'gt' in data: - self.gt = data['gt'].to(self.device) - self.gt_usm = self.usm_sharpener(self.gt) - - def nondist_validation(self, dataloader, current_iter, tb_logger, save_img): - # do not use the synthetic process during validation - self.is_train = False - super(RealESRGANModel, self).nondist_validation(dataloader, current_iter, tb_logger, save_img) - self.is_train = True - - def optimize_parameters(self, current_iter): - # usm sharpening - l1_gt = self.gt_usm - percep_gt = self.gt_usm - gan_gt = self.gt_usm - if self.opt['l1_gt_usm'] is False: - l1_gt = self.gt - if self.opt['percep_gt_usm'] is False: - percep_gt = self.gt - if self.opt['gan_gt_usm'] is False: - gan_gt = self.gt - - # optimize net_g - for p in self.net_d.parameters(): - p.requires_grad = False - - self.optimizer_g.zero_grad() - self.output = self.net_g(self.lq) - if self.cri_ldl: - self.output_ema = self.net_g_ema(self.lq) - - l_g_total = 0 - loss_dict = OrderedDict() - if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters): - # pixel loss - if self.cri_pix: - l_g_pix = self.cri_pix(self.output, l1_gt) - l_g_total += l_g_pix - loss_dict['l_g_pix'] = l_g_pix - if self.cri_ldl: - pixel_weight = get_refined_artifact_map(self.gt, self.output, self.output_ema, 7) - l_g_ldl = self.cri_ldl(torch.mul(pixel_weight, self.output), torch.mul(pixel_weight, self.gt)) - l_g_total += l_g_ldl - loss_dict['l_g_ldl'] = l_g_ldl - # perceptual loss - if self.cri_perceptual: - l_g_percep, l_g_style = self.cri_perceptual(self.output, percep_gt) - if l_g_percep is not None: - l_g_total += l_g_percep - loss_dict['l_g_percep'] = l_g_percep - if l_g_style is not None: - l_g_total += l_g_style - loss_dict['l_g_style'] = l_g_style - # gan loss - fake_g_pred = self.net_d(self.output) - l_g_gan = self.cri_gan(fake_g_pred, True, is_disc=False) - l_g_total += l_g_gan - loss_dict['l_g_gan'] = l_g_gan - - l_g_total.backward() - self.optimizer_g.step() - - # optimize net_d - for p in self.net_d.parameters(): - p.requires_grad = True - - self.optimizer_d.zero_grad() - # 
real - real_d_pred = self.net_d(gan_gt) - l_d_real = self.cri_gan(real_d_pred, True, is_disc=True) - loss_dict['l_d_real'] = l_d_real - loss_dict['out_d_real'] = torch.mean(real_d_pred.detach()) - l_d_real.backward() - # fake - fake_d_pred = self.net_d(self.output.detach().clone()) # clone for pt1.9 - l_d_fake = self.cri_gan(fake_d_pred, False, is_disc=True) - loss_dict['l_d_fake'] = l_d_fake - loss_dict['out_d_fake'] = torch.mean(fake_d_pred.detach()) - l_d_fake.backward() - self.optimizer_d.step() - - if self.ema_decay > 0: - self.model_ema(decay=self.ema_decay) - - self.log_dict = self.reduce_loss_dict(loss_dict) diff --git a/spaces/Illumotion/Koboldcpp/otherarch/llama_v2.cpp b/spaces/Illumotion/Koboldcpp/otherarch/llama_v2.cpp deleted file mode 100644 index 01b47697ce8776103bb899cfd0389aa6f6a88026..0000000000000000000000000000000000000000 --- a/spaces/Illumotion/Koboldcpp/otherarch/llama_v2.cpp +++ /dev/null @@ -1,3104 +0,0 @@ -// Defines fileno on msys: -#ifndef _GNU_SOURCE -#define _GNU_SOURCE -#include <cstdint> -#include <cstdio> -#endif - -#include "llama_v2-util.h" -#include "llama_v2.h" - -#include "ggml_v2.h" - -#ifdef GGML_USE_CUBLAS -#include "ggml_v2-cuda.h" -#endif -#if defined(GGML_USE_CLBLAST) -#include "ggml_v2-opencl.h" -#endif - - -#include <array> -#include <ctime> -#include <cinttypes> -#include <fstream> -#include <random> -#include <map> -#include <unordered_map> -#include <queue> -#include <cassert> -#include <cstring> -#include <climits> -#include <memory> -#include <algorithm> -#include <initializer_list> -#include <thread> -#include <atomic> -#include <mutex> -#include <sstream> -#include <numeric> - -#define LLAMA_V2_USE_SCRATCH -#define LLAMA_V2_MAX_SCRATCH_BUFFERS 16 - -// available llama models -enum e_model2 { - MODEL_UNKNOWN_2, - MODEL_7B_2, - MODEL_13B_2, - MODEL_30B_2, - MODEL_65B_2, -}; - -static const size_t MB_2 = 1024*1024; - -// computed for n_ctx == 2048 -// TODO: dynamically determine these sizes -// needs modifications in ggml - -static const std::map<e_model2, size_t> & MEM_REQ_SCRATCH0_2() -{ - static std::map<e_model2, size_t> k_sizes = { - { MODEL_UNKNOWN_2, 512ull * MB_2 }, - { MODEL_7B_2, 512ull * MB_2 }, - { MODEL_13B_2, 512ull * MB_2 }, - { MODEL_30B_2, 640ull * MB_2 }, - { MODEL_65B_2, 1024ull * MB_2 }, - }; - return k_sizes; -} - -static const std::map<e_model2, size_t> & MEM_REQ_SCRATCH1_2() -{ - static std::map<e_model2, size_t> k_sizes = { - { MODEL_UNKNOWN_2, 512ull * MB_2 }, - { MODEL_7B_2, 512ull * MB_2 }, - { MODEL_13B_2, 512ull * MB_2 }, - { MODEL_30B_2, 640ull * MB_2 }, - { MODEL_65B_2, 1024ull * MB_2 }, - }; - return k_sizes; -} - -// 2*n_embd*n_ctx*n_layer*sizeof(float16) -static const std::map<e_model2, size_t> & MEM_REQ_KV_SELF_2() -{ - static std::map<e_model2, size_t> k_sizes = { - { MODEL_UNKNOWN_2, 1026ull * MB_2 }, - { MODEL_7B_2, 1026ull * MB_2 }, - { MODEL_13B_2, 1608ull * MB_2 }, - { MODEL_30B_2, 3124ull * MB_2 }, - { MODEL_65B_2, 5120ull * MB_2 }, - }; - return k_sizes; -} - -// this is mostly needed for temporary mul_mat buffers to dequantize the data -// not actually needed if BLAS is disabled -static const std::map<e_model2, size_t> & MEM_REQ_EVAL_2() -{ - static std::map<e_model2, size_t> k_sizes = { - { MODEL_UNKNOWN_2, 800ull * MB_2 }, - { MODEL_7B_2, 800ull * MB_2 }, - { MODEL_13B_2, 1024ull * MB_2 }, - { MODEL_30B_2, 1280ull * MB_2 }, - { MODEL_65B_2, 1536ull * MB_2 }, - }; - return k_sizes; -} - -// default hparams (LLaMA 7B) -struct llama_v2_hparams { - uint32_t n_vocab = 32000; - uint32_t n_ctx = 512; 
// this is provided as user input? - uint32_t n_embd = 4096; - uint32_t n_mult = 256; - uint32_t n_head = 32; - uint32_t n_layer = 32; - uint32_t n_rot = 64; - enum llama_v2_ftype ftype = LLAMA_V2_FTYPE_MOSTLY_F16; - - bool operator!=(const llama_v2_hparams & other) const { - return memcmp(this, &other, sizeof(llama_v2_hparams)); - } -}; - -struct llama_v2_layer { - // normalization - struct ggml_v2_tensor * attention_norm; - - // attention - struct ggml_v2_tensor * wq; - struct ggml_v2_tensor * wk; - struct ggml_v2_tensor * wv; - struct ggml_v2_tensor * wo; - - // normalization - struct ggml_v2_tensor * ffn_norm; - - // ff - struct ggml_v2_tensor * w1; - struct ggml_v2_tensor * w2; - struct ggml_v2_tensor * w3; -}; - -struct llama_v2_kv_cache { - struct ggml_v2_tensor * k; - struct ggml_v2_tensor * v; - - struct ggml_v2_context * ctx = NULL; - - llama_v2_ctx_buffer buf; - - int n; // number of tokens currently in the cache - - ~llama_v2_kv_cache() { - if (ctx) { - ggml_v2_free(ctx); - } - } -}; - -struct llama_v2_model { - e_model2 type = MODEL_UNKNOWN_2; - - llama_v2_hparams hparams; - - struct ggml_v2_tensor * tok_embeddings; - - struct ggml_v2_tensor * norm; - struct ggml_v2_tensor * output; - - std::vector<llama_v2_layer> layers; - - // context - struct ggml_v2_context * ctx = NULL; - - // key + value cache for the self attention - // TODO: move to llama_v2_state - struct llama_v2_kv_cache kv_self; - - // the model memory buffer - llama_v2_ctx_buffer buf; - - // model memory mapped file - std::unique_ptr<llama_v2_mmap> mapping; - - // objects representing data potentially being locked in memory - llama_v2_mlock mlock_buf; - llama_v2_mlock mlock_mmap; - - // for quantize-stats only - std::vector<std::pair<std::string, struct ggml_v2_tensor *>> tensors_by_name; - - ~llama_v2_model() { - if (ctx) { - ggml_v2_free(ctx); - } - } -}; - -struct llama_v2_vocab { - using id = int32_t; - using token = std::string; - - struct token_score { - token tok; - float score; - }; - - std::unordered_map<token, id> token_to_id; - std::vector<token_score> id_to_token; -}; - -struct llama_v2_context { - std::mt19937 rng; - - int64_t t_load_us = 0; - int64_t t_start_us = 0; - bool has_evaluated_once = false; - - int64_t t_sample_us = 0; - int64_t t_eval_us = 0; - int64_t t_p_eval_us = 0; - - int32_t n_sample = 0; // number of tokens sampled - int32_t n_eval = 0; // number of eval calls - int32_t n_p_eval = 0; // number of tokens in eval calls for the prompt (with batch size > 1) - - llama_v2_model model; - llama_v2_vocab vocab; - - size_t mem_per_token = 0; - - // decode output (2-dimensional array: [n_tokens][n_vocab]) - std::vector<float> logits; - bool logits_all = false; - - // input embedding (1-dimensional array: [n_embd]) - std::vector<float> embedding; - - // memory buffers used to evaluate the model - // TODO: move in llama_v2_state - llama_v2_ctx_buffer buf_compute; - llama_v2_ctx_buffer buf_scratch[LLAMA_V2_MAX_SCRATCH_BUFFERS]; - - int buf_last = 0; - size_t buf_max_size[LLAMA_V2_MAX_SCRATCH_BUFFERS] = { 0 }; - - void use_buf(struct ggml_v2_context * ctx, int i) { -#if defined(LLAMA_V2_USE_SCRATCH) - size_t last_size = 0; - - if (i == -1) { - last_size = ggml_v2_set_scratch(ctx, { 0, 0, nullptr, }); - } else { - auto & buf = buf_scratch[i]; - last_size = ggml_v2_set_scratch(ctx, { 0, buf.size, buf.addr, }); - } - - if (buf_last >= 0) { - buf_max_size[buf_last] = std::max(buf_max_size[buf_last], last_size); - } - - buf_last = i; -#else - (void) i; - (void) ctx; -#endif - } - - size_t 
get_buf_max_mem(int i) const { -#if defined(LLAMA_V2_USE_SCRATCH) - return buf_max_size[i]; -#else - (void) i; - return 0; -#endif - } -}; - -template <typename T> -static T checked_mul2(T a, T b) { - T ret = a * b; - if (a != 0 && ret / a != b) { - throw format_old("overflow multiplying %llu * %llu", - (unsigned long long) a, (unsigned long long) b); - } - return ret; -} - -static size_t checked_div2(size_t a, size_t b) { - if (b == 0 || a % b != 0) { - throw format_old("error dividing %zu / %zu", a, b); - } - return a / b; -} - -static std::string llama_v2_format_tensor_shape(const std::vector<uint32_t> & ne) { - char buf[256]; - snprintf(buf, sizeof(buf), "%5u", ne.at(0)); - for (size_t i = 1; i < ne.size(); i++) { - snprintf(buf + strlen(buf), sizeof(buf) - strlen(buf), " x %5u", ne.at(i)); - } - return buf; -} - -static size_t llama_v2_calc_tensor_size(const std::vector<uint32_t> & ne, enum ggml_v2_type type) { - size_t size = ggml_v2_type_size(type); - for (uint32_t dim : ne) { - size = checked_mul2<size_t>(size, dim); - } - return size / ggml_v2_blck_size(type); -} - -struct llama_v2_load_tensor_shard { - std::vector<uint32_t> ne; - size_t size; - enum ggml_v2_type type; - size_t file_idx; - size_t file_off; - - void calc_size() { - size = llama_v2_calc_tensor_size(ne, type); - } -}; - -enum llama_v2_split_type { - SPLIT_NONE_2, - SPLIT_BY_COLUMNS_2, - SPLIT_BY_ROWS_2 -}; - -struct llama_v2_load_tensor { - std::vector<llama_v2_load_tensor_shard> shards; - - std::string name; - enum ggml_v2_type type = GGML_V2_TYPE_F32; - llama_v2_split_type split_type = SPLIT_NONE_2; - std::vector<uint32_t> ne; - size_t size; - struct ggml_v2_tensor * ggml_v2_tensor = NULL; - uint8_t * data; - - llama_v2_load_tensor(const std::string & name) : name(name) {} - - void calc_all() { - calc_type(); - calc_split_type(); - calc_ne(); - calc_size(); - } - - void calc_type() { - const auto & first_shard = shards.at(0); - for (const auto & shard : shards) { - if (shard.type != first_shard.type) { - throw format_old("inconsistent tensor shard type in '%s'", name.c_str()); - } - } - type = first_shard.type; - } - - void calc_split_type() { - if (shards.at(0).ne.size() == 1 || // 1D tensors are just duplicated in every file - shards.size() == 1) { // only one file? 
- split_type = SPLIT_NONE_2; - } else if (name.find("tok_embeddings.") == 0 || - name.find(".attention.wo.weight") != std::string::npos || - name.find(".feed_forward.w2.weight") != std::string::npos) { - split_type = SPLIT_BY_COLUMNS_2; - } else { - split_type = SPLIT_BY_ROWS_2; - } - } - - void calc_ne() { - const auto & first_shard = shards.at(0); - for (const auto & shard : shards) { - if (shard.ne != first_shard.ne) { - throw format_old("inconsistent tensor shard shape in '%s': first was %s, other was %s", - name.c_str(), llama_v2_format_tensor_shape(first_shard.ne).c_str(), llama_v2_format_tensor_shape(shard.ne).c_str()); - } - } - ne = first_shard.ne; - LLAMA_V2_ASSERT(shards.size() <= UINT32_MAX); - uint32_t n_shards = (uint32_t) shards.size(); - switch (split_type) { - case SPLIT_NONE_2: - ne = first_shard.ne; - break; - case SPLIT_BY_COLUMNS_2: - ne = {checked_mul2<uint32_t>(first_shard.ne[0], n_shards), - first_shard.ne[1]}; - break; - case SPLIT_BY_ROWS_2: - ne = {first_shard.ne[0], - checked_mul2<uint32_t>(first_shard.ne[1], n_shards)}; - break; - } - } - - void calc_size() { - size = llama_v2_calc_tensor_size(ne, type); - } -}; - -struct llama_v2_load_tensors_map { - // tensors is kept in a separate vector to preserve file order - std::vector<llama_v2_load_tensor> tensors; - std::unordered_map<std::string, size_t> name_to_idx; -}; - -enum llama_v2_file_version { - LLAMA_V2_FILE_VERSION_GGML, - LLAMA_V2_FILE_VERSION_GGMF_V1, // added version field and scores in vocab - LLAMA_V2_FILE_VERSION_GGJT_V1, // added padding - LLAMA_V2_FILE_VERSION_GGJT_V2, // changed quantization format - LLAMA_V2_FILE_VERSION_GGJT_V3, // changed Q4 and Q8 quantization format -}; - -struct llama_v2_file_loader { - llama_v2_file file; - llama_v2_file_version file_version; - llama_v2_hparams hparams; - llama_v2_vocab vocab; - - llama_v2_file_loader(const char * fname, size_t file_idx, llama_v2_load_tensors_map & tensors_map) - : file(fname, "rb") { - fprintf(stderr, "llama.cpp: loading model from %s\n", fname); - read_magic(); - read_hparams(); - read_vocab(); - read_tensor_metadata(file_idx, tensors_map); - } - void read_magic() { - uint32_t magic = file.read_u32(); - uint32_t version = 0; - - if (magic != 'ggml') { - version = file.read_u32(); - } - - if (magic == 'ggml' && version == 0) { - file_version = LLAMA_V2_FILE_VERSION_GGML; - } else if (magic == 'ggmf' && version == 1) { - file_version = LLAMA_V2_FILE_VERSION_GGMF_V1; - } else if (magic == 'ggjt' && version == 1) { - file_version = LLAMA_V2_FILE_VERSION_GGJT_V1; - } else if (magic == 'ggjt' && version == 2) { - file_version = LLAMA_V2_FILE_VERSION_GGJT_V2; - } else if (magic == 'ggjt' && version == 3) { - file_version = LLAMA_V2_FILE_VERSION_GGJT_V3; - } else { - throw format_old("unknown (magic, version) combination: %08x, %08x; is this really a GGML file?", - magic, version); - } - } - void read_hparams() { - hparams.n_vocab = file.read_u32(); - hparams.n_embd = file.read_u32(); - hparams.n_mult = file.read_u32(); - hparams.n_head = file.read_u32(); - hparams.n_layer = file.read_u32(); - hparams.n_rot = file.read_u32(); - hparams.ftype = (enum llama_v2_ftype) file.read_u32(); - } - void read_vocab() { - vocab.id_to_token.resize(hparams.n_vocab); - - int32_t vocabloops = hparams.n_vocab; - if(vocabloops==32001 && file_version == LLAMA_V2_FILE_VERSION_GGML) - { - printf("---\n!! 
WARNING: Model appears to be GPT4ALL v1 model, triggering compatibility fix !!\n---\n"); - vocabloops -= 1; - } - - for (uint32_t i = 0; i < vocabloops; i++) { - uint32_t len = file.read_u32(); - std::string word = file.read_string(len); - - float score = 0.0f; - if (file_version >= LLAMA_V2_FILE_VERSION_GGMF_V1) { - file.read_raw(&score, sizeof(score)); - } - - vocab.token_to_id[word] = i; - - auto & tok_score = vocab.id_to_token[i]; - tok_score.tok = std::move(word); - tok_score.score = score; - } - } - void read_tensor_metadata(size_t file_idx, llama_v2_load_tensors_map & tensors_map) { - while (file.tell() < file.size) { - llama_v2_load_tensor_shard shard; - uint32_t n_dims = file.read_u32(); - uint32_t name_len = file.read_u32(); - shard.type = (enum ggml_v2_type) file.read_u32(); - shard.ne.resize(n_dims); - file.read_raw(shard.ne.data(), sizeof(shard.ne[0]) * n_dims); - std::string name = file.read_string(name_len); - if (n_dims < 1 || n_dims > 2) { - throw format_old("llama.cpp: tensor '%s' should not be %u-dimensional", name.c_str(), n_dims); - } - switch (shard.type) { - case GGML_V2_TYPE_F32: - case GGML_V2_TYPE_F16: - case GGML_V2_TYPE_Q4_0: - case GGML_V2_TYPE_Q4_1: - case GGML_V2_TYPE_Q4_2: - case GGML_V2_TYPE_Q4_3: - case GGML_V2_TYPE_Q5_0: - case GGML_V2_TYPE_Q5_1: - case GGML_V2_TYPE_Q8_0: - break; - default: { - throw format_old("unrecognized tensor type %u\n", shard.type); - } - } - - if (file_version >= LLAMA_V2_FILE_VERSION_GGJT_V1) { - // skip to the next multiple of 32 bytes - file.seek(-file.tell() & 31, SEEK_CUR); - } - shard.file_idx = file_idx; - shard.file_off = file.tell(); - - shard.calc_size(); - file.seek(shard.size, SEEK_CUR); - - auto it = tensors_map.name_to_idx.find(name); - size_t idx; - if (it != tensors_map.name_to_idx.end()) { - idx = it->second; - } else { - tensors_map.tensors.emplace_back(name); - idx = tensors_map.tensors.size() - 1; - tensors_map.name_to_idx.emplace(name, idx); - } - tensors_map.tensors.at(idx).shards.push_back(shard); - } - } -}; - -struct llama_v2_file_saver { - llama_v2_file file; - llama_v2_file_loader * any_file_loader; - llama_v2_file_saver(const char * fname, llama_v2_file_loader * any_file_loader, enum llama_v2_ftype new_ftype) - : file(fname, "wb"), any_file_loader(any_file_loader) { - fprintf(stderr, "llama.cpp: saving model to %s\n", fname); - write_magic(); - write_hparams(new_ftype); - write_vocab(); - } - void write_magic() { - file.write_u32(LLAMA_V2_FILE_MAGIC); // magic - file.write_u32(LLAMA_V2_FILE_VERSION); // version - } - void write_hparams(enum llama_v2_ftype new_ftype) { - const llama_v2_hparams & hparams = any_file_loader->hparams; - file.write_u32(hparams.n_vocab); - file.write_u32(hparams.n_embd); - file.write_u32(hparams.n_mult); - file.write_u32(hparams.n_head); - file.write_u32(hparams.n_layer); - file.write_u32(hparams.n_rot); - file.write_u32(new_ftype); - } - void write_vocab() { - if (any_file_loader->file_version == LLAMA_V2_FILE_VERSION_GGML) { - fprintf(stderr, "llama.cpp: WARNING: input is an old file that doesn't have scores; will add dummy scores\n"); - } - uint32_t n_vocab = any_file_loader->hparams.n_vocab; - for (uint32_t i = 0; i < n_vocab; i++) { - const auto & token_score = any_file_loader->vocab.id_to_token.at(i); - file.write_u32((uint32_t) token_score.tok.size()); - file.write_raw(token_score.tok.data(), token_score.tok.size()); - file.write_raw(&token_score.score, sizeof(token_score.score)); - } - } - void write_tensor(llama_v2_load_tensor & tensor, enum ggml_v2_type new_type, 
const void * new_data, size_t new_size) { - switch (new_type) { - case GGML_V2_TYPE_F32: - case GGML_V2_TYPE_F16: - case GGML_V2_TYPE_Q4_0: - case GGML_V2_TYPE_Q4_1: - case GGML_V2_TYPE_Q4_2: - case GGML_V2_TYPE_Q4_3: - case GGML_V2_TYPE_Q5_0: - case GGML_V2_TYPE_Q5_1: - case GGML_V2_TYPE_Q8_0: - break; - default: LLAMA_V2_ASSERT(false); - } - file.write_u32((uint32_t) tensor.ne.size()); - file.write_u32((uint32_t) tensor.name.size()); - file.write_u32(new_type); - file.write_raw(tensor.ne.data(), sizeof(tensor.ne[0]) * tensor.ne.size()); - file.write_raw(tensor.name.data(), tensor.name.size()); - file.seek(-file.tell() & 31, SEEK_CUR); - LLAMA_V2_ASSERT(new_size == llama_v2_calc_tensor_size(tensor.ne, new_type)); - file.write_raw(new_data, new_size); - } -}; - -struct llama_v2_model_loader { - std::vector<std::unique_ptr<llama_v2_file_loader>> file_loaders; - llama_v2_load_tensors_map tensors_map; - bool use_mmap; - size_t num_ggml_v2_tensors_created = 0; - struct ggml_v2_context * ggml_v2_ctx = NULL; - std::unique_ptr<llama_v2_mmap> mapping; - - llama_v2_model_loader(const std::string & fname_base, bool use_mmap, bool vocab_only) { - auto * first_file = new llama_v2_file_loader(fname_base.c_str(), 0, tensors_map); - file_loaders.emplace_back(first_file); - uint32_t n_parts = vocab_only ? 1 : guess_n_parts(); - for (uint32_t i = 1; i < n_parts; i++) { - std::string fname = fname_base + "." + std::to_string(i); - auto * ith_file = new llama_v2_file_loader(fname.c_str(), i, tensors_map); - file_loaders.emplace_back(ith_file); - if (ith_file->hparams != first_file->hparams) { - throw format_old("llama.cpp: hparams inconsistent between files"); - } - } - if (!llama_v2_mmap::SUPPORTED) { - use_mmap = false; - } - if (use_mmap && alignment_prevents_mmap()) { - fprintf(stderr, "llama.cpp: can't use mmap because tensors are not aligned; convert to new format to avoid this\n"); - use_mmap = false; - } - this->use_mmap = use_mmap; - for (llama_v2_load_tensor & lt : tensors_map.tensors) { - lt.calc_all(); - } - } - - bool alignment_prevents_mmap() { - for (const llama_v2_load_tensor & lt : tensors_map.tensors) { - for (const llama_v2_load_tensor_shard & shard : lt.shards) { - if (shard.file_off & 3) { - return true; - } - } - } - return false; - } - - uint32_t guess_n_parts() const { - auto it = tensors_map.name_to_idx.find("tok_embeddings.weight"); - if (it == tensors_map.name_to_idx.end()) { - throw std::string("missing tok_embeddings.weight"); - } - const llama_v2_load_tensor & lt = tensors_map.tensors.at(it->second); - return file_loaders.at(0)->hparams.n_embd / lt.shards.at(0).ne.at(0); - } - - void calc_sizes(size_t * ctx_size_p, size_t * mmapped_size_p) const { - *ctx_size_p = *mmapped_size_p = 0; - for (const llama_v2_load_tensor & lt : tensors_map.tensors) { - *ctx_size_p += sizeof(struct ggml_v2_tensor) + GGML_V2_OBJECT_SIZE; - *(use_mmap ? 
mmapped_size_p : ctx_size_p) += lt.size; - } - } - - struct ggml_v2_tensor * get_tensor(const std::string & name, const std::vector<uint32_t> & ne) { - auto it = tensors_map.name_to_idx.find(name); - if (it == tensors_map.name_to_idx.end()) { - throw format_old("llama.cpp: tensor '%s' is missing from model", name.c_str()); - } - llama_v2_load_tensor & lt = tensors_map.tensors.at(it->second); - if (lt.ne != ne) { - throw format_old("llama.cpp: tensor '%s' has wrong shape; expected %s, got %s", - name.c_str(), llama_v2_format_tensor_shape(ne).c_str(), llama_v2_format_tensor_shape(lt.ne).c_str()); - } - - return get_tensor_for(lt); - } - - struct ggml_v2_tensor * get_tensor_for(llama_v2_load_tensor & lt) { - struct ggml_v2_tensor * tensor; - if (lt.ne.size() == 2) { - tensor = ggml_v2_new_tensor_2d(ggml_v2_ctx, lt.type, lt.ne.at(0), lt.ne.at(1)); - } else { - LLAMA_V2_ASSERT(lt.ne.size() == 1); - tensor = ggml_v2_new_tensor_1d(ggml_v2_ctx, lt.type, lt.ne.at(0)); - } - ggml_v2_set_name(tensor, lt.name.c_str()); - LLAMA_V2_ASSERT(lt.ggml_v2_tensor == NULL); // if this fails, we called get_tensor twice on the same tensor - lt.ggml_v2_tensor = tensor; - num_ggml_v2_tensors_created++; - return tensor; - } - - void done_getting_tensors() const { - if (num_ggml_v2_tensors_created != tensors_map.tensors.size()) { - throw std::string("llama.cpp: file contained more tensors than expected"); - } - } - - void load_all_data(llama_v2_progress_callback progress_callback, void * progress_callback_user_data, llama_v2_mlock * lmlock) { - size_t data_size = 0; - for (const llama_v2_load_tensor & lt : tensors_map.tensors) { - data_size += lt.size; - } - - if (use_mmap) { - mapping.reset(new llama_v2_mmap(&file_loaders.at(0)->file)); - if (!lmlock) { - // Don't call the callback since the actual loading will be lazy - // and we can't measure it. - progress_callback = NULL; - } - if (lmlock) { - lmlock->init(mapping->addr); - } - } - - size_t done_size = 0; - for (llama_v2_load_tensor & lt : tensors_map.tensors) { - if (progress_callback) { - progress_callback((float) done_size / data_size, progress_callback_user_data); - } - LLAMA_V2_ASSERT(lt.ggml_v2_tensor); // unused tensors should have been caught by load_data already - lt.data = (uint8_t *) lt.ggml_v2_tensor->data; - load_data_for(lt); - lt.ggml_v2_tensor->data = lt.data; - done_size += lt.size; - if (use_mmap && lmlock) { - lmlock->grow_to(done_size); - } - } - if (progress_callback) { - progress_callback(1.0f, progress_callback_user_data); - } - } - - void load_data_for(llama_v2_load_tensor & lt) { - if (use_mmap) { - LLAMA_V2_ASSERT(lt.shards.size() == 1); - lt.data = (uint8_t *) mapping->addr + lt.shards.at(0).file_off; - } else if (lt.split_type == SPLIT_NONE_2) { - llama_v2_file & file = file_loaders.at(lt.shards.at(0).file_idx)->file; - file.seek(lt.shards.at(0).file_off, SEEK_SET); - file.read_raw(lt.data, lt.size); - } else if (lt.split_type == SPLIT_BY_ROWS_2) { - size_t offset = 0; - for (llama_v2_load_tensor_shard & shard : lt.shards) { - llama_v2_file & file = file_loaders.at(shard.file_idx)->file; - file.seek(shard.file_off, SEEK_SET); - file.read_raw(lt.data + offset, shard.size); - offset += shard.size; - } - LLAMA_V2_ASSERT(offset == lt.size); - } else if (lt.split_type == SPLIT_BY_COLUMNS_2) { - // Let's load the data into temporary buffers to ensure the OS performs large loads. 
- std::vector<llama_v2_buffer> tmp_bufs(lt.shards.size()); - for (size_t i = 0; i < lt.shards.size(); i++) { - llama_v2_load_tensor_shard & shard = lt.shards.at(i); - llama_v2_file & file = file_loaders.at(shard.file_idx)->file; - file.seek(shard.file_off, SEEK_SET); - tmp_bufs.at(i).resize(shard.size); - file.read_raw(tmp_bufs.at(i).addr, shard.size); - } - // Then reshape. - size_t num_rows = lt.ne.at(1); - size_t per_shard_row_size = lt.shards.at(0).size / num_rows; - size_t out_offset = 0; - for (size_t row = 0; row < num_rows; row++) { - for (llama_v2_buffer & tmp_buf : tmp_bufs) { - memcpy(lt.data + out_offset, - tmp_buf.addr + row * per_shard_row_size, - per_shard_row_size); - out_offset += per_shard_row_size; - } - } - LLAMA_V2_ASSERT(out_offset == lt.size); - } - if (0) { - print_checksum(lt); - } - } - - static void print_checksum(llama_v2_load_tensor & lt) { - uint32_t sum = 0; - for (size_t i = 0; i < lt.size; i++) { - uint8_t byte = lt.data[i]; - sum = byte + (sum << 6) + (sum << 16) - sum; // sdbm hash - } - fprintf(stderr, "%s checksum: %#08x (%s, size %zu)\n", lt.name.c_str(), sum, - llama_v2_format_tensor_shape(lt.ne).c_str(), lt.size); - } - -}; - - -// -// kv cache -// - -static bool kv_cache_init( - const struct llama_v2_hparams & hparams, - struct llama_v2_kv_cache & cache, - ggml_v2_type wtype, - int n_ctx) { - const int n_embd = hparams.n_embd; - const int n_layer = hparams.n_layer; - - const int64_t n_mem = n_layer*n_ctx; - const int64_t n_elements = n_embd*n_mem; - - cache.buf.resize(2u*n_elements*ggml_v2_type_size(wtype) + 2u*MB_2); - - struct ggml_v2_init_params params; - params.mem_size = cache.buf.size; - params.mem_buffer = cache.buf.addr; - params.no_alloc = false; - - cache.ctx = ggml_v2_init(params); - - if (!cache.ctx) { - fprintf(stderr, "%s: failed to allocate memory for kv cache\n", __func__); - return false; - } - - cache.k = ggml_v2_new_tensor_1d(cache.ctx, wtype, n_elements); - cache.v = ggml_v2_new_tensor_1d(cache.ctx, wtype, n_elements); - ggml_v2_set_name(cache.k, "cache_k"); - ggml_v2_set_name(cache.v, "cache_v"); - - return true; -} - -struct llama_v2_context_params llama_v2_context_default_params() { - struct llama_v2_context_params result = { - /*.n_ctx =*/ 512, - /*.gpu_layers =*/ 0, - /*.seed =*/ -1, - /*.f16_kv =*/ true, - /*.logits_all =*/ false, - /*.vocab_only =*/ false, - /*.use_mmap =*/ true, - /*.use_mlock =*/ false, - /*.embedding =*/ false, - /*.progress_callback =*/ nullptr, - /*.progress_callback_user_data =*/ nullptr, - }; - - return result; -} - -bool llama_v2_mmap_supported() { - return llama_v2_mmap::SUPPORTED; -} - -bool llama_v2_mlock_supported() { - return llama_v2_mlock::SUPPORTED; -} - -// -// model loading -// - -static const char *llama_v2_file_version_name(llama_v2_file_version version) { - switch (version) { - case LLAMA_V2_FILE_VERSION_GGML: return "'ggml' (old version with low tokenizer quality and no mmap support)"; - case LLAMA_V2_FILE_VERSION_GGMF_V1: return "ggmf v1 (old version with no mmap support)"; - case LLAMA_V2_FILE_VERSION_GGJT_V1: return "ggjt v1 (pre #1405)"; - case LLAMA_V2_FILE_VERSION_GGJT_V2: return "ggjt v2 (pre #1508)"; - case LLAMA_V2_FILE_VERSION_GGJT_V3: return "ggjt v3 (latest)"; - } - - return "unknown"; -} - -static const char *llama_v2_ftype_name(enum llama_v2_ftype ftype) { - switch (ftype) { - case LLAMA_V2_FTYPE_ALL_F32: return "all F32"; - case LLAMA_V2_FTYPE_MOSTLY_F16: return "mostly F16"; - case LLAMA_V2_FTYPE_MOSTLY_Q4_0: return "mostly Q4_0"; - case LLAMA_V2_FTYPE_MOSTLY_Q4_1: 
return "mostly Q4_1"; - case LLAMA_V2_FTYPE_MOSTLY_Q4_1_SOME_F16: - return "mostly Q4_1, some F16"; - case LLAMA_V2_FTYPE_MOSTLY_Q4_2: return "mostly Q4_2"; - case LLAMA_V2_FTYPE_MOSTLY_Q4_3: return "mostly Q4_3"; - case LLAMA_V2_FTYPE_MOSTLY_Q5_0: return "mostly Q5_0"; - case LLAMA_V2_FTYPE_MOSTLY_Q5_1: return "mostly Q5_1"; - case LLAMA_V2_FTYPE_MOSTLY_Q8_0: return "mostly Q8_0"; - default: return "unknown, may not work"; - } -} - -static const char *llama_v2_model_type_name(e_model2 type) { - switch (type) { - case MODEL_7B_2: return "7B"; - case MODEL_13B_2: return "13B"; - case MODEL_30B_2: return "30B"; - case MODEL_65B_2: return "65B"; - default: - printf("\nWARNING: NON-STANDARD LLAMA FILE DETECTED. DEFAULT TO 7B SIZE.\n"); - return "UNKNOWN"; - } -} - -static void llama_v2_model_load_internal( - const std::string & fname, - llama_v2_context & lctx, - int n_ctx, - int n_gpu_layers, - ggml_v2_type memory_type, - bool use_mmap, - bool use_mlock, - bool vocab_only, - llama_v2_progress_callback progress_callback, - void * progress_callback_user_data) { - - lctx.t_start_us = ggml_v2_time_us(); - - std::unique_ptr<llama_v2_model_loader> ml(new llama_v2_model_loader(fname, use_mmap, vocab_only)); - - lctx.vocab = std::move(ml->file_loaders.at(0)->vocab); - auto & model = lctx.model; - model.hparams = ml->file_loaders.at(0)->hparams; - llama_v2_file_version file_version = ml->file_loaders.at(0)->file_version; - auto & hparams = model.hparams; - uint32_t n_ff = ((2*(4*hparams.n_embd)/3 + hparams.n_mult - 1)/hparams.n_mult)*hparams.n_mult; - - { - switch (hparams.n_layer) { - case 32: model.type = e_model2::MODEL_7B_2; break; - case 40: model.type = e_model2::MODEL_13B_2; break; - case 60: model.type = e_model2::MODEL_30B_2; break; - case 80: model.type = e_model2::MODEL_65B_2; break; - default: model.type = e_model2::MODEL_UNKNOWN_2; break; - } - - hparams.n_ctx = n_ctx; - } - - { - fprintf(stderr, "%s: format = %s\n", __func__, llama_v2_file_version_name(file_version)); - fprintf(stderr, "%s: n_vocab = %u\n", __func__, hparams.n_vocab); - fprintf(stderr, "%s: n_ctx = %u\n", __func__, hparams.n_ctx); - fprintf(stderr, "%s: n_embd = %u\n", __func__, hparams.n_embd); - fprintf(stderr, "%s: n_mult = %u\n", __func__, hparams.n_mult); - fprintf(stderr, "%s: n_head = %u\n", __func__, hparams.n_head); - fprintf(stderr, "%s: n_layer = %u\n", __func__, hparams.n_layer); - fprintf(stderr, "%s: n_rot = %u\n", __func__, hparams.n_rot); - fprintf(stderr, "%s: ftype = %u (%s)\n", __func__, hparams.ftype, llama_v2_ftype_name(hparams.ftype)); - fprintf(stderr, "%s: n_ff = %u\n", __func__, n_ff); - fprintf(stderr, "%s: n_parts = %zu\n", __func__, ml->file_loaders.size()); - fprintf(stderr, "%s: model size = %s\n", __func__, llama_v2_model_type_name(model.type)); - } - - if (file_version < LLAMA_V2_FILE_VERSION_GGJT_V2) { - if (hparams.ftype != LLAMA_V2_FTYPE_ALL_F32 && - hparams.ftype != LLAMA_V2_FTYPE_MOSTLY_F16 && - hparams.ftype != LLAMA_V2_FTYPE_MOSTLY_Q8_0) { - printf("\nLegacy LLAMA GGJT v1 compatability changes triggered.\n"); - } - } - - if (file_version < LLAMA_V2_FILE_VERSION_GGJT_V3) { - if (hparams.ftype == LLAMA_V2_FTYPE_MOSTLY_Q4_0 || - hparams.ftype == LLAMA_V2_FTYPE_MOSTLY_Q4_1 || - hparams.ftype == LLAMA_V2_FTYPE_MOSTLY_Q8_0) { - printf("\nLegacy LLAMA GGJT v2 compatability changes triggered.\n"); - } - } - - if (vocab_only) { - return; - } - - auto & ctx = model.ctx; - - size_t ctx_size; - size_t mmapped_size; - ml->calc_sizes(&ctx_size, &mmapped_size); - fprintf(stderr, "%s: ggml ctx size 
= %6.2f MB\n", __func__, ctx_size/1024.0/1024.0); - - // print memory requirements - { - const size_t scale = memory_type == GGML_V2_TYPE_F32 ? 2 : 1; - - // this is the total memory required to run the inference - const size_t mem_required = - ctx_size + - mmapped_size + - MEM_REQ_SCRATCH0_2().at(model.type) + - MEM_REQ_SCRATCH1_2().at(model.type) + - MEM_REQ_EVAL_2().at(model.type); - - // this is the memory required by one llama_v2_state - const size_t mem_required_state = - scale*MEM_REQ_KV_SELF_2().at(model.type); - - fprintf(stderr, "%s: mem required = %7.2f MB (+ %7.2f MB per state)\n", __func__, - mem_required / 1024.0 / 1024.0, mem_required_state / 1024.0 / 1024.0); - } - - // create the ggml context - { - lctx.model.buf.resize(ctx_size); - if (use_mlock) { - lctx.model.mlock_buf.init(lctx.model.buf.addr); - lctx.model.mlock_buf.grow_to(lctx.model.buf.size); - } - - struct ggml_v2_init_params params = { - /*.mem_size =*/ lctx.model.buf.size, - /*.mem_buffer =*/ lctx.model.buf.addr, - /*.no_alloc =*/ ml->use_mmap, - }; - - model.ctx = ggml_v2_init(params); - if (!model.ctx) { - throw format_old("ggml_v2_init() failed"); - } - } - - // prepare memory for the weights - { - const uint32_t n_embd = hparams.n_embd; - const uint32_t n_layer = hparams.n_layer; - const uint32_t n_vocab = hparams.n_vocab; - - ml->ggml_v2_ctx = ctx; - - model.tok_embeddings = ml->get_tensor("tok_embeddings.weight", {n_embd, n_vocab}); - model.norm = ml->get_tensor("norm.weight", {n_embd}); - model.output = ml->get_tensor("output.weight", {n_embd, n_vocab}); - - model.layers.resize(n_layer); - for (uint32_t i = 0; i < n_layer; ++i) { - auto & layer = model.layers[i]; - - std::string layers_i = "layers." + std::to_string(i); - - layer.attention_norm = ml->get_tensor(layers_i + ".attention_norm.weight", {n_embd}); - - layer.wq = ml->get_tensor(layers_i + ".attention.wq.weight", {n_embd, n_embd}); - layer.wk = ml->get_tensor(layers_i + ".attention.wk.weight", {n_embd, n_embd}); - layer.wv = ml->get_tensor(layers_i + ".attention.wv.weight", {n_embd, n_embd}); - layer.wo = ml->get_tensor(layers_i + ".attention.wo.weight", {n_embd, n_embd}); - - layer.ffn_norm = ml->get_tensor(layers_i + ".ffn_norm.weight", {n_embd}); - - layer.w1 = ml->get_tensor(layers_i + ".feed_forward.w1.weight", {n_embd, n_ff}); - layer.w2 = ml->get_tensor(layers_i + ".feed_forward.w2.weight", { n_ff, n_embd}); - layer.w3 = ml->get_tensor(layers_i + ".feed_forward.w3.weight", {n_embd, n_ff}); - } - } - - ml->done_getting_tensors(); - - // populate `tensors_by_name` - for (llama_v2_load_tensor & lt : ml->tensors_map.tensors) { - model.tensors_by_name.emplace_back(lt.name, lt.ggml_v2_tensor); - } - - ml->load_all_data(progress_callback, progress_callback_user_data, use_mlock ? 
&lctx.model.mlock_mmap : NULL); - - model.mapping = std::move(ml->mapping); -#if defined(GGML_USE_CUBLAS) - { - const int n_gpu = std::min(n_gpu_layers, int(hparams.n_layer)); - if(GetQuantsUnshuffled()) - { - - fprintf(stderr, "%s: [old cublas] offloading %d layers to GPU\n", __func__, n_gpu); - - size_t vram_total = 0; - - for (int i = 0; i < n_gpu; ++i) { - const auto & layer = model.layers[i]; - - ggml_v2_cuda_transform_tensor(layer.wq); vram_total += ggml_v2_nbytes(layer.wq); - ggml_v2_cuda_transform_tensor(layer.wk); vram_total += ggml_v2_nbytes(layer.wk); - ggml_v2_cuda_transform_tensor(layer.wv); vram_total += ggml_v2_nbytes(layer.wv); - ggml_v2_cuda_transform_tensor(layer.wo); vram_total += ggml_v2_nbytes(layer.wo); - ggml_v2_cuda_transform_tensor(layer.w1); vram_total += ggml_v2_nbytes(layer.w1); - ggml_v2_cuda_transform_tensor(layer.w2); vram_total += ggml_v2_nbytes(layer.w2); - ggml_v2_cuda_transform_tensor(layer.w3); vram_total += ggml_v2_nbytes(layer.w3); - } - if (n_gpu_layers > (int) hparams.n_layer) { - fprintf(stderr, "%s: [old cublas] offloading output layer to GPU\n", __func__); - ggml_v2_cuda_transform_tensor(model.output); vram_total += ggml_v2_nbytes(model.output); - } - - fprintf(stderr, "%s: [old cublas] total VRAM used: %zu MB\n", __func__, vram_total / 1024 / 1024); - } - else - { - if(n_gpu>0) - { - printf("\n[WARNING: Old format does not support GPU offloading! It will be deactivated!]\n"); - } - } - } -#elif defined(GGML_USE_CLBLAST) - { - const int n_gpu = std::min(n_gpu_layers, int(hparams.n_layer)); - if(GetQuantsUnshuffled()) - { - - fprintf(stderr, "%s: [opencl] offloading %d layers to GPU\n", __func__, n_gpu); - - size_t vram_total = 0; - - for (int i = 0; i < n_gpu; ++i) { - const auto & layer = model.layers[i]; - - ggml_v2_cl_transform_tensor(layer.wq); vram_total += ggml_v2_nbytes(layer.wq); - ggml_v2_cl_transform_tensor(layer.wk); vram_total += ggml_v2_nbytes(layer.wk); - ggml_v2_cl_transform_tensor(layer.wv); vram_total += ggml_v2_nbytes(layer.wv); - ggml_v2_cl_transform_tensor(layer.wo); vram_total += ggml_v2_nbytes(layer.wo); - ggml_v2_cl_transform_tensor(layer.w1); vram_total += ggml_v2_nbytes(layer.w1); - ggml_v2_cl_transform_tensor(layer.w2); vram_total += ggml_v2_nbytes(layer.w2); - ggml_v2_cl_transform_tensor(layer.w3); vram_total += ggml_v2_nbytes(layer.w3); - } - if (n_gpu_layers > (int) hparams.n_layer) { - fprintf(stderr, "%s: [opencl] offloading output layer to GPU\n", __func__); - ggml_v2_cl_transform_tensor(model.output); vram_total += ggml_v2_nbytes(model.output); - } - - fprintf(stderr, "%s: [opencl] total VRAM used: %zu MB\n", __func__, vram_total / 1024 / 1024); - } - else - { - if(n_gpu>0) - { - printf("\n[WARNING: Old format does not support GPU offloading! 
It will be deactivated!]\n"); - } - } - } -#else - (void) n_gpu_layers; -#endif - - // loading time will be recalculate after the first eval, so - // we take page faults deferred by mmap() into consideration - lctx.t_load_us = ggml_v2_time_us() - lctx.t_start_us; -} - -static bool llama_v2_model_load( - const std::string & fname, - llama_v2_context & lctx, - int n_ctx, - int n_gpu_layers, - ggml_v2_type memory_type, - bool use_mmap, - bool use_mlock, - bool vocab_only, - llama_v2_progress_callback progress_callback, - void *progress_callback_user_data) { - try { - llama_v2_model_load_internal(fname, lctx, n_ctx, n_gpu_layers, memory_type, use_mmap, use_mlock, - vocab_only, progress_callback, progress_callback_user_data); - return true; - } catch (const std::string & err) { - fprintf(stderr, "error loading model: %s\n", err.c_str()); - return false; - } -} - -// evaluate the transformer -// -// - lctx: llama context -// - tokens: new batch of tokens to process -// - n_past: the context size so far -// - n_threads: number of threads to use -// -static bool llama_v2_eval_internal( - llama_v2_context & lctx, - const llama_v2_token * tokens, - const int n_tokens, - const int n_past, - const int n_threads) { - - // enforce that the first token is BOS (not needed, messes with my context manip code) - //if (n_past == 0 && tokens[0] != llama_v2_token_bos()) { - //fprintf(stderr, "%s: first token must be BOS\n", __func__); - // return false; //never fail. Not even in the face of Armageddon. - //} - - const int64_t t_start_us = ggml_v2_time_us(); - - const int N = n_tokens; - - const auto & model = lctx.model; - const auto & hparams = model.hparams; - - const auto & kv_self = model.kv_self; - - LLAMA_V2_ASSERT(!!kv_self.ctx); - - const int n_embd = hparams.n_embd; - const int n_layer = hparams.n_layer; - const int n_ctx = hparams.n_ctx; - const int n_head = hparams.n_head; - const int n_vocab = hparams.n_vocab; - const int n_rot = hparams.n_embd/hparams.n_head; - - auto & mem_per_token = lctx.mem_per_token; - auto & buf_compute = lctx.buf_compute; - - struct ggml_v2_init_params params = { - /*.mem_size =*/ buf_compute.size, - /*.mem_buffer =*/ buf_compute.addr, - /*.no_alloc =*/ false, - }; - - struct ggml_v2_context * ctx0 = ggml_v2_init(params); - - // for big prompts, if BLAS is enabled, it is better to use only one thread - // otherwise, the threads are spin-lock waiting for the BLAS calls and are degrading the performance - ggml_v2_cgraph gf = {}; - gf.n_threads = N >= 32 && ggml_v2_cpu_has_blas() && !ggml_v2_cpu_has_gpublas() ? 
1 : n_threads; - - struct ggml_v2_tensor * embd = ggml_v2_new_tensor_1d(ctx0, GGML_V2_TYPE_I32, N); - ggml_v2_set_name(embd, "embd"); - memcpy(embd->data, tokens, N*ggml_v2_element_size(embd)); - - struct ggml_v2_tensor * inpL = ggml_v2_get_rows(ctx0, model.tok_embeddings, embd); - - for (int il = 0; il < n_layer; ++il) { - struct ggml_v2_tensor * inpSA = inpL; - - struct ggml_v2_tensor * cur; - - lctx.use_buf(ctx0, 0); - - // norm - { - cur = ggml_v2_rms_norm(ctx0, inpL); - - // cur = attention_norm*cur - cur = ggml_v2_mul(ctx0, - ggml_v2_repeat(ctx0, model.layers[il].attention_norm, cur), - cur); - } - - // self-attention - { - // compute Q and K and RoPE them - struct ggml_v2_tensor * Qcur = ggml_v2_rope_inplace(ctx0, ggml_v2_reshape_3d(ctx0, ggml_v2_mul_mat(ctx0, model.layers[il].wq, cur), n_embd/n_head, n_head, N), n_past, n_rot, 0); - struct ggml_v2_tensor * Kcur = ggml_v2_rope_inplace(ctx0, ggml_v2_reshape_3d(ctx0, ggml_v2_mul_mat(ctx0, model.layers[il].wk, cur), n_embd/n_head, n_head, N), n_past, n_rot, 0); - ggml_v2_set_name(Qcur, "Qcur"); - ggml_v2_set_name(Kcur, "Kcur"); - - // store key and value to memory - { - // compute the transposed [N, n_embd] V matrix - struct ggml_v2_tensor * Vcur = ggml_v2_transpose(ctx0, ggml_v2_reshape_2d(ctx0, ggml_v2_mul_mat(ctx0, model.layers[il].wv, cur), n_embd, N)); - - struct ggml_v2_tensor * k = ggml_v2_view_1d(ctx0, kv_self.k, N*n_embd, (ggml_v2_element_size(kv_self.k)*n_embd)*(il*n_ctx + n_past)); - struct ggml_v2_tensor * v = ggml_v2_view_2d(ctx0, kv_self.v, N, n_embd, - ( n_ctx)*ggml_v2_element_size(kv_self.v), - (il*n_ctx)*ggml_v2_element_size(kv_self.v)*n_embd + n_past*ggml_v2_element_size(kv_self.v)); - - // important: storing RoPE-ed version of K in the KV cache! - ggml_v2_build_forward_expand(&gf, ggml_v2_cpy(ctx0, Kcur, k)); - ggml_v2_build_forward_expand(&gf, ggml_v2_cpy(ctx0, Vcur, v)); - } - - struct ggml_v2_tensor * Q = - ggml_v2_permute(ctx0, - Qcur, - 0, 2, 1, 3); - ggml_v2_set_name(Q, "Q"); - - struct ggml_v2_tensor * K = - ggml_v2_permute(ctx0, - ggml_v2_reshape_3d(ctx0, - ggml_v2_view_1d(ctx0, kv_self.k, (n_past + N)*n_embd, il*n_ctx*ggml_v2_element_size(kv_self.k)*n_embd), - n_embd/n_head, n_head, n_past + N), - 0, 2, 1, 3); - ggml_v2_set_name(K, "K"); - - // K * Q - struct ggml_v2_tensor * KQ = ggml_v2_mul_mat(ctx0, K, Q); - ggml_v2_set_name(KQ, "KQ"); - - // KQ_scaled = KQ / sqrt(n_embd/n_head) - struct ggml_v2_tensor * KQ_scale = ggml_v2_new_f32(ctx0, 1.0f/sqrtf(float(n_embd)/n_head)); - ggml_v2_set_name(KQ_scale, "1/sqrt(n_embd/n_head)"); - - // KQ_scaled shape [n_past + N, N, n_head, 1] - struct ggml_v2_tensor * KQ_scaled = ggml_v2_scale_inplace(ctx0, KQ, KQ_scale); - ggml_v2_set_name(KQ_scaled, "KQ_scaled"); - - // KQ_masked = mask_past(KQ_scaled) - struct ggml_v2_tensor * KQ_masked = ggml_v2_diag_mask_inf_inplace(ctx0, KQ_scaled, n_past); - ggml_v2_set_name(KQ_masked, "KQ_masked"); - - // KQ = soft_max(KQ_masked) - struct ggml_v2_tensor * KQ_soft_max = ggml_v2_soft_max_inplace(ctx0, KQ_masked); - ggml_v2_set_name(KQ_soft_max, "KQ_soft_max"); - - - // split cached V into n_head heads - struct ggml_v2_tensor * V = - ggml_v2_view_3d(ctx0, kv_self.v, - n_past + N, n_embd/n_head, n_head, - n_ctx*ggml_v2_element_size(kv_self.v), - n_ctx*ggml_v2_element_size(kv_self.v)*n_embd/n_head, - il*n_ctx*ggml_v2_element_size(kv_self.v)*n_embd); - ggml_v2_set_name(V, "V"); - -#if 1 - struct ggml_v2_tensor * KQV = ggml_v2_mul_mat(ctx0, V, KQ_soft_max); - ggml_v2_set_name(KQV, "KQV"); -#else - // make V contiguous in memory to speed up 
the matmul, however we waste time on the copy - // on M1 this is faster for the perplexity computation, but ~5% slower for the single-token generation - // is there a better way? - struct ggml_v2_tensor * V_cont = ggml_v2_cpy(ctx0, V, ggml_v2_new_tensor_3d(ctx0, kv_self.v->type, n_past + N, n_embd/n_head, n_head)); - struct ggml_v2_tensor * KQV = ggml_v2_mul_mat(ctx0, V_cont, KQ_soft_max); -#endif - - // KQV_merged = KQV.permute(0, 2, 1, 3) - struct ggml_v2_tensor * KQV_merged = ggml_v2_permute(ctx0, KQV, 0, 2, 1, 3); - ggml_v2_set_name(KQV_merged, "KQV_merged"); - - // cur = KQV_merged.contiguous().view(n_embd, N) - cur = ggml_v2_cpy(ctx0, - KQV_merged, - ggml_v2_new_tensor_2d(ctx0, GGML_V2_TYPE_F32, n_embd, N)); - ggml_v2_set_name(cur, "KQV_merged_contiguous"); - - // projection (no bias) - cur = ggml_v2_mul_mat(ctx0, - model.layers[il].wo, - cur); - } - - lctx.use_buf(ctx0, 1); - - struct ggml_v2_tensor * inpFF = ggml_v2_add(ctx0, cur, inpSA); - - // feed-forward network - { - // norm - { - cur = ggml_v2_rms_norm(ctx0, inpFF); - - // cur = ffn_norm*cur - cur = ggml_v2_mul(ctx0, - ggml_v2_repeat(ctx0, model.layers[il].ffn_norm, cur), - cur); - } - - struct ggml_v2_tensor * tmp = ggml_v2_mul_mat(ctx0, - model.layers[il].w3, - cur); - - cur = ggml_v2_mul_mat(ctx0, - model.layers[il].w1, - cur); - - // SILU activation - cur = ggml_v2_silu(ctx0, cur); - - cur = ggml_v2_mul(ctx0, cur, tmp); - - cur = ggml_v2_mul_mat(ctx0, - model.layers[il].w2, - cur); - } - - cur = ggml_v2_add(ctx0, cur, inpFF); - - // input for next layer - inpL = cur; - } - - lctx.use_buf(ctx0, 0); - - // used at the end to optionally extract the embeddings - struct ggml_v2_tensor * embeddings = NULL; - - // norm - { - - inpL = ggml_v2_rms_norm(ctx0, inpL); - - // inpL = norm*inpL - inpL = ggml_v2_mul(ctx0, - ggml_v2_repeat(ctx0, model.norm, inpL), - inpL); - - embeddings = inpL; - } - - // lm_head - inpL = ggml_v2_mul_mat(ctx0, model.output, inpL); - - lctx.use_buf(ctx0, -1); - - // logits -> probs - //inpL = ggml_v2_soft_max_inplace(ctx0, inpL); - - // run the computation - ggml_v2_build_forward_expand(&gf, inpL); - ggml_v2_graph_compute (ctx0, &gf); - -#ifdef GGML_V2_PERF - // print timing information per ggml operation (for debugging purposes) - // requires GGML_V2_PERF to be defined - ggml_v2_graph_print(&gf); -#endif - - // plot the computation graph in dot format (for debugging purposes) - //if (n_past%100 == 0) { - // ggml_v2_graph_dump_dot(&gf, NULL, "llama.dot"); - //} - - //embd_w.resize(n_vocab*N); - //memcpy(embd_w.data(), ggml_v2_get_data(inpL), sizeof(float)*n_vocab*N); - - // update kv token count - lctx.model.kv_self.n = n_past + N; - - // extract logits - { - auto & logits_out = lctx.logits; - - if (lctx.logits_all) { - logits_out.resize(n_vocab * N); - memcpy(logits_out.data(), (float *) ggml_v2_get_data(inpL), sizeof(float)*n_vocab*N); - } else { - // return result for just the last token - logits_out.resize(n_vocab); - memcpy(logits_out.data(), (float *) ggml_v2_get_data(inpL) + (n_vocab*(N-1)), sizeof(float)*n_vocab); - } - } - - // extract embeddings - if (!lctx.embedding.empty()) { - auto & embedding_out = lctx.embedding; - - embedding_out.resize(n_embd); - memcpy(embedding_out.data(), (float *) ggml_v2_get_data(embeddings) + (n_embd*(N - 1)), sizeof(float)*n_embd); - } - - if (mem_per_token == 0) { - mem_per_token = ggml_v2_used_mem(ctx0)/N; - } - -#if 0 - printf("\n%s: used_mem = %.3f MB, scratch -- %.3f MB %.3f MB\n", __func__, - ggml_v2_used_mem(ctx0)/1024.0/1024.0, - 
lctx.get_buf_max_mem(0)/1024.0/1024.0, - lctx.get_buf_max_mem(1)/1024.0/1024.0); -#endif - - ggml_v2_free(ctx0); - - // measure the performance only for the single-token evals - if (N == 1) { - lctx.t_eval_us += ggml_v2_time_us() - t_start_us; - lctx.n_eval++; - } - else if (N > 1) { - lctx.t_p_eval_us += ggml_v2_time_us() - t_start_us; - lctx.n_p_eval += N; - } - - return true; -} - -// -// tokenizer -// - -static size_t utf8_len2(char src) { - const size_t lookup[] = { 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 3, 4 }; - uint8_t highbits = static_cast<uint8_t>(src) >> 4; - return lookup[highbits]; -} - -struct llama_v2_sp_symbol { - using index = int; - index prev; - index next; - const char * text; - size_t n; -}; - -static_assert(std::is_trivially_copyable<llama_v2_sp_symbol>::value, "llama_v2_sp_symbol is not trivially copyable"); - -struct llama_v2_sp_bigram { - struct comparator { - bool operator()(llama_v2_sp_bigram & l, llama_v2_sp_bigram & r) { - return (l.score < r.score) || (l.score == r.score && l.left > r.left); - } - }; - using queue_storage = std::vector<llama_v2_sp_bigram>; - using queue = std::priority_queue<llama_v2_sp_bigram, queue_storage, comparator>; - llama_v2_sp_symbol::index left; - llama_v2_sp_symbol::index right; - float score; - size_t size; -}; - -// original implementation: -// https://github.com/ggerganov/llama.cpp/commit/074bea2eb1f1349a0118239c4152914aecaa1be4 -struct llama_v2_tokenizer { - llama_v2_tokenizer(const llama_v2_vocab & vocab): vocab_(vocab) {} - - void tokenize(const std::string & text, std::vector<llama_v2_vocab::id> & output) { - // split string into utf8 chars - int index = 0; - size_t offs = 0; - while (offs < text.size()) { - llama_v2_sp_symbol sym; - size_t char_len = std::min(text.size() - offs, utf8_len2(text[offs])); - sym.text = text.c_str() + offs; - sym.n = char_len; - offs += char_len; - sym.prev = index - 1; - sym.next = offs == text.size() ? -1 : index + 1; - index++; - symbols_.emplace_back(sym); - } - - // seed the work queue with all possible 2-character tokens. - for (size_t i = 1; i < symbols_.size(); ++i) { - try_add_bigram(i - 1, i); - } - - // keep substituting the highest frequency pairs for as long as we can. - while (!work_queue_.empty()) { - auto bigram = work_queue_.top(); - work_queue_.pop(); - - auto & left_sym = symbols_[bigram.left]; - auto & right_sym = symbols_[bigram.right]; - - // if one of the symbols already got merged, skip it. - if (left_sym.n == 0 || right_sym.n == 0 || - left_sym.n + right_sym.n != bigram.size) { - continue; - } - - // merge the right sym into the left one - left_sym.n += right_sym.n; - right_sym.n = 0; - - //printf("left = '%*s' size = %zu\n", (int) left_sym.n, left_sym.text, bigram.size); - - // remove the right sym from the chain - left_sym.next = right_sym.next; - if (right_sym.next >= 0) { - symbols_[right_sym.next].prev = bigram.left; - } - - // find more substitutions - try_add_bigram(left_sym.prev, bigram.left); - try_add_bigram(bigram.left, left_sym.next); - } - - for (int i = 0; i != -1; i = symbols_[i].next) { - auto & symbol = symbols_[i]; - auto token = vocab_.token_to_id.find(std::string(symbol.text, symbol.n)); - - if (token == vocab_.token_to_id.end()) { - // output any symbols that did not form tokens as bytes. 
- for (int j = 0; j < (int) symbol.n; ++j) { - llama_v2_vocab::id token_id = static_cast<uint8_t>(symbol.text[j]) + 3; - output.push_back(token_id); - } - } else { - output.push_back((*token).second); - } - } - } - -private: - void try_add_bigram(int left, int right) { - if (left == -1 || right == -1) { - return; - } - - const std::string text = std::string(symbols_[left].text, symbols_[left].n + symbols_[right].n); - auto token = vocab_.token_to_id.find(text); - - if (token == vocab_.token_to_id.end()) { - return; - } - - if (static_cast<size_t>((*token).second) >= vocab_.id_to_token.size()) { - return; - } - - const auto &tok_score = vocab_.id_to_token[(*token).second]; - - llama_v2_sp_bigram bigram; - bigram.left = left; - bigram.right = right; - bigram.score = tok_score.score; - bigram.size = text.size(); - work_queue_.push(bigram); - } - - const llama_v2_vocab & vocab_; - std::vector<llama_v2_sp_symbol> symbols_; - llama_v2_sp_bigram::queue work_queue_; -}; - -static std::vector<llama_v2_vocab::id> llama_v2_tokenize(const llama_v2_vocab & vocab, const std::string & text, bool bos) { - llama_v2_tokenizer tokenizer(vocab); - std::vector<llama_v2_vocab::id> output; - - if (text.empty()) { - return output; - } - - if (bos) { - output.push_back(llama_v2_token_bos()); - } - - tokenizer.tokenize(text, output); - return output; -} - -// -// sampling -// - -void llama_v2_sample_softmax(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates) { - assert(candidates->size > 0); - - const int64_t t_start_sample_us = ggml_v2_time_us(); - - // Sort the logits in descending order - if (!candidates->sorted) { - std::sort(candidates->data, candidates->data + candidates->size, [](const llama_v2_token_data & a, const llama_v2_token_data & b) { - return a.logit > b.logit; - }); - candidates->sorted = true; - } - - float max_l = candidates->data[0].logit; - float cum_sum = 0.0f; - for (size_t i = 0; i < candidates->size; ++i) { - float p = expf(candidates->data[i].logit - max_l); - candidates->data[i].p = p; - cum_sum += p; - } - for (size_t i = 0; i < candidates->size; ++i) { - candidates->data[i].p /= cum_sum; - } - - if (ctx) { - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - } -} - -void llama_v2_sample_top_k(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, int k, size_t min_keep) { - const int64_t t_start_sample_us = ggml_v2_time_us(); - - k = std::max(k, (int) min_keep); - k = std::min(k, (int) candidates->size); - - // Sort scores in descending order - if (!candidates->sorted) { - auto comp = [](const llama_v2_token_data & a, const llama_v2_token_data & b) { - return a.logit > b.logit; - }; - if (k == (int) candidates->size) { - std::sort(candidates->data, candidates->data + candidates->size, comp); - } else { - std::partial_sort(candidates->data, candidates->data + k, candidates->data + candidates->size, comp); - } - candidates->sorted = true; - } - candidates->size = k; - - if (ctx) { - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - } -} - -void llama_v2_sample_top_p(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, float p, size_t min_keep) { - if (p >= 1.0f) { - return; - } - - const int64_t t_start_sample_us = ggml_v2_time_us(); - - llama_v2_sample_softmax(ctx, candidates); - - // Compute the cumulative probabilities - float cum_sum = 0.0f; - size_t last_idx = candidates->size; - - for (size_t i = 0; i < candidates->size; ++i) { - cum_sum += candidates->data[i].p; - - // Check if the running sum is greater than 
p or if we have kept at least min_keep tokens - if (cum_sum > p && i >= min_keep) { - last_idx = i; - break; - } - } - - // Resize the output vector to keep only the top-p tokens - candidates->size = last_idx; - - if (ctx) { - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - } -} - -void llama_v2_sample_tail_free(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, float z, size_t min_keep) { - if (z >= 1.0f || candidates->size <= 2) { - return; - } - - const int64_t t_start_sample_us = ggml_v2_time_us(); - - llama_v2_sample_softmax(nullptr, candidates); - - // Compute the first and second derivatives - std::vector<float> first_derivatives(candidates->size - 1); - std::vector<float> second_derivatives(candidates->size - 2); - - for (size_t i = 0; i < first_derivatives.size(); ++i) { - first_derivatives[i] = candidates->data[i].p - candidates->data[i + 1].p; - } - for (size_t i = 0; i < second_derivatives.size(); ++i) { - second_derivatives[i] = first_derivatives[i] - first_derivatives[i + 1]; - } - - // Calculate absolute value of second derivatives - for (size_t i = 0; i < second_derivatives.size(); ++i) { - second_derivatives[i] = abs(second_derivatives[i]); - } - - // Normalize the second derivatives - float second_derivatives_sum = std::accumulate(second_derivatives.begin(), second_derivatives.end(), 0.0f); - for (float & value : second_derivatives) { - value /= second_derivatives_sum; - } - - float cum_sum = 0.0f; - size_t last_idx = candidates->size; - for (size_t i = 0; i < second_derivatives.size(); ++i) { - cum_sum += second_derivatives[i]; - - // Check if the running sum is greater than z or if we have kept at least min_keep tokens - if (cum_sum > z && i >= min_keep) { - last_idx = i; - break; - } - } - - // Resize the output vector to keep only the tokens above the tail location - candidates->size = last_idx; - - if (ctx) { - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - } -} - - -void llama_v2_sample_typical(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, float p, size_t min_keep) { - // Reference implementation: - // https://github.com/huggingface/transformers/compare/main...cimeister:typical-sampling:typical-pr - if (p >= 1.0f) { - return; - } - - const int64_t t_start_sample_us = ggml_v2_time_us(); - - // Compute the softmax of logits and calculate entropy - llama_v2_sample_softmax(nullptr, candidates); - - float entropy = 0.0f; - for (size_t i = 0; i < candidates->size; ++i) { - entropy += -candidates->data[i].p * logf(candidates->data[i].p); - } - - // Compute the absolute difference between negative log probability and entropy for each candidate - std::vector<float> shifted_scores; - for (size_t i = 0; i < candidates->size; ++i) { - float shifted_score = fabsf(-logf(candidates->data[i].p) - entropy); - shifted_scores.push_back(shifted_score); - } - - // Sort tokens based on the shifted_scores and their corresponding indices - std::vector<size_t> indices(candidates->size); - std::iota(indices.begin(), indices.end(), 0); - - std::sort(indices.begin(), indices.end(), [&](size_t a, size_t b) { - return shifted_scores[a] < shifted_scores[b]; - }); - - // Compute the cumulative probabilities - float cum_sum = 0.0f; - size_t last_idx = indices.size(); - - for (size_t i = 0; i < indices.size(); ++i) { - size_t idx = indices[i]; - cum_sum += candidates->data[idx].p; - - // Check if the running sum is greater than typical or if we have kept at least min_keep tokens - if (cum_sum > p && i >= min_keep - 1) { - 
last_idx = i + 1; - break; - } - } - - // Resize the output vector to keep only the locally typical tokens - std::vector<llama_v2_token_data> new_candidates; - for (size_t i = 0; i < last_idx; ++i) { - size_t idx = indices[i]; - new_candidates.push_back(candidates->data[idx]); - } - - // Replace the data in candidates with the new_candidates data - std::copy(new_candidates.begin(), new_candidates.end(), candidates->data); - candidates->size = new_candidates.size(); - - if (ctx) { - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - } -} - -void llama_v2_sample_temperature(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates_p, float temp) { - const int64_t t_start_sample_us = ggml_v2_time_us(); - - for (size_t i = 0; i < candidates_p->size; ++i) { - candidates_p->data[i].logit /= temp; - } - - if (ctx) { - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - } -} - -void llama_v2_sample_repetition_penalty(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, const llama_v2_token * last_tokens, size_t last_tokens_size, float penalty) { - if (last_tokens_size == 0 || penalty == 1.0f) { - return; - } - - const int64_t t_start_sample_us = ggml_v2_time_us(); - - for (size_t i = 0; i < candidates->size; ++i) { - const auto * token_iter = std::find(last_tokens, last_tokens + last_tokens_size, candidates->data[i].id); - if (token_iter == last_tokens + last_tokens_size) { - continue; - } - - // The academic publication that described this technique actually just only divided, but that would cause tokens with negative logits to become more likely, which is obviously wrong. - // This is common fix for this problem, which is to multiply by the penalty instead of dividing. - if (candidates->data[i].logit <= 0) { - candidates->data[i].logit *= penalty; - } else { - candidates->data[i].logit /= penalty; - } - } - - candidates->sorted = false; - - if (ctx) { - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - } -} - -void llama_v2_sample_frequency_and_presence_penalties(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, const llama_v2_token * last_tokens_p, size_t last_tokens_size, float alpha_frequency, float alpha_presence) { - if (last_tokens_size == 0 || (alpha_frequency == 0.0f && alpha_presence == 0.0f)) { - return; - } - - const int64_t t_start_sample_us = ggml_v2_time_us(); - - // Create a frequency map to count occurrences of each token in last_tokens - std::unordered_map<llama_v2_token, int> token_count; - for (size_t i = 0; i < last_tokens_size; ++i) { - token_count[last_tokens_p[i]]++; - } - - // Apply frequency and presence penalties to the candidates - for (size_t i = 0; i < candidates->size; ++i) { - auto token_iter = token_count.find(candidates->data[i].id); - if (token_iter == token_count.end()) { - continue; - } - - int count = token_iter->second; - candidates->data[i].logit -= float(count) * alpha_frequency + float(count > 0) * alpha_presence; - } - - candidates->sorted = false; - - if (ctx) { - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - } -} - - -llama_v2_token llama_v2_sample_token_mirostat(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, float tau, float eta, int m, float * mu) { - assert(ctx); - auto N = float(llama_v2_n_vocab(ctx)); - int64_t t_start_sample_us; - t_start_sample_us = ggml_v2_time_us(); - - llama_v2_sample_softmax(nullptr, candidates); - - // Estimate s_hat using the most probable m tokens - float s_hat = 0.0; - float sum_ti_bi = 0.0; - float 
sum_ti_sq = 0.0; - for (size_t i = 0; i < size_t(m - 1) && i < candidates->size - 1; ++i) { - float t_i = logf(float(i + 2) / float(i + 1)); - float b_i = logf(candidates->data[i].p / candidates->data[i + 1].p); - sum_ti_bi += t_i * b_i; - sum_ti_sq += t_i * t_i; - } - s_hat = sum_ti_bi / sum_ti_sq; - - // Compute k from the estimated s_hat and target surprise value - float epsilon_hat = s_hat - 1; - float k = powf((epsilon_hat * powf(2, *mu)) / (1 - powf(N, -epsilon_hat)), 1 / s_hat); - - // Sample the next word X using top-k sampling - llama_v2_sample_top_k(nullptr, candidates, int(k), 1); - if (ctx) { - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - } - llama_v2_token X = llama_v2_sample_token(ctx, candidates); - t_start_sample_us = ggml_v2_time_us(); - - // Compute error as the difference between observed surprise and target surprise value - size_t X_idx = std::distance(candidates->data, std::find_if(candidates->data, candidates->data + candidates->size, [&](const llama_v2_token_data & candidate) { - return candidate.id == X; - })); - float observed_surprise = -log2f(candidates->data[X_idx].p); - float e = observed_surprise - tau; - - // Update mu using the learning rate and error - *mu = *mu - eta * e; - - if (ctx) { - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - ctx->n_sample++; - } - return X; -} - -llama_v2_token llama_v2_sample_token_mirostat_v2(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates, float tau, float eta, float * mu) { - assert(ctx); - int64_t t_start_sample_us; - t_start_sample_us = ggml_v2_time_us(); - - llama_v2_sample_softmax(ctx, candidates); - - // Truncate the words with surprise values greater than mu - candidates->size = std::distance(candidates->data, std::find_if(candidates->data, candidates->data + candidates->size, [&](const llama_v2_token_data & candidate) { - return -log2f(candidate.p) > *mu; - })); - - // Normalize the probabilities of the remaining words - llama_v2_sample_softmax(ctx, candidates); - - // Sample the next word X from the remaining words - if (ctx) { - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - } - llama_v2_token X = llama_v2_sample_token(ctx, candidates); - t_start_sample_us = ggml_v2_time_us(); - - // Compute error as the difference between observed surprise and target surprise value - size_t X_idx = std::distance(candidates->data, std::find_if(candidates->data, candidates->data + candidates->size, [&](const llama_v2_token_data & candidate) { - return candidate.id == X; - })); - float observed_surprise = -log2f(candidates->data[X_idx].p); - float e = observed_surprise - tau; - - // Update mu using the learning rate and error - *mu = *mu - eta * e; - - if (ctx) { - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - } - return X; -} - -llama_v2_token llama_v2_sample_token_greedy(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates) { - const int64_t t_start_sample_us = ggml_v2_time_us(); - - // Find max element - auto * max_iter = std::max_element(candidates->data, candidates->data + candidates->size, [](const llama_v2_token_data & a, const llama_v2_token_data & b) { - return a.logit < b.logit; - }); - - llama_v2_token result = max_iter->id; - if (ctx) { - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - ctx->n_sample++; - } - return result; -} - -llama_v2_token llama_v2_sample_token(struct llama_v2_context * ctx, llama_v2_token_data_array * candidates) { - assert(ctx); - const int64_t t_start_sample_us = ggml_v2_time_us(); - 
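-    // normalize the logits into probabilities, then draw one token id from the resulting distribution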
llama_v2_sample_softmax(nullptr, candidates); - - std::vector<float> probs; - probs.reserve(candidates->size); - for (size_t i = 0; i < candidates->size; ++i) { - probs.push_back(candidates->data[i].p); - } - - std::discrete_distribution<> dist(probs.begin(), probs.end()); - auto & rng = ctx->rng; - int idx = dist(rng); - - llama_v2_token result = candidates->data[idx].id; - - ctx->t_sample_us += ggml_v2_time_us() - t_start_sample_us; - ctx->n_sample++; - return result; -} - -// -// quantization -// - -static void llama_v2_model_quantize_internal(const std::string & fname_inp, const std::string & fname_out, enum llama_v2_ftype ftype, int nthread) { - ggml_v2_type quantized_type; - switch (ftype) { - case LLAMA_V2_FTYPE_MOSTLY_Q4_0: quantized_type = GGML_V2_TYPE_Q4_0; break; - case LLAMA_V2_FTYPE_MOSTLY_Q4_1: quantized_type = GGML_V2_TYPE_Q4_1; break; - case LLAMA_V2_FTYPE_MOSTLY_Q4_2: quantized_type = GGML_V2_TYPE_Q4_2; break; - case LLAMA_V2_FTYPE_MOSTLY_Q4_3: quantized_type = GGML_V2_TYPE_Q4_3; break; - case LLAMA_V2_FTYPE_MOSTLY_Q5_0: quantized_type = GGML_V2_TYPE_Q5_0; break; - case LLAMA_V2_FTYPE_MOSTLY_Q5_1: quantized_type = GGML_V2_TYPE_Q5_1; break; - case LLAMA_V2_FTYPE_MOSTLY_Q8_0: quantized_type = GGML_V2_TYPE_Q8_0; break; - default: throw format_old("invalid output file type %d\n", ftype); - }; - - if (nthread <= 0) { - nthread = std::thread::hardware_concurrency(); - } - - std::unique_ptr<llama_v2_model_loader> model_loader(new llama_v2_model_loader(fname_inp, /*use_mmap*/ false, - /*vocab_only*/ false)); - llama_v2_file_saver file_saver(fname_out.c_str(), model_loader->file_loaders.at(0).get(), ftype); - - size_t total_size_org = 0; - size_t total_size_new = 0; - std::vector<int64_t> hist_all(1 << 4, 0); - - std::vector<std::thread> workers; - std::mutex mutex; - - size_t idx = 0; - for (llama_v2_load_tensor & tensor : model_loader->tensors_map.tensors) { - llama_v2_buffer read_data; - read_data.resize(tensor.size); - tensor.data = read_data.addr; - model_loader->load_data_for(tensor); - - printf("[%4zu/%4zu] %36s - %16s, type = %6s, ", - ++idx, model_loader->tensors_map.tensors.size(), - tensor.name.c_str(), llama_v2_format_tensor_shape(tensor.ne).c_str(), - ggml_v2_type_name(tensor.type)); - - // This used to be a regex, but <regex> has an extreme cost to compile times. - bool quantize = tensor.name.rfind("weight") == tensor.name.size() - 6; // ends with 'weight'? - - // quantize only 2D tensors - quantize &= (tensor.ne.size() == 2); - - // uncomment this to keep the output layer in FP16 - //if (tensor.name == "output.weight") { - // quantize = false; - //} - - enum ggml_v2_type new_type; - void * new_data; - size_t new_size; - llama_v2_buffer work; - - if (!quantize) { - new_type = tensor.type; - new_data = tensor.data; - new_size = tensor.size; - printf("size = %8.3f MB\n", tensor.size/1024.0/1024.0); - } else { - new_type = quantized_type; - float * f32_data; - size_t nelements = tensor.ne.at(0) * tensor.ne.at(1); - llama_v2_buffer f32_conv_buf; - if (tensor.type == GGML_V2_TYPE_F32) { - f32_data = (float *) tensor.data; - } else if (tensor.type == GGML_V2_TYPE_F16) { - f32_conv_buf.resize(nelements * sizeof(float)); - f32_data = (float *) f32_conv_buf.addr; - const auto * f16_data = (const ggml_v2_fp16_t *) tensor.data; - for (size_t i = 0; i < nelements; i++) { - f32_data[i] = ggml_v2_fp16_to_fp32(f16_data[i]); - } - } else { - throw format_old("type %s unsupported for integer quantization", ggml_v2_type_name(tensor.type)); - } - - printf("quantizing .. 
"); - fflush(stdout); - - work.resize(nelements * 4); // upper bound on size - new_data = work.addr; - std::vector<int64_t> hist_cur(1 << 4, 0); - - int chunk_size = 32 * 512; - const int nchunk = (nelements + chunk_size - 1)/chunk_size; - const int nthread_use = nthread > 1 ? std::max(1, std::min(nthread, nchunk)) : 1; - if (nthread_use < 2) { - new_size = ggml_v2_quantize_chunk(new_type, f32_data, new_data, 0, nelements, hist_cur.data()); - } else { - size_t counter = 0; - new_size = 0; - auto compute = [&mutex, &counter, &hist_cur, &new_size, new_type, f32_data, new_data, nelements, chunk_size] () { - std::vector<int64_t> local_hist; - size_t local_size = 0; - while (true) { - std::unique_lock<std::mutex> lock(mutex); - size_t first = counter; counter += chunk_size; - if (first >= nelements) { - if (!local_hist.empty()) { - for (int j=0; j<int(local_hist.size()); ++j) { - hist_cur[j] += local_hist[j]; - } - new_size += local_size; - } - break; - } - lock.unlock(); - size_t last = std::min(nelements, first + chunk_size); - if (local_hist.empty()) { - local_hist.resize(hist_cur.size(), 0); - } - local_size += ggml_v2_quantize_chunk(new_type, f32_data, new_data, first, last - first, local_hist.data()); - } - }; - if ((int) workers.size() < nthread_use - 1) { - workers.resize(nthread_use - 1); - } - for (int it = 0; it < nthread_use - 1; ++it) { - workers[it] = std::thread(compute); - } - compute(); - for (int it = 0; it < nthread_use - 1; ++it) { - workers[it].join(); - } - } - - printf("size = %8.2f MB -> %8.2f MB | hist: ", tensor.size/1024.0/1024.0, new_size/1024.0/1024.0); - for (size_t i = 0; i < hist_cur.size(); i++) { - hist_all[i] += hist_cur[i]; - } - - for (size_t i = 0; i < hist_cur.size(); i++) { - printf("%5.3f ", hist_cur[i] / float(nelements)); - } - printf("\n"); - } - total_size_org += tensor.size; - total_size_new += new_size; - file_saver.write_tensor(tensor, new_type, new_data, new_size); - } - - printf("%s: model size = %8.2f MB\n", __func__, total_size_org/1024.0/1024.0); - printf("%s: quant size = %8.2f MB\n", __func__, total_size_new/1024.0/1024.0); - - { - int64_t sum_all = 0; - for (size_t i = 0; i < hist_all.size(); i++) { - sum_all += hist_all[i]; - } - - printf("%s: hist: ", __func__); - for (size_t i = 0; i < hist_all.size(); i++) { - printf("%5.3f ", hist_all[i] / float(sum_all)); - } - printf("\n"); - } -} - -// -// interface implementation -// - -struct llama_v2_context * llama_v2_init_from_file( - const char * path_model, - struct llama_v2_context_params params) { - ggml_v2_time_init(); - - llama_v2_context * ctx = new llama_v2_context; - - if (params.seed < 0 || params.seed==0xFFFFFFFF) { - params.seed = time(NULL); - } - - unsigned cur_percentage = 0; - if (params.progress_callback == NULL) { - params.progress_callback_user_data = &cur_percentage; - params.progress_callback = [](float progress, void * ctx) { - unsigned * cur_percentage_p = (unsigned *) ctx; - unsigned percentage = (unsigned) (100 * progress); - while (percentage > *cur_percentage_p) { - ++*cur_percentage_p; - fprintf(stderr, "."); - fflush(stderr); - if (percentage >= 100) { - fprintf(stderr, "\n"); - } - } - }; - } - - ctx->rng = std::mt19937(params.seed); - ctx->logits_all = params.logits_all; - - ggml_v2_type memory_type = params.f16_kv ? 
GGML_V2_TYPE_F16 : GGML_V2_TYPE_F32; - - if (!llama_v2_model_load(path_model, *ctx, params.n_ctx, params.n_gpu_layers, memory_type, - params.use_mmap, params.use_mlock, params.vocab_only, - params.progress_callback, params.progress_callback_user_data)) { - fprintf(stderr, "%s: failed to load model\n", __func__); - llama_v2_free(ctx); - return nullptr; - } - - // reserve memory for context buffers - if (!params.vocab_only) { - if (!kv_cache_init(ctx->model.hparams, ctx->model.kv_self, memory_type, ctx->model.hparams.n_ctx)) { - fprintf(stderr, "%s: kv_cache_init() failed for self-attention cache\n", __func__); - llama_v2_free(ctx); - return nullptr; - } - - { - const size_t memory_size = ggml_v2_nbytes(ctx->model.kv_self.k) + ggml_v2_nbytes(ctx->model.kv_self.v); - fprintf(stderr, "%s: kv self size = %7.2f MB\n", __func__, memory_size / 1024.0 / 1024.0); - } - - const auto & hparams = ctx->model.hparams; - - // resized during inference - if (params.logits_all) { - ctx->logits.reserve(hparams.n_ctx*hparams.n_vocab); - } else { - ctx->logits.reserve(hparams.n_vocab); - } - - if (params.embedding){ - ctx->embedding.resize(hparams.n_embd); - } - - ctx->buf_compute.resize(MEM_REQ_EVAL_2().at(ctx->model.type)); - - ctx->buf_scratch[0].resize(MEM_REQ_SCRATCH0_2().at(ctx->model.type)); - ctx->buf_scratch[1].resize(MEM_REQ_SCRATCH1_2().at(ctx->model.type)); - } - - return ctx; -} - -void llama_v2_free(struct llama_v2_context * ctx) { - delete ctx; -} - -int llama_v2_model_quantize( - const char * fname_inp, - const char * fname_out, - enum llama_v2_ftype ftype, - int nthread) { - try { - llama_v2_model_quantize_internal(fname_inp, fname_out, ftype, nthread); - return 0; - } catch (const std::string & err) { - fprintf(stderr, "%s: failed to quantize: %s\n", __func__, err.c_str()); - return 1; - } -} - -int llama_v2_apply_lora_from_file_internal(struct llama_v2_context * ctx, const char * path_lora, const char * path_base_model, int n_threads) { - fprintf(stderr, "%s: applying lora adapter from '%s' - please wait ...\n", __func__, path_lora); - - auto & model = ctx->model; - - const int64_t t_start_lora_us = ggml_v2_time_us(); - - auto fin = std::ifstream(path_lora, std::ios::binary); - if (!fin) { - fprintf(stderr, "%s: failed to open '%s'\n", __func__, path_lora); - return 1; - } - - // verify magic and version - { - uint32_t magic; - fin.read((char *) &magic, sizeof(magic)); - if (magic != 'ggla') { - fprintf(stderr, "%s: bad file magic\n", __func__); - return 1; - } - uint32_t format_version; - fin.read((char *) &format_version, sizeof(format_version)); - - if (format_version != 1) { - fprintf(stderr, "%s: unsupported file version\n", __func__ ); - return 1; - } - } - - int32_t lora_r; - int32_t lora_alpha; - fin.read((char *) &lora_r, sizeof(lora_r)); - fin.read((char *) &lora_alpha, sizeof(lora_alpha)); - float scaling = (float)lora_alpha / (float)lora_r; - - fprintf(stderr, "%s: r = %d, alpha = %d, scaling = %.2f\n", __func__, lora_r, lora_alpha, scaling); - - - // create a temporary ggml context to store the lora tensors - // todo: calculate size from biggest possible tensor - std::vector<uint8_t> lora_buf(1024ull * 1024ull * 1024ull); - struct ggml_v2_init_params params; - params.mem_size = lora_buf.size(); - params.mem_buffer = lora_buf.data(); - params.no_alloc = false; - - ggml_v2_context * lora_ctx = ggml_v2_init(params); - std::unordered_map<std::string, struct ggml_v2_tensor *> lora_tensors; - - // create a name -> tensor map of the model to accelerate lookups - 
std::unordered_map<std::string, struct ggml_v2_tensor*> model_tensors; - for (auto & kv: model.tensors_by_name) { - model_tensors.insert(kv); - } - - - // load base model - std::unique_ptr<llama_v2_model_loader> model_loader; - ggml_v2_context * base_ctx = NULL; - llama_v2_buffer base_buf; - if (path_base_model) { - fprintf(stderr, "%s: loading base model from '%s'\n", __func__, path_base_model); - model_loader.reset(new llama_v2_model_loader(path_base_model, /*use_mmap*/ true, /*vocab_only*/ false)); - - size_t ctx_size; - size_t mmapped_size; - model_loader->calc_sizes(&ctx_size, &mmapped_size); - base_buf.resize(ctx_size); - - ggml_v2_init_params base_params; - base_params.mem_size = base_buf.size; - base_params.mem_buffer = base_buf.addr; - base_params.no_alloc = model_loader->use_mmap; - - base_ctx = ggml_v2_init(base_params); - - model_loader->ggml_v2_ctx = base_ctx; - - // maybe this should in llama_v2_model_loader - if (model_loader->use_mmap) { - model_loader->mapping.reset(new llama_v2_mmap(&model_loader->file_loaders.at(0)->file, /* prefetch */ false)); - } - } - - // read tensors and apply - bool warned = false; - int n_tensors = 0; - while (true) { - int32_t n_dims; - int32_t length; - int32_t ftype; - - fin.read(reinterpret_cast<char *>(&n_dims), sizeof(n_dims)); - fin.read(reinterpret_cast<char *>(&length), sizeof(length)); - fin.read(reinterpret_cast<char *>(&ftype), sizeof(ftype)); - if (fin.eof()) { - break; - } - - int32_t ne[2] = { 1, 1 }; - for (int i = 0; i < n_dims; ++i) { - fin.read(reinterpret_cast<char *>(&ne[i]), sizeof(ne[i])); - } - - std::string name; - { - char buf[1024]; - fin.read(buf, length); - name = std::string(buf, length); - } - - // check for lora suffix and get the type of tensor - const std::string lora_suffix = ".lora"; - size_t pos = name.rfind(lora_suffix); - if (pos == std::string::npos) { - fprintf(stderr, "%s: error: '%s' is not a lora tensor\n", __func__, name.c_str()); - return 1; - } - - std::string lora_type = name.substr(pos + lora_suffix.length()); - std::string base_name = name; - base_name.erase(pos); - // fprintf(stderr, "%s: %s => %s (lora type %s) ", __func__, name.c_str(),base_name.c_str(), lora_type.c_str()); - - if (model_tensors.find(base_name) == model_tensors.end()) { - fprintf(stderr, "%s: unknown tensor '%s' in lora adapter\n", __func__, name.data()); - return 1; - } - - // create ggml tensor - ggml_v2_type wtype; - switch (ftype) { - case 0: wtype = GGML_V2_TYPE_F32; break; - case 1: wtype = GGML_V2_TYPE_F16; break; - default: - { - fprintf(stderr, "%s: invalid tensor data type '%d'\n", - __func__, ftype); - return false; - } - } - ggml_v2_tensor* lora_tensor; - if (n_dims == 2) { - lora_tensor = ggml_v2_new_tensor_2d(lora_ctx, wtype, ne[0], ne[1]); - } - else { - fprintf(stderr, "%s: unsupported tensor dimension %d\n", __func__, n_dims); - return 1; - } - - // load tensor data - size_t offset = fin.tellg(); - size_t tensor_data_size = ggml_v2_nbytes(lora_tensor); - offset = (offset + 31) & -32; - fin.seekg(offset); - fin.read((char*)lora_tensor->data, tensor_data_size); - - lora_tensors[name] = lora_tensor; - - // check if we have both A and B tensors and apply - if (lora_tensors.find(base_name + ".loraA") != lora_tensors.end() && - lora_tensors.find(base_name + ".loraB") != lora_tensors.end()) { - - ggml_v2_tensor * dest_t = model_tensors[base_name]; - ggml_v2_tensor * base_t; - if (model_loader) { - // load from base model - if (model_loader->tensors_map.name_to_idx.find(base_name) == 
model_loader->tensors_map.name_to_idx.end()) { - fprintf(stderr, "%s: error: tensor '%s' not found in base model\n", __func__, base_name.c_str()); - return 1; - } - size_t idx = model_loader->tensors_map.name_to_idx[base_name]; - llama_v2_load_tensor & lt = model_loader->tensors_map.tensors[idx]; - base_t = model_loader->get_tensor(base_name, { (uint32_t)dest_t->ne[0], (uint32_t)dest_t->ne[1] }); - lt.data = (uint8_t *) lt.ggml_v2_tensor->data; - model_loader->load_data_for(lt); - lt.ggml_v2_tensor->data = lt.data; - } - else { - base_t = dest_t; - } - - if (ggml_v2_is_quantized(base_t->type)) { - if (!warned) { - fprintf(stderr, "%s: warning: using a lora adapter with a quantized model may result in poor quality, " - "use a f16 or f32 base model with --lora-base\n", __func__); - warned = true; - } - } - - ggml_v2_tensor * loraA = lora_tensors[base_name + ".loraA"]; - ggml_v2_tensor * loraB = lora_tensors[base_name + ".loraB"]; - - if (base_t->ne[0] != loraA->ne[1] || base_t->ne[1] != loraB->ne[1]) { - fprintf(stderr, "%s: incompatible tensor dimensions (%" PRId64 " and %" PRId64 ");" - " are you sure that this adapter is for this model?\n", __func__, base_t->ne[0], loraA->ne[1]); - return 1; - } - - // w = w + BA*s - ggml_v2_tensor * BA = ggml_v2_mul_mat(lora_ctx, loraA, loraB); - - if (scaling != 1.0f) { - ggml_v2_tensor * scale_tensor = ggml_v2_new_f32(lora_ctx, scaling); - BA = ggml_v2_scale_inplace(lora_ctx, BA, scale_tensor); - } - - ggml_v2_tensor * r; - if (base_t == dest_t) { - r = ggml_v2_add_inplace(lora_ctx, dest_t, BA); - } - else { - r = ggml_v2_add(lora_ctx, base_t, BA); - r = ggml_v2_cpy(lora_ctx, r, dest_t); - } - - struct ggml_v2_cgraph gf = ggml_v2_build_forward(r); - gf.n_threads = n_threads; - ggml_v2_graph_compute(lora_ctx, &gf); - - // we won't need these tensors again, reset the context to save memory - ggml_v2_free(lora_ctx); - lora_ctx = ggml_v2_init(params); - lora_tensors.clear(); - - n_tensors++; - if (n_tensors % 4 == 0) { - fprintf(stderr, "."); - } - } - } - - // TODO: this should be in a destructor, it will leak on failure - ggml_v2_free(lora_ctx); - if (base_ctx) { - ggml_v2_free(base_ctx); - } - - const int64_t t_lora_us = ggml_v2_time_us() - t_start_lora_us; - fprintf(stderr, " done (%.2f ms)\n", t_lora_us / 1000.0); - - return 0; -} - -int llama_v2_apply_lora_from_file(struct llama_v2_context * ctx, const char * path_lora, const char * path_base_model, int n_threads) { - try { - return llama_v2_apply_lora_from_file_internal(ctx, path_lora, path_base_model, n_threads); - } catch (const std::string & err) { - fprintf(stderr, "%s: failed to apply lora adapter: %s\n", __func__, err.c_str()); - return 1; - } -} - -int llama_v2_get_kv_cache_token_count(const struct llama_v2_context * ctx) { - return ctx->model.kv_self.n; -} - -#define LLAMA_V2_MAX_RNG_STATE (64*1024) - -void llama_v2_set_rng_seed(struct llama_v2_context * ctx, int seed) { - if (seed < 0 || seed==0xFFFFFFFF) { - seed = time(NULL); - } - ctx->rng.seed(seed); -} - -// Returns the *maximum* size of the state -size_t llama_v2_get_state_size(const struct llama_v2_context * ctx) { - // we don't know size of rng until we actually serialize it. so reserve more than enough memory for its serialized state. - // for reference, std::mt19937(1337) serializes to 6701 bytes. 
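-    // state blob layout: rng (size + data), logits (capacity + size + data), embedding (size + data), kv cache (size + token count + data)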
- const size_t s_rng_size = sizeof(size_t); - const size_t s_rng = LLAMA_V2_MAX_RNG_STATE; - const size_t s_logits_capacity = sizeof(size_t); - const size_t s_logits_size = sizeof(size_t); - const size_t s_logits = ctx->logits.capacity() * sizeof(float); - const size_t s_embedding_size = sizeof(size_t); - const size_t s_embedding = ctx->embedding.size() * sizeof(float); - const size_t s_kv_size = sizeof(size_t); - const size_t s_kv_ntok = sizeof(int); - const size_t s_kv = ctx->model.kv_self.buf.size; - - const size_t s_total = ( - + s_rng_size - + s_rng - + s_logits_capacity - + s_logits_size - + s_logits - + s_embedding_size - + s_embedding - + s_kv_size - + s_kv_ntok - + s_kv - ); - - return s_total; -} - -// Copies the state to the specified destination address -size_t llama_v2_copy_state_data(struct llama_v2_context * ctx, uint8_t * dst) { - uint8_t * out = dst; - - // copy rng - { - std::stringstream rng_ss; - rng_ss << ctx->rng; - - const size_t rng_size = rng_ss.str().size(); - char rng_buf[LLAMA_V2_MAX_RNG_STATE]; - - memset(&rng_buf[0], 0, LLAMA_V2_MAX_RNG_STATE); - memcpy(&rng_buf[0], rng_ss.str().data(), rng_ss.str().size()); - - memcpy(out, &rng_size, sizeof(rng_size)); out += sizeof(rng_size); - memcpy(out, &rng_buf[0], LLAMA_V2_MAX_RNG_STATE); out += LLAMA_V2_MAX_RNG_STATE; - } - - // copy logits - { - const size_t logits_cap = ctx->logits.capacity(); - const size_t logits_size = ctx->logits.size(); - - memcpy(out, &logits_cap, sizeof(logits_cap)); out += sizeof(logits_cap); - memcpy(out, &logits_size, sizeof(logits_size)); out += sizeof(logits_size); - - if (logits_size) { - memcpy(out, ctx->logits.data(), logits_size * sizeof(float)); - } - - out += logits_cap * sizeof(float); - } - - // copy embeddings - { - const size_t embedding_size = ctx->embedding.size(); - - memcpy(out, &embedding_size, sizeof(embedding_size)); out += sizeof(embedding_size); - - if (embedding_size) { - memcpy(out, ctx->embedding.data(), embedding_size * sizeof(float)); - out += embedding_size * sizeof(float); - } - } - - // copy kv cache - { - const auto & kv_self = ctx->model.kv_self; - const auto & hparams = ctx->model.hparams; - const int n_layer = hparams.n_layer; - const int n_embd = hparams.n_embd; - const int n_ctx = hparams.n_ctx; - - const size_t kv_size = kv_self.buf.size; - const int kv_ntok = llama_v2_get_kv_cache_token_count(ctx); - - memcpy(out, &kv_size, sizeof(kv_size)); out += sizeof(kv_size); - memcpy(out, &kv_ntok, sizeof(kv_ntok)); out += sizeof(kv_ntok); - - if (kv_size) { - const size_t elt_size = ggml_v2_element_size(kv_self.k); - - char buffer[4096]; - - ggml_v2_context * cpy_ctx = ggml_v2_init({ sizeof(buffer), buffer, /* no_alloc */ true }); - ggml_v2_cgraph gf{}; - gf.n_threads = 1; - - ggml_v2_tensor * kout3d = ggml_v2_new_tensor_3d(cpy_ctx, kv_self.k->type, n_embd, kv_ntok, n_layer); - kout3d->data = out; - out += ggml_v2_nbytes(kout3d); - - ggml_v2_tensor * vout3d = ggml_v2_new_tensor_3d(cpy_ctx, kv_self.v->type, kv_ntok, n_embd, n_layer); - vout3d->data = out; - out += ggml_v2_nbytes(vout3d); - - ggml_v2_tensor * k3d = ggml_v2_view_3d(cpy_ctx, kv_self.k, - n_embd, kv_ntok, n_layer, - elt_size*n_embd, elt_size*n_embd*n_ctx, 0); - - ggml_v2_tensor * v3d = ggml_v2_view_3d(cpy_ctx, kv_self.v, - kv_ntok, n_embd, n_layer, - elt_size*n_ctx, elt_size*n_ctx*n_embd, 0); - - ggml_v2_build_forward_expand(&gf, ggml_v2_cpy(cpy_ctx, k3d, kout3d)); - ggml_v2_build_forward_expand(&gf, ggml_v2_cpy(cpy_ctx, v3d, vout3d)); - ggml_v2_graph_compute(cpy_ctx, &gf); - - ggml_v2_free(cpy_ctx); 
- } - } - - const size_t written = out - dst; - const size_t max_size = llama_v2_get_state_size(ctx); - - LLAMA_V2_ASSERT(written <= max_size); - - return written; -} - -// Sets the state reading from the specified source address -size_t llama_v2_set_state_data(struct llama_v2_context * ctx, const uint8_t * src) { - const uint8_t * inp = src; - - // set rng - { - size_t rng_size; - char rng_buf[LLAMA_V2_MAX_RNG_STATE]; - - memcpy(&rng_size, inp, sizeof(rng_size)); inp += sizeof(rng_size); - memcpy(&rng_buf[0], inp, LLAMA_V2_MAX_RNG_STATE); inp += LLAMA_V2_MAX_RNG_STATE; - - std::stringstream rng_ss; - rng_ss.str(std::string(&rng_buf[0], rng_size)); - rng_ss >> ctx->rng; - - LLAMA_V2_ASSERT(rng_ss.fail() == false); - } - - // set logits - { - size_t logits_cap; - size_t logits_size; - - memcpy(&logits_cap, inp, sizeof(logits_cap)); inp += sizeof(logits_cap); - memcpy(&logits_size, inp, sizeof(logits_size)); inp += sizeof(logits_size); - - LLAMA_V2_ASSERT(ctx->logits.capacity() == logits_cap); - - if (logits_size) { - ctx->logits.resize(logits_size); - memcpy(ctx->logits.data(), inp, logits_size * sizeof(float)); - } - - inp += logits_cap * sizeof(float); - } - - // set embeddings - { - size_t embedding_size; - - memcpy(&embedding_size, inp, sizeof(embedding_size)); inp += sizeof(embedding_size); - - LLAMA_V2_ASSERT(ctx->embedding.capacity() == embedding_size); - - if (embedding_size) { - memcpy(ctx->embedding.data(), inp, embedding_size * sizeof(float)); - inp += embedding_size * sizeof(float); - } - } - - // set kv cache - { - const auto & kv_self = ctx->model.kv_self; - const auto & hparams = ctx->model.hparams; - const int n_layer = hparams.n_layer; - const int n_embd = hparams.n_embd; - const int n_ctx = hparams.n_ctx; - - size_t kv_size; - int kv_ntok; - - memcpy(&kv_size, inp, sizeof(kv_size)); inp += sizeof(kv_size); - memcpy(&kv_ntok, inp, sizeof(kv_ntok)); inp += sizeof(kv_ntok); - - if (kv_size) { - LLAMA_V2_ASSERT(kv_self.buf.size == kv_size); - - const size_t elt_size = ggml_v2_element_size(kv_self.k); - - char buffer[4096]; - - ggml_v2_context * cpy_ctx = ggml_v2_init({ sizeof(buffer), buffer, /* no_alloc */ true }); - ggml_v2_cgraph gf{}; - gf.n_threads = 1; - - ggml_v2_tensor * kin3d = ggml_v2_new_tensor_3d(cpy_ctx, kv_self.k->type, n_embd, kv_ntok, n_layer); - kin3d->data = (void *) inp; - inp += ggml_v2_nbytes(kin3d); - - ggml_v2_tensor * vin3d = ggml_v2_new_tensor_3d(cpy_ctx, kv_self.v->type, kv_ntok, n_embd, n_layer); - vin3d->data = (void *) inp; - inp += ggml_v2_nbytes(vin3d); - - ggml_v2_tensor * k3d = ggml_v2_view_3d(cpy_ctx, kv_self.k, - n_embd, kv_ntok, n_layer, - elt_size*n_embd, elt_size*n_embd*n_ctx, 0); - - ggml_v2_tensor * v3d = ggml_v2_view_3d(cpy_ctx, kv_self.v, - kv_ntok, n_embd, n_layer, - elt_size*n_ctx, elt_size*n_ctx*n_embd, 0); - - ggml_v2_build_forward_expand(&gf, ggml_v2_cpy(cpy_ctx, kin3d, k3d)); - ggml_v2_build_forward_expand(&gf, ggml_v2_cpy(cpy_ctx, vin3d, v3d)); - ggml_v2_graph_compute(cpy_ctx, &gf); - - ggml_v2_free(cpy_ctx); - } - - ctx->model.kv_self.n = kv_ntok; - } - - const size_t nread = inp - src; - const size_t max_size = llama_v2_get_state_size(ctx); - - LLAMA_V2_ASSERT(nread <= max_size); - - return nread; -} - -bool llama_v2_load_session_file(struct llama_v2_context * ctx, const char * path_session, llama_v2_token * tokens_out, size_t n_token_capacity, size_t * n_token_count_out) { - llama_v2_file file(path_session, "rb"); - - // sanity checks - { - const uint32_t magic = file.read_u32(); - const uint32_t version = file.read_u32(); - 
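-        // session file layout: magic, version, model hparams, prompt token count and tokens, then the raw context state blob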
- if (magic != LLAMA_V2_SESSION_MAGIC || version != LLAMA_V2_SESSION_VERSION) { - fprintf(stderr, "%s : unknown (magic, version) for session file: %08x, %08x\n", __func__, magic, version); - return false; - } - - llama_v2_hparams session_hparams; - file.read_raw(&session_hparams, sizeof(llama_v2_hparams)); - - if (session_hparams != ctx->model.hparams) { - fprintf(stderr, "%s : model hparams didn't match from session file!\n", __func__); - return false; - } - } - - // load the prompt - { - const uint32_t n_token_count = file.read_u32(); - - if (n_token_count > n_token_capacity) { - fprintf(stderr, "%s : token count in session file exceeded capacity! %u > %zu\n", __func__, n_token_count, n_token_capacity); - return false; - } - - file.read_raw(tokens_out, sizeof(llama_v2_token) * n_token_count); - *n_token_count_out = n_token_count; - } - - // restore the context state - { - const size_t n_state_size_cur = file.size - file.tell(); - const size_t n_state_size_max = llama_v2_get_state_size(ctx); - - if (n_state_size_cur > n_state_size_max) { - fprintf(stderr, "%s : the state size in session file is too big! max %zu, got %zu\n", __func__, n_state_size_max, n_state_size_cur); - return false; - } - - std::vector<uint8_t> state_data(n_state_size_max); - file.read_raw(state_data.data(), n_state_size_cur); - - llama_v2_set_state_data(ctx, state_data.data()); - } - - return true; -} - -bool llama_v2_save_session_file(struct llama_v2_context * ctx, const char * path_session, const llama_v2_token * tokens, size_t n_token_count) { - llama_v2_file file(path_session, "wb"); - - file.write_u32(LLAMA_V2_SESSION_MAGIC); - file.write_u32(LLAMA_V2_SESSION_VERSION); - - file.write_raw(&ctx->model.hparams, sizeof(llama_v2_hparams)); - - // save the prompt - file.write_u32((uint32_t) n_token_count); - file.write_raw(tokens, sizeof(llama_v2_token) * n_token_count); - - // save the context state - { - const size_t n_state_size_max = llama_v2_get_state_size(ctx); - - std::vector<uint8_t> state_data(n_state_size_max); - const size_t n_state_size_cur = llama_v2_copy_state_data(ctx, state_data.data()); - - file.write_raw(state_data.data(), n_state_size_cur); - } - - return true; -} - -int llama_v2_eval( - struct llama_v2_context * ctx, - const llama_v2_token * tokens, - int n_tokens, - int n_past, - int n_threads) { - if (!llama_v2_eval_internal(*ctx, tokens, n_tokens, n_past, n_threads)) { - fprintf(stderr, "%s: failed to eval\n", __func__); - return 1; - } - - // get a more accurate load time, upon first eval - // TODO: fix this - if (!ctx->has_evaluated_once) { - ctx->t_load_us = ggml_v2_time_us() - ctx->t_start_us; - ctx->has_evaluated_once = true; - } - - return 0; -} - -int llama_v2_tokenize( - struct llama_v2_context * ctx, - const char * text, - llama_v2_token * tokens, - int n_max_tokens, - bool add_bos) { - auto res = llama_v2_tokenize(ctx->vocab, text, add_bos); - - if (n_max_tokens < (int) res.size()) { - fprintf(stderr, "%s: too many tokens\n", __func__); - return -((int) res.size()); - } - - for (size_t i = 0; i < res.size(); i++) { - tokens[i] = res[i]; - } - - return res.size(); -} - -int llama_v2_n_vocab(const struct llama_v2_context * ctx) { - return ctx->vocab.id_to_token.size(); -} - -int llama_v2_n_ctx(const struct llama_v2_context * ctx) { - return ctx->model.hparams.n_ctx; -} - -int llama_v2_n_embd(const struct llama_v2_context * ctx) { - return ctx->model.hparams.n_embd; -} - -float * llama_v2_get_logits(struct llama_v2_context * ctx) { - return ctx->logits.data(); -} - -float * 
llama_v2_get_embeddings(struct llama_v2_context * ctx) { - return ctx->embedding.data(); -} - -const char * llama_v2_token_to_str(const struct llama_v2_context * ctx, llama_v2_token token) { - if (token >= llama_v2_n_vocab(ctx)) { - return nullptr; - } - - return ctx->vocab.id_to_token[token].tok.c_str(); -} - -llama_v2_token llama_v2_token_bos() { - return 1; -} - -llama_v2_token llama_v2_token_eos() { - return 2; -} - -llama_v2_token llama_v2_token_nl() { - return 13; -} - - -void llama_v2_print_timings(struct llama_v2_context * ctx) { - const int64_t t_end_us = ggml_v2_time_us(); - - const int32_t n_sample = std::max(1, ctx->n_sample); - const int32_t n_eval = std::max(1, ctx->n_eval); - const int32_t n_p_eval = std::max(1, ctx->n_p_eval); - - fprintf(stderr, "\n"); - fprintf(stderr, "%s: load time = %8.2f ms\n", __func__, ctx->t_load_us / 1000.0); - fprintf(stderr, "%s: sample time = %8.2f ms / %5d runs (%8.2f ms per token)\n", __func__, 1e-3 * ctx->t_sample_us, n_sample, 1e-3 * ctx->t_sample_us / n_sample); - fprintf(stderr, "%s: prompt eval time = %8.2f ms / %5d tokens (%8.2f ms per token)\n", __func__, 1e-3 * ctx->t_p_eval_us, n_p_eval, 1e-3 * ctx->t_p_eval_us / n_p_eval); - fprintf(stderr, "%s: eval time = %8.2f ms / %5d runs (%8.2f ms per token)\n", __func__, 1e-3 * ctx->t_eval_us, n_eval, 1e-3 * ctx->t_eval_us / n_eval); - fprintf(stderr, "%s: total time = %8.2f ms\n", __func__, (t_end_us - ctx->t_start_us)/1000.0); -} - -void llama_v2_reset_timings(struct llama_v2_context * ctx) { - ctx->t_start_us = ggml_v2_time_us(); - ctx->t_sample_us = ctx->n_sample = 0; - ctx->t_eval_us = ctx->n_eval = 0; - ctx->t_p_eval_us = ctx->n_p_eval = 0; -} - -const char * llama_v2_print_system_info(void) { - static std::string s; - - s = ""; - s += "AVX = " + std::to_string(ggml_v2_cpu_has_avx()) + " | "; - s += "AVX2 = " + std::to_string(ggml_v2_cpu_has_avx2()) + " | "; - s += "AVX512 = " + std::to_string(ggml_v2_cpu_has_avx512()) + " | "; - s += "AVX512_VBMI = " + std::to_string(ggml_v2_cpu_has_avx512_vbmi()) + " | "; - s += "AVX512_VNNI = " + std::to_string(ggml_v2_cpu_has_avx512_vnni()) + " | "; - s += "FMA = " + std::to_string(ggml_v2_cpu_has_fma()) + " | "; - s += "NEON = " + std::to_string(ggml_v2_cpu_has_neon()) + " | "; - s += "ARM_FMA = " + std::to_string(ggml_v2_cpu_has_arm_fma()) + " | "; - s += "F16C = " + std::to_string(ggml_v2_cpu_has_f16c()) + " | "; - s += "FP16_VA = " + std::to_string(ggml_v2_cpu_has_fp16_va()) + " | "; - s += "WASM_SIMD = " + std::to_string(ggml_v2_cpu_has_wasm_simd()) + " | "; - s += "BLAS = " + std::to_string(ggml_v2_cpu_has_blas()) + " | "; - s += "SSE3 = " + std::to_string(ggml_v2_cpu_has_sse3()) + " | "; - s += "VSX = " + std::to_string(ggml_v2_cpu_has_vsx()) + " | "; - - return s.c_str(); -} - -// For internal test use -std::vector<std::pair<std::string, struct ggml_v2_tensor *>>& llama_v2_internal_get_tensor_map(struct llama_v2_context * ctx) { - return ctx->model.tensors_by_name; -} - - -// TODO: Calculate this constant from the vocabulary -#define MAX_TOKEN_LEN 18 -// SentencePiece implementation after https://guillaume-be.github.io/2020-05-30/sentence_piece -std::vector<llama_v2_token> legacy_llama_v2_tokenize(const llama_v2_vocab & vocab, const std::string & text, bool bos) { - std::vector<llama_v2_token> res; - std::vector<int> score; - std::vector<llama_v2_token> prev; - int len = text.length(); - - score.resize(len + 1); - prev.resize(len + 1); - - // Forward pass - for (int i = 0; i < len; i++) { - int max_len = std::min(len - i, MAX_TOKEN_LEN); - 
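-        // dynamic programming over substrings: score[j] is the best segmentation score for text[0..j), prev[j] the token ending there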
for (int sub_len = 1; sub_len <= max_len; sub_len++) { - auto sub = text.substr(i, sub_len); - auto token = vocab.token_to_id.find(sub); - if (token != vocab.token_to_id.end()) { - int token_score = sub.length() * sub.length(); - int local_score = score[i] + token_score; - int next = i + sub_len; - if (score[next] < local_score) { - score[next] = local_score; - prev[next] = (*token).second; - } - } - } - } - - // Backward pass - int i = len; - while (i > 0) { - llama_v2_token token_id = prev[i]; - if (token_id == 0) { - // TODO: Return error or something more meaningful - printf("failed to tokenize string!\n"); - break; - } - res.push_back(token_id); - auto token = vocab.id_to_token[token_id].tok; - i -= token.length(); - } - - if (bos) { - res.push_back(1); // TODO: replace with vocab.bos - } - - // Pieces are in reverse order so correct that - std::reverse(res.begin(), res.end()); - - return res; -} - -int legacy_llama_v2_tokenize( - struct llama_v2_context * ctx, - const char * text, - llama_v2_token * tokens, - int n_max_tokens, - bool add_bos) { - auto res = legacy_llama_v2_tokenize(ctx->vocab, text, add_bos); - - if (n_max_tokens < (int) res.size()) { - fprintf(stderr, "%s: too many tokens\n", __func__); - return -((int) res.size()); - } - - for (size_t i = 0; i < res.size(); i++) { - tokens[i] = res[i]; - } - - return res.size(); -} - -std::vector<llama_v2_token> legacy_llama_v2_tokenize(struct llama_v2_context * ctx, const std::string & text, bool add_bos) { - std::vector<llama_v2_token> res(8096); - int n = legacy_llama_v2_tokenize(ctx, text.c_str(), res.data(), res.size(), add_bos); - res.resize(n); - - return res; -} - -std::vector<llama_token> llama_v2_tokenize(struct llama_v2_context * ctx, const std::string & text, bool add_bos) { - // initialize to prompt numer of chars, since n_tokens <= n_prompt_chars - std::vector<llama_token> res(text.size() + (int) add_bos); - const int n = llama_v2_tokenize(ctx, text.c_str(), res.data(), res.size(), add_bos); - assert(n >= 0); - res.resize(n); - - return res; -} diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/modules/__init__.py b/spaces/JohnSmith9982/ChuanhuChatGPT/modules/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/KPCGD/bingo/README.md b/spaces/KPCGD/bingo/README.md deleted file mode 100644 index 218767d1d7debd26932ffddca2ec0f421c0171a9..0000000000000000000000000000000000000000 --- a/spaces/KPCGD/bingo/README.md +++ /dev/null @@ -1,195 +0,0 @@ ---- -title: bingo -emoji: 📉 -colorFrom: red -colorTo: red -sdk: docker -pinned: true -license: mit -duplicated_from: hf4all/bingo ---- - -<div align="center"> - -# Bingo - -Bingo,一个让你呼吸顺畅 New Bing。 - -高度还原 New Bing 网页版的主要操作,国内可用,兼容绝大多数微软 Bing AI 的功能,可自行部署使用。 - -![Github stars](https://badgen.net/github/stars/weaigc/bingo?icon=github&label=stars) -![Gthub issues](https://img.shields.io/github/issues/weaigc/bingo) -[![docker build](https://github.com/weaigc/bingo/actions/workflows/docker.yml/badge.svg)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![docker hub](https://badgen.net/docker/size/weaigc/bingo?icon=docker&label=image%20size)](https://hub.docker.com/repository/docker/weaigc/bingo/) -[![MIT License](https://img.shields.io/badge/license-MIT-97c50f)](https://github.com/weaigc/bingo/blob/main/license) - -</div> - -## 演示站点 - -https://bing.github1s.tk - - - -[![img](./docs/images/demo.png)](https://bing.github1s.tk) - -## 功能和特点 - -- 完全基于 Next.js 重写,高度还原 New Bing Web 版 UI,使用体验和 
Bing AI 基本一致。 -- 支持 Docker 构建,方便快捷地部署和访问。 -- Cookie 可全局配置,全局共享。 -- 支持持续语音对话 - -## RoadMap - - - [x] 支持 wss 转发 - - [x] 支持一键部署 - - [x] 优化移动端展示 - - [x] 支持画图 - - [x] 支持语音输入(支持语音指令,目前仅支持 PC 版 Edge 及 Chrome 浏览器) - - [x] 支持语音输出(需要手动开启) - - [x] 支持图片输入 - - [x] 支持自定义域名 - - [ ] 支持历史记录 - - [ ] 适配深色模式 - - [ ] 支持内置提示词 - - [ ] 支持离线访问 - - [ ] 国际化翻译 - -## 一键部署 -你也可以一键部署自己的 New Bing AI 到 🤗 HuggingFace 。 - -### 部署到 Huggingface -1. 点击此图标 -[![Deploy to HuggingFace](https://img.shields.io/badge/%E7%82%B9%E5%87%BB%E9%83%A8%E7%BD%B2-%F0%9F%A4%97-fff)](https://huggingface.co/login?next=%2Fspaces%2Fhf4all%2Fbingo%3Fduplicate%3Dtrue%26visibility%3Dpublic),配置可以不改。 - -2. 部署署完成后,点击“设置” 》“站点域名”,点一下,复制一下 HF 域名信息,然后分享给别人即可。 - -> Huggingface 不支持绑定自己的域名,不过我们可以使用曲线救国的方式来达到这个目的 -> 1. 方式二,借助 Cloudflare Workers [部署Cloudflare Workers](#使用Cloudflare-Workers自定义域名) -> 2. 方式一,借助 Github Pages 及 iframe [如何绑定域名](https://github.com/weaigc/bingo/issues/4) - -### 使用Cloudflare Workers自定义域名 - -> 核心代码 [worker.js](./cloudflare/worker.js) - -- [注册 Cloudflare 账号](https://dash.cloudflare.com/sign-up) - -- 添加一个新的网站,需要你有自己的域名并且将域名`Name Server`托管给 Cloudflare 才行(更多信息可自行 Google) - -- 通过左侧菜单进入「Workers」,并点击「Create a Worker」。 - -- 创建 Worker 服务,复制 [worker.js](./cloudflare/worker.js) 全部代码,粘贴至创建的服务中,根据注释进行改动,保存并部署。 - -- 触发器 中自定义访问域名。 - -### 部署其它平台 -<details> -<summary> -由于其他平台目前遭到 New Bing 封杀,会遇到很多问题,不再做推荐,有需要的可以自行查看 -</summary> - -#### 部署到 Netlify -[![Deploy to Netlify Button](https://www.netlify.com/img/deploy/button.svg)](https://app.netlify.com/start/deploy?repository=https://github.com/weaigc/bingo) - -#### 部署到 Vercel -如果你是 Vercel 付费用户,可以点以下链接一键部署到 Vercel。免费版本有[接口超时限制](https://vercel.com/docs/concepts/limits/overview),不推荐使用 - -[![Deploy with Vercel](https://vercel.com/button)](https://vercel.com/new/clone?demo-title=bingo&demo-description=bingo&demo-url=https%3A%2F%2Fbing.github1s.tk%2F&project-name=bingo&repository-name=bingo&repository-url=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo&from=templates&skippable-integrations=1&env=BING_HEADER&envDescription=%E5%A6%82%E6%9E%9C%E4%B8%8D%E7%9F%A5%E9%81%93%E6%80%8E%E4%B9%88%E9%85%8D%E7%BD%AE%E8%AF%B7%E7%82%B9%E5%8F%B3%E4%BE%A7Learn+More&envLink=https%3A%2F%2Fgithub.com%2Fweaigc%2Fbingo%2Fblob%2Fmain%2F.env.example) - -#### 部署到 Render - -[![Deploy to Render](https://render.com/images/deploy-to-render-button.svg)](https://render.com/deploy?repo=https://github.com/weaigc/bingo) -</details> - -## 环境和依赖 - -- Node.js >= 18 -- Bing AI 的[身份信息](#如何获取-BING_HEADER)) - -## 安装和使用 - -* 使用 Node 启动 - -```bash -git clone https://github.com/weaigc/bingo.git -npm i # 推荐使用 pnpm i -npm run build -npm run start -``` - -* 使用 Docker 启动 -```bash -docker pull weaigc/bingo -docker run --rm -it -p 7860:7860 weaigc/bingo -# 或者 -docker run --rm -it -e BING_HEADER=xxxx -p 7860:7860 weaigc/bingo -``` - -## 如何获取 BING_HEADER -> 配置了 BING_HEADER 意味着你将自己的账号共享给所有使用此服务的人,如果不需要免登录画图的功能,不建议设置此变量 - -打开 https://www.bing.com 并登录,然后访问 https://www.bing.com/turing/captcha/challenge,通过人机校验,然后 - -![BING HEADER](./docs/images/curl.png) - -> 复制出来的内容应该如下所示。确认格式无误后,打开 https://effulgent-bubblegum-e2f5df.netlify.app/#dialog=%22settings%22 ,粘贴进去,点击“转成 BING_HEADER 并复制”,然后从剪切板粘贴即可得到。(你也可以先在网页上进行验证) - -以下是格式参考,需要注意的是,网页端保存的格式是以`curl`开头, 而服务端配置的 `BING_HEADER` 是 `base64` 格式,两者不能互通。 -<details> -<summary>正常格式/网页端保存的格式(格式仅供参考)</summary> - -``` -curl 'https://www.bing.com/turing/captcha/challenge' \ - -H 'authority: www.bing.com' \ - -H 'accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.7' \ - 
-H 'accept-language: zh-CN,zh;q=0.9,en;q=0.8,en-GB;q=0.7,en-US;q=0.6' \ - -H 'cache-control: max-age=0' \ - -H 'cookie: MicrosoftApplicationsTelemetryDeviceId=3399c004-fd0e-48ec-bb92-d82a27b2bbd4; _EDGE_V=1; SRCHD=AF=NOFORM; SRCHUID=V=2&GUID=29EBDDA4E6674329ACCF1A0A423C3E98&dmnchg=1; _UR=QS=0&TQS=0; _HPVN=CS=eyJQbiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiUCJ9LCJTYyI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiSCJ9LCJReiI6eyJDbiI6MSwiU3QiOjAsIlFzIjowLCJQcm9kIjoiVCJ9LCJBcCI6dHJ1ZSwiTXV0ZSI6dHJ1ZSwiTGFkIjoiMjAyMy0wNy0yNVQwMDowMDowMFoiLCJJb3RkIjowLCJHd2IiOjAsIkRmdCI6bnVsbCwiTXZzIjowLCJGbHQiOjAsIkltcCI6Mn0=; _RwBf=ilt=1&ihpd=1&ispd=0&rc=0&rb=0&gb=0&rg=200&pc=0&mtu=0&rbb=0&g=0&cid=&clo=0&v=1&l=2023-07-25T07:00:00.0000000Z&lft=0001-01-01T00:00:00.0000000&aof=0&o=2&p=&c=&t=0&s=0001-01-01T00:00:00.0000000+00:00&ts=2023-07-25T11:00:31.7111548+00:00&rwred=0&wls=&lka=0&lkt=0&TH=&dci=0; ANON=A=0043C6590EA808ED6E395059FFFFFFFF&E=1c8b&W=1; NAP=V=1.9&E=1c31&C=DnaMSbDN_4efZ_xXqBF3Daorjr53kYqYoaP8YHsupjmiXnysX7a37A&W=1; PPLState=1; KievRPSSecAuth=FABSBBRaTOJILtFsMkpLVWSG6AN6C/svRwNmAAAEgAAACMGUA7EGVSjGEAQBGHtNsc5sNL7unmJsfPJ2t6imfo4BeUJlAia3IpMTtMUy4PU/C5QAzRI5pODtsIee0+blgllXt/5IiWwGjwmdhivsFM597pRPkjARPfwsPhNLPNbJrCPNPHdje4Is78MnCADXw6/NBq2FL8V2/byw2fH6IuAMD2MvN/VvqpEa9ZxiDjZtENj4HEj0mO2SgzjfyEhVAkjvznJqU2rw/Q2tHmX94NAM2kzlzKF/hWPhCCUmu8IHLvCnHDS6mSptvJDDP/sp3ovtzOXkP1mlM/Xju5ftesUvccVEQGffXORa1dE5hEMbKIiKXz1tDdduSXE19g9/+mRMAjaQhpwhI8XmilCTx1adb1Ll5qK+VjC9GNfEZzcbsGBPVaOl+anG8rEMq+Xnhjo7J+NqTNolavHgcuV8kJsCeJZIged33UA8eOZeFo+wAECMguxMoSqgpGH+sthqynvD/FJD6r/tiU2N3uqVq8NE8V37asrN6T14Z0FGBJOe6ET1+PGApm3s11OY9/xhFEB9T5BEPUGEbvRcLcW2ncFQX0EU+xweiPqo1Q1hNUg/dCtSI+lZ7c2H8XheePZavZ0TJQ8oNCSAuKiTqJmI0fVGpwbXwfaADkEipuawz3fIuMJBNgMU0OtA7Hm59v2fGLIBuvi6YeKS6GgVk3BIPf+P/eKahwozrxQZaFnoHTSqMkvct7xCP4atBROfXKf5Ww0CcFKp+2WX9BIskTOo2jjk6bAyyYJ+ElUB1fgLKNk5m/YSMc9iYCLIBMIGN8F0Yvy3tZ7cvh7Ue5Klo98US/I+nW1G7ZJMHRgUO8h8lpneHqEMegKd8gynO4VF7RpCjJkunDmW0Ta+RkXAP619pg0dqHMFkoOgknN78oBbGTV6fJUKotv+vi61kLhAeXZGWoHGCRXh2wUC6YgfPgKA6ESRNHtFn7E5B3HHpLc5rVMDSNhKZYfdhupV4Ezf6+5DhMcZLZhi0kk+ivDiN1gdHlVtSN55xpvf+c+XZDzR0uhgcvgy0LAbmzgk6y4WbYH+LQsMpzNNj+aC72vMiWovWrKh9jY4MYCmdgxsS/skPtLdp18muiEIRXTbZQGUmhxFpJAIbBIsCscMpzL0BgeujxUwM5wr79Sd9r4xwbgSMwmBlBfUHRVBdNyg8feepeJbCS63nD6eHOuLqMRsPIio3w/ki/EAa92UUEiZeavLsMUD/y/qAvWUdzdP5Y+C/TM+CMGS/kGL4LEdY/28MQeTvU1qv1X21kQt2aiaj3pPVL36hAzxbcLgqcMo9oymDRy87kdCXW/+g4oKLtMh6fm/G6W6Y/B01JlxohyyvueHQIG557uzkEkTJ3FnOVODSKBKpb3WZ65rExfV71zSZa25F3GmpaIG6HiYrX2YYhQAkIE9pKEQBHbnwHuwNDGottZTXZw=; WLS=C=9df3f9d8518fae19&N=wen; WLID=pGY8HgWCu4p5XYCOk2oa0+DBdftkMUfmNIn8XtSjSTKsgv/Il7GUlYs0Jpjf/E12jZMgV7x44Dy3fXOgjjUoJx7Y/ClLrLhsk20THksJJoI=; _EDGE_S=F=1&SID=17CF6EE006426448213C7DB907436588&mkt=zh-CN; MUID=225621093D8A6C27301632413C0E6D08; MUIDB=225621093D8A6C27301632413C0E6D08; SUID=A; SNRHOP=I=&TS=; _U=nGyzKQruEsDwLiu65fZFIG6e12hf2lwTJmroW__k8joUJIKmG3OIjayXKGW9dCVR3sNhF76mEVxyW6yjUGPodOfjtSa3s3J_DxMOrEK1BqXCOBI9bC66spAIASV7prsYFlVAJz73jVNENp_tBubLHJy6EbT0BKRe4AjrYkH-9uMnmCKB8Zmyg; _SS=SID=17CF6EE006426448213C7DB907436588&R=0&RB=0&GB=0&RG=200&RP=0&PC=U531; SRCHS=PC=U531; USRLOC=HS=1&ELOC=LAT=22.501529693603516|LON=113.9263687133789|N=%E5%8D%97%E5%B1%B1%E5%8C%BA%EF%BC%8C%E5%B9%BF%E4%B8%9C%E7%9C%81|ELT=2|&CLOC=LAT=22.50153029046461|LON=113.92637070632928|A=733.4464586120832|TS=230726151034|SRC=W; SRCHUSR=DOB=20230725&T=1690384908000&POEX=W; ipv6=hit=1690388509974&t=6; 
SRCHHPGUSR=HV=1690384945&SRCHLANG=zh-Hans&PV=15.0.0&BRW=MW&BRH=MT&CW=410&CH=794&SCW=410&SCH=794&DPR=1.5&UTC=480&DM=0&WTS=63825879627&PRVCW=410&PRVCH=794&PR=1.5; cct=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpny6Y_CVyi_MSyM94VyMWnjdYkkccVtm3czoIAtXUGQA; GC=AjWIBYOoVP-Afq6gWwtx80If6yHn6iBuEVHA1XHdAKpR3Y_D9Ytcks4Ht6XhadXk75dvhzP4YOUS0UmoEyqyxw' \ - -H 'dnt: 1' \ - -H 'sec-ch-ua: "Chromium";v="116", "Not)A;Brand";v="24", "Microsoft Edge";v="116"' \ - -H 'sec-ch-ua-arch: "x86"' \ - -H 'sec-ch-ua-bitness: "64"' \ - -H 'sec-ch-ua-full-version: "116.0.1938.29"' \ - -H 'sec-ch-ua-full-version-list: "Chromium";v="116.0.5845.42", "Not)A;Brand";v="24.0.0.0", "Microsoft Edge";v="116.0.1938.29"' \ - -H 'sec-ch-ua-mobile: ?0' \ - -H 'sec-ch-ua-model: ""' \ - -H 'sec-ch-ua-platform: "Windows"' \ - -H 'sec-ch-ua-platform-version: "15.0.0"' \ - -H 'sec-fetch-dest: document' \ - -H 'sec-fetch-mode: navigate' \ - -H 'sec-fetch-site: none' \ - -H 'sec-fetch-user: ?1' \ - -H 'sec-ms-gec: B3F47AD4A283CAB374C0451C46AAFD147C6A4DACAFF6A1C13F34B2C72B024494' \ - -H 'sec-ms-gec-version: 1-116.0.1938.29' \ - -H 'upgrade-insecure-requests: 1' \ - -H 'user-agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/116.0.0.0 Safari/537.36 Edg/116.0.0.0' \ - -H 'x-client-data: eyIxIjoiMiIsIjEwIjoiXCJTMGg3R05HOTF2aDQ1TUZSUnZ5NHN2akRmMWdlaVJKenNxNlA3aU1WbnF3PVwiIiwiMiI6IjEiLCIzIjoiMSIsIjQiOiIyMTU4ODQ5NTM4MjY4OTM5NTA3IiwiNSI6IlwiSm9GUWpPTDk3OS9MbkRRZnlCd2N1M2FsOUN3eTZTQmdaMGNYMXBtOWVMZz1cIiIsIjYiOiJiZXRhIiwiNyI6IjE4MDM4ODYyNjQzNSIsIjkiOiJkZXNrdG9wIn0=' \ - -H 'x-edge-shopping-flag: 1' \ - --compressed -``` -</details> - -<details> -<summary>转成base64之后的格式(BING_HEADER只能使用 base64 之后的格式)</summary> - -``` -Y3VybCAnaHR0cHM6Ly93d3cuYmluZy5jb20vdHVyaW5nL2NvbnZlcnNhdGlvbi9jcmVhdGUnIFwgICAtSCAnYXV0aG9yaXR5OiB3d3cuYmluZy5jb20nIFwgICAtSCAnYWNjZXB0OiB0ZXh0L2h0bWwsYXBwbGljYXRpb24veGh0bWwreG1sLGFwcGxpY2F0aW9uL3htbDtxPTAuOSxpbWFnZS93ZWJwLGltYWdlL2FwbmcsKi8qO3E9MC44LGFwcGxpY2F0aW9uL3NpZ25lZC1leGNoYW5nZTt2PWIzO3E9MC43JyBcICAgLUggJ2FjY2VwdC1sYW5ndWFnZTogemgtQ04semg7cT0wLjksZW47cT0wLjgsZW4tR0I7cT0wLjcsZW4tVVM7cT0wLjYnIFwgICAtSCAnY2FjaGUtY29udHJvbDogbWF4LWFnZT0wJyBcICAgLUggJ2Nvb2tpZTogTWljcm9zb2Z0QXBwbGljYXRpb25zVGVsZW1ldHJ5RGV2aWNlSWQ9MzM5OWMwMDQtZmQwZS00OGVjLWJiOTItZDgyYTI3YjJiYmQ0OyBfRURHRV9WPTE7IFNSQ0hEPUFGPU5PRk9STTsgU1JDSFVJRD1WPTImR1VJRD0yOUVCRERBNEU2Njc0MzI5QUNDRjFBMEE0MjNDM0U5OCZkbW5jaGc9MTsgX1VSPVFTPTAmVFFTPTA7IF9IUFZOPUNTPWV5SlFiaUk2ZXlKRGJpSTZNU3dpVTNRaU9qQXNJbEZ6SWpvd0xDSlFjbTlrSWpvaVVDSjlMQ0pUWXlJNmV5SkRiaUk2TVN3aVUzUWlPakFzSWxGeklqb3dMQ0pRY205a0lqb2lTQ0o5TENKUmVpSTZleUpEYmlJNk1Td2lVM1FpT2pBc0lsRnpJam93TENKUWNtOWtJam9pVkNKOUxDSkJjQ0k2ZEhKMVpTd2lUWFYwWlNJNmRISjFaU3dpVEdGa0lqb2lNakF5TXkwd055MHlOVlF3TURvd01Eb3dNRm9pTENKSmIzUmtJam93TENKSGQySWlPakFzSWtSbWRDSTZiblZzYkN3aVRYWnpJam93TENKR2JIUWlPakFzSWtsdGNDSTZNbjA9OyBfUndCZj1pbHQ9MSZpaHBkPTEmaXNwZD0wJnJjPTAmcmI9MCZnYj0wJnJnPTIwMCZwYz0wJm10dT0wJnJiYj0wJmc9MCZjaWQ9JmNsbz0wJnY9MSZsPTIwMjMtMDctMjVUMDc6MDA6MDAuMDAwMDAwMFombGZ0PTAwMDEtMDEtMDFUMDA6MDA6MDAuMDAwMDAwMCZhb2Y9MCZvPTImcD0mYz0mdD0wJnM9MDAwMS0wMS0wMVQwMDowMDowMC4wMDAwMDAwKzAwOjAwJnRzPTIwMjMtMDctMjVUMTE6MDA6MzEuNzExMTU0OCswMDowMCZyd3JlZD0wJndscz0mbGthPTAmbGt0PTAmVEg9JmRjaT0wOyBBTk9OPUE9MDA0M0M2NTkwRUE4MDhFRDZFMzk1MDU5RkZGRkZGRkYmRT0xYzhiJlc9MTsgTkFQPVY9MS45JkU9MWMzMSZDPURuYU1TYkROXzRlZlpfeFhxQkYzRGFvcmpyNTNrWXFZb2FQOFlIc3Vwam1pWG55c1g3YTM3QSZXPTE7IFBQTFN0YXRlPTE7IEtpZXZSUFNTZWNBdXRoPUZBQlNCQlJhVE9KSUx0RnNNa3BMVldTRzZBTjZDL3N2UndObUFBQUVnQUFBQ01HVUE3RUdWU2pHRUFRQkdIdE5zYzVzTkw3dW5tSnNmUEoydDZpbWZvNEJlVUpsQ
WlhM0lwTVR0TVV5NFBVL0M1UUF6Ukk1cE9EdHNJZWUwK2JsZ2xsWHQvNUlpV3dHandtZGhpdnNGTTU5N3BSUGtqQVJQZndzUGhOTFBOYkpyQ1BOUEhkamU0SXM3OE1uQ0FEWHc2L05CcTJGTDhWMi9ieXcyZkg2SXVBTUQyTXZOL1Z2cXBFYTlaeGlEalp0RU5qNEhFajBtTzJTZ3pqZnlFaFZBa2p2em5KcVUycncvUTJ0SG1YOTROQU0ya3psektGL2hXUGhDQ1VtdThJSEx2Q25IRFM2bVNwdHZKRERQL3NwM292dHpPWGtQMW1sTS9YanU1ZnRlc1V2Y2NWRVFHZmZYT1JhMWRFNWhFTWJLSWlLWHoxdERkZHVTWEUxOWc5LyttUk1BamFRaHB3aEk4WG1pbENUeDFhZGIxTGw1cUsrVmpDOUdOZkVaemNic0dCUFZhT2wrYW5HOHJFTXErWG5oam83SitOcVROb2xhdkhnY3VWOGtKc0NlSlpJZ2VkMzNVQThlT1plRm8rd0FFQ01ndXhNb1NxZ3BHSCtzdGhxeW52RC9GSkQ2ci90aVUyTjN1cVZxOE5FOFYzN2Fzck42VDE0WjBGR0JKT2U2RVQxK1BHQXBtM3MxMU9ZOS94aEZFQjlUNUJFUFVHRWJ2UmNMY1cybmNGUVgwRVUreHdlaVBxbzFRMWhOVWcvZEN0U0krbFo3YzJIOFhoZWVQWmF2WjBUSlE4b05DU0F1S2lUcUptSTBmVkdwd2JYd2ZhQURrRWlwdWF3ejNmSXVNSkJOZ01VME90QTdIbTU5djJmR0xJQnV2aTZZZUtTNkdnVmszQklQZitQL2VLYWh3b3pyeFFaYUZub0hUU3FNa3ZjdDd4Q1A0YXRCUk9mWEtmNVd3MENjRktwKzJXWDlCSXNrVE9vMmpqazZiQXl5WUorRWxVQjFmZ0xLTms1bS9ZU01jOWlZQ0xJQk1JR044RjBZdnkzdFo3Y3ZoN1VlNUtsbzk4VVMvSStuVzFHN1pKTUhSZ1VPOGg4bHBuZUhxRU1lZ0tkOGd5bk80VkY3UnBDakprdW5EbVcwVGErUmtYQVA2MTlwZzBkcUhNRmtvT2drbk43OG9CYkdUVjZmSlVLb3R2K3ZpNjFrTGhBZVhaR1dvSEdDUlhoMndVQzZZZ2ZQZ0tBNkVTUk5IdEZuN0U1QjNISHBMYzVyVk1EU05oS1pZZmRodXBWNEV6ZjYrNURoTWNaTFpoaTBraytpdkRpTjFnZEhsVnRTTjU1eHB2ZitjK1haRHpSMHVoZ2N2Z3kwTEFibXpnazZ5NFdiWUgrTFFzTXB6Tk5qK2FDNzJ2TWlXb3ZXcktoOWpZNE1ZQ21kZ3hzUy9za1B0TGRwMThtdWlFSVJYVGJaUUdVbWh4RnBKQUliQklzQ3NjTXB6TDBCZ2V1anhVd001d3I3OVNkOXI0eHdiZ1NNd21CbEJmVUhSVkJkTnlnOGZlZXBlSmJDUzYzbkQ2ZUhPdUxxTVJzUElpbzN3L2tpL0VBYTkyVVVFaVplYXZMc01VRC95L3FBdldVZHpkUDVZK0MvVE0rQ01HUy9rR0w0TEVkWS8yOE1RZVR2VTFxdjFYMjFrUXQyYWlhajNwUFZMMzZoQXp4YmNMZ3FjTW85b3ltRFJ5ODdrZENYVy8rZzRvS0x0TWg2Zm0vRzZXNlkvQjAxSmx4b2h5eXZ1ZUhRSUc1NTd1emtFa1RKM0ZuT1ZPRFNLQktwYjNXWjY1ckV4ZlY3MXpTWmEyNUYzR21wYUlHNkhpWXJYMllZaFFBa0lFOXBLRVFCSGJud0h1d05ER290dFpUWFp3PTsgV0xTPUM9OWRmM2Y5ZDg1MThmYWUxOSZOPXdlbjsgV0xJRD1wR1k4SGdXQ3U0cDVYWUNPazJvYTArREJkZnRrTVVmbU5JbjhYdFNqU1RLc2d2L0lsN0dVbFlzMEpwamYvRTEyalpNZ1Y3eDQ0RHkzZlhPZ2pqVW9KeDdZL0NsTHJMaHNrMjBUSGtzSkpvST07IF9FREdFX1M9Rj0xJlNJRD0xN0NGNkVFMDA2NDI2NDQ4MjEzQzdEQjkwNzQzNjU4OCZta3Q9emgtQ047IE1VSUQ9MjI1NjIxMDkzRDhBNkMyNzMwMTYzMjQxM0MwRTZEMDg7IE1VSURCPTIyNTYyMTA5M0Q4QTZDMjczMDE2MzI0MTNDMEU2RDA4OyBTVUlEPUE7IFNOUkhPUD1JPSZUUz07IF9VPW5HeXpLUXJ1RXNEd0xpdTY1ZlpGSUc2ZTEyaGYybHdUSm1yb1dfX2s4am9VSklLbUczT0lqYXlYS0dXOWRDVlIzc05oRjc2bUVWeHlXNnlqVUdQb2RPZmp0U2EzczNKX0R4TU9yRUsxQnFYQ09CSTliQzY2c3BBSUFTVjdwcnNZRmxWQUp6NzNqVk5FTnBfdEJ1YkxISnk2RWJUMEJLUmU0QWpyWWtILTl1TW5tQ0tCOFpteWc7IF9TUz1TSUQ9MTdDRjZFRTAwNjQyNjQ0ODIxM0M3REI5MDc0MzY1ODgmUj0wJlJCPTAmR0I9MCZSRz0yMDAmUlA9MCZQQz1VNTMxOyBTUkNIUz1QQz1VNTMxOyBVU1JMT0M9SFM9MSZFTE9DPUxBVD0yMi41MDE1Mjk2OTM2MDM1MTZ8TE9OPTExMy45MjYzNjg3MTMzNzg5fE49JUU1JThEJTk3JUU1JUIxJUIxJUU1JThDJUJBJUVGJUJDJThDJUU1JUI5JUJGJUU0JUI4JTlDJUU3JTlDJTgxfEVMVD0yfCZDTE9DPUxBVD0yMi41MDE1MzAyOTA0NjQ2MXxMT049MTEzLjkyNjM3MDcwNjMyOTI4fEE9NzMzLjQ0NjQ1ODYxMjA4MzJ8VFM9MjMwNzI2MTUxMDM0fFNSQz1XOyBTUkNIVVNSPURPQj0yMDIzMDcyNSZUPTE2OTAzODQ5MDgwMDAmUE9FWD1XOyBpcHY2PWhpdD0xNjkwMzg4NTA5OTc0JnQ9NjsgU1JDSEhQR1VTUj1IVj0xNjkwMzg0OTQ1JlNSQ0hMQU5HPXpoLUhhbnMmUFY9MTUuMC4wJkJSVz1NVyZCUkg9TVQmQ1c9NDEwJkNIPTc5NCZTQ1c9NDEwJlNDSD03OTQmRFBSPTEuNSZVVEM9NDgwJkRNPTAmV1RTPTYzODI1ODc5NjI3JlBSVkNXPTQxMCZQUlZDSD03OTQmUFI9MS41OyBjY3Q9QWpXSUJZT29WUC1BZnE2Z1d3dHg4MElmNnlIbjZpQnVFVkhBMVhIZEFLcG55NllfQ1Z5aV9NU3lNOTRWeU1XbmpkWWtrY2NWdG0zY3pvSUF0WFVHUUE7IEdDPUFqV0lCWU9vVlAtQWZxNmdXd3R4ODBJZjZ5SG42aUJ1RVZIQTFYSGRBS3BSM1lfRDlZdGNrczRIdDZYaGFkWGs3NWR2aHpQNFlPVVMwVW1vRXlxeXh3JyBcICAgLUggJ2RudDogMScgXCAgIC1IICdzZWMtY2gtdWE6ICJDaHJvbWl1bSI7dj0iMTE2
IiwgIk5vdClBO0JyYW5kIjt2PSIyNCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2IicgXCAgIC1IICdzZWMtY2gtdWEtYXJjaDogIng4NiInIFwgICAtSCAnc2VjLWNoLXVhLWJpdG5lc3M6ICI2NCInIFwgICAtSCAnc2VjLWNoLXVhLWZ1bGwtdmVyc2lvbjogIjExNi4wLjE5MzguMjkiJyBcICAgLUggJ3NlYy1jaC11YS1mdWxsLXZlcnNpb24tbGlzdDogIkNocm9taXVtIjt2PSIxMTYuMC41ODQ1LjQyIiwgIk5vdClBO0JyYW5kIjt2PSIyNC4wLjAuMCIsICJNaWNyb3NvZnQgRWRnZSI7dj0iMTE2LjAuMTkzOC4yOSInIFwgICAtSCAnc2VjLWNoLXVhLW1vYmlsZTogPzAnIFwgICAtSCAnc2VjLWNoLXVhLW1vZGVsOiAiIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm06ICJXaW5kb3dzIicgXCAgIC1IICdzZWMtY2gtdWEtcGxhdGZvcm0tdmVyc2lvbjogIjE1LjAuMCInIFwgICAtSCAnc2VjLWZldGNoLWRlc3Q6IGRvY3VtZW50JyBcICAgLUggJ3NlYy1mZXRjaC1tb2RlOiBuYXZpZ2F0ZScgXCAgIC1IICdzZWMtZmV0Y2gtc2l0ZTogbm9uZScgXCAgIC1IICdzZWMtZmV0Y2gtdXNlcjogPzEnIFwgICAtSCAnc2VjLW1zLWdlYzogQjNGNDdBRDRBMjgzQ0FCMzc0QzA0NTFDNDZBQUZEMTQ3QzZBNERBQ0FGRjZBMUMxM0YzNEIyQzcyQjAyNDQ5NCcgXCAgIC1IICdzZWMtbXMtZ2VjLXZlcnNpb246IDEtMTE2LjAuMTkzOC4yOScgXCAgIC1IICd1cGdyYWRlLWluc2VjdXJlLXJlcXVlc3RzOiAxJyBcICAgLUggJ3VzZXItYWdlbnQ6IE1vemlsbGEvNS4wIChXaW5kb3dzIE5UIDEwLjA7IFdpbjY0OyB4NjQpIEFwcGxlV2ViS2l0LzUzNy4zNiAoS0hUTUwsIGxpa2UgR2Vja28pIENocm9tZS8xMTYuMC4wLjAgU2FmYXJpLzUzNy4zNiBFZGcvMTE2LjAuMC4wJyBcICAgLUggJ3gtY2xpZW50LWRhdGE6IGV5SXhJam9pTWlJc0lqRXdJam9pWENKVE1HZzNSMDVIT1RGMmFEUTFUVVpTVW5aNU5ITjJha1JtTVdkbGFWSktlbk54TmxBM2FVMVdibkYzUFZ3aUlpd2lNaUk2SWpFaUxDSXpJam9pTVNJc0lqUWlPaUl5TVRVNE9EUTVOVE00TWpZNE9UTTVOVEEzSWl3aU5TSTZJbHdpU205R1VXcFBURGszT1M5TWJrUlJabmxDZDJOMU0yRnNPVU4zZVRaVFFtZGFNR05ZTVhCdE9XVk1aejFjSWlJc0lqWWlPaUppWlhSaElpd2lOeUk2SWpFNE1ETTRPRFl5TmpRek5TSXNJamtpT2lKa1pYTnJkRzl3SW4wPScgXCAgIC1IICd4LWVkZ2Utc2hvcHBpbmctZmxhZzogMScgXCAgIC0tY29tcHJlc3NlZA== -``` -</details> - - -## 鸣谢 - - 感谢 [EdgeGPT](https://github.com/acheong08/EdgeGPT) 提供的代理 API 的方法。 - - 感谢 [Vercel AI](https://github.com/vercel-labs/ai-chatbot) 提供的基础脚手架和 [ChatHub](https://github.com/chathub-dev/chathub) [go-proxy-bingai](https://github.com/adams549659584/go-proxy-bingai) 提供的部分代码。 - - -## 答疑及交流 - -<image src="./docs/images/wechat.png" width=240 /> - -## License - -MIT © [LICENSE](https://github.com/weaigc/bingo/blob/main/LICENSE). 
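For reference, the conversion the README describes (browser-copied `curl` command to server-side `BING_HEADER`) appears to be a plain base64 encoding of the copied text, matching the sample pair shown above. The following is a minimal offline sketch of that step; the `curl.txt` input file and the helper name are illustrative assumptions, not part of the project.

```python
import base64

def curl_to_bing_header(curl_text: str) -> str:
    # Assumption: BING_HEADER is the base64 of the full curl command text,
    # as suggested by the plain-curl and base64 samples in the README.
    return base64.b64encode(curl_text.encode("utf-8")).decode("ascii")

if __name__ == "__main__":
    # Hypothetical input file holding the curl command copied from the browser.
    with open("curl.txt", "r", encoding="utf-8") as f:
        print(curl_to_bing_header(f.read()))
```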
- - diff --git a/spaces/Kaludi/VirtualBrainGPT/Brain_Entry.py b/spaces/Kaludi/VirtualBrainGPT/Brain_Entry.py deleted file mode 100644 index 083c5dee211eb7f8f65c12dce78e2269813c35db..0000000000000000000000000000000000000000 --- a/spaces/Kaludi/VirtualBrainGPT/Brain_Entry.py +++ /dev/null @@ -1,139 +0,0 @@ -import os -import datetime -import streamlit as st - -hide_streamlit_style = """ - <style> - footer {visibility: hidden;} - </style> - """ -st.markdown(hide_streamlit_style, unsafe_allow_html=True) - -# Create a 'brain' folder in the current directory if it doesn't exist -if not os.path.exists("brain"): - os.makedirs("brain") - -# Name of the single journal file -journal_file = "brain/brain_journal.txt" - -def parse_date(date_string): - try: - return datetime.datetime.strptime(date_string, "%m-%d-%Y %A") - except ValueError: - return datetime.datetime.strptime(date_string, "%m-%d-%Y") - -def get_journal_entries(): - entries = [] - if not os.path.exists(journal_file): - return entries - - with open(journal_file, "r", encoding="utf-8") as f: - for line in f: - if line.startswith("Date: "): - entry_date = parse_date(line[6:].strip()) - entries.append(entry_date) - entries.sort(reverse=True) - return entries - -def read_entry(date): - content = "" - with open(journal_file, "r", encoding="utf-8") as f: - lines = f.readlines() - - start_reading = False - for line in lines: - if line.startswith("Date: ") and start_reading: - break - - if start_reading: - content += line - - if line.startswith("Date: ") and date == parse_date(line[6:].strip()): - start_reading = True - - return content - -def write_entry(date, content): - new_entry = f"\nDate: {date}\n{content}\n\n" - - # If the journal file does not exist, create it with the new entry - if not os.path.exists(journal_file): - with open(journal_file, "w", encoding="utf-8") as f: - f.write(new_entry) - else: - with open(journal_file, "r", encoding="utf-8") as f: - lines = f.readlines() - - # Remove existing entry if present - lines_to_remove = set() - removing_entry = False - for i, line in enumerate(lines): - if line.startswith("Date: "): - if date == line[6:].strip(): - removing_entry = True - lines_to_remove.add(i) - else: - removing_entry = False - - if removing_entry: - lines_to_remove.add(i) - - lines = [line for i, line in enumerate(lines) if i not in lines_to_remove] - - # Find the correct position for the new entry based on its date - new_entry_date = parse_date(date) - position = None - for i, line in enumerate(lines): - if line.startswith("Date: "): - entry_date = parse_date(line[6:].strip()) - if new_entry_date < entry_date: - position = i - break - - # Insert the new entry at the correct position - if position is None: - lines.append(new_entry) - else: - lines.insert(position, new_entry) - - # Write the updated journal entries to the file - with open(journal_file, "w", encoding="utf-8") as f: - f.writelines(lines) - - - -st.title("Digital Brain Journal Entry ✍️") -st.write("Write a diary journal entry or edit an existing one by selecting on the date picker.") - -selected_date = st.date_input("Select the date for the journal entry:", value=datetime.date.today()) -formatted_date = selected_date.strftime("%m-%d-%Y %A") -st.write(f"Selected date: {formatted_date}") - -entry = "" - -if selected_date in get_journal_entries(): - entry = read_entry(selected_date) - -new_entry = st.text_area("Write your journal entry:", entry) - -if st.button("Submit"): - write_entry(formatted_date, new_entry) - st.success("Journal entry saved successfully!") - 
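# A minimal usage sketch (assumes only the helpers defined above; the example
# date and text are made up): entries live in a single plain-text file,
# delimited by "Date: MM-DD-YYYY Weekday" marker lines.
#
#     write_entry("07-01-2023 Saturday", "Went hiking today.")
#     read_entry(parse_date("07-01-2023 Saturday"))   # returns the saved text
#
# write_entry() drops any existing entry for the same date and re-inserts the
# new one in date order, so saving the same day twice simply overwrites it.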
-st.header("Previous Journal Entries") -entries = get_journal_entries() - -if entries: - selected_entry_date = st.selectbox("Select an entry to view:", entries, format_func=lambda x: x.strftime("%m-%d-%Y %A")) - - if st.button("Load Entry"): - entry_text = read_entry(selected_entry_date) - st.write(f"**{selected_entry_date.strftime('%m-%d-%Y %A')}**") - st.markdown(entry_text.replace("\n", "<br>"), unsafe_allow_html=True) - -else: - st.write("No previous entries found.") - -st.markdown("---") -st.markdown("") -st.markdown("<p style='text-align: center'><a href='https://github.com/Kaludii'>Github</a> | <a href='https://huggingface.co/Kaludi'>HuggingFace</a></p>", unsafe_allow_html=True) \ No newline at end of file diff --git a/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/ipex/gradscaler.py b/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/ipex/gradscaler.py deleted file mode 100644 index 3c265ddb37453f02870afb481360c9cc30b05d81..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/infer/modules/ipex/gradscaler.py +++ /dev/null @@ -1,179 +0,0 @@ -from collections import defaultdict -import torch -import intel_extension_for_pytorch as ipex # pylint: disable=import-error, unused-import -import intel_extension_for_pytorch._C as core # pylint: disable=import-error, unused-import - -# pylint: disable=protected-access, missing-function-docstring, line-too-long - -OptState = ipex.cpu.autocast._grad_scaler.OptState -_MultiDeviceReplicator = ipex.cpu.autocast._grad_scaler._MultiDeviceReplicator -_refresh_per_optimizer_state = ipex.cpu.autocast._grad_scaler._refresh_per_optimizer_state - -def _unscale_grads_(self, optimizer, inv_scale, found_inf, allow_fp16): # pylint: disable=unused-argument - per_device_inv_scale = _MultiDeviceReplicator(inv_scale) - per_device_found_inf = _MultiDeviceReplicator(found_inf) - - # To set up _amp_foreach_non_finite_check_and_unscale_, split grads by device and dtype. - # There could be hundreds of grads, so we'd like to iterate through them just once. - # However, we don't know their devices or dtypes in advance. - - # https://stackoverflow.com/questions/5029934/defaultdict-of-defaultdict - # Google says mypy struggles with defaultdicts type annotations. - per_device_and_dtype_grads = defaultdict(lambda: defaultdict(list)) # type: ignore[var-annotated] - # sync grad to master weight - if hasattr(optimizer, "sync_grad"): - optimizer.sync_grad() - with torch.no_grad(): - for group in optimizer.param_groups: - for param in group["params"]: - if param.grad is None: - continue - if (not allow_fp16) and param.grad.dtype == torch.float16: - raise ValueError("Attempting to unscale FP16 gradients.") - if param.grad.is_sparse: - # is_coalesced() == False means the sparse grad has values with duplicate indices. - # coalesce() deduplicates indices and adds all values that have the same index. - # For scaled fp16 values, there's a good chance coalescing will cause overflow, - # so we should check the coalesced _values(). - if param.grad.dtype is torch.float16: - param.grad = param.grad.coalesce() - to_unscale = param.grad._values() - else: - to_unscale = param.grad - - # -: is there a way to split by device and dtype without appending in the inner loop? 
- to_unscale = to_unscale.to("cpu") - per_device_and_dtype_grads[to_unscale.device][ - to_unscale.dtype - ].append(to_unscale) - - for _, per_dtype_grads in per_device_and_dtype_grads.items(): - for grads in per_dtype_grads.values(): - core._amp_foreach_non_finite_check_and_unscale_( - grads, - per_device_found_inf.get("cpu"), - per_device_inv_scale.get("cpu"), - ) - - return per_device_found_inf._per_device_tensors - -def unscale_(self, optimizer): - """ - Divides ("unscales") the optimizer's gradient tensors by the scale factor. - :meth:`unscale_` is optional, serving cases where you need to - :ref:`modify or inspect gradients<working-with-unscaled-gradients>` - between the backward pass(es) and :meth:`step`. - If :meth:`unscale_` is not called explicitly, gradients will be unscaled automatically during :meth:`step`. - Simple example, using :meth:`unscale_` to enable clipping of unscaled gradients:: - ... - scaler.scale(loss).backward() - scaler.unscale_(optimizer) - torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm) - scaler.step(optimizer) - scaler.update() - Args: - optimizer (torch.optim.Optimizer): Optimizer that owns the gradients to be unscaled. - .. warning:: - :meth:`unscale_` should only be called once per optimizer per :meth:`step` call, - and only after all gradients for that optimizer's assigned parameters have been accumulated. - Calling :meth:`unscale_` twice for a given optimizer between each :meth:`step` triggers a RuntimeError. - .. warning:: - :meth:`unscale_` may unscale sparse gradients out of place, replacing the ``.grad`` attribute. - """ - if not self._enabled: - return - - self._check_scale_growth_tracker("unscale_") - - optimizer_state = self._per_optimizer_states[id(optimizer)] - - if optimizer_state["stage"] is OptState.UNSCALED: # pylint: disable=no-else-raise - raise RuntimeError( - "unscale_() has already been called on this optimizer since the last update()." - ) - elif optimizer_state["stage"] is OptState.STEPPED: - raise RuntimeError("unscale_() is being called after step().") - - # FP32 division can be imprecise for certain compile options, so we carry out the reciprocal in FP64. - assert self._scale is not None - inv_scale = self._scale.to("cpu").double().reciprocal().float().to(self._scale.device) - found_inf = torch.full( - (1,), 0.0, dtype=torch.float32, device=self._scale.device - ) - - optimizer_state["found_inf_per_device"] = self._unscale_grads_( - optimizer, inv_scale, found_inf, False - ) - optimizer_state["stage"] = OptState.UNSCALED - -def update(self, new_scale=None): - """ - Updates the scale factor. - If any optimizer steps were skipped the scale is multiplied by ``backoff_factor`` - to reduce it. If ``growth_interval`` unskipped iterations occurred consecutively, - the scale is multiplied by ``growth_factor`` to increase it. - Passing ``new_scale`` sets the new scale value manually. (``new_scale`` is not - used directly, it's used to fill GradScaler's internal scale tensor. So if - ``new_scale`` was a tensor, later in-place changes to that tensor will not further - affect the scale GradScaler uses internally.) - Args: - new_scale (float or :class:`torch.FloatTensor`, optional, default=None): New scale factor. - .. warning:: - :meth:`update` should only be called at the end of the iteration, after ``scaler.step(optimizer)`` has - been invoked for all optimizers used this iteration. 
- """ - if not self._enabled: - return - - _scale, _growth_tracker = self._check_scale_growth_tracker("update") - - if new_scale is not None: - # Accept a new user-defined scale. - if isinstance(new_scale, float): - self._scale.fill_(new_scale) # type: ignore[union-attr] - else: - reason = "new_scale should be a float or a 1-element torch.FloatTensor with requires_grad=False." - assert isinstance(new_scale, torch.FloatTensor), reason # type: ignore[attr-defined] - assert new_scale.numel() == 1, reason - assert new_scale.requires_grad is False, reason - self._scale.copy_(new_scale) # type: ignore[union-attr] - else: - # Consume shared inf/nan data collected from optimizers to update the scale. - # If all found_inf tensors are on the same device as self._scale, this operation is asynchronous. - found_infs = [ - found_inf.to(device="cpu", non_blocking=True) - for state in self._per_optimizer_states.values() - for found_inf in state["found_inf_per_device"].values() - ] - - assert len(found_infs) > 0, "No inf checks were recorded prior to update." - - found_inf_combined = found_infs[0] - if len(found_infs) > 1: - for i in range(1, len(found_infs)): - found_inf_combined += found_infs[i] - - to_device = _scale.device - _scale = _scale.to("cpu") - _growth_tracker = _growth_tracker.to("cpu") - - core._amp_update_scale_( - _scale, - _growth_tracker, - found_inf_combined, - self._growth_factor, - self._backoff_factor, - self._growth_interval, - ) - - _scale = _scale.to(to_device) - _growth_tracker = _growth_tracker.to(to_device) - # To prepare for next iteration, clear the data collected from optimizers this iteration. - self._per_optimizer_states = defaultdict(_refresh_per_optimizer_state) - -def gradscaler_init(): - torch.xpu.amp.GradScaler = ipex.cpu.autocast._grad_scaler.GradScaler - torch.xpu.amp.GradScaler._unscale_grads_ = _unscale_grads_ - torch.xpu.amp.GradScaler.unscale_ = unscale_ - torch.xpu.amp.GradScaler.update = update - return torch.xpu.amp.GradScaler \ No newline at end of file diff --git a/spaces/KarmaCST/Dzongkha-To-English-Translation-NLLB-Fine-tuning/README.md b/spaces/KarmaCST/Dzongkha-To-English-Translation-NLLB-Fine-tuning/README.md deleted file mode 100644 index 4232e9bcf2fe18b8cdc468d2d91332151f17034f..0000000000000000000000000000000000000000 --- a/spaces/KarmaCST/Dzongkha-To-English-Translation-NLLB-Fine-tuning/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Dzongkha to English Translation-NLLB-Finetuning -emoji: 📈 -colorFrom: red -colorTo: purple -sdk: gradio -sdk_version: 3.16.2 -app_file: app.py -pinned: false -duplicated_from: KarmaCST/English-To-Dzongkha-Translation-NLLB ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/audio.py b/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/audio.py deleted file mode 100644 index 2e03ae5eecdf50bd88b1a76c6bff59f8d4947291..0000000000000000000000000000000000000000 --- a/spaces/Kevin676/ChatGPT-with-Voice-Cloning-in-Chinese/synthesizer/audio.py +++ /dev/null @@ -1,206 +0,0 @@ -import librosa -import librosa.filters -import numpy as np -from scipy import signal -from scipy.io import wavfile -import soundfile as sf - - -def load_wav(path, sr): - return librosa.core.load(path, sr=sr)[0] - -def save_wav(wav, path, sr): - wav *= 32767 / max(0.01, np.max(np.abs(wav))) - #proposed by @dsmiller - wavfile.write(path, sr, wav.astype(np.int16)) - -def save_wavenet_wav(wav, path, sr): - 
sf.write(path, wav.astype(np.float32), sr) - -def preemphasis(wav, k, preemphasize=True): - if preemphasize: - return signal.lfilter([1, -k], [1], wav) - return wav - -def inv_preemphasis(wav, k, inv_preemphasize=True): - if inv_preemphasize: - return signal.lfilter([1], [1, -k], wav) - return wav - -#From https://github.com/r9y9/wavenet_vocoder/blob/master/audio.py -def start_and_end_indices(quantized, silence_threshold=2): - for start in range(quantized.size): - if abs(quantized[start] - 127) > silence_threshold: - break - for end in range(quantized.size - 1, 1, -1): - if abs(quantized[end] - 127) > silence_threshold: - break - - assert abs(quantized[start] - 127) > silence_threshold - assert abs(quantized[end] - 127) > silence_threshold - - return start, end - -def get_hop_size(hparams): - hop_size = hparams.hop_size - if hop_size is None: - assert hparams.frame_shift_ms is not None - hop_size = int(hparams.frame_shift_ms / 1000 * hparams.sample_rate) - return hop_size - -def linearspectrogram(wav, hparams): - D = _stft(preemphasis(wav, hparams.preemphasis, hparams.preemphasize), hparams) - S = _amp_to_db(np.abs(D), hparams) - hparams.ref_level_db - - if hparams.signal_normalization: - return _normalize(S, hparams) - return S - -def melspectrogram(wav, hparams): - D = _stft(preemphasis(wav, hparams.preemphasis, hparams.preemphasize), hparams) - S = _amp_to_db(_linear_to_mel(np.abs(D), hparams), hparams) - hparams.ref_level_db - - if hparams.signal_normalization: - return _normalize(S, hparams) - return S - -def inv_linear_spectrogram(linear_spectrogram, hparams): - """Converts linear spectrogram to waveform using librosa""" - if hparams.signal_normalization: - D = _denormalize(linear_spectrogram, hparams) - else: - D = linear_spectrogram - - S = _db_to_amp(D + hparams.ref_level_db) #Convert back to linear - - if hparams.use_lws: - processor = _lws_processor(hparams) - D = processor.run_lws(S.astype(np.float64).T ** hparams.power) - y = processor.istft(D).astype(np.float32) - return inv_preemphasis(y, hparams.preemphasis, hparams.preemphasize) - else: - return inv_preemphasis(_griffin_lim(S ** hparams.power, hparams), hparams.preemphasis, hparams.preemphasize) - -def inv_mel_spectrogram(mel_spectrogram, hparams): - """Converts mel spectrogram to waveform using librosa""" - if hparams.signal_normalization: - D = _denormalize(mel_spectrogram, hparams) - else: - D = mel_spectrogram - - S = _mel_to_linear(_db_to_amp(D + hparams.ref_level_db), hparams) # Convert back to linear - - if hparams.use_lws: - processor = _lws_processor(hparams) - D = processor.run_lws(S.astype(np.float64).T ** hparams.power) - y = processor.istft(D).astype(np.float32) - return inv_preemphasis(y, hparams.preemphasis, hparams.preemphasize) - else: - return inv_preemphasis(_griffin_lim(S ** hparams.power, hparams), hparams.preemphasis, hparams.preemphasize) - -def _lws_processor(hparams): - import lws - return lws.lws(hparams.n_fft, get_hop_size(hparams), fftsize=hparams.win_size, mode="speech") - -def _griffin_lim(S, hparams): - """librosa implementation of Griffin-Lim - Based on https://github.com/librosa/librosa/issues/434 - """ - angles = np.exp(2j * np.pi * np.random.rand(*S.shape)) - S_complex = np.abs(S).astype(np.complex) - y = _istft(S_complex * angles, hparams) - for i in range(hparams.griffin_lim_iters): - angles = np.exp(1j * np.angle(_stft(y, hparams))) - y = _istft(S_complex * angles, hparams) - return y - -def _stft(y, hparams): - if hparams.use_lws: - return _lws_processor(hparams).stft(y).T - else: - 
return librosa.stft(y=y, n_fft=hparams.n_fft, hop_length=get_hop_size(hparams), win_length=hparams.win_size) - -def _istft(y, hparams): - return librosa.istft(y, hop_length=get_hop_size(hparams), win_length=hparams.win_size) - -########################################################## -#Those are only correct when using lws!!! (This was messing with Wavenet quality for a long time!) -def num_frames(length, fsize, fshift): - """Compute number of time frames of spectrogram - """ - pad = (fsize - fshift) - if length % fshift == 0: - M = (length + pad * 2 - fsize) // fshift + 1 - else: - M = (length + pad * 2 - fsize) // fshift + 2 - return M - - -def pad_lr(x, fsize, fshift): - """Compute left and right padding - """ - M = num_frames(len(x), fsize, fshift) - pad = (fsize - fshift) - T = len(x) + 2 * pad - r = (M - 1) * fshift + fsize - T - return pad, pad + r -########################################################## -#Librosa correct padding -def librosa_pad_lr(x, fsize, fshift): - return 0, (x.shape[0] // fshift + 1) * fshift - x.shape[0] - -# Conversions -_mel_basis = None -_inv_mel_basis = None - -def _linear_to_mel(spectogram, hparams): - global _mel_basis - if _mel_basis is None: - _mel_basis = _build_mel_basis(hparams) - return np.dot(_mel_basis, spectogram) - -def _mel_to_linear(mel_spectrogram, hparams): - global _inv_mel_basis - if _inv_mel_basis is None: - _inv_mel_basis = np.linalg.pinv(_build_mel_basis(hparams)) - return np.maximum(1e-10, np.dot(_inv_mel_basis, mel_spectrogram)) - -def _build_mel_basis(hparams): - assert hparams.fmax <= hparams.sample_rate // 2 - return librosa.filters.mel(sr=hparams.sample_rate, n_fft=hparams.n_fft, n_mels=hparams.num_mels, - fmin=hparams.fmin, fmax=hparams.fmax) - -def _amp_to_db(x, hparams): - min_level = np.exp(hparams.min_level_db / 20 * np.log(10)) - return 20 * np.log10(np.maximum(min_level, x)) - -def _db_to_amp(x): - return np.power(10.0, (x) * 0.05) - -def _normalize(S, hparams): - if hparams.allow_clipping_in_normalization: - if hparams.symmetric_mels: - return np.clip((2 * hparams.max_abs_value) * ((S - hparams.min_level_db) / (-hparams.min_level_db)) - hparams.max_abs_value, - -hparams.max_abs_value, hparams.max_abs_value) - else: - return np.clip(hparams.max_abs_value * ((S - hparams.min_level_db) / (-hparams.min_level_db)), 0, hparams.max_abs_value) - - assert S.max() <= 0 and S.min() - hparams.min_level_db >= 0 - if hparams.symmetric_mels: - return (2 * hparams.max_abs_value) * ((S - hparams.min_level_db) / (-hparams.min_level_db)) - hparams.max_abs_value - else: - return hparams.max_abs_value * ((S - hparams.min_level_db) / (-hparams.min_level_db)) - -def _denormalize(D, hparams): - if hparams.allow_clipping_in_normalization: - if hparams.symmetric_mels: - return (((np.clip(D, -hparams.max_abs_value, - hparams.max_abs_value) + hparams.max_abs_value) * -hparams.min_level_db / (2 * hparams.max_abs_value)) - + hparams.min_level_db) - else: - return ((np.clip(D, 0, hparams.max_abs_value) * -hparams.min_level_db / hparams.max_abs_value) + hparams.min_level_db) - - if hparams.symmetric_mels: - return (((D + hparams.max_abs_value) * -hparams.min_level_db / (2 * hparams.max_abs_value)) + hparams.min_level_db) - else: - return ((D * -hparams.min_level_db / hparams.max_abs_value) + hparams.min_level_db) diff --git a/spaces/Kuachi/hololive/app.py b/spaces/Kuachi/hololive/app.py deleted file mode 100644 index 1b81fef9407b0621a9285d92342bd78db96782eb..0000000000000000000000000000000000000000 --- a/spaces/Kuachi/hololive/app.py +++ /dev/null 
@@ -1,185 +0,0 @@ -import os -import json -import argparse -import traceback -import logging -import gradio as gr -import numpy as np -import librosa -import torch -import asyncio -import edge_tts -from datetime import datetime -from fairseq import checkpoint_utils -from infer_pack.models import SynthesizerTrnMs256NSFsid, SynthesizerTrnMs256NSFsid_nono -from vc_infer_pipeline import VC -from config import ( - is_half, - device -) -logging.getLogger("numba").setLevel(logging.WARNING) -limitation = os.getenv("SYSTEM") == "spaces" # limit audio length in huggingface spaces - -def create_vc_fn(tgt_sr, net_g, vc, if_f0, file_index, file_big_npy): - def vc_fn( - input_audio, - f0_up_key, - f0_method, - index_rate, - tts_mode, - tts_text, - tts_voice - ): - try: - if tts_mode: - if len(tts_text) > 600 and limitation: - return "Text is too long", None - if tts_text is None or tts_voice is None: - return "You need to enter text and select a voice", None - asyncio.run(edge_tts.Communicate(tts_text, "-".join(tts_voice.split('-')[:-1])).save("tts.mp3")) - audio, sr = librosa.load("tts.mp3", sr=16000, mono=True) - else: - if args.files: - audio, sr = librosa.load(input_audio, sr=16000, mono=True) - else: - if input_audio is None: - return "You need to upload an audio", None - sampling_rate, audio = input_audio - duration = audio.shape[0] / sampling_rate - if duration > 330 and limitation: - return "Please upload an audio file that is less than 5 minutes 30 seconds.", None - audio = (audio / np.iinfo(audio.dtype).max).astype(np.float32) - if len(audio.shape) > 1: - audio = librosa.to_mono(audio.transpose(1, 0)) - if sampling_rate != 16000: - audio = librosa.resample(audio, orig_sr=sampling_rate, target_sr=16000) - times = [0, 0, 0] - f0_up_key = int(f0_up_key) - audio_opt = vc.pipeline( - hubert_model, - net_g, - 0, - audio, - times, - f0_up_key, - f0_method, - file_index, - file_big_npy, - index_rate, - if_f0, - ) - print( - f"[{datetime.now().strftime('%Y-%m-%d %H:%M')}]: npy: {times[0]}, f0: {times[1]}s, infer: {times[2]}s" - ) - return "Success", (tgt_sr, audio_opt) - except: - info = traceback.format_exc() - print(info) - return info, (None, None) - return vc_fn - -def load_hubert(): - global hubert_model - models, _, _ = checkpoint_utils.load_model_ensemble_and_task( - ["hubert_base.pt"], - suffix="", - ) - hubert_model = models[0] - hubert_model = hubert_model.to(device) - if is_half: - hubert_model = hubert_model.half() - else: - hubert_model = hubert_model.float() - hubert_model.eval() - -def change_to_tts_mode(tts_mode): - if tts_mode: - return gr.Audio.update(visible=False), gr.Textbox.update(visible=True), gr.Dropdown.update(visible=True) - else: - return gr.Audio.update(visible=True), gr.Textbox.update(visible=False), gr.Dropdown.update(visible=False) - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--api', action="store_true", default=False) - parser.add_argument("--share", action="store_true", default=False, help="share gradio app") - parser.add_argument("--files", action="store_true", default=False, help="load audio from path") - args, unknown = parser.parse_known_args() - load_hubert() - models = [] - tts_voice_list = asyncio.get_event_loop().run_until_complete(edge_tts.list_voices()) - voices = [f"{v['ShortName']}-{v['Gender']}" for v in tts_voice_list] - with open("weights/model_info.json", "r", encoding="utf-8") as f: - models_info = json.load(f) - for name, info in models_info.items(): - if not info['enable']: - continue - title = 
info['title'] - author = info.get("author", None) - cover = f"weights/{name}/{info['cover']}" - index = f"weights/{name}/{info['feature_retrieval_library']}" - npy = f"weights/{name}/{info['feature_file']}" - cpt = torch.load(f"weights/{name}/{name}.pth", map_location="cpu") - tgt_sr = cpt["config"][-1] - cpt["config"][-3] = cpt["weight"]["emb_g.weight"].shape[0] # n_spk - if_f0 = cpt.get("f0", 1) - if if_f0 == 1: - net_g = SynthesizerTrnMs256NSFsid(*cpt["config"], is_half=is_half) - else: - net_g = SynthesizerTrnMs256NSFsid_nono(*cpt["config"]) - del net_g.enc_q - print(net_g.load_state_dict(cpt["weight"], strict=False)) # 不加这一行清不干净, 真奇葩 - net_g.eval().to(device) - if is_half: - net_g = net_g.half() - else: - net_g = net_g.float() - vc = VC(tgt_sr, device, is_half) - models.append((name, title, author, cover, create_vc_fn(tgt_sr, net_g, vc, if_f0, index, npy))) - with gr.Blocks() as app: - gr.Markdown( - "# <center> Hololive RVC Models\n" - "## <center> The input audio should be clean and pure voice without background music.\n" - "[![image](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/aziib/Create-Google-Shared-Drive/blob/master/Hololive-RVC-Models.ipynb)\n\n" - "[![ko-fi](https://ko-fi.com/img/githubbutton_sm.svg)](https://ko-fi.com/megaaziib)\n\n" - ) - with gr.Tabs(): - for (name, title, author, cover, vc_fn) in models: - with gr.TabItem(name): - with gr.Row(): - gr.Markdown( - '<div align="center">' - f'<div>{title}</div>\n'+ - (f'<div>Model author: {author}</div>' if author else "")+ - (f'<img style="width:auto;height:300px;" src="file/{cover}">' if cover else "")+ - '</div>' - ) - with gr.Row(): - with gr.Column(): - if args.files: - vc_input = gr.Textbox(label="Input audio path") - else: - vc_input = gr.Audio(label="Input audio"+' (less than 5 minutes 30 seconds)' if limitation else '') - vc_transpose = gr.Number(label="Transpose", value=0) - vc_f0method = gr.Radio( - label="Pitch extraction algorithm, PM is fast but Harvest is better for low frequencies", - choices=["pm", "harvest"], - value="pm", - interactive=True, - ) - vc_index_ratio = gr.Slider( - minimum=0, - maximum=1, - label="Retrieval feature ratio", - value=0.6, - interactive=True, - ) - tts_mode = gr.Checkbox(label="tts (use edge-tts as input)", value=False) - tts_text = gr.Textbox(visible=False,label="TTS text (600 words limitation)" if limitation else "TTS text") - tts_voice = gr.Dropdown(label="Edge-tts speaker", choices=voices, visible=False, allow_custom_value=False, value="en-US-AnaNeural-Female") - vc_submit = gr.Button("Generate", variant="primary") - with gr.Column(): - vc_output1 = gr.Textbox(label="Output Message") - vc_output2 = gr.Audio(label="Output Audio") - vc_submit.click(vc_fn, [vc_input, vc_transpose, vc_f0method, vc_index_ratio, tts_mode, tts_text, tts_voice], [vc_output1, vc_output2]) - tts_mode.change(change_to_tts_mode, [tts_mode], [vc_input, tts_text, tts_voice]) - app.queue(concurrency_count=1, max_size=20, api_open=args.api).launch(share=args.share) \ No newline at end of file diff --git a/spaces/KyanChen/RSPrompter/configs/rsprompter/samdet_fasterrcnn_nwpu_config.py b/spaces/KyanChen/RSPrompter/configs/rsprompter/samdet_fasterrcnn_nwpu_config.py deleted file mode 100644 index 18a2b2d9576ee3f6bff73b23ca922a7356ea4aea..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/RSPrompter/configs/rsprompter/samdet_fasterrcnn_nwpu_config.py +++ /dev/null @@ -1,338 +0,0 @@ -custom_imports = dict(imports=['mmseg.datasets', 'mmseg.models'], 
allow_failed_imports=False) - -sub_model_train = [ - 'whole_model' -] - -sub_model_optim = { - 'whole_model': {'lr_mult': 1}, -} - -max_epochs = 1000 - -optimizer = dict( - type='AdamW', - sub_model=sub_model_optim, - lr=0.0005, - weight_decay=1e-3 -) - -param_scheduler = [ - # warm up learning rate scheduler - dict( - type='LinearLR', - start_factor=5e-4, - by_epoch=True, - begin=0, - end=1, - # update by iter - convert_to_iter_based=True), - # main learning rate scheduler - dict( - type='CosineAnnealingLR', - T_max=max_epochs, - by_epoch=True, - begin=1, - end=max_epochs, - ), -] - -param_scheduler_callback = dict( - type='ParamSchedulerHook' -) - -evaluator_ = dict( - type='CocoPLMetric', - metric=['bbox', 'segm'], - proposal_nums=[1, 10, 100] -) - -evaluator = dict( - # train_evaluator=evaluator_, - val_evaluator=evaluator_, -) - - -image_size = (1024, 1024) - -data_preprocessor = dict( - type='mmdet.DetDataPreprocessor', - mean=[123.675, 116.28, 103.53], - std=[58.395, 57.12, 57.375], - bgr_to_rgb=True, - pad_size_divisor=32, - pad_mask=True, - mask_pad_value=0, -) - -num_things_classes = 10 -num_stuff_classes = 0 -num_classes = num_things_classes + num_stuff_classes - -model = dict( - type='mmdet.FasterRCNN', - data_preprocessor=data_preprocessor, - backbone=dict( - type='mmdet.ResNet', - depth=50, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=dict(type='BN', requires_grad=True), - norm_eval=True, - style='pytorch', - init_cfg=dict(type='Pretrained', checkpoint='torchvision://resnet50')), - neck=dict( - type='mmdet.FPN', - in_channels=[256, 512, 1024, 2048], - out_channels=256, - num_outs=5), - rpn_head=dict( - type='mmdet.RPNHead', - in_channels=256, - feat_channels=256, - anchor_generator=dict( - type='mmdet.AnchorGenerator', - scales=[8], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='mmdet.DeltaXYWHBBoxCoder', - target_means=[.0, .0, .0, .0], - target_stds=[1.0, 1.0, 1.0, 1.0]), - loss_cls=dict( - type='mmdet.CrossEntropyLoss', use_sigmoid=True, loss_weight=1.0), - loss_bbox=dict(type='mmdet.L1Loss', loss_weight=1.0)), - roi_head=dict( - type='mmdet.StandardRoIHead', - bbox_roi_extractor=dict( - type='mmdet.SingleRoIExtractor', - roi_layer=dict(type='RoIAlign', output_size=7, sampling_ratio=0), - out_channels=256, - featmap_strides=[4, 8, 16, 32]), - bbox_head=dict( - type='mmdet.Shared2FCBBoxHead', - in_channels=256, - fc_out_channels=1024, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='mmdet.DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=False, - loss_cls=dict( - type='mmdet.CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0), - loss_bbox=dict(type='mmdet.L1Loss', loss_weight=1.0))), - # model training and testing settings - train_cfg=dict( - rpn=dict( - assigner=dict( - type='mmdet.MaxIoUAssigner', - pos_iou_thr=0.7, - neg_iou_thr=0.3, - min_pos_iou=0.3, - match_low_quality=True, - ignore_iof_thr=-1), - sampler=dict( - type='mmdet.RandomSampler', - num=256, - pos_fraction=0.5, - neg_pos_ub=-1, - add_gt_as_proposals=False), - allowed_border=-1, - pos_weight=-1, - debug=False), - rpn_proposal=dict( - nms_pre=2000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - assigner=dict( - type='mmdet.MaxIoUAssigner', - pos_iou_thr=0.5, - neg_iou_thr=0.5, - min_pos_iou=0.5, - match_low_quality=False, - ignore_iof_thr=-1), - sampler=dict( - type='mmdet.RandomSampler', - num=512, - 
pos_fraction=0.25, - neg_pos_ub=-1, - add_gt_as_proposals=True), - pos_weight=-1, - debug=False)), - test_cfg=dict( - rpn=dict( - nms_pre=1000, - max_per_img=1000, - nms=dict(type='nms', iou_threshold=0.7), - min_bbox_size=0), - rcnn=dict( - score_thr=0.05, - nms=dict(type='nms', iou_threshold=0.5), - max_per_img=100) - # soft-nms is also supported for rcnn testing - # e.g., nms=dict(type='soft_nms', iou_threshold=0.5, min_score=0.05) - )) - -model_cfg = dict( - type='SegSAMDetPLer', - hyperparameters=dict( - optimizer=optimizer, - param_scheduler=param_scheduler, - evaluator=evaluator, - ), - need_train_names=sub_model_train, - whole_model=model, - backbone=dict( - type='vit_h', - checkpoint='pretrain/sam/sam_vit_h_4b8939.pth', - # type='vit_b', - # checkpoint='pretrain/sam/sam_vit_b_01ec64.pth', - ) -) - -task_name = 'nwpu_ins' -exp_name = 'E20230531_9' -logger = dict( - type='WandbLogger', - project=task_name, - group='samdet', - name=exp_name -) -# logger = None - -callbacks = [ - param_scheduler_callback, - dict( - type='ModelCheckpoint', - dirpath=f'results/{task_name}/{exp_name}/checkpoints', - save_last=True, - mode='max', - monitor='valsegm_map_0', - save_top_k=2, - filename='epoch_{epoch}-map_{valsegm_map_0:.4f}' - ), - dict( - type='LearningRateMonitor', - logging_interval='step' - ) -] - - -trainer_cfg = dict( - compiled_model=False, - accelerator="auto", - # strategy="auto", - # strategy="ddp", - strategy='ddp_find_unused_parameters_true', - # precision='32', - # precision='16-mixed', - devices=8, - default_root_dir=f'results/{task_name}/{exp_name}', - # default_root_dir='results/tmp', - max_epochs=max_epochs, - logger=logger, - callbacks=callbacks, - log_every_n_steps=5, - check_val_every_n_epoch=5, - benchmark=True, - # sync_batchnorm=True, - # fast_dev_run=True, - - # limit_train_batches=1, - # limit_val_batches=0, - # limit_test_batches=None, - # limit_predict_batches=None, - # overfit_batches=0.0, - - # val_check_interval=None, - # num_sanity_val_steps=0, - # enable_checkpointing=None, - # enable_progress_bar=None, - # enable_model_summary=None, - # accumulate_grad_batches=32, - # gradient_clip_val=15, - # gradient_clip_algorithm='norm', - # deterministic=None, - # inference_mode: bool=True, - use_distributed_sampler=True, - # profiler="simple", - # detect_anomaly=False, - # barebones=False, - # plugins=None, - # reload_dataloaders_every_n_epochs=0, -) - - -backend_args = None -train_pipeline = [ - dict(type='mmdet.LoadImageFromFile'), - dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='mmdet.Resize', scale=image_size), - dict(type='mmdet.RandomFlip', prob=0.5), - dict(type='mmdet.PackDetInputs') -] - -test_pipeline = [ - dict(type='mmdet.LoadImageFromFile', backend_args=backend_args), - dict(type='mmdet.Resize', scale=image_size), - # If you don't have a gt annotation, delete the pipeline - dict(type='mmdet.LoadAnnotations', with_bbox=True, with_mask=True), - dict( - type='mmdet.PackDetInputs', - meta_keys=('img_id', 'img_path', 'ori_shape', 'img_shape', - 'scale_factor')) -] - - -train_batch_size_per_gpu = 4 -train_num_workers = 4 -test_batch_size_per_gpu = 4 -test_num_workers = 4 -persistent_workers = True - -data_parent = '/mnt/search01/dataset/cky_data/NWPU10' -train_data_prefix = '' -val_data_prefix = '' - -dataset_type = 'NWPUInsSegDataset' - -val_loader = dict( - batch_size=test_batch_size_per_gpu, - num_workers=test_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - dataset=dict( - type=dataset_type, - 
data_root=data_parent, - ann_file='NWPU_instances_val.json', - data_prefix=dict(img_path='positive image set'), - test_mode=True, - filter_cfg=dict(filter_empty_gt=True, min_size=32), - pipeline=test_pipeline, - backend_args=backend_args)) - -datamodule_cfg = dict( - type='PLDataModule', - train_loader=dict( - batch_size=train_batch_size_per_gpu, - num_workers=train_num_workers, - persistent_workers=persistent_workers, - pin_memory=True, - dataset=dict( - type=dataset_type, - data_root=data_parent, - ann_file='NWPU_instances_train.json', - data_prefix=dict(img_path='positive image set'), - filter_cfg=dict(filter_empty_gt=True, min_size=32), - pipeline=train_pipeline, - backend_args=backend_args) - ), - val_loader=val_loader, - # test_loader=val_loader - predict_loader=val_loader -) \ No newline at end of file diff --git a/spaces/Latryna/roop/roop/predicter.py b/spaces/Latryna/roop/roop/predicter.py deleted file mode 100644 index 7ebc2b62e4152c12ce41e55d718222ca9c8a8b7f..0000000000000000000000000000000000000000 --- a/spaces/Latryna/roop/roop/predicter.py +++ /dev/null @@ -1,25 +0,0 @@ -import numpy -import opennsfw2 -from PIL import Image - -from roop.typing import Frame - -MAX_PROBABILITY = 0.85 - - -def predict_frame(target_frame: Frame) -> bool: - image = Image.fromarray(target_frame) - image = opennsfw2.preprocess_image(image, opennsfw2.Preprocessing.YAHOO) - model = opennsfw2.make_open_nsfw_model() - views = numpy.expand_dims(image, axis=0) - _, probability = model.predict(views)[0] - return probability > MAX_PROBABILITY - - -def predict_image(target_path: str) -> bool: - return opennsfw2.predict_image(target_path) > MAX_PROBABILITY - - -def predict_video(target_path: str) -> bool: - _, probabilities = opennsfw2.predict_video_frames(video_path=target_path, frame_interval=100) - return any(probability > MAX_PROBABILITY for probability in probabilities) diff --git a/spaces/LayBraid/SpaceVector_v0/README.md b/spaces/LayBraid/SpaceVector_v0/README.md deleted file mode 100644 index bbd1b50b61173a9a4a161e2c1ebed1f64784ed86..0000000000000000000000000000000000000000 --- a/spaces/LayBraid/SpaceVector_v0/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: OpenIA Clip Implementation V2 -emoji: 😻 -colorFrom: blue -colorTo: purple -sdk: streamlit -sdk_version: 1.10.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/MINAMONI/White-box-Cartoonization/wbc/cartoonize.py b/spaces/MINAMONI/White-box-Cartoonization/wbc/cartoonize.py deleted file mode 100644 index 25faf1ceb95aaed9a3f7a7982d17a03dc6bc32b1..0000000000000000000000000000000000000000 --- a/spaces/MINAMONI/White-box-Cartoonization/wbc/cartoonize.py +++ /dev/null @@ -1,112 +0,0 @@ -import os -import cv2 -import numpy as np -import tensorflow as tf -import wbc.network as network -import wbc.guided_filter as guided_filter -from tqdm import tqdm - - -def resize_crop(image): - h, w, c = np.shape(image) - if min(h, w) > 720: - if h > w: - h, w = int(720 * h / w), 720 - else: - h, w = 720, int(720 * w / h) - image = cv2.resize(image, (w, h), - interpolation=cv2.INTER_AREA) - h, w = (h // 8) * 8, (w // 8) * 8 - image = image[:h, :w, :] - return image - - -def cartoonize(load_folder, save_folder, model_path): - print(model_path) - input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - network_out = network.unet_generator(input_photo) - final_out = guided_filter.guided_filter(input_photo, network_out, r=1, eps=5e-3) - - 
all_vars = tf.trainable_variables() - gene_vars = [var for var in all_vars if 'generator' in var.name] - saver = tf.train.Saver(var_list=gene_vars) - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - sess = tf.Session(config=config) - - sess.run(tf.global_variables_initializer()) - saver.restore(sess, tf.train.latest_checkpoint(model_path)) - name_list = os.listdir(load_folder) - for name in tqdm(name_list): - try: - load_path = os.path.join(load_folder, name) - save_path = os.path.join(save_folder, name) - image = cv2.imread(load_path) - image = resize_crop(image) - batch_image = image.astype(np.float32) / 127.5 - 1 - batch_image = np.expand_dims(batch_image, axis=0) - output = sess.run(final_out, feed_dict={input_photo: batch_image}) - output = (np.squeeze(output) + 1) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - cv2.imwrite(save_path, output) - except: - print('cartoonize {} failed'.format(load_path)) - - -class Cartoonize: - def __init__(self, model_path): - print(model_path) - self.input_photo = tf.placeholder(tf.float32, [1, None, None, 3]) - network_out = network.unet_generator(self.input_photo) - self.final_out = guided_filter.guided_filter(self.input_photo, network_out, r=1, eps=5e-3) - - all_vars = tf.trainable_variables() - gene_vars = [var for var in all_vars if 'generator' in var.name] - saver = tf.train.Saver(var_list=gene_vars) - - config = tf.ConfigProto() - config.gpu_options.allow_growth = True - self.sess = tf.Session(config=config) - - self.sess.run(tf.global_variables_initializer()) - saver.restore(self.sess, tf.train.latest_checkpoint(model_path)) - - def run(self, load_folder, save_folder): - name_list = os.listdir(load_folder) - for name in tqdm(name_list): - try: - load_path = os.path.join(load_folder, name) - save_path = os.path.join(save_folder, name) - image = cv2.imread(load_path) - image = resize_crop(image) - batch_image = image.astype(np.float32) / 127.5 - 1 - batch_image = np.expand_dims(batch_image, axis=0) - output = self.sess.run(self.final_out, feed_dict={self.input_photo: batch_image}) - output = (np.squeeze(output) + 1) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - cv2.imwrite(save_path, output) - except: - print('cartoonize {} failed'.format(load_path)) - - def run_sigle(self, load_path, save_path): - try: - image = cv2.imread(load_path) - image = resize_crop(image) - batch_image = image.astype(np.float32) / 127.5 - 1 - batch_image = np.expand_dims(batch_image, axis=0) - output = self.sess.run(self.final_out, feed_dict={self.input_photo: batch_image}) - output = (np.squeeze(output) + 1) * 127.5 - output = np.clip(output, 0, 255).astype(np.uint8) - cv2.imwrite(save_path, output) - except: - print('cartoonize {} failed'.format(load_path)) - - -if __name__ == '__main__': - model_path = 'saved_models' - load_folder = 'test_images' - save_folder = 'cartoonized_images' - if not os.path.exists(save_folder): - os.mkdir(save_folder) - cartoonize(load_folder, save_folder, model_path) diff --git a/spaces/MWilinski/bot/data/stackoverflow_python_dataset.py b/spaces/MWilinski/bot/data/stackoverflow_python_dataset.py deleted file mode 100644 index f29c8f1a4282f067bbd5c5a48db66e4d999d19f3..0000000000000000000000000000000000000000 --- a/spaces/MWilinski/bot/data/stackoverflow_python_dataset.py +++ /dev/null @@ -1,55 +0,0 @@ -from datetime import datetime -from datasets import load_dataset -from bs4 import BeautifulSoup - - -def preprocess_dataset(): - """ - Preprocesses the 'koutch/stackoverflow_python' dataset. 
- - Returns: - datasets.arrow_dataset.Dataset: The preprocessed dataset. - """ - dataset = load_dataset('koutch/stackoverflow_python', split='train') - dataset = dataset.filter( - lambda example: - example['question_score'] > 100 and - example['answer_score'] > 5 and - datetime.strptime(example['answer_date'], '%Y-%m-%dT%H:%M:%SZ').year > 2010 - ) - - def html2text(example): - soup = BeautifulSoup(example, 'html.parser') - return ''.join(soup.findAll(string=True)) - - def transforms(example): - example['answer'] = html2text(example['answer_body']) - example['question'] = html2text(example['question_body']) - return example - - dataset = dataset.map(lambda example: transforms(example)) - dataset = dataset.remove_columns([ - 'question_score', 'question_date', 'question_id', - 'answer_date', 'answer_id', 'answer_score', 'tags', - 'question_body', 'answer_body' - ]) - return dataset - - -def show_info(dataset): - """ - Print information about the dataset. - - Args: - dataset (datasets.arrow_dataset.Dataset): The dataset. - """ - print(dataset.info, '\n') - print(f'dataset len: {len(dataset)}') - print(f"example question: {dataset[0]['question']}") - print(f"example answer: {dataset[0]['answer']}") - - -if __name__ == '__main__': - dataset = preprocess_dataset() - dataset.push_to_hub('KonradSzafer/stackoverflow_python_preprocessed', private=False) - show_info(dataset) diff --git a/spaces/Marshalls/testmtd/feature_extraction/madmom/evaluation/notes.py b/spaces/Marshalls/testmtd/feature_extraction/madmom/evaluation/notes.py deleted file mode 100644 index a34daf75d66f0b549ae67785ae5bca29a6453ba5..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/feature_extraction/madmom/evaluation/notes.py +++ /dev/null @@ -1,367 +0,0 @@ -# encoding: utf-8 -# pylint: disable=no-member -# pylint: disable=invalid-name -# pylint: disable=too-many-arguments -""" -This module contains note evaluation functionality. - -""" - -from __future__ import absolute_import, division, print_function - -import warnings -import numpy as np - -from . import (evaluation_io, MultiClassEvaluation, SumEvaluation, - MeanEvaluation) -from .onsets import onset_evaluation, OnsetEvaluation -from ..io import load_notes - - -# default note evaluation values -WINDOW = 0.025 - - -def remove_duplicate_notes(data): - """ - Remove duplicate rows from the array. - - Parameters - ---------- - data : numpy array - Data. - - Returns - ------- - numpy array - Data array with duplicate rows removed. - - Notes - ----- - This function removes only exact duplicates. - - """ - if data.size == 0: - return data - # found here: http://stackoverflow.com/questions/2828059/ - # find the unique rows - order = np.ascontiguousarray(data).view( - np.dtype((np.void, data.dtype.itemsize * data.shape[1]))) - unique = np.unique(order, return_index=True)[1] - # only use the unique rows - data = data[unique] - # sort them by the first column and return them - return data[data[:, 0].argsort()] - - -# note onset evaluation function -def note_onset_evaluation(detections, annotations, window=WINDOW): - """ - Determine the true/false positive/negative note onset detections. - - Parameters - ---------- - detections : numpy array - Detected notes. - annotations : numpy array - Annotated ground truth notes. - window : float, optional - Evaluation window [seconds]. - - Returns - ------- - tp : numpy array, shape (num_tp, 2) - True positive detections. - fp : numpy array, shape (num_fp, 2) - False positive detections. 
- tn : numpy array, shape (0, 2) - True negative detections (empty, see notes). - fn : numpy array, shape (num_fn, 2) - False negative detections. - errors : numpy array, shape (num_tp, 2) - Errors of the true positive detections wrt. the annotations. - - Notes - ----- - The expected note row format is: - - 'note_time' 'MIDI_note' ['duration' ['MIDI_velocity']] - - The returned true negative array is empty, because we are not interested - in this class, since it is magnitudes bigger than true positives array. - - """ - # make sure the arrays have the correct types and dimensions - detections = np.asarray(detections, dtype=np.float) - annotations = np.asarray(annotations, dtype=np.float) - # check dimensions - if detections.ndim != 2 or annotations.ndim != 2: - raise ValueError('detections and annotations must be 2D arrays') - - # init TP, FP, TN and FN lists - tp = np.zeros((0, 2)) - fp = np.zeros((0, 2)) - tn = np.zeros((0, 2)) # this will not be altered - fn = np.zeros((0, 2)) - errors = np.zeros((0, 2)) - # if neither detections nor annotations are given - if detections.size == 0 and annotations.size == 0: - # return the arrays as is - return tp, fp, tn, fn, errors - # if only detections are given - elif annotations.size == 0: - # all detections are FP - return tp, detections, tn, fn, errors - # if only annotations are given - elif detections.size == 0: - # all annotations are FN - return tp, tp, tn, annotations, errors - - # TODO: extend to also evaluate the duration and velocity of notes - # for onset evaluation use only the onset time and midi note number - detections = detections[:, :2] - annotations = annotations[:, :2] - - # get a list of all notes detected / annotated - notes = np.unique(np.concatenate((detections[:, 1], - annotations[:, 1]))).tolist() - # iterate over all notes - for note in notes: - # perform normal onset detection on each note - det = detections[detections[:, 1] == note] - ann = annotations[annotations[:, 1] == note] - tp_, fp_, _, fn_, err_ = onset_evaluation(det[:, 0], ann[:, 0], window) - # convert returned arrays to lists and append the detections and - # annotations to the correct lists - tp = np.vstack((tp, det[np.in1d(det[:, 0], tp_)])) - fp = np.vstack((fp, det[np.in1d(det[:, 0], fp_)])) - fn = np.vstack((fn, ann[np.in1d(ann[:, 0], fn_)])) - # append the note number to the errors - err_ = np.vstack((np.array(err_), - np.repeat(np.asarray([note]), len(err_)))).T - errors = np.vstack((errors, err_)) - # check calculations - if len(tp) + len(fp) != len(detections): - raise AssertionError('bad TP / FP calculation') - if len(tp) + len(fn) != len(annotations): - raise AssertionError('bad FN calculation') - if len(tp) != len(errors): - raise AssertionError('bad errors calculation') - # sort the arrays - # Note: The errors must have the same sorting order as the TPs, so they - # must be done first (before the TPs get sorted) - errors = errors[tp[:, 0].argsort()] - tp = tp[tp[:, 0].argsort()] - fp = fp[fp[:, 0].argsort()] - fn = fn[fn[:, 0].argsort()] - # return the arrays - return tp, fp, tn, fn, errors - - -# for note evaluation with Precision, Recall, F-measure use the Evaluation -# class and just define the evaluation function -# TODO: extend to also report the measures without octave errors -class NoteEvaluation(MultiClassEvaluation): - """ - Evaluation class for measuring Precision, Recall and F-measure of notes. - - Parameters - ---------- - detections : str, list or numpy array - Detected notes. 
- annotations : str, list or numpy array - Annotated ground truth notes. - window : float, optional - F-measure evaluation window [seconds] - delay : float, optional - Delay the detections `delay` seconds for evaluation. - - """ - - def __init__(self, detections, annotations, window=WINDOW, delay=0, - **kwargs): - # convert to numpy array - detections = np.array(detections, dtype=np.float, ndmin=2) - annotations = np.array(annotations, dtype=np.float, ndmin=2) - # shift the detections if needed - if delay != 0: - detections[:, 0] += delay - # evaluate - numbers = note_onset_evaluation(detections, annotations, window) - tp, fp, tn, fn, errors = numbers - super(NoteEvaluation, self).__init__(tp, fp, tn, fn, **kwargs) - self.errors = errors - # save them for the individual note evaluation - self.detections = detections - self.annotations = annotations - self.window = window - - @property - def mean_error(self): - """Mean of the errors.""" - warnings.warn('mean_error is given for all notes, this will change!') - if len(self.errors) == 0: - return np.nan - return np.mean(self.errors[:, 0]) - - @property - def std_error(self): - """Standard deviation of the errors.""" - warnings.warn('std_error is given for all notes, this will change!') - if len(self.errors) == 0: - return np.nan - return np.std(self.errors[:, 0]) - - def tostring(self, notes=False, **kwargs): - """ - - Parameters - ---------- - notes : bool, optional - Display detailed output for all individual notes. - - Returns - ------- - str - Evaluation metrics formatted as a human readable string. - - """ - ret = '' - if self.name is not None: - ret += '%s\n ' % self.name - # add statistics for the individual note - if notes: - # determine which notes are present - notes = [] - if self.tp.any(): - notes = np.append(notes, np.unique(self.tp[:, 1])) - if self.fp.any(): - notes = np.append(notes, np.unique(self.fp[:, 1])) - if self.tn.any(): - notes = np.append(notes, np.unique(self.tn[:, 1])) - if self.fn.any(): - notes = np.append(notes, np.unique(self.fn[:, 1])) - # evaluate them individually - for note in sorted(np.unique(notes)): - # detections and annotations for this note (only onset times) - det = self.detections[self.detections[:, 1] == note][:, 0] - ann = self.annotations[self.annotations[:, 1] == note][:, 0] - name = 'MIDI note %s' % note - e = OnsetEvaluation(det, ann, self.window, name=name) - # append to the output string - ret += ' %s\n' % e.tostring(notes=False) - # normal formatting - ret += 'Notes: %5d TP: %5d FP: %4d FN: %4d ' \ - 'Precision: %.3f Recall: %.3f F-measure: %.3f ' \ - 'Acc: %.3f mean: %5.1f ms std: %5.1f ms' % \ - (self.num_annotations, self.num_tp, self.num_fp, self.num_fn, - self.precision, self.recall, self.fmeasure, self.accuracy, - self.mean_error * 1000., self.std_error * 1000.) - # return - return ret - - -class NoteSumEvaluation(SumEvaluation, NoteEvaluation): - """ - Class for summing note evaluations. - - """ - - @property - def errors(self): - """Errors of the true positive detections wrt. the ground truth.""" - if not self.eval_objects: - # return empty array - return np.zeros((0, 2)) - return np.concatenate([e.errors for e in self.eval_objects]) - - -class NoteMeanEvaluation(MeanEvaluation, NoteSumEvaluation): - """ - Class for averaging note evaluations. 
- - """ - - @property - def mean_error(self): - """Mean of the errors.""" - warnings.warn('mean_error is given for all notes, this will change!') - return np.nanmean([e.mean_error for e in self.eval_objects]) - - @property - def std_error(self): - """Standard deviation of the errors.""" - warnings.warn('std_error is given for all notes, this will change!') - return np.nanmean([e.std_error for e in self.eval_objects]) - - def tostring(self, **kwargs): - """ - Format the evaluation metrics as a human readable string. - - Returns - ------- - str - Evaluation metrics formatted as a human readable string. - - """ - # format with floats instead of integers - ret = '' - if self.name is not None: - ret += '%s\n ' % self.name - ret += 'Notes: %5.2f TP: %5.2f FP: %5.2f FN: %5.2f ' \ - 'Precision: %.3f Recall: %.3f F-measure: %.3f ' \ - 'Acc: %.3f mean: %5.1f ms std: %5.1f ms' % \ - (self.num_annotations, self.num_tp, self.num_fp, self.num_fn, - self.precision, self.recall, self.fmeasure, self.accuracy, - self.mean_error * 1000., self.std_error * 1000.) - return ret - - -def add_parser(parser): - """ - Add a note evaluation sub-parser to an existing parser. - - Parameters - ---------- - parser : argparse parser instance - Existing argparse parser object. - - Returns - ------- - sub_parser : argparse sub-parser instance - Note evaluation sub-parser. - parser_group : argparse argument group - Note evaluation argument group. - - """ - import argparse - # add tempo evaluation sub-parser to the existing parser - p = parser.add_parser( - 'notes', help='note evaluation', - formatter_class=argparse.RawDescriptionHelpFormatter, - description=''' - This program evaluates pairs of files containing the note annotations and - detections. Suffixes can be given to filter them from the list of files. - - Each line represents a note and must have the following format with values - being separated by whitespace [brackets indicate optional values]: - `onset_time MIDI_note [duration [velocity]]` - - Lines starting with # are treated as comments and are ignored. 
- - ''') - # set defaults - p.set_defaults(eval=NoteEvaluation, sum_eval=NoteSumEvaluation, - mean_eval=NoteMeanEvaluation, load_fn=load_notes) - # file I/O - evaluation_io(p, ann_suffix='.notes', det_suffix='.notes.txt') - # evaluation parameters - g = p.add_argument_group('note evaluation arguments') - g.add_argument('-w', dest='window', action='store', type=float, - default=0.025, - help='evaluation window (+/- the given size) ' - '[seconds, default=%(default)s]') - g.add_argument('--delay', action='store', type=float, default=0., - help='add given delay to all detections [seconds]') - # return the sub-parser and evaluation argument group - return p, g diff --git a/spaces/Marshalls/testmtd/models/optimizer.py b/spaces/Marshalls/testmtd/models/optimizer.py deleted file mode 100644 index 9284a2b084522a7b08882c74c30dd94dc018134a..0000000000000000000000000000000000000000 --- a/spaces/Marshalls/testmtd/models/optimizer.py +++ /dev/null @@ -1,213 +0,0 @@ -from torch.optim import lr_scheduler -import torch -from .nero import Nero -# import torch_optimizer as optim -import ast -from madgrad import MADGRAD -from pl_bolts.optimizers.lr_scheduler import LinearWarmupCosineAnnealingLR - - -def get_optimizers(net, opt): - if opt.optimizer == "adam": - optimizer = torch.optim.Adam(net.parameters(), lr=opt.learning_rate, weight_decay=opt.weight_decay) - elif opt.optimizer == "adamw": - optimizer = torch.optim.AdamW(net.parameters(), lr=opt.learning_rate, weight_decay=opt.weight_decay, eps=1e-05, betas=(0.9, 0.95)) - elif opt.optimizer == "sgd": - optimizer = torch.optim.SGD(net.parameters(), lr=opt.learning_rate, momentum=opt.momentum, weight_decay=opt.weight_decay) - elif opt.optimizer == "adagrad": - optimizer = torch.optim.Adagrad(net.parameters(), lr=opt.learning_rate, weight_decay=opt.weight_decay) - elif opt.optimizer == "adadelta": - optimizer = torch.optim.Adadelta(net.parameters(), lr=opt.learning_rate, weight_decay=opt.weight_decay) - elif opt.optimizer == "rmsprop": - optimizer = torch.optim.Rmsprop(net.parameters(), lr=opt.learning_rate, weight_decay=opt.weight_decay) - elif opt.optimizer == "nero": - optimizer = Nero(net.parameters(), lr=opt.learning_rate) - elif opt.optimizer == "madgrad": - optimizer = MADGRAD(net.parameters(), lr=opt.learning_rate, weight_decay=opt.weight_decay, momentum=opt.momentum) - # elif opt.optimizer == "ranger": - # optimizer = optim.Ranger(net.parameters(), lr=opt.learning_rate, alpha=0.5, k=6, N_sma_threshhold=5, betas=(.95, 0.999), eps=1e-5, weight_decay=0 ) - else: - return NotImplementedError('optimizer [%s] is not implemented', opt.optimizer) - return [optimizer] - -def get_scheduler(optimizer, opt): - if opt.lr_policy == 'lambda': - def lambda_rule(epoch): - nepochs = opt.max_epochs - opt.nepoch_decay #number of epochs before beginning to decay - lr_l = 1.0 - max(0, epoch + opt.epoch_count - nepochs) / float(opt.nepoch_decay + 1) - return lr_l - scheduler = lr_scheduler.LambdaLR(optimizer, lr_lambda=lambda_rule) - elif opt.lr_policy == 'exponential': - scheduler = lr_scheduler.ExponentialLR(optimizer = optimizer, gamma = opt.lr_decay_factor) - elif opt.lr_policy == 'step': - scheduler = lr_scheduler.StepLR(optimizer, step_size=opt.lr_decay_iters, gamma=opt.lr_decay_factor) - elif opt.lr_policy == 'multistep': - scheduler = lr_scheduler.MultiStepLR(optimizer, milestones=ast.literal_eval(opt.lr_decay_milestones), gamma=opt.lr_decay_factor) - elif opt.lr_policy == 'plateau': - scheduler = lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.2, 
threshold=0.01, patience=5) - elif opt.lr_policy == 'cosine': - scheduler = lr_scheduler.CosineAnnealingLR(optimizer, T_max=opt.nepoch, eta_min=0) - elif opt.lr_policy == 'cyclic': - scheduler = CyclicLR(optimizer, base_lr=opt.learning_rate / 10, max_lr=opt.learning_rate, - step_size=opt.nepoch_decay, mode='triangular2') - elif opt.lr_policy == 'reduceOnPlateau': - scheduler = ReduceLROnPlateau(optimizer, 'min', factor=0.2) - elif opt.lr_policy == 'LinearWarmupCosineAnnealing': - scheduler = LinearWarmupCosineAnnealingLR(optimizer, warmup_epochs=opt.warmup_epochs, max_epochs=opt.max_epochs) - else: - return NotImplementedError('learning rate policy [%s] is not implemented', opt.lr_policy) - return scheduler - - -class CyclicLR(object): - """Sets the learning rate of each parameter group according to - cyclical learning rate policy (CLR). The policy cycles the learning - rate between two boundaries with a constant frequency, as detailed in - the paper `Cyclical Learning Rates for Training Neural Networks`_. - The distance between the two boundaries can be scaled on a per-iteration - or per-cycle basis. - - Cyclical learning rate policy changes the learning rate after every batch. - `batch_step` should be called after a batch has been used for training. - To resume training, save `last_batch_iteration` and use it to instantiate `CycleLR`. - - This class has three built-in policies, as put forth in the paper: - "triangular": - A basic triangular cycle w/ no amplitude scaling. - "triangular2": - A basic triangular cycle that scales initial amplitude by half each cycle. - "exp_range": - A cycle that scales initial amplitude by gamma**(cycle iterations) at each - cycle iteration. - - This implementation was adapted from the github repo: `bckenstler/CLR`_ - - Args: - optimizer (Optimizer): Wrapped optimizer. - base_lr (float or list): Initial learning rate which is the - lower boundary in the cycle for eachparam groups. - Default: 0.001 - max_lr (float or list): Upper boundaries in the cycle for - each parameter group. Functionally, - it defines the cycle amplitude (max_lr - base_lr). - The lr at any cycle is the sum of base_lr - and some scaling of the amplitude; therefore - max_lr may not actually be reached depending on - scaling function. Default: 0.006 - step_size (int): Number of training iterations per - half cycle. Authors suggest setting step_size - 2-8 x training iterations in epoch. Default: 2000 - mode (str): One of {triangular, triangular2, exp_range}. - Values correspond to policies detailed above. - If scale_fn is not None, this argument is ignored. - Default: 'triangular' - gamma (float): Constant in 'exp_range' scaling function: - gamma**(cycle iterations) - Default: 1.0 - scale_fn (function): Custom scaling policy defined by a single - argument lambda function, where - 0 <= scale_fn(x) <= 1 for all x >= 0. - mode paramater is ignored - Default: None - scale_mode (str): {'cycle', 'iterations'}. - Defines whether scale_fn is evaluated on - cycle number or cycle iterations (training - iterations since start of cycle). - Default: 'cycle' - last_batch_iteration (int): The index of the last batch. Default: -1 - - Example: - >>> optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9) - >>> scheduler = torch.optim.CyclicLR(optimizer) - >>> data_loader = torch.utils.data.DataLoader(...) - >>> for epoch in range(10): - >>> for batch in data_loader: - >>> scheduler.batch_step() - >>> train_batch(...) - - .. 
_Cyclical Learning Rates for Training Neural Networks: https://arxiv.org/abs/1506.01186 - .. _bckenstler/CLR: https://github.com/bckenstler/CLR - """ - - def __init__(self, optimizer, base_lr=1e-3, max_lr=6e-3, - step_size=2000, mode='triangular', gamma=1., - scale_fn=None, scale_mode='cycle', last_batch_iteration=-1): - - if not isinstance(optimizer, Optimizer): - raise TypeError('{} is not an Optimizer'.format( - type(optimizer).__name__)) - self.optimizer = optimizer - - if isinstance(base_lr, list) or isinstance(base_lr, tuple): - if len(base_lr) != len(optimizer.param_groups): - raise ValueError("expected {} base_lr, got {}".format( - len(optimizer.param_groups), len(base_lr))) - self.base_lrs = list(base_lr) - else: - self.base_lrs = [base_lr] * len(optimizer.param_groups) - - if isinstance(max_lr, list) or isinstance(max_lr, tuple): - if len(max_lr) != len(optimizer.param_groups): - raise ValueError("expected {} max_lr, got {}".format( - len(optimizer.param_groups), len(max_lr))) - self.max_lrs = list(max_lr) - else: - self.max_lrs = [max_lr] * len(optimizer.param_groups) - - self.step_size = step_size - - if mode not in ['triangular', 'triangular2', 'exp_range'] \ - and scale_fn is None: - raise ValueError('mode is invalid and scale_fn is None') - - self.mode = mode - self.gamma = gamma - - if scale_fn is None: - if self.mode == 'triangular': - self.scale_fn = self._triangular_scale_fn - self.scale_mode = 'cycle' - elif self.mode == 'triangular2': - self.scale_fn = self._triangular2_scale_fn - self.scale_mode = 'cycle' - elif self.mode == 'exp_range': - self.scale_fn = self._exp_range_scale_fn - self.scale_mode = 'iterations' - else: - self.scale_fn = scale_fn - self.scale_mode = scale_mode - - self.batch_step(last_batch_iteration + 1) - self.last_batch_iteration = last_batch_iteration - - def batch_step(self, batch_iteration=None): - if batch_iteration is None: - batch_iteration = self.last_batch_iteration + 1 - self.last_batch_iteration = batch_iteration - for param_group, lr in zip(self.optimizer.param_groups, self.get_lr()): - param_group['lr'] = lr - - def _triangular_scale_fn(self, x): - return 1. - - def _triangular2_scale_fn(self, x): - return 1 / (2. ** (x - 1)) - - def _exp_range_scale_fn(self, x): - return self.gamma**(x) - - def get_lr(self): - step_size = float(self.step_size) - cycle = np.floor(1 + self.last_batch_iteration / (2 * step_size)) - x = np.abs(self.last_batch_iteration / step_size - 2 * cycle + 1) - - lrs = [] - param_lrs = zip(self.optimizer.param_groups, self.base_lrs, self.max_lrs) - for param_group, base_lr, max_lr in param_lrs: - base_height = (max_lr - base_lr) * np.maximum(0, (1 - x)) - if self.scale_mode == 'cycle': - lr = base_lr + base_height * self.scale_fn(cycle) - else: - lr = base_lr + base_height * self.scale_fn(self.last_batch_iteration) - lrs.append(lr) - return lrs diff --git a/spaces/Mountchicken/MAERec-Gradio/mmocr/testing/data.py b/spaces/Mountchicken/MAERec-Gradio/mmocr/testing/data.py deleted file mode 100644 index fc0b4d2cddcda3e9200855853e58a8d2213c4194..0000000000000000000000000000000000000000 --- a/spaces/Mountchicken/MAERec-Gradio/mmocr/testing/data.py +++ /dev/null @@ -1,110 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. 
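-
-# The helpers below create randomized dummy detector inputs (images, ground-truth
-# boxes, labels, masks and kernel/effective-region maps) plus a throwaway
-# dictionary file, so text-detection models can be smoke-tested without real data.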
-from typing import Any, Dict, List, Optional, Sequence - -import numpy as np -import torch -from mmengine.structures import InstanceData - -from mmocr.structures import TextDetDataSample - - -def create_dummy_textdet_inputs(input_shape: Sequence[int] = (1, 3, 300, 300), - num_items: Optional[Sequence[int]] = None - ) -> Dict[str, Any]: - """Create dummy inputs to test text detectors. - - Args: - input_shape (tuple(int)): 4-d shape of the input image. Defaults to - (1, 3, 300, 300). - num_items (list[int], optional): Number of bboxes to create for each - image. If None, they will be randomly generated. Defaults to None. - - Returns: - Dict[str, Any]: A dictionary of demo inputs. - """ - (N, C, H, W) = input_shape - - rng = np.random.RandomState(0) - - imgs = rng.rand(*input_shape) - - metainfo = dict( - img_shape=(H, W, C), - ori_shape=(H, W, C), - pad_shape=(H, W, C), - filename='test.jpg', - scale_factor=(1, 1), - flip=False) - - gt_masks = [] - gt_kernels = [] - gt_effective_mask = [] - - data_samples = [] - - for batch_idx in range(N): - if num_items is None: - num_boxes = rng.randint(1, 10) - else: - num_boxes = num_items[batch_idx] - - data_sample = TextDetDataSample( - metainfo=metainfo, gt_instances=InstanceData()) - - cx, cy, bw, bh = rng.rand(num_boxes, 4).T - - tl_x = ((cx * W) - (W * bw / 2)).clip(0, W) - tl_y = ((cy * H) - (H * bh / 2)).clip(0, H) - br_x = ((cx * W) + (W * bw / 2)).clip(0, W) - br_y = ((cy * H) + (H * bh / 2)).clip(0, H) - - boxes = np.vstack([tl_x, tl_y, br_x, br_y]).T - class_idxs = [0] * num_boxes - - data_sample.gt_instances.bboxes = torch.FloatTensor(boxes) - data_sample.gt_instances.labels = torch.LongTensor(class_idxs) - data_sample.gt_instances.ignored = torch.BoolTensor([False] * - num_boxes) - data_samples.append(data_sample) - - # kernels = [] - # TODO: add support for multiple kernels (if necessary) - # for _ in range(num_kernels): - # kernel = np.random.rand(H, W) - # kernels.append(kernel) - gt_kernels.append(np.random.rand(H, W)) - gt_effective_mask.append(np.ones((H, W))) - - mask = np.random.randint(0, 2, (len(boxes), H, W), dtype=np.uint8) - gt_masks.append(mask) - - mm_inputs = { - 'imgs': torch.FloatTensor(imgs).requires_grad_(True), - 'data_samples': data_samples, - 'gt_masks': gt_masks, - 'gt_kernels': gt_kernels, - 'gt_mask': gt_effective_mask, - 'gt_thr_mask': gt_effective_mask, - 'gt_text_mask': gt_effective_mask, - 'gt_center_region_mask': gt_effective_mask, - 'gt_radius_map': gt_kernels, - 'gt_sin_map': gt_kernels, - 'gt_cos_map': gt_kernels, - } - return mm_inputs - - -def create_dummy_dict_file( - dict_file: str, - chars: List[str] = list('0123456789abcdefghijklmnopqrstuvwxyz') -) -> None: # NOQA - """Create a dummy dictionary file. - - Args: - dict_file (str): Path to the dummy dictionary file. - chars (list[str]): List of characters in dictionary. Defaults to - ``list('0123456789abcdefghijklmnopqrstuvwxyz')``. - """ - with open(dict_file, 'w') as f: - for char in chars: - f.write(char + '\n') diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/retinanet_model.py b/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/retinanet_model.py deleted file mode 100644 index ff299674f0044cd208a1657a962d133744b78b77..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/detection/modeling/retinanet_model.py +++ /dev/null @@ -1,170 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. 
-# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Model defination for the RetinaNet Model.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import tensorflow as tf - -from tensorflow.python.keras import backend -from official.vision.detection.dataloader import mode_keys -from official.vision.detection.evaluation import factory as eval_factory -from official.vision.detection.modeling import base_model -from official.vision.detection.modeling import losses -from official.vision.detection.modeling.architecture import factory -from official.vision.detection.ops import postprocess_ops - - -class RetinanetModel(base_model.Model): - """RetinaNet model function.""" - - def __init__(self, params): - super(RetinanetModel, self).__init__(params) - - # For eval metrics. - self._params = params - - # Architecture generators. - self._backbone_fn = factory.backbone_generator(params) - self._fpn_fn = factory.multilevel_features_generator(params) - self._head_fn = factory.retinanet_head_generator(params) - - # Loss function. - self._cls_loss_fn = losses.RetinanetClassLoss( - params.retinanet_loss, params.architecture.num_classes) - self._box_loss_fn = losses.RetinanetBoxLoss(params.retinanet_loss) - self._box_loss_weight = params.retinanet_loss.box_loss_weight - self._keras_model = None - - # Predict function. - self._generate_detections_fn = postprocess_ops.MultilevelDetectionGenerator( - params.architecture.min_level, - params.architecture.max_level, - params.postprocess) - - self._transpose_input = params.train.transpose_input - assert not self._transpose_input, 'Transpose input is not supportted.' - # Input layer. - input_shape = ( - params.retinanet_parser.output_size + - [params.retinanet_parser.num_channels]) - self._input_layer = tf.keras.layers.Input( - shape=input_shape, name='', - dtype=tf.bfloat16 if self._use_bfloat16 else tf.float32) - - def build_outputs(self, inputs, mode): - # If the input image is transposed (from NHWC to HWCN), we need to revert it - # back to the original shape before it's used in the computation. 
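-    # (A transposed batch is laid out as HWCN, i.e. dims [H, W, C, N]; the
-    # [3, 0, 1, 2] permutation below restores the usual NHWC layout.)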
- if self._transpose_input: - inputs = tf.transpose(inputs, [3, 0, 1, 2]) - - backbone_features = self._backbone_fn( - inputs, is_training=(mode == mode_keys.TRAIN)) - fpn_features = self._fpn_fn( - backbone_features, is_training=(mode == mode_keys.TRAIN)) - cls_outputs, box_outputs = self._head_fn( - fpn_features, is_training=(mode == mode_keys.TRAIN)) - - if self._use_bfloat16: - levels = cls_outputs.keys() - for level in levels: - cls_outputs[level] = tf.cast(cls_outputs[level], tf.float32) - box_outputs[level] = tf.cast(box_outputs[level], tf.float32) - - model_outputs = { - 'cls_outputs': cls_outputs, - 'box_outputs': box_outputs, - } - return model_outputs - - def build_loss_fn(self): - if self._keras_model is None: - raise ValueError('build_loss_fn() must be called after build_model().') - - filter_fn = self.make_filter_trainable_variables_fn() - trainable_variables = filter_fn(self._keras_model.trainable_variables) - - def _total_loss_fn(labels, outputs): - cls_loss = self._cls_loss_fn(outputs['cls_outputs'], - labels['cls_targets'], - labels['num_positives']) - box_loss = self._box_loss_fn(outputs['box_outputs'], - labels['box_targets'], - labels['num_positives']) - model_loss = cls_loss + self._box_loss_weight * box_loss - l2_regularization_loss = self.weight_decay_loss(trainable_variables) - total_loss = model_loss + l2_regularization_loss - return { - 'total_loss': total_loss, - 'cls_loss': cls_loss, - 'box_loss': box_loss, - 'model_loss': model_loss, - 'l2_regularization_loss': l2_regularization_loss, - } - - return _total_loss_fn - - def build_model(self, params, mode=None): - if self._keras_model is None: - with backend.get_graph().as_default(): - outputs = self.model_outputs(self._input_layer, mode) - - model = tf.keras.models.Model( - inputs=self._input_layer, outputs=outputs, name='retinanet') - assert model is not None, 'Fail to build tf.keras.Model.' - model.optimizer = self.build_optimizer() - self._keras_model = model - - return self._keras_model - - def post_processing(self, labels, outputs): - # TODO(yeqing): Moves the output related part into build_outputs. - required_output_fields = ['cls_outputs', 'box_outputs'] - for field in required_output_fields: - if field not in outputs: - raise ValueError('"%s" is missing in outputs, requried %s found %s', - field, required_output_fields, outputs.keys()) - required_label_fields = ['image_info', 'groundtruths'] - for field in required_label_fields: - if field not in labels: - raise ValueError('"%s" is missing in outputs, requried %s found %s', - field, required_label_fields, labels.keys()) - boxes, scores, classes, valid_detections = self._generate_detections_fn( - outputs['box_outputs'], outputs['cls_outputs'], - labels['anchor_boxes'], labels['image_info'][:, 1:2, :]) - # Discards the old output tensors to save memory. The `cls_outputs` and - # `box_outputs` are pretty big and could potentiall lead to memory issue. 
- outputs = { - 'source_id': labels['groundtruths']['source_id'], - 'image_info': labels['image_info'], - 'num_detections': valid_detections, - 'detection_boxes': boxes, - 'detection_classes': classes, - 'detection_scores': scores, - } - - if 'groundtruths' in labels: - labels['source_id'] = labels['groundtruths']['source_id'] - labels['boxes'] = labels['groundtruths']['boxes'] - labels['classes'] = labels['groundtruths']['classes'] - labels['areas'] = labels['groundtruths']['areas'] - labels['is_crowds'] = labels['groundtruths']['is_crowds'] - - return labels, outputs - - def eval_metrics(self): - return eval_factory.evaluator_generator(self._params.eval) diff --git a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/learning_rate_test.py b/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/learning_rate_test.py deleted file mode 100644 index 272d2935fd7f1e6a7f1810e9247c4ef505021fde..0000000000000000000000000000000000000000 --- a/spaces/NCTCMumbai/NCTC/models/official/vision/image_classification/learning_rate_test.py +++ /dev/null @@ -1,99 +0,0 @@ -# Copyright 2019 The TensorFlow Authors. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -# ============================================================================== -"""Tests for learning_rate.""" - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import tensorflow as tf - -from official.vision.image_classification import learning_rate - - -class LearningRateTests(tf.test.TestCase): - - def test_warmup_decay(self): - """Basic computational test for warmup decay.""" - initial_lr = 0.01 - decay_steps = 100 - decay_rate = 0.01 - warmup_steps = 10 - - base_lr = tf.keras.optimizers.schedules.ExponentialDecay( - initial_learning_rate=initial_lr, - decay_steps=decay_steps, - decay_rate=decay_rate) - lr = learning_rate.WarmupDecaySchedule( - lr_schedule=base_lr, - warmup_steps=warmup_steps) - - for step in range(warmup_steps - 1): - config = lr.get_config() - self.assertEqual(config['warmup_steps'], warmup_steps) - self.assertAllClose(self.evaluate(lr(step)), - step / warmup_steps * initial_lr) - - def test_piecewise_constant_decay_with_warmup(self): - """Basic computational test for piecewise constant decay with warmup.""" - boundaries = [1, 2, 3] - warmup_epochs = boundaries[0] - learning_rate_multipliers = [1.0, 0.1, 0.001] - expected_keys = [ - 'rescaled_lr', 'step_boundaries', 'lr_values', 'warmup_steps', - ] - - expected_lrs = [0.0, 0.1, 0.1] - - lr = learning_rate.PiecewiseConstantDecayWithWarmup( - batch_size=256, - epoch_size=256, - warmup_epochs=warmup_epochs, - boundaries=boundaries[1:], - multipliers=learning_rate_multipliers) - - step = 0 - - config = lr.get_config() - self.assertAllInSet(list(config.keys()), expected_keys) - - for boundary, expected_lr in zip(boundaries, expected_lrs): - for _ in range(step, boundary): - self.assertAllClose(self.evaluate(lr(step)), expected_lr) - step += 1 - - def 
test_piecewise_constant_decay_invalid_boundaries(self): - with self.assertRaisesRegex(ValueError, - 'The length of boundaries must be 1 less '): - learning_rate.PiecewiseConstantDecayWithWarmup( - batch_size=256, - epoch_size=256, - warmup_epochs=1, - boundaries=[1, 2], - multipliers=[1, 2]) - - def test_cosine_decay_with_warmup(self): - """Basic computational test for cosine decay with warmup.""" - expected_lrs = [0.0, 0.1, 0.05, 0.0] - - lr = learning_rate.CosineDecayWithWarmup( - batch_size=256, total_steps=3, warmup_steps=1) - - for step in [0, 1, 2, 3]: - self.assertAllClose(lr(step), expected_lrs[step]) - - -if __name__ == '__main__': - tf.test.main() diff --git a/spaces/Norod78/Dragness/face_detection.py b/spaces/Norod78/Dragness/face_detection.py deleted file mode 100644 index abcf45019c2dfcf7ad01ec7af7de45785a9984f5..0000000000000000000000000000000000000000 --- a/spaces/Norod78/Dragness/face_detection.py +++ /dev/null @@ -1,140 +0,0 @@ -# Copyright (c) 2021 Justin Pinkney - -import dlib -import numpy as np -import os -from PIL import Image -from PIL import ImageOps -from scipy.ndimage import gaussian_filter -import cv2 - - -MODEL_PATH = "shape_predictor_5_face_landmarks.dat" -detector = dlib.get_frontal_face_detector() - - -def align(image_in, face_index=0, output_size=256): - try: - image_in = ImageOps.exif_transpose(image_in) - except: - print("exif problem, not rotating") - - landmarks = list(get_landmarks(image_in)) - n_faces = len(landmarks) - face_index = min(n_faces-1, face_index) - if n_faces == 0: - aligned_image = image_in - quad = None - else: - aligned_image, quad = image_align(image_in, landmarks[face_index], output_size=output_size) - - return aligned_image, n_faces, quad - - -def composite_images(quad, img, output): - """Composite an image into and output canvas according to transformed co-ords""" - output = output.convert("RGBA") - img = img.convert("RGBA") - input_size = img.size - src = np.array(((0, 0), (0, input_size[1]), input_size, (input_size[0], 0)), dtype=np.float32) - dst = np.float32(quad) - mtx = cv2.getPerspectiveTransform(dst, src) - img = img.transform(output.size, Image.PERSPECTIVE, mtx.flatten(), Image.BILINEAR) - output.alpha_composite(img) - - return output.convert("RGB") - - -def get_landmarks(image): - """Get landmarks from PIL image""" - shape_predictor = dlib.shape_predictor(MODEL_PATH) - - max_size = max(image.size) - reduction_scale = int(max_size/512) - if reduction_scale == 0: - reduction_scale = 1 - downscaled = image.reduce(reduction_scale) - img = np.array(downscaled) - detections = detector(img, 0) - - for detection in detections: - try: - face_landmarks = [(reduction_scale*item.x, reduction_scale*item.y) for item in shape_predictor(img, detection).parts()] - yield face_landmarks - except Exception as e: - print(e) - - -def image_align(src_img, face_landmarks, output_size=512, transform_size=2048, enable_padding=True, x_scale=1, y_scale=1, em_scale=0.1, alpha=False): - # Align function modified from ffhq-dataset - # See https://github.com/NVlabs/ffhq-dataset for license - - lm = np.array(face_landmarks) - lm_eye_left = lm[2:3] # left-clockwise - lm_eye_right = lm[0:1] # left-clockwise - - # Calculate auxiliary vectors. - eye_left = np.mean(lm_eye_left, axis=0) - eye_right = np.mean(lm_eye_right, axis=0) - eye_avg = (eye_left + eye_right) * 0.5 - eye_to_eye = 0.71*(eye_right - eye_left) - mouth_avg = lm[4] - eye_to_mouth = 1.35*(mouth_avg - eye_avg) - - # Choose oriented crop rectangle. 
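-    # x points along the eye axis with magnitude equal to half the crop width;
-    # y is its perpendicular, and c is the crop centre (the eye midpoint nudged
-    # toward the mouth by em_scale). quad holds the four corners of the
-    # resulting oriented square.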
- x = eye_to_eye.copy() - x /= np.hypot(*x) - x *= max(np.hypot(*eye_to_eye) * 2.0, np.hypot(*eye_to_mouth) * 1.8) - x *= x_scale - y = np.flipud(x) * [-y_scale, y_scale] - c = eye_avg + eye_to_mouth * em_scale - quad = np.stack([c - x - y, c - x + y, c + x + y, c + x - y]) - quad_orig = quad.copy() - qsize = np.hypot(*x) * 2 - - img = src_img.convert('RGBA').convert('RGB') - - # Shrink. - shrink = int(np.floor(qsize / output_size * 0.5)) - if shrink > 1: - rsize = (int(np.rint(float(img.size[0]) / shrink)), int(np.rint(float(img.size[1]) / shrink))) - img = img.resize(rsize, Image.ANTIALIAS) - quad /= shrink - qsize /= shrink - - # Crop. - border = max(int(np.rint(qsize * 0.1)), 3) - crop = (int(np.floor(min(quad[:,0]))), int(np.floor(min(quad[:,1]))), int(np.ceil(max(quad[:,0]))), int(np.ceil(max(quad[:,1])))) - crop = (max(crop[0] - border, 0), max(crop[1] - border, 0), min(crop[2] + border, img.size[0]), min(crop[3] + border, img.size[1])) - if crop[2] - crop[0] < img.size[0] or crop[3] - crop[1] < img.size[1]: - img = img.crop(crop) - quad -= crop[0:2] - - # Pad. - pad = (int(np.floor(min(quad[:,0]))), int(np.floor(min(quad[:,1]))), int(np.ceil(max(quad[:,0]))), int(np.ceil(max(quad[:,1])))) - pad = (max(-pad[0] + border, 0), max(-pad[1] + border, 0), max(pad[2] - img.size[0] + border, 0), max(pad[3] - img.size[1] + border, 0)) - if enable_padding and max(pad) > border - 4: - pad = np.maximum(pad, int(np.rint(qsize * 0.3))) - img = np.pad(np.float32(img), ((pad[1], pad[3]), (pad[0], pad[2]), (0, 0)), 'reflect') - h, w, _ = img.shape - y, x, _ = np.ogrid[:h, :w, :1] - mask = np.maximum(1.0 - np.minimum(np.float32(x) / pad[0], np.float32(w-1-x) / pad[2]), 1.0 - np.minimum(np.float32(y) / pad[1], np.float32(h-1-y) / pad[3])) - blur = qsize * 0.02 - img += (gaussian_filter(img, [blur, blur, 0]) - img) * np.clip(mask * 3.0 + 1.0, 0.0, 1.0) - img += (np.median(img, axis=(0,1)) - img) * np.clip(mask, 0.0, 1.0) - img = np.uint8(np.clip(np.rint(img), 0, 255)) - if alpha: - mask = 1-np.clip(3.0 * mask, 0.0, 1.0) - mask = np.uint8(np.clip(np.rint(mask*255), 0, 255)) - img = np.concatenate((img, mask), axis=2) - img = Image.fromarray(img, 'RGBA') - else: - img = Image.fromarray(img, 'RGB') - quad += pad[:2] - - # Transform. 
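-    # Warp the oriented quad onto an axis-aligned square of side transform_size,
-    # then downsample to output_size if a smaller output was requested.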
- img = img.transform((transform_size, transform_size), Image.QUAD, (quad + 0.5).flatten(), Image.BILINEAR) - if output_size < transform_size: - img = img.resize((output_size, output_size), Image.ANTIALIAS) - - return img, quad_orig diff --git a/spaces/Ntabukiraniro/Recipe/app.py b/spaces/Ntabukiraniro/Recipe/app.py deleted file mode 100644 index ddb17c2bf93cea099994ba0e250956b9ad7da16b..0000000000000000000000000000000000000000 --- a/spaces/Ntabukiraniro/Recipe/app.py +++ /dev/null @@ -1,104 +0,0 @@ -import streamlit as st -import matplotlib.pyplot as plt -import torch -import torch.nn as nn -import numpy as np -import os -from args import get_parser -import pickle -from model import get_model -from torchvision import transforms -from utils.output_utils import prepare_output -from PIL import Image -import time -import requests -from io import BytesIO - -st.set_option('deprecation.showfileUploaderEncoding', False) - -data_dir = '/data' -use_gpu = False -device = torch.device('cuda' if torch.cuda.is_available() and use_gpu else 'cpu') -map_loc = None if torch.cuda.is_available() and use_gpu else 'cpu' - -ingrs_vocab = pickle.load(open(os.path.join('ingr_vocab.pkl'), 'rb')) -vocab = pickle.load(open(os.path.join('instr_vocab.pkl'), 'rb')) - -ingr_vocab_size = len(ingrs_vocab) -instrs_vocab_size = len(vocab) -output_dim = instrs_vocab_size - -t = time.time() -import sys; sys.argv=['']; del sys -args = get_parser() -args.maxseqlen = 15 -args.ingrs_only=False - -@st.cache_data -def load_model(): - model = get_model(args, ingr_vocab_size, instrs_vocab_size) - model_path = os.path.join('modelbest.ckpt') - model.load_state_dict(torch.load(model_path, map_location=map_loc)) - model.to(device) - model.eval() - model.ingrs_only = False - model.recipe_only = False - return model - -model = load_model() -print ('loaded model') -print ("Elapsed time:", time.time() -t) - -transf_list_batch = [] -transf_list_batch.append(transforms.ToTensor()) -transf_list_batch.append(transforms.Normalize((0.485, 0.456, 0.406), - (0.229, 0.224, 0.225))) -to_input_transf = transforms.Compose(transf_list_batch) - -greedy = [True, False, False, False] -beam = [-1, -1, -1, -1] -temperature = 1.0 -numgens = len(greedy) - -st.title("Recipe Generator from Food Image") -uploaded_file = st.file_uploader("Choose an image...", type=['png', 'jpg', 'jpeg']) -image_url = st.text_input("Enter image URL") - -if uploaded_file is not None or image_url: - if image_url: - response = requests.get(image_url) - image = Image.open(BytesIO(response.content)) - else: - image = Image.open(uploaded_file).convert('RGB') - st.image(image, caption='Uploaded Image', use_column_width=True) - - transf_list = [] - transf_list.append(transforms.Resize(256)) - transf_list.append(transforms.CenterCrop(224)) - transform = transforms.Compose(transf_list) - - image_transf = transform(image) - image_tensor = to_input_transf(image_transf).unsqueeze(0).to(device) - - num_valid = 1 - for i in range(numgens): - with torch.no_grad(): - outputs = model.sample(image_tensor, greedy=greedy[i], - temperature=temperature, beam=beam[i], true_ingrs=None) - - ingr_ids = outputs['ingr_ids'].cpu().numpy() - recipe_ids = outputs['recipe_ids'].cpu().numpy() - - outs, valid = prepare_output(recipe_ids[0], ingr_ids[0], ingrs_vocab, vocab) - - if valid['is_valid']: - st.subheader(f'RECIPE {num_valid}') - num_valid += 1 - - st.markdown(f'**Title:** {outs["title"]}') - st.markdown(f'**Ingredients:** {", ".join(outs["ingrs"])}') - 
st.markdown(f'**Instructions:**\n{"-".join(outs["recipe"])}') - - else: - st.error("Not a valid recipe!") - st.write("Reason: ", valid['reason']) diff --git a/spaces/Nymisha123/InstagramQuoteDeveloper/README.md b/spaces/Nymisha123/InstagramQuoteDeveloper/README.md deleted file mode 100644 index 670eb11ef5cbcb45c4ef00e2199b8c8b27c504c6..0000000000000000000000000000000000000000 --- a/spaces/Nymisha123/InstagramQuoteDeveloper/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: InstagramQuoteDeveloper -emoji: 💻 -colorFrom: pink -colorTo: green -sdk: streamlit -sdk_version: 1.27.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/noisychannel/rerank_generate.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/noisychannel/rerank_generate.py deleted file mode 100644 index daeeae059a677a9fcd7c370be087f1f5c189bc52..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/noisychannel/rerank_generate.py +++ /dev/null @@ -1,397 +0,0 @@ -#!/usr/bin/env python3 -u -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Generate n-best translations using a trained model. -""" - -import os -import subprocess -from contextlib import redirect_stdout - -from fairseq import options -from fairseq_cli import generate, preprocess - -from examples.noisychannel import rerank_options, rerank_utils - - -def gen_and_reprocess_nbest(args): - if args.score_dict_dir is None: - args.score_dict_dir = args.data - if args.prefix_len is not None: - assert ( - args.right_to_left1 is False - ), "prefix length not compatible with right to left models" - assert ( - args.right_to_left2 is False - ), "prefix length not compatible with right to left models" - - if args.nbest_list is not None: - assert args.score_model2 is None - - if args.backwards1: - scorer1_src = args.target_lang - scorer1_tgt = args.source_lang - else: - scorer1_src = args.source_lang - scorer1_tgt = args.target_lang - - store_data = ( - os.path.join(os.path.dirname(__file__)) + "/rerank_data/" + args.data_dir_name - ) - if not os.path.exists(store_data): - os.makedirs(store_data) - - ( - pre_gen, - left_to_right_preprocessed_dir, - right_to_left_preprocessed_dir, - backwards_preprocessed_dir, - lm_preprocessed_dir, - ) = rerank_utils.get_directories( - args.data_dir_name, - args.num_rescore, - args.gen_subset, - args.gen_model_name, - args.shard_id, - args.num_shards, - args.sampling, - args.prefix_len, - args.target_prefix_frac, - args.source_prefix_frac, - ) - assert not ( - args.right_to_left1 and args.backwards1 - ), "backwards right to left not supported" - assert not ( - args.right_to_left2 and args.backwards2 - ), "backwards right to left not supported" - assert not ( - args.prefix_len is not None and args.target_prefix_frac is not None - ), "target prefix frac and target prefix len incompatible" - - # make directory to store generation results - if not os.path.exists(pre_gen): - os.makedirs(pre_gen) - - rerank1_is_gen = ( - args.gen_model == args.score_model1 and args.source_prefix_frac is None - ) - rerank2_is_gen = ( - args.gen_model == args.score_model2 and args.source_prefix_frac is None - ) - - if args.nbest_list is not None: - rerank2_is_gen = True - - # make directories to store preprossed nbest list for 
reranking - if not os.path.exists(left_to_right_preprocessed_dir): - os.makedirs(left_to_right_preprocessed_dir) - if not os.path.exists(right_to_left_preprocessed_dir): - os.makedirs(right_to_left_preprocessed_dir) - if not os.path.exists(lm_preprocessed_dir): - os.makedirs(lm_preprocessed_dir) - if not os.path.exists(backwards_preprocessed_dir): - os.makedirs(backwards_preprocessed_dir) - - score1_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model1_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards1, - ) - if args.score_model2 is not None: - score2_file = rerank_utils.rescore_file_name( - pre_gen, - args.prefix_len, - args.model2_name, - target_prefix_frac=args.target_prefix_frac, - source_prefix_frac=args.source_prefix_frac, - backwards=args.backwards2, - ) - - predictions_bpe_file = pre_gen + "/generate_output_bpe.txt" - - using_nbest = args.nbest_list is not None - - if using_nbest: - print("Using predefined n-best list from interactive.py") - predictions_bpe_file = args.nbest_list - - else: - if not os.path.isfile(predictions_bpe_file): - print("STEP 1: generate predictions using the p(T|S) model with bpe") - print(args.data) - param1 = [ - args.data, - "--path", - args.gen_model, - "--shard-id", - str(args.shard_id), - "--num-shards", - str(args.num_shards), - "--nbest", - str(args.num_rescore), - "--batch-size", - str(args.batch_size), - "--beam", - str(args.num_rescore), - "--batch-size", - str(args.num_rescore), - "--gen-subset", - args.gen_subset, - "--source-lang", - args.source_lang, - "--target-lang", - args.target_lang, - ] - if args.sampling: - param1 += ["--sampling"] - - gen_parser = options.get_generation_parser() - input_args = options.parse_args_and_arch(gen_parser, param1) - - print(input_args) - with open(predictions_bpe_file, "w") as f: - with redirect_stdout(f): - generate.main(input_args) - - gen_output = rerank_utils.BitextOutputFromGen( - predictions_bpe_file, - bpe_symbol=args.post_process, - nbest=using_nbest, - prefix_len=args.prefix_len, - target_prefix_frac=args.target_prefix_frac, - ) - - if args.diff_bpe: - rerank_utils.write_reprocessed( - gen_output.no_bpe_source, - gen_output.no_bpe_hypo, - gen_output.no_bpe_target, - pre_gen + "/source_gen_bpe." + args.source_lang, - pre_gen + "/target_gen_bpe." + args.target_lang, - pre_gen + "/reference_gen_bpe." + args.target_lang, - ) - bitext_bpe = args.rescore_bpe_code - bpe_src_param = [ - "-c", - bitext_bpe, - "--input", - pre_gen + "/source_gen_bpe." + args.source_lang, - "--output", - pre_gen + "/rescore_data." + args.source_lang, - ] - bpe_tgt_param = [ - "-c", - bitext_bpe, - "--input", - pre_gen + "/target_gen_bpe." + args.target_lang, - "--output", - pre_gen + "/rescore_data." 
+ args.target_lang, - ] - - subprocess.call( - [ - "python", - os.path.join( - os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py" - ), - ] - + bpe_src_param, - shell=False, - ) - - subprocess.call( - [ - "python", - os.path.join( - os.path.dirname(__file__), "subword-nmt/subword_nmt/apply_bpe.py" - ), - ] - + bpe_tgt_param, - shell=False, - ) - - if (not os.path.isfile(score1_file) and not rerank1_is_gen) or ( - args.score_model2 is not None - and not os.path.isfile(score2_file) - and not rerank2_is_gen - ): - print( - "STEP 2: process the output of generate.py so we have clean text files with the translations" - ) - - rescore_file = "/rescore_data" - if args.prefix_len is not None: - prefix_len_rescore_file = rescore_file + "prefix" + str(args.prefix_len) - if args.target_prefix_frac is not None: - target_prefix_frac_rescore_file = ( - rescore_file + "target_prefix_frac" + str(args.target_prefix_frac) - ) - if args.source_prefix_frac is not None: - source_prefix_frac_rescore_file = ( - rescore_file + "source_prefix_frac" + str(args.source_prefix_frac) - ) - - if not args.right_to_left1 or not args.right_to_left2: - if not args.diff_bpe: - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen + rescore_file + "." + args.source_lang, - pre_gen + rescore_file + "." + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - ) - if args.prefix_len is not None: - bw_rescore_file = prefix_len_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen + prefix_len_rescore_file + "." + args.source_lang, - pre_gen + prefix_len_rescore_file + "." + args.target_lang, - pre_gen + "/reference_file", - prefix_len=args.prefix_len, - bpe_symbol=args.post_process, - ) - elif args.target_prefix_frac is not None: - bw_rescore_file = target_prefix_frac_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen - + target_prefix_frac_rescore_file - + "." - + args.source_lang, - pre_gen - + target_prefix_frac_rescore_file - + "." - + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - target_prefix_frac=args.target_prefix_frac, - ) - else: - bw_rescore_file = rescore_file - - if args.source_prefix_frac is not None: - fw_rescore_file = source_prefix_frac_rescore_file - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen - + source_prefix_frac_rescore_file - + "." - + args.source_lang, - pre_gen - + source_prefix_frac_rescore_file - + "." - + args.target_lang, - pre_gen + "/reference_file", - bpe_symbol=args.post_process, - source_prefix_frac=args.source_prefix_frac, - ) - else: - fw_rescore_file = rescore_file - - if args.right_to_left1 or args.right_to_left2: - rerank_utils.write_reprocessed( - gen_output.source, - gen_output.hypo, - gen_output.target, - pre_gen + "/right_to_left_rescore_data." + args.source_lang, - pre_gen + "/right_to_left_rescore_data." 
+ args.target_lang, - pre_gen + "/right_to_left_reference_file", - right_to_left=True, - bpe_symbol=args.post_process, - ) - - print("STEP 3: binarize the translations") - if ( - not args.right_to_left1 - or args.score_model2 is not None - and not args.right_to_left2 - or not rerank1_is_gen - ): - - if args.backwards1 or args.backwards2: - if args.backwards_score_dict_dir is not None: - bw_dict = args.backwards_score_dict_dir - else: - bw_dict = args.score_dict_dir - bw_preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + bw_rescore_file, - "--srcdict", - bw_dict + "/dict." + scorer1_src + ".txt", - "--tgtdict", - bw_dict + "/dict." + scorer1_tgt + ".txt", - "--destdir", - backwards_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(bw_preprocess_param) - preprocess.main(input_args) - - preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + fw_rescore_file, - "--srcdict", - args.score_dict_dir + "/dict." + scorer1_src + ".txt", - "--tgtdict", - args.score_dict_dir + "/dict." + scorer1_tgt + ".txt", - "--destdir", - left_to_right_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(preprocess_param) - preprocess.main(input_args) - - if args.right_to_left1 or args.right_to_left2: - preprocess_param = [ - "--source-lang", - scorer1_src, - "--target-lang", - scorer1_tgt, - "--trainpref", - pre_gen + "/right_to_left_rescore_data", - "--srcdict", - args.score_dict_dir + "/dict." + scorer1_src + ".txt", - "--tgtdict", - args.score_dict_dir + "/dict." + scorer1_tgt + ".txt", - "--destdir", - right_to_left_preprocessed_dir, - ] - preprocess_parser = options.get_preprocessing_parser() - input_args = preprocess_parser.parse_args(preprocess_param) - preprocess.main(input_args) - - return gen_output - - -def cli_main(): - parser = rerank_options.get_reranking_parser() - args = options.parse_args_and_arch(parser) - gen_and_reprocess_nbest(args) - - -if __name__ == "__main__": - cli_main() diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/audio/feature_transforms/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/audio/feature_transforms/__init__.py deleted file mode 100644 index 359fa069716cba0dd615ce0959368b20828c31f7..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/audio/feature_transforms/__init__.py +++ /dev/null @@ -1,82 +0,0 @@ -import importlib -import os -from abc import ABC, abstractmethod -from typing import Dict, Optional - - -class AudioFeatureTransform(ABC): - @classmethod - @abstractmethod - def from_config_dict(cls, config: Optional[Dict] = None): - pass - - -AUDIO_FEATURE_TRANSFORM_REGISTRY = {} -AUDIO_FEATURE_TRANSFORM_CLASS_NAMES = set() - - -def register_audio_feature_transform(name): - def register_audio_feature_transform_cls(cls): - if name in AUDIO_FEATURE_TRANSFORM_REGISTRY: - raise ValueError(f"Cannot register duplicate transform ({name})") - if not issubclass(cls, AudioFeatureTransform): - raise ValueError( - f"Transform ({name}: {cls.__name__}) must extend " - "AudioFeatureTransform" - ) - if cls.__name__ in AUDIO_FEATURE_TRANSFORM_CLASS_NAMES: - raise ValueError( - f"Cannot register audio feature transform with duplicate " - f"class name ({cls.__name__})" - ) - AUDIO_FEATURE_TRANSFORM_REGISTRY[name] = cls - 
AUDIO_FEATURE_TRANSFORM_CLASS_NAMES.add(cls.__name__) - return cls - - return register_audio_feature_transform_cls - - -def get_audio_feature_transform(name): - return AUDIO_FEATURE_TRANSFORM_REGISTRY[name] - - -transforms_dir = os.path.dirname(__file__) -for file in os.listdir(transforms_dir): - path = os.path.join(transforms_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - name = file[: file.find(".py")] if file.endswith(".py") else file - importlib.import_module("fairseq.data.audio.feature_transforms." + name) - - -class CompositeAudioFeatureTransform(AudioFeatureTransform): - @classmethod - def from_config_dict(cls, config=None): - _config = {} if config is None else config - _transforms = _config.get("transforms") - if _transforms is None: - return None - transforms = [ - get_audio_feature_transform(_t).from_config_dict(_config.get(_t)) - for _t in _transforms - ] - return CompositeAudioFeatureTransform(transforms) - - def __init__(self, transforms): - self.transforms = [t for t in transforms if t is not None] - - def __call__(self, x): - for t in self.transforms: - x = t(x) - return x - - def __repr__(self): - format_string = ( - [self.__class__.__name__ + "("] - + [f" {t.__repr__()}" for t in self.transforms] - + [")"] - ) - return "\n".join(format_string) diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/huggingface/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/huggingface/__init__.py deleted file mode 100644 index f7911c2c8edf516855023a285b18935e5389ec02..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/models/huggingface/__init__.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import importlib -import os - - -# automatically import any Python files in the models/huggingface/ directory -models_dir = os.path.dirname(__file__) -for file in os.listdir(models_dir): - path = os.path.join(models_dir, file) - if ( - not file.startswith("_") - and not file.startswith(".") - and (file.endswith(".py") or os.path.isdir(path)) - ): - model_name = file[: file.find(".py")] if file.endswith(".py") else file - module = importlib.import_module("fairseq.models.huggingface." + model_name) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_recognition/new/decoders/viterbi_decoder.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_recognition/new/decoders/viterbi_decoder.py deleted file mode 100644 index b1c47868fa3b4e21f939b0695ede8d14ba1b168d..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/speech_recognition/new/decoders/viterbi_decoder.py +++ /dev/null @@ -1,24 +0,0 @@ -#!/usr/bin/env python3 - -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
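-
-# Despite the name, this decoder simply takes the frame-wise argmax of the
-# emissions (the Viterbi best path for a plain CTC acoustic model with no
-# language model), collapses repeated tokens and drops the blank symbol.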
- -import torch - -from typing import List, Dict - -from .base_decoder import BaseDecoder - - -class ViterbiDecoder(BaseDecoder): - def decode( - self, - emissions: torch.FloatTensor, - ) -> List[List[Dict[str, torch.LongTensor]]]: - def get_pred(e): - toks = e.argmax(dim=-1).unique_consecutive() - return toks[toks != self.blank] - - return [[{"tokens": get_pred(x), "score": 0}] for x in emissions] diff --git a/spaces/OpenBuddy/ChatWithBuddy/style.css b/spaces/OpenBuddy/ChatWithBuddy/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/OpenBuddy/ChatWithBuddy/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/GithubIcon.tsx b/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/GithubIcon.tsx deleted file mode 100644 index 6b106ffb1c3ed907b34634534f0439480456857d..0000000000000000000000000000000000000000 --- a/spaces/OpenDILabCommunity/DI-sheep/DI-sheep/ui/src/GithubIcon.tsx +++ /dev/null @@ -1,28 +0,0 @@ -import React, { FC } from 'react'; - -export const GithubIcon: FC = () => { - return ( - <a - href="https://github.com/orgs/opendilab/repositories" - target="_blank" - rel="noreferrer" - > - Light the Star for @OpenDILab - <svg - height="24" - aria-hidden="true" - viewBox="0 0 16 16" - version="1.1" - width="24" - data-view-component="true" - className="octicon octicon-mark-github v-align-middle" - fill="currentColor" - > - <path - fillRule="evenodd" - d="M8 0C3.58 0 0 3.58 0 8c0 3.54 2.29 6.53 5.47 7.59.4.07.55-.17.55-.38 0-.19-.01-.82-.01-1.49-2.01.37-2.53-.49-2.69-.94-.09-.23-.48-.94-.82-1.13-.28-.15-.68-.52-.01-.53.63-.01 1.08.58 1.23.82.72 1.21 1.87.87 2.33.66.07-.52.28-.87.51-1.07-1.78-.2-3.64-.89-3.64-3.95 0-.87.31-1.59.82-2.15-.08-.2-.36-1.02.08-2.12 0 0 .67-.21 2.2.82.64-.18 1.32-.27 2-.27.68 0 1.36.09 2 .27 1.53-1.04 2.2-.82 2.2-.82.44 1.1.16 1.92.08 2.12.51.56.82 1.27.82 2.15 0 3.07-1.87 3.75-3.65 3.95.29.25.54.73.54 1.48 0 1.07-.01 1.93-.01 2.2 0 .21.15.46.55.38A8.013 8.013 0 0016 8c0-4.42-3.58-8-8-8z" - /> - </svg> - </a> - ); -}; diff --git a/spaces/OverSky/mio-amadeus/app.py b/spaces/OverSky/mio-amadeus/app.py deleted file mode 100644 index 3283c4c6077d757f281d3959460909cec5203230..0000000000000000000000000000000000000000 --- a/spaces/OverSky/mio-amadeus/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/mio/amadeus").launch() \ No newline at end of file diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/repl/command.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/repl/command.go deleted file mode 100644 index 778ef197e7774dfd0d54457d6737a0ad9a8ab667..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/repl/command.go and /dev/null differ diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py b/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py deleted 
file mode 100644 index ba1d42d0c5781f56dc177d860d856bb34adce555..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/configs/_base_/datasets/pascal_voc12.py +++ /dev/null @@ -1,57 +0,0 @@ -# dataset settings -dataset_type = 'PascalVOCDataset' -data_root = 'data/VOCdevkit/VOC2012' -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) -crop_size = (512, 512) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations'), - dict(type='Resize', img_scale=(2048, 512), ratio_range=(0.5, 2.0)), - dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75), - dict(type='RandomFlip', prob=0.5), - dict(type='PhotoMetricDistortion'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=255), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_semantic_seg']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(2048, 512), - # img_ratios=[0.5, 0.75, 1.0, 1.25, 1.5, 1.75], - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - samples_per_gpu=4, - workers_per_gpu=4, - train=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClass', - split='ImageSets/Segmentation/train.txt', - pipeline=train_pipeline), - val=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClass', - split='ImageSets/Segmentation/val.txt', - pipeline=test_pipeline), - test=dict( - type=dataset_type, - data_root=data_root, - img_dir='JPEGImages', - ann_dir='SegmentationClass', - split='ImageSets/Segmentation/val.txt', - pipeline=test_pipeline)) diff --git a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/decode_heads/gc_head.py b/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/decode_heads/gc_head.py deleted file mode 100644 index 70741245af975800840709911bd18d72247e3e04..0000000000000000000000000000000000000000 --- a/spaces/Pie31415/control-animation/annotator/uniformer/mmseg/models/decode_heads/gc_head.py +++ /dev/null @@ -1,47 +0,0 @@ -import torch -from annotator.uniformer.mmcv.cnn import ContextBlock - -from ..builder import HEADS -from .fcn_head import FCNHead - - -@HEADS.register_module() -class GCHead(FCNHead): - """GCNet: Non-local Networks Meet Squeeze-Excitation Networks and Beyond. - - This head is the implementation of `GCNet - <https://arxiv.org/abs/1904.11492>`_. - - Args: - ratio (float): Multiplier of channels ratio. Default: 1/4. - pooling_type (str): The pooling type of context aggregation. - Options are 'att', 'avg'. Default: 'avg'. - fusion_types (tuple[str]): The fusion type for feature fusion. - Options are 'channel_add', 'channel_mul'. 
Default: ('channel_add',) - """ - - def __init__(self, - ratio=1 / 4., - pooling_type='att', - fusion_types=('channel_add', ), - **kwargs): - super(GCHead, self).__init__(num_convs=2, **kwargs) - self.ratio = ratio - self.pooling_type = pooling_type - self.fusion_types = fusion_types - self.gc_block = ContextBlock( - in_channels=self.channels, - ratio=self.ratio, - pooling_type=self.pooling_type, - fusion_types=self.fusion_types) - - def forward(self, inputs): - """Forward function.""" - x = self._transform_inputs(inputs) - output = self.convs[0](x) - output = self.gc_block(output) - output = self.convs[1](output) - if self.concat_input: - output = self.conv_cat(torch.cat([x, output], dim=1)) - output = self.cls_seg(output) - return output diff --git a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/solver/build.py b/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/solver/build.py deleted file mode 100644 index ff2e5198ffeb7798cfaa31e8d00128f5065cc6df..0000000000000000000000000000000000000000 --- a/spaces/Pinwheel/GLIP-BLIP-Object-Detection-VQA/maskrcnn_benchmark/solver/build.py +++ /dev/null @@ -1,116 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved. -import torch -import itertools - -from .lr_scheduler import WarmupMultiStepLR, WarmupCosineAnnealingLR, WarmupReduceLROnPlateau - - -def make_optimizer(cfg, model): - def maybe_add_full_model_gradient_clipping(optim): # optim: the optimizer class - # detectron2 doesn't have full model gradient clipping now - clip_norm_val = cfg.SOLVER.CLIP_GRADIENTS.CLIP_VALUE - enable = ( - cfg.SOLVER.CLIP_GRADIENTS.ENABLED - and cfg.SOLVER.CLIP_GRADIENTS.CLIP_TYPE == "full_model" - and clip_norm_val > 0.0 - ) - - class FullModelGradientClippingOptimizer(optim): - def step(self, closure=None): - all_params = itertools.chain(*[x["params"] for x in self.param_groups]) - torch.nn.utils.clip_grad_norm_(all_params, clip_norm_val) - super().step(closure=closure) - - return FullModelGradientClippingOptimizer if enable else optim - - params = [] - for key, value in model.named_parameters(): - if not value.requires_grad: - continue - lr = cfg.SOLVER.BASE_LR - weight_decay = cfg.SOLVER.WEIGHT_DECAY - - # different lr schedule - if "language_backbone" in key: - lr = cfg.SOLVER.LANG_LR - - if "backbone.body" in key and "language_backbone.body" not in key: - lr = cfg.SOLVER.BASE_LR * cfg.SOLVER.BACKBONE_BODY_LR_FACTOR - - if "bias" in key: - lr *= cfg.SOLVER.BIAS_LR_FACTOR - weight_decay = cfg.SOLVER.WEIGHT_DECAY_BIAS - - if 'norm' in key or 'Norm' in key: - weight_decay *= cfg.SOLVER.WEIGHT_DECAY_NORM_FACTOR - print("Setting weight decay of {} to {}".format(key, weight_decay)) - - params += [{"params": [value], "lr": lr, "weight_decay": weight_decay}] - - if cfg.SOLVER.OPTIMIZER == "SGD": - optimizer = maybe_add_full_model_gradient_clipping(torch.optim.SGD)(params, lr, momentum=cfg.SOLVER.MOMENTUM) - elif cfg.SOLVER.OPTIMIZER == "ADAMW": - optimizer = maybe_add_full_model_gradient_clipping(torch.optim.AdamW)(params, lr) - - return optimizer - - -def make_lr_scheduler(cfg, optimizer): - if cfg.SOLVER.MULTI_MAX_EPOCH: - assert len(cfg.SOLVER.MULTI_MAX_EPOCH) == len(cfg.SOLVER.STEPS) - lr_scheduler = [] - - for stage_step, stage_max_epoch in zip(cfg.SOLVER.STEPS, cfg.SOLVER.MULTI_MAX_ITER): - milestones = [] - for step in stage_step: - milestones.append(round(step * stage_max_epoch)) - lr_scheduler.append(WarmupMultiStepLR(optimizer, - milestones, - cfg.SOLVER.GAMMA, - 
warmup_factor=cfg.SOLVER.WARMUP_FACTOR, - warmup_iters=cfg.SOLVER.WARMUP_ITERS, - warmup_method=cfg.SOLVER.WARMUP_METHOD, ) - ) - return lr_scheduler - - elif cfg.SOLVER.USE_COSINE: - max_iters = cfg.SOLVER.MAX_ITER - return WarmupCosineAnnealingLR( - optimizer, - max_iters, - cfg.SOLVER.GAMMA, - warmup_factor=cfg.SOLVER.WARMUP_FACTOR, - warmup_iters=cfg.SOLVER.WARMUP_ITERS, - warmup_method=cfg.SOLVER.WARMUP_METHOD, - eta_min=cfg.SOLVER.MIN_LR - ) - - elif cfg.SOLVER.USE_AUTOSTEP: - max_iters = cfg.SOLVER.MAX_ITER - return WarmupReduceLROnPlateau( - optimizer, - max_iters, - cfg.SOLVER.GAMMA, - warmup_factor=cfg.SOLVER.WARMUP_FACTOR, - warmup_iters=cfg.SOLVER.WARMUP_ITERS, - warmup_method=cfg.SOLVER.WARMUP_METHOD, - eta_min=cfg.SOLVER.MIN_LR, - patience=cfg.SOLVER.STEP_PATIENCE, - verbose=True - ) - - else: - milestones = [] - for step in cfg.SOLVER.STEPS: - if step < 1: - milestones.append(round(step * cfg.SOLVER.MAX_ITER)) - else: - milestones.append(step) - return WarmupMultiStepLR( - optimizer, - milestones, - cfg.SOLVER.GAMMA, - warmup_factor=cfg.SOLVER.WARMUP_FACTOR, - warmup_iters=cfg.SOLVER.WARMUP_ITERS, - warmup_method=cfg.SOLVER.WARMUP_METHOD, - ) diff --git a/spaces/Plurigrid/LifeSim/src/lib/triggerDownload.ts b/spaces/Plurigrid/LifeSim/src/lib/triggerDownload.ts deleted file mode 100644 index e5627a26a4bba34bdf28279d265c6a71440d8136..0000000000000000000000000000000000000000 --- a/spaces/Plurigrid/LifeSim/src/lib/triggerDownload.ts +++ /dev/null @@ -1,12 +0,0 @@ -export function triggerDownload(filename: string, text: string) { - var element = document.createElement('a'); - element.setAttribute('href', 'data:text/plain;charset=utf-8,' + encodeURIComponent(text)); - element.setAttribute('download', filename); - - element.style.display = 'none'; - document.body.appendChild(element); - - element.click(); - - document.body.removeChild(element); -} \ No newline at end of file diff --git a/spaces/Priyabrata017/Flamingo/app.py b/spaces/Priyabrata017/Flamingo/app.py deleted file mode 100644 index 64cc005dabe013b1c9e4af644008622a88fa24df..0000000000000000000000000000000000000000 --- a/spaces/Priyabrata017/Flamingo/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import os -import gradio as gr -import torch -import PIL - -from flamingo_mini import FlamingoConfig, FlamingoModel, FlamingoProcessor - - - -EXAMPLES_DIR = 'examples' -DEFAULT_PROMPT = "<image>" - -device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') - -model = FlamingoModel.from_pretrained('dhansmair/flamingo-mini') -model.to(device) -model.eval() - -processor = FlamingoProcessor(model.config, load_vision_processor=True) - -# setup some example images -examples = [] -if os.path.isdir(EXAMPLES_DIR): - for file in os.listdir(EXAMPLES_DIR): - path = EXAMPLES_DIR + "/" + file - examples.append([path, DEFAULT_PROMPT]) - - -def predict_caption(image, prompt): - assert isinstance(prompt, str) - - features = processor.extract_features(image).to(device) - caption = model.generate_captions(processor, - visual_features=features, - prompt=prompt) - - if isinstance(caption, list): - caption = caption[0] - - return caption - - -iface = gr.Interface(fn=predict_caption, - inputs=[gr.Image(type="pil"), gr.Textbox(value=DEFAULT_PROMPT, label="Prompt")], - examples=examples, - outputs="text") - -iface.launch() \ No newline at end of file diff --git a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/docs/MUSICGEN.md b/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/docs/MUSICGEN.md deleted file mode 100644 index 
606ce85808a428432f4e77564fb97dcade3851a3..0000000000000000000000000000000000000000 --- a/spaces/Prof-Reza/Audiocraft_Music-Audio_Generation/docs/MUSICGEN.md +++ /dev/null @@ -1,362 +0,0 @@ -# MusicGen: Simple and Controllable Music Generation - -AudioCraft provides the code and models for MusicGen, [a simple and controllable model for music generation][arxiv]. -MusicGen is a single stage auto-regressive Transformer model trained over a 32kHz -<a href="https://github.com/facebookresearch/encodec">EnCodec tokenizer</a> with 4 codebooks sampled at 50 Hz. -Unlike existing methods like [MusicLM](https://arxiv.org/abs/2301.11325), MusicGen doesn't require -a self-supervised semantic representation, and it generates all 4 codebooks in one pass. By introducing -a small delay between the codebooks, we show we can predict them in parallel, thus having only 50 auto-regressive -steps per second of audio. -Check out our [sample page][musicgen_samples] or test the available demo! - -<a target="_blank" href="https://colab.research.google.com/drive/1JlTOjB-G0A2Hz3h8PK63vLZk4xdCI5QB?usp=sharing"> - <img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/> -</a> -<a target="_blank" href="https://huggingface.co/spaces/facebook/MusicGen"> - <img src="https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg" alt="Open in HugginFace"/> -</a> -<br> - -We use 20K hours of licensed music to train MusicGen. Specifically, we rely on an internal dataset -of 10K high-quality music tracks, and on the ShutterStock and Pond5 music data. - - -## Model Card - -See [the model card](../model_cards/MUSICGEN_MODEL_CARD.md). - - -## Installation - -Please follow the AudioCraft installation instructions from the [README](../README.md). - -AudioCraft requires a GPU with at least 16 GB of memory for running inference with the medium-sized models (~1.5B parameters). - -## Usage - -We offer a number of way to interact with MusicGen: -1. A demo is also available on the [`facebook/MusicGen` Hugging Face Space](https://huggingface.co/spaces/facebook/MusicGen) -(huge thanks to all the HF team for their support). -2. You can run the extended demo on a Colab: -[colab notebook](https://colab.research.google.com/drive/1JlTOjB-G0A2Hz3h8PK63vLZk4xdCI5QB?usp=sharing) -3. You can use the gradio demo locally by running [`python -m demos.musicgen_app --share`](../demos/musicgen_app.py). -4. You can play with MusicGen by running the jupyter notebook at [`demos/musicgen_demo.ipynb`](../demos/musicgen_demo.ipynb) locally (if you have a GPU). -5. Finally, checkout [@camenduru Colab page](https://github.com/camenduru/MusicGen-colab) -which is regularly updated with contributions from @camenduru and the community. - - -## API - -We provide a simple API and 4 pre-trained models. The pre trained models are: -- `facebook/musicgen-small`: 300M model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-small) -- `facebook/musicgen-medium`: 1.5B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-medium) -- `facebook/musicgen-melody`: 1.5B model, text to music and text+melody to music - [🤗 Hub](https://huggingface.co/facebook/musicgen-melody) -- `facebook/musicgen-large`: 3.3B model, text to music only - [🤗 Hub](https://huggingface.co/facebook/musicgen-large) - -We observe the best trade-off between quality and compute with the `facebook/musicgen-medium` or `facebook/musicgen-melody` model. -In order to use MusicGen locally **you must have a GPU**. 
We recommend 16GB of memory, but smaller -GPUs will be able to generate short sequences, or longer sequences with the `facebook/musicgen-small` model. - -See after a quick example for using the API. - -```python -import torchaudio -from audiocraft.models import MusicGen -from audiocraft.data.audio import audio_write - -model = MusicGen.get_pretrained('facebook/musicgen-melody') -model.set_generation_params(duration=8) # generate 8 seconds. -wav = model.generate_unconditional(4) # generates 4 unconditional audio samples -descriptions = ['happy rock', 'energetic EDM', 'sad jazz'] -wav = model.generate(descriptions) # generates 3 samples. - -melody, sr = torchaudio.load('./assets/bach.mp3') -# generates using the melody from the given audio and the provided descriptions. -wav = model.generate_with_chroma(descriptions, melody[None].expand(3, -1, -1), sr) - -for idx, one_wav in enumerate(wav): - # Will save under {idx}.wav, with loudness normalization at -14 db LUFS. - audio_write(f'{idx}', one_wav.cpu(), model.sample_rate, strategy="loudness", loudness_compressor=True) -``` - -## 🤗 Transformers Usage - -MusicGen is available in the 🤗 Transformers library from version 4.31.0 onwards, requiring minimal dependencies -and additional packages. Steps to get started: - -1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers) from main: - -```shell -pip install git+https://github.com/huggingface/transformers.git -``` - -2. Run the following Python code to generate text-conditional audio samples: - -```py -from transformers import AutoProcessor, MusicgenForConditionalGeneration - - -processor = AutoProcessor.from_pretrained("facebook/musicgen-small") -model = MusicgenForConditionalGeneration.from_pretrained("facebook/musicgen-small") - -inputs = processor( - text=["80s pop track with bassy drums and synth", "90s rock song with loud guitars and heavy drums"], - padding=True, - return_tensors="pt", -) - -audio_values = model.generate(**inputs, max_new_tokens=256) -``` - -3. Listen to the audio samples either in an ipynb notebook: - -```py -from IPython.display import Audio - -sampling_rate = model.config.audio_encoder.sampling_rate -Audio(audio_values[0].numpy(), rate=sampling_rate) -``` - -Or save them as a `.wav` file using a third-party library, e.g. `scipy`: - -```py -import scipy - -sampling_rate = model.config.audio_encoder.sampling_rate -scipy.io.wavfile.write("musicgen_out.wav", rate=sampling_rate, data=audio_values[0, 0].numpy()) -``` - -For more details on using the MusicGen model for inference using the 🤗 Transformers library, refer to the -[MusicGen docs](https://huggingface.co/docs/transformers/main/en/model_doc/musicgen) or the hands-on -[Google Colab](https://colab.research.google.com/github/sanchit-gandhi/notebooks/blob/main/MusicGen.ipynb). - - -## Training - -The [MusicGenSolver](../audiocraft/solvers/musicgen.py) implements MusicGen's training pipeline. -It defines an autoregressive language modeling task over multiple streams of discrete tokens -extracted from a pre-trained EnCodec model (see [EnCodec documentation](./ENCODEC.md) -for more details on how to train such model). - -Note that **we do NOT provide any of the datasets** used for training MusicGen. -We provide a dummy dataset containing just a few examples for illustrative purposes. - -Please read first the [TRAINING documentation](./TRAINING.md), in particular the Environment Setup section. 
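
Before diving into configurations, the sketch below gives some intuition for the token streams the solver models: the delay pattern between codebooks mentioned in the introduction shifts codebook *k* by *k* steps, so a single autoregressive step emits one token per codebook. This is an illustration only, not the actual audiocraft implementation (the real interleaving is driven by the `codebooks_pattern` configuration and also handles batching and special tokens); the `pad_token` value and the function name are made up for the example.

```python
import torch

def build_delay_pattern(codes: torch.Tensor, pad_token: int) -> torch.Tensor:
    """Shift codebook k by k steps so all codebooks can be predicted in parallel.

    codes: [K, T] integer tokens from the compression model (K codebooks, T frames).
    Returns a [K, T + K - 1] tensor where positions with no token yet are filled
    with `pad_token`.
    """
    K, T = codes.shape
    out = torch.full((K, T + K - 1), pad_token, dtype=codes.dtype)
    for k in range(K):
        out[k, k:k + T] = codes[k]
    return out

# Toy example: 4 codebooks, 6 frames of EnCodec-like tokens with cardinality 2048.
codes = torch.randint(0, 2048, (4, 6))
delayed = build_delay_pattern(codes, pad_token=2048)
print(delayed.shape)  # torch.Size([4, 9])
```
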
- -### Example configurations and grids - -We provide configurations to reproduce the released models and our research. -MusicGen solvers configuration are available in [config/solver/musicgen](../config/solver/musicgen), -in particular: -* MusicGen base model for text-to-music: -[`solver=musicgen/musicgen_base_32khz`](../config/solver/musicgen/musicgen_base_32khz.yaml) -* MusicGen model with chromagram-conditioning support: -[`solver=musicgen/musicgen_melody_32khz`](../config/solver/musicgen/musicgen_melody_32khz.yaml) - -We provide 3 different scales, e.g. `model/lm/model_scale=small` (300M), or `medium` (1.5B), and `large` (3.3B). - -Please find some example grids to train MusicGen at -[audiocraft/grids/musicgen](../audiocraft/grids/musicgen/). - -```shell -# text-to-music -dora grid musicgen.musicgen_base_32khz --dry_run --init -# melody-guided music generation -dora grid musicgen.musicgen_melody_base_32khz --dry_run --init -# Remove the `--dry_run --init` flags to actually schedule the jobs once everything is setup. -``` - -### Music dataset and metadata - -MusicGen's underlying dataset is an AudioDataset augmented with music-specific metadata. -The MusicGen dataset implementation expects the metadata to be available as `.json` files -at the same location as the audio files. Learn more in the [datasets section](./DATASETS.md). - - -### Audio tokenizers - -We support a number of audio tokenizers: either pretrained EnCodec models, [DAC](https://github.com/descriptinc/descript-audio-codec), or your own models. -The tokenizer is controlled with the setting `compression_model_checkpoint`. -For instance, - -```bash -# Using the 32kHz EnCodec trained on music -dora run solver=musicgen/debug \ - compression_model_checkpoint=//pretrained/facebook/encodec_32khz \ - transformer_lm.n_q=4 transformer_lm.card=2048 - -# Using DAC -dora run solver=musicgen/debug \ - compression_model_checkpoint=//pretrained/dac_44khz \ - transformer_lm.n_q=9 transformer_lm.card=1024 \ - 'codebooks_pattern.delay.delays=[0,1,2,3,4,5,6,7,8]' - -# Using your own model after export (see ENCODEC.md) -dora run solver=musicgen/debug \ - compression_model_checkpoint=//pretrained//checkpoints/my_audio_lm/compression_state_dict.bin \ - transformer_lm.n_q=... transformer_lm.card=... - -# Using your own model from its training checkpoint. -dora run solver=musicgen/debug \ - compression_model_checkpoint=//sig/SIG \ # where SIG is the Dora signature of the EnCodec XP. - transformer_lm.n_q=... transformer_lm.card=... -``` - -**Warning:** you are responsible for setting the proper value for `transformer_lm.n_q` and `transformer_lm.card` (cardinality of the codebooks). You also have to update the codebook_pattern to match `n_q` as shown in the example for using DAC. . - - -### Fine tuning existing models - -You can initialize your model to one of the pretrained models by using the `continue_from` argument, in particular - -```bash -# Using pretrained MusicGen model. -dora run solver=musicgen/musicgen_base_32khz model/lm/model_scale=medium continue_from=//pretrained/facebook/musicgen-medium conditioner=text2music - -# Using another model you already trained with a Dora signature SIG. 
-dora run solver=musicgen/musicgen_base_32khz model/lm/model_scale=medium continue_from=//sig/SIG conditioner=text2music - -# Or providing manually a path -dora run solver=musicgen/musicgen_base_32khz model/lm/model_scale=medium continue_from=/checkpoints/my_other_xp/checkpoint.th -``` - -**Warning:** You are responsible for selecting the other parameters accordingly, in a way that make it compatible - with the model you are fine tuning. Configuration is NOT automatically inherited from the model you continue from. In particular make sure to select the proper `conditioner` and `model/lm/model_scale`. - -**Warning:** We currently do not support fine tuning a model with slightly different layers. If you decide - to change some parts, like the conditioning or some other parts of the model, you are responsible for manually crafting a checkpoint file from which we can safely run `load_state_dict`. - If you decide to do so, make sure your checkpoint is saved with `torch.save` and contains a dict - `{'best_state': {'model': model_state_dict_here}}`. Directly give the path to `continue_from` without a `//pretrained/` prefix. - -### Caching of EnCodec tokens - -It is possible to precompute the EnCodec tokens and other metadata. -An example of generating and using this cache provided in the [musicgen.musicgen_base_cached_32khz grid](../audiocraft/grids/musicgen/musicgen_base_cached_32khz.py). - -### Evaluation stage - -By default, evaluation stage is also computing the cross-entropy and the perplexity over the -evaluation dataset. Indeed the objective metrics used for evaluation can be costly to run -or require some extra dependencies. Please refer to the [metrics documentation](./METRICS.md) -for more details on the requirements for each metric. - -We provide an off-the-shelf configuration to enable running the objective metrics -for audio generation in -[config/solver/musicgen/evaluation/objective_eval](../config/solver/musicgen/evaluation/objective_eval.yaml). - -One can then activate evaluation the following way: -```shell -# using the configuration -dora run solver=musicgen/debug solver/musicgen/evaluation=objective_eval -# specifying each of the fields, e.g. to activate KL computation -dora run solver=musicgen/debug evaluate.metrics.kld=true -``` - -See [an example evaluation grid](../audiocraft/grids/musicgen/musicgen_pretrained_32khz_eval.py). - -### Generation stage - -The generation stage allows to generate samples conditionally and/or unconditionally and to perform -audio continuation (from a prompt). We currently support greedy sampling (argmax), sampling -from softmax with a given temperature, top-K and top-P (nucleus) sampling. The number of samples -generated and the batch size used are controlled by the `dataset.generate` configuration -while the other generation parameters are defined in `generate.lm`. - -```shell -# control sampling parameters -dora run solver=musicgen/debug generate.lm.gen_duration=10 generate.lm.use_sampling=true generate.lm.top_k=15 -``` - -#### Listening to samples - -Note that generation happens automatically every 25 epochs. You can easily access and -compare samples between models (as long as they are trained) on the same dataset using the -MOS tool. For that first `pip install Flask gunicorn`. Then -``` -gunicorn -w 4 -b 127.0.0.1:8895 -t 120 'scripts.mos:app' --access-logfile - -``` -And access the tool at [https://127.0.0.1:8895](https://127.0.0.1:8895). 
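
As a side note on the sampling parameters above (`use_sampling`, `top_k`, `top_p` and the softmax temperature), the sketch below shows the standard filtering these flags refer to, applied to a single vector of next-token logits. It is purely illustrative, not the audiocraft internals; the function name and defaults are made up for the example.

```python
import torch

def sample_next_token(logits: torch.Tensor, temperature: float = 1.0,
                      top_k: int = 0, top_p: float = 1.0) -> int:
    """Temperature + top-k + top-p (nucleus) sampling over 1-D logits."""
    logits = logits / max(temperature, 1e-5)
    if top_k > 0:
        kth_best = torch.topk(logits, top_k).values[-1]
        logits[logits < kth_best] = float("-inf")   # keep only the top-k logits
    if top_p < 1.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        cumulative = torch.softmax(sorted_logits, dim=-1).cumsum(dim=-1)
        drop = cumulative > top_p
        drop[1:] = drop[:-1].clone()                # always keep the best token
        drop[0] = False
        logits[sorted_idx[drop]] = float("-inf")
    probs = torch.softmax(logits, dim=-1)
    return int(torch.multinomial(probs, num_samples=1))

# e.g. mirroring `generate.lm.use_sampling=true generate.lm.top_k=15` from above
next_token = sample_next_token(torch.randn(2048), top_k=15)
```
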
- -### Playing with the model - -Once you have launched some experiments, you can easily get access -to the Solver with the latest trained model using the following snippet. - -```python -from audiocraft.solvers.musicgen import MusicGen - -solver = MusicGen.get_eval_solver_from_sig('SIG', device='cpu', batch_size=8) -solver.model -solver.dataloaders -``` - -### Importing / Exporting models - -We do not support currently loading a model from the Hugging Face implementation or exporting to it. -If you want to export your model in a way that is compatible with `audiocraft.models.MusicGen` -API, you can run: - -```python -from audiocraft.utils import export -from audiocraft import train -xp = train.main.get_xp_from_sig('SIG_OF_LM') -export.export_lm(xp.folder / 'checkpoint.th', '/checkpoints/my_audio_lm/state_dict.bin') -# You also need to bundle the EnCodec model you used !! -## Case 1) you trained your own -xp_encodec = train.main.get_xp_from_sig('SIG_OF_ENCODEC') -export.export_encodec(xp_encodec.folder / 'checkpoint.th', '/checkpoints/my_audio_lm/compression_state_dict.bin') -## Case 2) you used a pretrained model. Give the name you used without the //pretrained/ prefix. -## This will actually not dump the actual model, simply a pointer to the right model to download. -export.export_pretrained_compression_model('facebook/encodec_32khz', '/checkpoints/my_audio_lm/compression_state_dict.bin') -``` - -Now you can load your custom model with: -```python -import audiocraft.models -musicgen = audiocraft.models.MusicGen.get_pretrained('/checkpoints/my_audio_lm/') -``` - - -### Learn more - -Learn more about AudioCraft training pipelines in the [dedicated section](./TRAINING.md). - -## FAQ - -#### I need help on Windows - -@FurkanGozukara made a complete tutorial for [AudioCraft/MusicGen on Windows](https://youtu.be/v-YpvPkhdO4) - -#### I need help for running the demo on Colab - -Check [@camenduru tutorial on YouTube](https://www.youtube.com/watch?v=EGfxuTy9Eeo). - -#### What are top-k, top-p, temperature and classifier-free guidance? - -Check out [@FurkanGozukara tutorial](https://github.com/FurkanGozukara/Stable-Diffusion/blob/main/Tutorials/AI-Music-Generation-Audiocraft-Tutorial.md#more-info-about-top-k-top-p-temperature-and-classifier-free-guidance-from-chatgpt). - -#### Should I use FSDP or autocast ? - -The two are mutually exclusive (because FSDP does autocast on its own). -You can use autocast up to 1.5B (medium), if you have enough RAM on your GPU. -FSDP makes everything more complex but will free up some memory for the actual -activations by sharding the optimizer state. - -## Citation -``` -@article{copet2023simple, - title={Simple and Controllable Music Generation}, - author={Jade Copet and Felix Kreuk and Itai Gat and Tal Remez and David Kant and Gabriel Synnaeve and Yossi Adi and Alexandre Défossez}, - year={2023}, - journal={arXiv preprint arXiv:2306.05284}, -} -``` - - -## License - -See license information in the [model card](../model_cards/MUSICGEN_MODEL_CARD.md). 
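
A short follow-up on the FSDP vs. autocast question in the FAQ: up to the medium model, plain PyTorch autocast is typically enough if your GPU has the memory. The snippet below only illustrates that generic pattern with a hypothetical model and batch; it is not audiocraft code.

```python
import torch

model = torch.nn.Linear(512, 2048).cuda()          # stand-in for the real LM
batch = torch.randn(8, 512, device="cuda")

with torch.autocast(device_type="cuda", dtype=torch.float16):
    logits = model(batch)                           # matmuls run in fp16
loss = logits.float().mean()                        # keep the reduction in fp32
loss.backward()
```
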
- - -[arxiv]: https://arxiv.org/abs/2306.05284 -[musicgen_samples]: https://ai.honu.io/papers/musicgen/ diff --git a/spaces/Purple11/Grounded-Diffusion/src/CLIP/tests/test_consistency.py b/spaces/Purple11/Grounded-Diffusion/src/CLIP/tests/test_consistency.py deleted file mode 100644 index f2c6fd4fe9074143803e0eb6c99fa02a47632094..0000000000000000000000000000000000000000 --- a/spaces/Purple11/Grounded-Diffusion/src/CLIP/tests/test_consistency.py +++ /dev/null @@ -1,25 +0,0 @@ -import numpy as np -import pytest -import torch -from PIL import Image - -import clip - - -@pytest.mark.parametrize('model_name', clip.available_models()) -def test_consistency(model_name): - device = "cpu" - jit_model, transform = clip.load(model_name, device=device, jit=True) - py_model, _ = clip.load(model_name, device=device, jit=False) - - image = transform(Image.open("CLIP.png")).unsqueeze(0).to(device) - text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device) - - with torch.no_grad(): - logits_per_image, _ = jit_model(image, text) - jit_probs = logits_per_image.softmax(dim=-1).cpu().numpy() - - logits_per_image, _ = py_model(image, text) - py_probs = logits_per_image.softmax(dim=-1).cpu().numpy() - - assert np.allclose(jit_probs, py_probs, atol=0.01, rtol=0.1) diff --git a/spaces/QianFeng/White-box-Cartoonization2308/README.md b/spaces/QianFeng/White-box-Cartoonization2308/README.md deleted file mode 100644 index 9860239cf42c94e385faaaa75a85311e010d64f7..0000000000000000000000000000000000000000 --- a/spaces/QianFeng/White-box-Cartoonization2308/README.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -python_version: 3.7 -title: White Box Cartoonization -emoji: 📚 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 2.9.4 -app_file: app.py -pinned: false -license: apache-2.0 -duplicated_from: hylee/White-box-Cartoonization ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/modules.py b/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/modules.py deleted file mode 100644 index 2201a58bee9b7808d386b3ef9ac2d1f9630e56ef..0000000000000000000000000000000000000000 --- a/spaces/RMXK/RVC_HFF/infer/lib/infer_pack/modules.py +++ /dev/null @@ -1,521 +0,0 @@ -import copy -import math - -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import AvgPool1d, Conv1d, Conv2d, ConvTranspose1d -from torch.nn import functional as F -from torch.nn.utils import remove_weight_norm, weight_norm - -from infer.lib.infer_pack import commons -from infer.lib.infer_pack.commons import get_padding, init_weights -from infer.lib.infer_pack.transforms import piecewise_rational_quadratic_transform - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should 
be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = 
torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not 
reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Rakot2223/faster-whisper-webui/src/hooks/subTaskProgressListener.py b/spaces/Rakot2223/faster-whisper-webui/src/hooks/subTaskProgressListener.py deleted file mode 100644 index 9a8eaa876fcd18032875d67535e0558494842c60..0000000000000000000000000000000000000000 --- a/spaces/Rakot2223/faster-whisper-webui/src/hooks/subTaskProgressListener.py +++ /dev/null @@ -1,37 +0,0 @@ -from src.hooks.progressListener import ProgressListener - -from typing import Union - -class SubTaskProgressListener(ProgressListener): - """ - A sub task listener that reports the progress of a sub task to a base task listener - Parameters - ---------- - base_task_listener : ProgressListener - The base progress listener to accumulate overall progress in. - base_task_total : float - The maximum total progress that will be reported to the base progress listener. - sub_task_start : float - The starting progress of a sub task, in respect to the base progress listener. - sub_task_total : float - The total amount of progress a sub task will report to the base progress listener. - """ - def __init__( - self, - base_task_listener: ProgressListener, - base_task_total: float, - sub_task_start: float, - sub_task_total: float, - ): - self.base_task_listener = base_task_listener - self.base_task_total = base_task_total - self.sub_task_start = sub_task_start - self.sub_task_total = sub_task_total - - def on_progress(self, current: Union[int, float], total: Union[int, float]): - sub_task_progress_frac = current / total - sub_task_progress = self.sub_task_start + self.sub_task_total * sub_task_progress_frac - self.base_task_listener.on_progress(sub_task_progress, self.base_task_total) - - def on_finished(self): - self.base_task_listener.on_progress(self.sub_task_start + self.sub_task_total, self.base_task_total) \ No newline at end of file diff --git a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/demo_darkfeat.py b/spaces/Realcat/image-matching-webui/third_party/DarkFeat/demo_darkfeat.py deleted file mode 100644 index be9a25c92f7e77da57ca111311dd96d426ba0c36..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/DarkFeat/demo_darkfeat.py +++ /dev/null @@ -1,154 +0,0 @@ -from pathlib import Path -import argparse -import cv2 -import matplotlib.cm as cm -import torch -import numpy as np -from utils.nnmatching import NNMatching -from utils.misc import ( - AverageTimer, - VideoStreamer, - make_matching_plot_fast, - frame2tensor, -) - -torch.set_grad_enabled(False) - - -def compute_essential(matched_kp1, matched_kp2, K): - pts1 = cv2.undistortPoints( - matched_kp1, - cameraMatrix=K, - distCoeffs=(-0.117918271740560, 0.075246403574314, 0, 0), - ) - pts2 = cv2.undistortPoints( - matched_kp2, - cameraMatrix=K, - distCoeffs=(-0.117918271740560, 0.075246403574314, 0, 0), - ) - K_1 = np.eye(3) - # Estimate the homography between the matches using RANSAC - ransac_model, ransac_inliers = 
cv2.findEssentialMat( - pts1, pts2, K_1, method=cv2.RANSAC, prob=0.999, threshold=0.001, maxIters=10000 - ) - if ransac_inliers is None or ransac_model.shape != (3, 3): - ransac_inliers = np.array([]) - ransac_model = None - return ransac_model, ransac_inliers, pts1, pts2 - - -sizer = (960, 640) -focallength_x = 4.504986436499113e03 / (6744 / sizer[0]) -focallength_y = 4.513311442889859e03 / (4502 / sizer[1]) -K = np.eye(3) -K[0, 0] = focallength_x -K[1, 1] = focallength_y -K[0, 2] = 3.363322177533149e03 / (6744 / sizer[0]) # * 0.5 -K[1, 2] = 2.291824660547715e03 / (4502 / sizer[1]) # * 0.5 - - -if __name__ == "__main__": - parser = argparse.ArgumentParser( - description="DarkFeat demo", - formatter_class=argparse.ArgumentDefaultsHelpFormatter, - ) - parser.add_argument("--input", type=str, help="path to an image directory") - parser.add_argument( - "--output_dir", - type=str, - default=None, - help="Directory where to write output frames (If None, no output)", - ) - - parser.add_argument( - "--image_glob", - type=str, - nargs="+", - default=["*.ARW"], - help="Glob if a directory of images is specified", - ) - parser.add_argument( - "--resize", - type=int, - nargs="+", - default=[640, 480], - help="Resize the input image before running inference. If two numbers, " - "resize to the exact dimensions, if one number, resize the max " - "dimension, if -1, do not resize", - ) - parser.add_argument( - "--force_cpu", action="store_true", help="Force pytorch to run in CPU mode." - ) - parser.add_argument("--model_path", type=str, help="Path to the pretrained model") - - opt = parser.parse_args() - print(opt) - - assert len(opt.resize) == 2 - print("Will resize to {}x{} (WxH)".format(opt.resize[0], opt.resize[1])) - - device = "cuda" if torch.cuda.is_available() and not opt.force_cpu else "cpu" - print('Running inference on device "{}"'.format(device)) - matching = NNMatching(opt.model_path).eval().to(device) - keys = ["keypoints", "scores", "descriptors"] - - vs = VideoStreamer(opt.input, opt.resize, opt.image_glob) - frame, ret = vs.next_frame() - assert ret, "Error when reading the first frame (try different --input?)" - - frame_tensor = frame2tensor(frame, device) - last_data = matching.darkfeat({"image": frame_tensor}) - last_data = {k + "0": [last_data[k]] for k in keys} - last_data["image0"] = frame_tensor - last_frame = frame - last_image_id = 0 - - if opt.output_dir is not None: - print("==> Will write outputs to {}".format(opt.output_dir)) - Path(opt.output_dir).mkdir(exist_ok=True) - - timer = AverageTimer() - - while True: - frame, ret = vs.next_frame() - if not ret: - print("Finished demo_darkfeat.py") - break - timer.update("data") - stem0, stem1 = last_image_id, vs.i - 1 - - frame_tensor = frame2tensor(frame, device) - pred = matching({**last_data, "image1": frame_tensor}) - kpts0 = last_data["keypoints0"][0].cpu().numpy() - kpts1 = pred["keypoints1"][0].cpu().numpy() - matches = pred["matches0"][0].cpu().numpy() - confidence = pred["matching_scores0"][0].cpu().numpy() - timer.update("forward") - - valid = matches > -1 - mkpts0 = kpts0[valid] - mkpts1 = kpts1[matches[valid]] - - E, inliers, pts1, pts2 = compute_essential(mkpts0, mkpts1, K) - color = cm.jet( - np.clip(confidence[valid][inliers[:, 0].astype("bool")] * 2 - 1, -1, 1) - ) - - text = ["DarkFeat", "Matches: {}".format(inliers.sum())] - - out = make_matching_plot_fast( - last_frame, - frame, - mkpts0[inliers[:, 0].astype("bool")], - mkpts1[inliers[:, 0].astype("bool")], - color, - text, - path=None, - small_text=" ", - ) - - 
if opt.output_dir is not None: - stem = "matches_{:06}_{:06}".format(stem0, stem1) - out_file = str(Path(opt.output_dir, stem + ".png")) - print("Writing image to {}".format(out_file)) - cv2.imwrite(out_file, out) diff --git a/spaces/Realcat/image-matching-webui/third_party/SGMNet/datadump/dumper/scannet.py b/spaces/Realcat/image-matching-webui/third_party/SGMNet/datadump/dumper/scannet.py deleted file mode 100644 index ac45f41e3530fea49191188146187bcef7bd514d..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/SGMNet/datadump/dumper/scannet.py +++ /dev/null @@ -1,143 +0,0 @@ -import os -import glob -import pickle -from posixpath import basename -import numpy as np -import h5py -from .base_dumper import BaseDumper - -import sys - -ROOT_DIR = os.path.abspath(os.path.join(os.path.dirname(__file__), "../../")) -sys.path.insert(0, ROOT_DIR) -import utils - - -class scannet(BaseDumper): - def get_seqs(self): - self.pair_list = np.loadtxt("../assets/scannet_eval_list.txt", dtype=str) - self.seq_list = np.unique( - np.asarray([path.split("/")[0] for path in self.pair_list[:, 0]], dtype=str) - ) - self.dump_seq, self.img_seq = [], [] - for seq in self.seq_list: - dump_dir = os.path.join(self.config["feature_dump_dir"], seq) - cur_img_seq = glob.glob( - os.path.join( - os.path.join(self.config["rawdata_dir"], seq, "img", "*.jpg") - ) - ) - cur_dump_seq = [ - os.path.join(dump_dir, path.split("/")[-1]) - + "_" - + self.config["extractor"]["name"] - + "_" - + str(self.config["extractor"]["num_kpt"]) - + ".hdf5" - for path in cur_img_seq - ] - self.img_seq += cur_img_seq - self.dump_seq += cur_dump_seq - - def format_dump_folder(self): - if not os.path.exists(self.config["feature_dump_dir"]): - os.mkdir(self.config["feature_dump_dir"]) - for seq in self.seq_list: - seq_dir = os.path.join(self.config["feature_dump_dir"], seq) - if not os.path.exists(seq_dir): - os.mkdir(seq_dir) - - def format_dump_data(self): - print("Formatting data...") - self.data = { - "K1": [], - "K2": [], - "R": [], - "T": [], - "e": [], - "f": [], - "fea_path1": [], - "fea_path2": [], - "img_path1": [], - "img_path2": [], - } - - for pair in self.pair_list: - img_path1, img_path2 = pair[0], pair[1] - seq = img_path1.split("/")[0] - index1, index2 = int(img_path1.split("/")[-1][:-4]), int( - img_path2.split("/")[-1][:-4] - ) - ex1, ex2 = np.loadtxt( - os.path.join( - self.config["rawdata_dir"], seq, "extrinsic", str(index1) + ".txt" - ), - dtype=float, - ), np.loadtxt( - os.path.join( - self.config["rawdata_dir"], seq, "extrinsic", str(index2) + ".txt" - ), - dtype=float, - ) - K1, K2 = np.loadtxt( - os.path.join( - self.config["rawdata_dir"], seq, "intrinsic", str(index1) + ".txt" - ), - dtype=float, - ), np.loadtxt( - os.path.join( - self.config["rawdata_dir"], seq, "intrinsic", str(index2) + ".txt" - ), - dtype=float, - ) - - relative_extrinsic = np.matmul(np.linalg.inv(ex2), ex1) - dR, dt = relative_extrinsic[:3, :3], relative_extrinsic[:3, 3] - dt /= np.sqrt(np.sum(dt**2)) - - e_gt_unnorm = np.reshape( - np.matmul( - np.reshape( - utils.evaluation_utils.np_skew_symmetric( - dt.astype("float64").reshape(1, 3) - ), - (3, 3), - ), - np.reshape(dR.astype("float64"), (3, 3)), - ), - (3, 3), - ) - e_gt = e_gt_unnorm / np.linalg.norm(e_gt_unnorm) - f_gt_unnorm = np.linalg.inv(K2.T) @ e_gt @ np.linalg.inv(K1) - f_gt = f_gt_unnorm / np.linalg.norm(f_gt_unnorm) - - self.data["K1"].append(K1), self.data["K2"].append(K2) - self.data["R"].append(dR), self.data["T"].append(dt) - 
self.data["e"].append(e_gt), self.data["f"].append(f_gt) - - dump_seq_dir = os.path.join(self.config["feature_dump_dir"], seq) - fea_path1, fea_path2 = os.path.join( - dump_seq_dir, - img_path1.split("/")[-1] - + "_" - + self.config["extractor"]["name"] - + "_" - + str(self.config["extractor"]["num_kpt"]) - + ".hdf5", - ), os.path.join( - dump_seq_dir, - img_path2.split("/")[-1] - + "_" - + self.config["extractor"]["name"] - + "_" - + str(self.config["extractor"]["num_kpt"]) - + ".hdf5", - ) - self.data["img_path1"].append(img_path1), self.data["img_path2"].append( - img_path2 - ) - self.data["fea_path1"].append(fea_path1), self.data["fea_path2"].append( - fea_path2 - ) - - self.form_standard_dataset() diff --git a/spaces/Realcat/image-matching-webui/third_party/SGMNet/sgmnet/match_model.py b/spaces/Realcat/image-matching-webui/third_party/SGMNet/sgmnet/match_model.py deleted file mode 100644 index ce185fd9748a0a1f5cfc9719f109ed31a40aa793..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/SGMNet/sgmnet/match_model.py +++ /dev/null @@ -1,360 +0,0 @@ -import torch -import torch.nn as nn - -eps = 1e-8 - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") - - -def sinkhorn(M, r, c, iteration): - p = torch.softmax(M, dim=-1) - u = torch.ones_like(r) - v = torch.ones_like(c) - for _ in range(iteration): - u = r / ((p * v.unsqueeze(-2)).sum(-1) + eps) - v = c / ((p * u.unsqueeze(-1)).sum(-2) + eps) - p = p * u.unsqueeze(-1) * v.unsqueeze(-2) - return p - - -def sink_algorithm(M, dustbin, iteration): - M = torch.cat([M, dustbin.expand([M.shape[0], M.shape[1], 1])], dim=-1) - M = torch.cat([M, dustbin.expand([M.shape[0], 1, M.shape[2]])], dim=-2) - r = torch.ones([M.shape[0], M.shape[1] - 1], device=device) - r = torch.cat([r, torch.ones([M.shape[0], 1], device=device) * M.shape[1]], dim=-1) - c = torch.ones([M.shape[0], M.shape[2] - 1], device=device) - c = torch.cat([c, torch.ones([M.shape[0], 1], device=device) * M.shape[2]], dim=-1) - p = sinkhorn(M, r, c, iteration) - return p - - -def seeding( - nn_index1, - nn_index2, - x1, - x2, - topk, - match_score, - confbar, - nms_radius, - use_mc=True, - test=False, -): - - # apply mutual check before nms - if use_mc: - mask_not_mutual = nn_index2.gather(dim=-1, index=nn_index1) != torch.arange( - nn_index1.shape[1], device=device - ) - match_score[mask_not_mutual] = -1 - # NMS - pos_dismat1 = ( - ( - (x1.norm(p=2, dim=-1) ** 2).unsqueeze_(-1) - + (x1.norm(p=2, dim=-1) ** 2).unsqueeze_(-2) - - 2 * (x1 @ x1.transpose(1, 2)) - ) - .abs_() - .sqrt_() - ) - x2 = x2.gather(index=nn_index1.unsqueeze(-1).expand(-1, -1, 2), dim=1) - pos_dismat2 = ( - ( - (x2.norm(p=2, dim=-1) ** 2).unsqueeze_(-1) - + (x2.norm(p=2, dim=-1) ** 2).unsqueeze_(-2) - - 2 * (x2 @ x2.transpose(1, 2)) - ) - .abs_() - .sqrt_() - ) - radius1, radius2 = nms_radius * pos_dismat1.mean( - dim=(1, 2), keepdim=True - ), nms_radius * pos_dismat2.mean(dim=(1, 2), keepdim=True) - nms_mask = (pos_dismat1 >= radius1) & (pos_dismat2 >= radius2) - mask_not_local_max = ( - match_score.unsqueeze(-1) >= match_score.unsqueeze(-2) - ) | nms_mask - mask_not_local_max = ~(mask_not_local_max.min(dim=-1).values) - match_score[mask_not_local_max] = -1 - - # confidence bar - match_score[match_score < confbar] = -1 - mask_survive = match_score > 0 - if test: - topk = min(mask_survive.sum(dim=1)[0] + 2, topk) - _, topindex = torch.topk(match_score, topk, dim=-1) # b*k - seed_index1, seed_index2 = topindex, nn_index1.gather(index=topindex, dim=-1) - 
return seed_index1, seed_index2 - - -class PointCN(nn.Module): - def __init__(self, channels, out_channels): - nn.Module.__init__(self) - self.shot_cut = nn.Conv1d(channels, out_channels, kernel_size=1) - self.conv = nn.Sequential( - nn.InstanceNorm1d(channels, eps=1e-3), - nn.SyncBatchNorm(channels), - nn.ReLU(), - nn.Conv1d(channels, channels, kernel_size=1), - nn.InstanceNorm1d(channels, eps=1e-3), - nn.SyncBatchNorm(channels), - nn.ReLU(), - nn.Conv1d(channels, out_channels, kernel_size=1), - ) - - def forward(self, x): - return self.conv(x) + self.shot_cut(x) - - -class attention_propagantion(nn.Module): - def __init__(self, channel, head): - nn.Module.__init__(self) - self.head = head - self.head_dim = channel // head - self.query_filter, self.key_filter, self.value_filter = ( - nn.Conv1d(channel, channel, kernel_size=1), - nn.Conv1d(channel, channel, kernel_size=1), - nn.Conv1d(channel, channel, kernel_size=1), - ) - self.mh_filter = nn.Conv1d(channel, channel, kernel_size=1) - self.cat_filter = nn.Sequential( - nn.Conv1d(2 * channel, 2 * channel, kernel_size=1), - nn.SyncBatchNorm(2 * channel), - nn.ReLU(), - nn.Conv1d(2 * channel, channel, kernel_size=1), - ) - - def forward(self, desc1, desc2, weight_v=None): - # desc1(q) attend to desc2(k,v) - batch_size = desc1.shape[0] - query, key, value = ( - self.query_filter(desc1).view(batch_size, self.head, self.head_dim, -1), - self.key_filter(desc2).view(batch_size, self.head, self.head_dim, -1), - self.value_filter(desc2).view(batch_size, self.head, self.head_dim, -1), - ) - if weight_v is not None: - value = value * weight_v.view(batch_size, 1, 1, -1) - score = torch.softmax( - torch.einsum("bhdn,bhdm->bhnm", query, key) / self.head_dim**0.5, dim=-1 - ) - add_value = torch.einsum("bhnm,bhdm->bhdn", score, value).reshape( - batch_size, self.head_dim * self.head, -1 - ) - add_value = self.mh_filter(add_value) - desc1_new = desc1 + self.cat_filter(torch.cat([desc1, add_value], dim=1)) - return desc1_new - - -class hybrid_block(nn.Module): - def __init__(self, channel, head): - nn.Module.__init__(self) - self.head = head - self.channel = channel - self.attention_block_down = attention_propagantion(channel, head) - self.cluster_filter = nn.Sequential( - nn.Conv1d(2 * channel, 2 * channel, kernel_size=1), - nn.SyncBatchNorm(2 * channel), - nn.ReLU(), - nn.Conv1d(2 * channel, 2 * channel, kernel_size=1), - ) - self.cross_filter = attention_propagantion(channel, head) - self.confidence_filter = PointCN(2 * channel, 1) - self.attention_block_self = attention_propagantion(channel, head) - self.attention_block_up = attention_propagantion(channel, head) - - def forward(self, desc1, desc2, seed_index1, seed_index2): - cluster1, cluster2 = desc1.gather( - dim=-1, index=seed_index1.unsqueeze(1).expand(-1, self.channel, -1) - ), desc2.gather( - dim=-1, index=seed_index2.unsqueeze(1).expand(-1, self.channel, -1) - ) - - # pooling - cluster1, cluster2 = self.attention_block_down( - cluster1, desc1 - ), self.attention_block_down(cluster2, desc2) - concate_cluster = self.cluster_filter(torch.cat([cluster1, cluster2], dim=1)) - # filtering - cluster1, cluster2 = self.cross_filter( - concate_cluster[:, : self.channel], concate_cluster[:, self.channel :] - ), self.cross_filter( - concate_cluster[:, self.channel :], concate_cluster[:, : self.channel] - ) - cluster1, cluster2 = self.attention_block_self( - cluster1, cluster1 - ), self.attention_block_self(cluster2, cluster2) - # unpooling - seed_weight = self.confidence_filter(torch.cat([cluster1, cluster2], 
dim=1)) - seed_weight = torch.sigmoid(seed_weight).squeeze(1) - desc1_new, desc2_new = self.attention_block_up( - desc1, cluster1, seed_weight - ), self.attention_block_up(desc2, cluster2, seed_weight) - return desc1_new, desc2_new, seed_weight - - -class matcher(nn.Module): - def __init__(self, config): - nn.Module.__init__(self) - self.seed_top_k = config.seed_top_k - self.conf_bar = config.conf_bar - self.seed_radius_coe = config.seed_radius_coe - self.use_score_encoding = config.use_score_encoding - self.detach_iter = config.detach_iter - self.seedlayer = config.seedlayer - self.layer_num = config.layer_num - self.sink_iter = config.sink_iter - - self.position_encoder = nn.Sequential( - nn.Conv1d(3, 32, kernel_size=1) - if config.use_score_encoding - else nn.Conv1d(2, 32, kernel_size=1), - nn.SyncBatchNorm(32), - nn.ReLU(), - nn.Conv1d(32, 64, kernel_size=1), - nn.SyncBatchNorm(64), - nn.ReLU(), - nn.Conv1d(64, 128, kernel_size=1), - nn.SyncBatchNorm(128), - nn.ReLU(), - nn.Conv1d(128, 256, kernel_size=1), - nn.SyncBatchNorm(256), - nn.ReLU(), - nn.Conv1d(256, config.net_channels, kernel_size=1), - ) - - self.hybrid_block = nn.Sequential( - *[ - hybrid_block(config.net_channels, config.head) - for _ in range(config.layer_num) - ] - ) - self.final_project = nn.Conv1d( - config.net_channels, config.net_channels, kernel_size=1 - ) - self.dustbin = nn.Parameter(torch.tensor(1.5, dtype=torch.float32)) - - # if reseeding - if len(config.seedlayer) != 1: - self.mid_dustbin = nn.ParameterDict( - { - str(i): nn.Parameter(torch.tensor(2, dtype=torch.float32)) - for i in config.seedlayer[1:] - } - ) - self.mid_final_project = nn.Conv1d( - config.net_channels, config.net_channels, kernel_size=1 - ) - - def forward(self, data, test_mode=True): - x1, x2, desc1, desc2 = ( - data["x1"][:, :, :2], - data["x2"][:, :, :2], - data["desc1"], - data["desc2"], - ) - desc1, desc2 = torch.nn.functional.normalize( - desc1, dim=-1 - ), torch.nn.functional.normalize(desc2, dim=-1) - if test_mode: - encode_x1, encode_x2 = data["x1"], data["x2"] - else: - encode_x1, encode_x2 = data["aug_x1"], data["aug_x2"] - - # preparation - desc_dismat = (2 - 2 * torch.matmul(desc1, desc2.transpose(1, 2))).sqrt_() - values, nn_index = torch.topk( - desc_dismat, k=2, largest=False, dim=-1, sorted=True - ) - nn_index2 = torch.min(desc_dismat, dim=1).indices.squeeze(1) - inverse_ratio_score, nn_index1 = ( - values[:, :, 1] / values[:, :, 0], - nn_index[:, :, 0], - ) # get inverse score - - # initial seeding - seed_index1, seed_index2 = seeding( - nn_index1, - nn_index2, - x1, - x2, - self.seed_top_k[0], - inverse_ratio_score, - self.conf_bar[0], - self.seed_radius_coe, - test=test_mode, - ) - - # position encoding - desc1, desc2 = desc1.transpose(1, 2), desc2.transpose(1, 2) - if not self.use_score_encoding: - encode_x1, encode_x2 = encode_x1[:, :, :2], encode_x2[:, :, :2] - encode_x1, encode_x2 = encode_x1.transpose(1, 2), encode_x2.transpose(1, 2) - x1_pos_embedding, x2_pos_embedding = self.position_encoder( - encode_x1 - ), self.position_encoder(encode_x2) - aug_desc1, aug_desc2 = x1_pos_embedding + desc1, x2_pos_embedding + desc2 - - seed_weight_tower, mid_p_tower, seed_index_tower, nn_index_tower = ( - [], - [], - [], - [], - ) - seed_index_tower.append(torch.stack([seed_index1, seed_index2], dim=-1)) - nn_index_tower.append(nn_index1) - - seed_para_index = 0 - for i in range(self.layer_num): - # mid seeding - if i in self.seedlayer and i != 0: - seed_para_index += 1 - aug_desc1, aug_desc2 = self.mid_final_project( - 
aug_desc1 - ), self.mid_final_project(aug_desc2) - M = torch.matmul(aug_desc1.transpose(1, 2), aug_desc2) - p = sink_algorithm( - M, self.mid_dustbin[str(i)], self.sink_iter[seed_para_index - 1] - ) - mid_p_tower.append(p) - # rematching with p - values, nn_index = torch.topk(p[:, :-1, :-1], k=1, dim=-1) - nn_index2 = torch.max(p[:, :-1, :-1], dim=1).indices.squeeze(1) - p_match_score, nn_index1 = values[:, :, 0], nn_index[:, :, 0] - # reseeding - seed_index1, seed_index2 = seeding( - nn_index1, - nn_index2, - x1, - x2, - self.seed_top_k[seed_para_index], - p_match_score, - self.conf_bar[seed_para_index], - self.seed_radius_coe, - test=test_mode, - ) - seed_index_tower.append( - torch.stack([seed_index1, seed_index2], dim=-1) - ), nn_index_tower.append(nn_index1) - if not test_mode and data["step"] < self.detach_iter: - aug_desc1, aug_desc2 = aug_desc1.detach(), aug_desc2.detach() - - aug_desc1, aug_desc2, seed_weight = self.hybrid_block[i]( - aug_desc1, aug_desc2, seed_index1, seed_index2 - ) - seed_weight_tower.append(seed_weight) - - aug_desc1, aug_desc2 = self.final_project(aug_desc1), self.final_project( - aug_desc2 - ) - cmat = torch.matmul(aug_desc1.transpose(1, 2), aug_desc2) - p = sink_algorithm(cmat, self.dustbin, self.sink_iter[-1]) - # seed_weight_tower: l*b*k - # seed_index_tower: l*b*k*2 - # nn_index_tower: seed_l*b - return { - "p": p, - "seed_conf": seed_weight_tower, - "seed_index": seed_index_tower, - "mid_p": mid_p_tower, - "nn_index": nn_index_tower, - } diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/__init__.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/__init__.py deleted file mode 100644 index ce2930f62a0091e06b37575b96db2ae51ca7908e..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/__init__.py +++ /dev/null @@ -1,28 +0,0 @@ -import mmcv - -from .version import __version__, short_version - - -def digit_version(version_str): - digit_version = [] - for x in version_str.split('.'): - if x.isdigit(): - digit_version.append(int(x)) - elif x.find('rc') != -1: - patch_version = x.split('rc') - digit_version.append(int(patch_version[0]) - 1) - digit_version.append(int(patch_version[1])) - return digit_version - - -mmcv_minimum_version = '1.2.4' -mmcv_maximum_version = '1.4.0' -mmcv_version = digit_version(mmcv.__version__) - - -assert (mmcv_version >= digit_version(mmcv_minimum_version) - and mmcv_version <= digit_version(mmcv_maximum_version)), \ - f'MMCV=={mmcv.__version__} is used but incompatible. ' \ - f'Please install mmcv>={mmcv_minimum_version}, <={mmcv_maximum_version}.' - -__all__ = ['__version__', 'short_version'] diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/contour_expand.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/contour_expand.py deleted file mode 100644 index ea1111e1768b5f27e118bf7dbc0d9c70a7afd6d7..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/ops/contour_expand.py +++ /dev/null @@ -1,49 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import numpy as np -import torch - -from ..utils import ext_loader - -ext_module = ext_loader.load_ext('_ext', ['contour_expand']) - - -def contour_expand(kernel_mask, internal_kernel_label, min_kernel_area, - kernel_num): - """Expand kernel contours so that foreground pixels are assigned into - instances. 
- - Arguments: - kernel_mask (np.array or Tensor): The instance kernel mask with - size hxw. - internal_kernel_label (np.array or Tensor): The instance internal - kernel label with size hxw. - min_kernel_area (int): The minimum kernel area. - kernel_num (int): The instance kernel number. - - Returns: - label (list): The instance index map with size hxw. - """ - assert isinstance(kernel_mask, (torch.Tensor, np.ndarray)) - assert isinstance(internal_kernel_label, (torch.Tensor, np.ndarray)) - assert isinstance(min_kernel_area, int) - assert isinstance(kernel_num, int) - - if isinstance(kernel_mask, np.ndarray): - kernel_mask = torch.from_numpy(kernel_mask) - if isinstance(internal_kernel_label, np.ndarray): - internal_kernel_label = torch.from_numpy(internal_kernel_label) - - if torch.__version__ == 'parrots': - if kernel_mask.shape[0] == 0 or internal_kernel_label.shape[0] == 0: - label = [] - else: - label = ext_module.contour_expand( - kernel_mask, - internal_kernel_label, - min_kernel_area=min_kernel_area, - kernel_num=kernel_num) - label = label.tolist() - else: - label = ext_module.contour_expand(kernel_mask, internal_kernel_label, - min_kernel_area, kernel_num) - return label diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/sampler_seed.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/sampler_seed.py deleted file mode 100644 index ee0dc6bdd8df5775857028aaed5444c0f59caf80..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer_base/mmcv/runner/hooks/sampler_seed.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class DistSamplerSeedHook(Hook): - """Data-loading sampler for distributed training. - - When distributed training, it is only useful in conjunction with - :obj:`EpochBasedRunner`, while :obj:`IterBasedRunner` achieves the same - purpose with :obj:`IterLoader`. - """ - - def before_epoch(self, runner): - if hasattr(runner.data_loader.sampler, 'set_epoch'): - # in case the data loader uses `SequentialSampler` in Pytorch - runner.data_loader.sampler.set_epoch(runner.epoch) - elif hasattr(runner.data_loader.batch_sampler.sampler, 'set_epoch'): - # batch sampler in pytorch warps the sampler as its attributes. 
- runner.data_loader.batch_sampler.sampler.set_epoch(runner.epoch) diff --git a/spaces/Ryzal/rvc-models-new/lib/infer_pack/onnx_inference.py b/spaces/Ryzal/rvc-models-new/lib/infer_pack/onnx_inference.py deleted file mode 100644 index 6517853be49e61c427cf7cd9b5ed203f6d5f367e..0000000000000000000000000000000000000000 --- a/spaces/Ryzal/rvc-models-new/lib/infer_pack/onnx_inference.py +++ /dev/null @@ -1,145 +0,0 @@ -import onnxruntime -import librosa -import numpy as np -import soundfile - - -class ContentVec: - def __init__(self, vec_path="pretrained/vec-768-layer-12.onnx", device=None): - print("load model(s) from {}".format(vec_path)) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def __call__(self, wav): - return self.forward(wav) - - def forward(self, wav): - feats = wav - if feats.ndim == 2: # double channels - feats = feats.mean(-1) - assert feats.ndim == 1, feats.ndim - feats = np.expand_dims(np.expand_dims(feats, 0), 0) - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input)[0] - return logits.transpose(0, 2, 1) - - -def get_f0_predictor(f0_predictor, hop_length, sampling_rate, **kargs): - if f0_predictor == "pm": - from lib.infer_pack.modules.F0Predictor.PMF0Predictor import PMF0Predictor - - f0_predictor_object = PMF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "harvest": - from lib.infer_pack.modules.F0Predictor.HarvestF0Predictor import ( - HarvestF0Predictor, - ) - - f0_predictor_object = HarvestF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - elif f0_predictor == "dio": - from lib.infer_pack.modules.F0Predictor.DioF0Predictor import DioF0Predictor - - f0_predictor_object = DioF0Predictor( - hop_length=hop_length, sampling_rate=sampling_rate - ) - else: - raise Exception("Unknown f0 predictor") - return f0_predictor_object - - -class OnnxRVC: - def __init__( - self, - model_path, - sr=40000, - hop_size=512, - vec_path="vec-768-layer-12", - device="cpu", - ): - vec_path = f"pretrained/{vec_path}.onnx" - self.vec_model = ContentVec(vec_path, device) - if device == "cpu" or device is None: - providers = ["CPUExecutionProvider"] - elif device == "cuda": - providers = ["CUDAExecutionProvider", "CPUExecutionProvider"] - elif device == "dml": - providers = ["DmlExecutionProvider"] - else: - raise RuntimeError("Unsportted Device") - self.model = onnxruntime.InferenceSession(model_path, providers=providers) - self.sampling_rate = sr - self.hop_size = hop_size - - def forward(self, hubert, hubert_length, pitch, pitchf, ds, rnd): - onnx_input = { - self.model.get_inputs()[0].name: hubert, - self.model.get_inputs()[1].name: hubert_length, - self.model.get_inputs()[2].name: pitch, - self.model.get_inputs()[3].name: pitchf, - self.model.get_inputs()[4].name: ds, - self.model.get_inputs()[5].name: rnd, - } - return (self.model.run(None, onnx_input)[0] * 32767).astype(np.int16) - - def inference( - self, - raw_path, - sid, - f0_method="dio", - f0_up_key=0, - pad_time=0.5, - cr_threshold=0.02, - ): - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - f0_predictor = get_f0_predictor( - f0_method, - 
hop_length=self.hop_size, - sampling_rate=self.sampling_rate, - threshold=cr_threshold, - ) - wav, sr = librosa.load(raw_path, sr=self.sampling_rate) - org_length = len(wav) - if org_length / sr > 50.0: - raise RuntimeError("Reached Max Length") - - wav16k = librosa.resample(wav, orig_sr=self.sampling_rate, target_sr=16000) - wav16k = wav16k - - hubert = self.vec_model(wav16k) - hubert = np.repeat(hubert, 2, axis=2).transpose(0, 2, 1).astype(np.float32) - hubert_length = hubert.shape[1] - - pitchf = f0_predictor.compute_f0(wav, hubert_length) - pitchf = pitchf * 2 ** (f0_up_key / 12) - pitch = pitchf.copy() - f0_mel = 1127 * np.log(1 + pitch / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / ( - f0_mel_max - f0_mel_min - ) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - pitch = np.rint(f0_mel).astype(np.int64) - - pitchf = pitchf.reshape(1, len(pitchf)).astype(np.float32) - pitch = pitch.reshape(1, len(pitch)) - ds = np.array([sid]).astype(np.int64) - - rnd = np.random.randn(1, 192, hubert_length).astype(np.float32) - hubert_length = np.array([hubert_length]).astype(np.int64) - - out_wav = self.forward(hubert, hubert_length, pitch, pitchf, ds, rnd).squeeze() - out_wav = np.pad(out_wav, (0, 2 * self.hop_size), "constant") - return out_wav[0:org_length] diff --git a/spaces/SMD00/Image_Summarizer/README.md b/spaces/SMD00/Image_Summarizer/README.md deleted file mode 100644 index cb77116c80b0d3c4f8bc53a3f857101640b5255b..0000000000000000000000000000000000000000 --- a/spaces/SMD00/Image_Summarizer/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Image Summarizer -emoji: 🐠 -colorFrom: red -colorTo: gray -sdk: gradio -sdk_version: 3.35.2 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Salesforce/EDICT/my_diffusers/pipelines/ddpm/__init__.py b/spaces/Salesforce/EDICT/my_diffusers/pipelines/ddpm/__init__.py deleted file mode 100644 index 8889bdae1224e91916e0f8454bafba0ee566f3b9..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/EDICT/my_diffusers/pipelines/ddpm/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# flake8: noqa -from .pipeline_ddpm import DDPMPipeline diff --git a/spaces/SeViLA/SeViLA/lavis/runners/runner_iter.py b/spaces/SeViLA/SeViLA/lavis/runners/runner_iter.py deleted file mode 100644 index b8370f44ae8a7018763da64d693527223d220456..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/runners/runner_iter.py +++ /dev/null @@ -1,317 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import datetime -import logging -import os -import time - -import torch -import torch.distributed as dist -import webdataset as wds -from lavis.common.dist_utils import download_cached_file, is_main_process, main_process -from lavis.common.registry import registry -from lavis.common.utils import is_url -from lavis.datasets.data_utils import concat_datasets, reorg_datasets_by_split -from lavis.runners.runner_base import RunnerBase -from torch.utils.data.dataset import ChainDataset - - -@registry.register_runner("runner_iter") -class RunnerIter(RunnerBase): - """ - Run training based on the number of iterations. This is common when - the training dataset size is large. 
Underhood logic is similar to - epoch-based training by considering every #iters_per_inner_epoch as an - inner epoch. - - In iter-based runner, after every #iters_per_inner_epoch steps, we - - 1) do a validation epoch; - 2) schedule the learning rate; - 3) save the checkpoint. - - We refer every #iters_per_inner_epoch steps as an inner epoch. - """ - - def __init__(self, cfg, task, model, datasets, job_id): - super().__init__(cfg, task, model, datasets, job_id) - - self.start_iters = 0 - - self.max_iters = int(self.config.run_cfg.get("max_iters", -1)) - assert self.max_iters > 0, "max_iters must be greater than 0." - - self.iters_per_inner_epoch = int( - self.config.run_cfg.get("iters_per_inner_epoch", -1) - ) - assert ( - self.iters_per_inner_epoch > 0 - ), "iters_per_inner_epoch must be greater than 0." - - @property - def max_epoch(self): - return int(self.max_iters / self.iters_per_inner_epoch) - - @property - def cur_epoch(self): - try: - return self.train_loader.epoch - except AttributeError: - # pipeline data (e.g. LAION) is streaming, have no concept of epoch - return 0 - - def _progress(self, cur_iters): - return "{}_iters={}".format(self.cur_epoch, cur_iters) - - def train(self): - start_time = time.time() - best_agg_metric = 0 - best_iters = 0 - - self.log_config() - - # resume from checkpoint if specified - if not self.evaluate_only and self.resume_ckpt_path is not None: - self._load_checkpoint(self.resume_ckpt_path) - - for start_iters in range( - self.start_iters, self.max_iters, self.iters_per_inner_epoch - ): - end_iters = start_iters + self.iters_per_inner_epoch - - # training phase - if not self.evaluate_only: - logging.info( - "Start training, max_iters={}, in total {} inner epochs.".format( - self.max_iters, int(self.max_iters / self.iters_per_inner_epoch) - ) - ) - - train_stats = self.train_iters(self.cur_epoch, start_iters) - self.log_stats(split_name="train", stats=train_stats) - - # evaluation phase - if len(self.valid_splits) > 0: - for split_name in self.valid_splits: - logging.info("Evaluating on {}.".format(split_name)) - - val_log = self.eval_epoch( - split_name=split_name, cur_epoch=self._progress(end_iters) - ) - if val_log is not None: - if is_main_process(): - assert ( - "agg_metrics" in val_log - ), "No agg_metrics found in validation log." - - agg_metrics = val_log["agg_metrics"] - if agg_metrics > best_agg_metric and split_name == "val": - best_iters, best_agg_metric = end_iters, agg_metrics - - self._save_checkpoint(end_iters, is_best=True) - - val_log.update({"best_iters": best_iters}) - self.log_stats(val_log, split_name) - - else: - # if no validation split is provided, we just save the checkpoint at the end of each inner epoch. 
- if not self.evaluate_only: - self._save_checkpoint(end_iters, is_best=False) - - if self.evaluate_only: - break - dist.barrier() - - # testing phase - self.evaluate(cur_epoch=self.cur_epoch) - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - logging.info("Training time {}".format(total_time_str)) - - def train_iters(self, epoch, start_iters): - # train by iterations - self.model.train() - - return self.task.train_iters( - epoch=epoch, - start_iters=start_iters, - iters_per_inner_epoch=self.iters_per_inner_epoch, - model=self.model, - data_loader=self.train_loader, - optimizer=self.optimizer, - scaler=self.scaler, - lr_scheduler=self.lr_scheduler, - cuda_enabled=self.cuda_enabled, - log_freq=self.log_freq, - accum_grad_iters=self.accum_grad_iters, - ) - - @main_process - def _save_checkpoint(self, cur_iters, is_best=False): - save_obj = { - "model": self.unwrap_dist_model(self.model).state_dict(), - "optimizer": self.optimizer.state_dict(), - "config": self.config.to_dict(), - "scaler": self.scaler.state_dict() if self.scaler else None, - "iters": cur_iters, - } - save_to = os.path.join( - self.output_dir, - "checkpoint_{}.pth".format("best" if is_best else cur_iters), - ) - logging.info("Saving checkpoint at iters {} to {}.".format(cur_iters, save_to)) - torch.save(save_obj, save_to) - - def _load_checkpoint(self, url_or_filename): - """ - Resume from a checkpoint. - """ - if is_url(url_or_filename): - cached_file = download_cached_file( - url_or_filename, check_hash=False, progress=True - ) - checkpoint = torch.load(cached_file, map_location=self.device) - elif os.path.isfile(url_or_filename): - checkpoint = torch.load(url_or_filename, map_location=self.device) - else: - raise RuntimeError("checkpoint url or path is invalid") - - state_dict = checkpoint["model"] - self.unwrap_dist_model(self.model).load_state_dict(state_dict) - - self.optimizer.load_state_dict(checkpoint["optimizer"]) - if self.scaler and "scaler" in checkpoint: - self.scaler.load_state_dict(checkpoint["scaler"]) - - self.start_iters = checkpoint["iters"] + 1 - logging.info("Resume checkpoint from {}".format(url_or_filename)) - - @property - def dataloaders(self) -> dict: - """ - A property to get and create dataloaders by split just in need. - - If no train_dataset_ratio is provided, concatenate map-style datasets and - chain wds.DataPipe datasets separately. Training set becomes a tuple - (ConcatDataset, ChainDataset), both are optional but at least one of them is - required. The resultant ConcatDataset and ChainDataset will be sampled evenly. - - If train_dataset_ratio is provided, create a MultiIterLoader to sample - each dataset by ratios during training. - - Currently do not support multiple datasets for validation and test. - - Returns: - dict: {split_name: (tuples of) dataloader} - """ - if self._dataloaders is None: - # reoganize datasets by split and concatenate/chain if necessary - dataset_ratios = self.config.run_cfg.get("train_dataset_ratios", None) - - if dataset_ratios is None: - # concatenate map-style datasets and chain wds.DataPipe datasets separately - # training set becomes a tuple (ConcatDataset, ChainDataset), both are - # optional but at least one of them is required. The resultant ConcatDataset - # and ChainDataset will be sampled evenly. - logging.info( - "dataset_ratios not specified, datasets will be concatenated (map-style datasets) or chained (webdataset.DataPipeline)." 
- ) - - datasets = reorg_datasets_by_split(self.datasets) - self.datasets = concat_datasets(datasets) - else: - # create multi-loader with the provided ratios, without concatenating or chaining - missing_keys = [k for k in dataset_ratios if k not in self.datasets] - if len(missing_keys) > 0: - raise ValueError( - "Datasets with the following split names are not found: {}".format( - missing_keys - ) - ) - - unexpected_keys = [k for k in self.datasets if k not in dataset_ratios] - if len(unexpected_keys) > 0: - raise ValueError( - "Datasets with the following split names are not expected: {}".format( - unexpected_keys - ) - ) - - dataset_ratios = [float(dataset_ratios[k]) for k in self.datasets] - self.datasets = reorg_datasets_by_split(self.datasets) - # to keep the same structure as return value of concat_datasets - self.datasets = { - k: v[0] if len(v) == 1 else v for k, v in datasets.items() - } - - # print dataset statistics after concatenation/chaining - for split_name in self.datasets: - if isinstance(self.datasets[split_name], tuple) or isinstance( - self.datasets[split_name], list - ): - # mixed wds.DataPipeline and torch.utils.data.Dataset - num_records = sum( - [ - len(d) - if not type(d) in [wds.DataPipeline, ChainDataset] - else 0 - for d in self.datasets[split_name] - ] - ) - - else: - try: - # a single map-style dataset - num_records = len(self.datasets[split_name]) - except TypeError: - # a single wds.DataPipeline or ChainDataset - num_records = -1 - logging.info( - "Only a single wds.DataPipeline dataset, no __len__ attribute." - ) - - if num_records >= 0: - logging.info( - "Loaded {} records for {} split from the dataset.".format( - num_records, split_name - ) - ) - - # create dataloaders - split_names = sorted(self.datasets.keys()) - - datasets = [self.datasets[split] for split in split_names] - is_trains = [split in self.train_splits for split in split_names] - - batch_sizes = [ - self.config.run_cfg.batch_size_train - if split == "train" - else self.config.run_cfg.batch_size_eval - for split in split_names - ] - - collate_fns = [] - for dataset in datasets: - if isinstance(dataset, tuple) or isinstance(dataset, list): - collate_fns.append([getattr(d, "collater", None) for d in dataset]) - else: - collate_fns.append(getattr(dataset, "collater", None)) - - dataloaders = self.create_loaders( - datasets=datasets, - num_workers=self.config.run_cfg.num_workers, - batch_sizes=batch_sizes, - is_trains=is_trains, - collate_fns=collate_fns, - dataset_ratios=dataset_ratios, - ) - - self._dataloaders = {k: v for k, v in zip(split_names, dataloaders)} - - return self._dataloaders diff --git a/spaces/SeyedAli/Persian-Speech-synthesis/app.py b/spaces/SeyedAli/Persian-Speech-synthesis/app.py deleted file mode 100644 index fded38ee9c5956d513eff0a6c2c04218cd2baf5e..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Persian-Speech-synthesis/app.py +++ /dev/null @@ -1,20 +0,0 @@ -import tempfile ,os -import gradio as gr -from transformers import VitsModel, AutoTokenizer,pipeline -import torch -import numpy as np -import torchaudio - -model = VitsModel.from_pretrained("SeyedAli/Persian-Speech-synthesis") -tokenizer = AutoTokenizer.from_pretrained("SeyedAli/Persian-Speech-synthesis") -text_input = gr.TextArea(label="متن فارسی",text_align="right",rtl=True,type="text") -audio_output = gr.Audio(label="صوت گفتار فارسی", type="filepath") -def TTS(text): - inputs = tokenizer(text, return_tensors="pt") - with torch.no_grad(): - output = model(**inputs).waveform - with 
tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp: - torchaudio.save(fp, output, model.config.sampling_rate,format="wav") - return fp.name -iface = gr.Interface(fn=TTS, inputs=text_input, outputs=audio_output) -iface.launch(share=False) \ No newline at end of file diff --git a/spaces/ShreyashS/NLP-Sentiment_Analysis/app.py b/spaces/ShreyashS/NLP-Sentiment_Analysis/app.py deleted file mode 100644 index 7bee4df26c87001f6e2405b7b355828b9b629338..0000000000000000000000000000000000000000 --- a/spaces/ShreyashS/NLP-Sentiment_Analysis/app.py +++ /dev/null @@ -1,26 +0,0 @@ -#!/usr/bin/env python -# coding: utf-8 - -# In[4]: - - -import pickle -import streamlit as st - -model=pickle.load(open('sentiment_analysis_model.p','rb')) - -st.title(' Sentiment Analysis Model ') - -st.write('Enter text for sentiment analysis:') -message = st.text_area("","Type Here ...") -if st.button('PREDICT'): - disp=" " - a = model.predict([message])[0] - if(a == 'pos'): - disp = "Positive Sentiment!" - else: - disp = "Negative Sentiment!" - st.write('The sentiment of given text is:', disp) - # st.header(f"**{a}**") - # q = model.predict_proba([message]) - diff --git a/spaces/Silentlin/DiffSinger/usr/diff/shallow_diffusion_tts.py b/spaces/Silentlin/DiffSinger/usr/diff/shallow_diffusion_tts.py deleted file mode 100644 index 8295d48ea0028dd0b3fdf1315bb8f129d0070810..0000000000000000000000000000000000000000 --- a/spaces/Silentlin/DiffSinger/usr/diff/shallow_diffusion_tts.py +++ /dev/null @@ -1,324 +0,0 @@ -import math -import random -from collections import deque -from functools import partial -from inspect import isfunction -from pathlib import Path -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn -from tqdm import tqdm -from einops import rearrange - -from modules.fastspeech.fs2 import FastSpeech2 -from modules.diffsinger_midi.fs2 import FastSpeech2MIDI -from utils.hparams import hparams - - - -def exists(x): - return x is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -# gaussian diffusion trainer class - -def extract(a, t, x_shape): - b, *_ = t.shape - out = a.gather(-1, t) - return out.reshape(b, *((1,) * (len(x_shape) - 1))) - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() - - -def linear_beta_schedule(timesteps, max_beta=hparams.get('max_beta', 0.01)): - """ - linear schedule - """ - betas = np.linspace(1e-4, max_beta, timesteps) - return betas - - -def cosine_beta_schedule(timesteps, s=0.008): - """ - cosine schedule - as proposed in https://openreview.net/forum?id=-NEXDKk8gZ - """ - steps = timesteps + 1 - x = np.linspace(0, steps, steps) - alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2 - alphas_cumprod = alphas_cumprod / alphas_cumprod[0] - betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1]) - return np.clip(betas, a_min=0, a_max=0.999) - - -beta_schedule = { - "cosine": cosine_beta_schedule, - "linear": linear_beta_schedule, -} - - -class GaussianDiffusion(nn.Module): - def __init__(self, phone_encoder, out_dims, denoise_fn, - timesteps=1000, K_step=1000, loss_type=hparams.get('diff_loss_type', 'l1'), betas=None, spec_min=None, spec_max=None): - super().__init__() - self.denoise_fn = denoise_fn - if hparams.get('use_midi') is not None and hparams['use_midi']: - 
self.fs2 = FastSpeech2MIDI(phone_encoder, out_dims) - else: - self.fs2 = FastSpeech2(phone_encoder, out_dims) - self.mel_bins = out_dims - - if exists(betas): - betas = betas.detach().cpu().numpy() if isinstance(betas, torch.Tensor) else betas - else: - if 'schedule_type' in hparams.keys(): - betas = beta_schedule[hparams['schedule_type']](timesteps) - else: - betas = cosine_beta_schedule(timesteps) - - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.K_step = K_step - self.loss_type = loss_type - - self.noise_list = deque(maxlen=4) - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod) - # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))) - - self.register_buffer('spec_min', torch.FloatTensor(spec_min)[None, None, :hparams['keep_bins']]) - self.register_buffer('spec_max', torch.FloatTensor(spec_max)[None, None, :hparams['keep_bins']]) - - def q_mean_variance(self, x_start, t): - mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - variance = extract(1. 
- self.alphas_cumprod, t, x_start.shape) - log_variance = extract(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, cond, clip_denoised: bool): - noise_pred = self.denoise_fn(x, t, cond=cond) - x_recon = self.predict_start_from_noise(x, t=t, noise=noise_pred) - - if clip_denoised: - x_recon.clamp_(-1., 1.) - - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, cond=cond, clip_denoised=clip_denoised) - noise = noise_like(x.shape, device, repeat_noise) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def p_sample_plms(self, x, t, interval, cond, clip_denoised=True, repeat_noise=False): - """ - Use the PLMS method from [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778). 
- """ - - def get_x_pred(x, noise_t, t): - a_t = extract(self.alphas_cumprod, t, x.shape) - if t[0] < interval: - a_prev = torch.ones_like(a_t) - else: - a_prev = extract(self.alphas_cumprod, torch.max(t-interval, torch.zeros_like(t)), x.shape) - a_t_sq, a_prev_sq = a_t.sqrt(), a_prev.sqrt() - - x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x - 1 / (a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t) - x_pred = x + x_delta - - return x_pred - - noise_list = self.noise_list - noise_pred = self.denoise_fn(x, t, cond=cond) - - if len(noise_list) == 0: - x_pred = get_x_pred(x, noise_pred, t) - noise_pred_prev = self.denoise_fn(x_pred, max(t-interval, 0), cond=cond) - noise_pred_prime = (noise_pred + noise_pred_prev) / 2 - elif len(noise_list) == 1: - noise_pred_prime = (3 * noise_pred - noise_list[-1]) / 2 - elif len(noise_list) == 2: - noise_pred_prime = (23 * noise_pred - 16 * noise_list[-1] + 5 * noise_list[-2]) / 12 - elif len(noise_list) >= 3: - noise_pred_prime = (55 * noise_pred - 59 * noise_list[-1] + 37 * noise_list[-2] - 9 * noise_list[-3]) / 24 - - x_prev = get_x_pred(x, noise_pred_prime, t) - noise_list.append(noise_pred) - - return x_prev - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return ( - extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise - ) - - def p_losses(self, x_start, t, cond, noise=None, nonpadding=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - x_recon = self.denoise_fn(x_noisy, t, cond) - - if self.loss_type == 'l1': - if nonpadding is not None: - loss = ((noise - x_recon).abs() * nonpadding.unsqueeze(1)).mean() - else: - # print('are you sure w/o nonpadding?') - loss = (noise - x_recon).abs().mean() - - elif self.loss_type == 'l2': - loss = F.mse_loss(noise, x_recon) - else: - raise NotImplementedError() - - return loss - - def forward(self, txt_tokens, mel2ph=None, spk_embed=None, - ref_mels=None, f0=None, uv=None, energy=None, infer=False, **kwargs): - b, *_, device = *txt_tokens.shape, txt_tokens.device - ret = self.fs2(txt_tokens, mel2ph, spk_embed, ref_mels, f0, uv, energy, - skip_decoder=(not infer), infer=infer, **kwargs) - cond = ret['decoder_inp'].transpose(1, 2) - - if not infer: - t = torch.randint(0, self.K_step, (b,), device=device).long() - x = ref_mels - x = self.norm_spec(x) - x = x.transpose(1, 2)[:, None, :, :] # [B, 1, M, T] - ret['diff_loss'] = self.p_losses(x, t, cond) - # nonpadding = (mel2ph != 0).float() - # ret['diff_loss'] = self.p_losses(x, t, cond, nonpadding=nonpadding) - else: - ret['fs2_mel'] = ret['mel_out'] - fs2_mels = ret['mel_out'] - t = self.K_step - fs2_mels = self.norm_spec(fs2_mels) - fs2_mels = fs2_mels.transpose(1, 2)[:, None, :, :] - - x = self.q_sample(x_start=fs2_mels, t=torch.tensor([t - 1], device=device).long()) - if hparams.get('gaussian_start') is not None and hparams['gaussian_start']: - print('===> gaussion start.') - shape = (cond.shape[0], 1, self.mel_bins, cond.shape[2]) - x = torch.randn(shape, device=device) - - if hparams.get('pndm_speedup'): - print('===> pndm speed:', hparams['pndm_speedup']) - self.noise_list = deque(maxlen=4) - iteration_interval = hparams['pndm_speedup'] - for i in tqdm(reversed(range(0, t, iteration_interval)), desc='sample time step', - total=t // iteration_interval): - x = self.p_sample_plms(x, 
torch.full((b,), i, device=device, dtype=torch.long), iteration_interval, - cond) - else: - for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t): - x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond) - x = x[:, 0].transpose(1, 2) - if mel2ph is not None: # for singing - ret['mel_out'] = self.denorm_spec(x) * ((mel2ph > 0).float()[:, :, None]) - else: - ret['mel_out'] = self.denorm_spec(x) - return ret - - def norm_spec(self, x): - return (x - self.spec_min) / (self.spec_max - self.spec_min) * 2 - 1 - - def denorm_spec(self, x): - return (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min - - def cwt2f0_norm(self, cwt_spec, mean, std, mel2ph): - return self.fs2.cwt2f0_norm(cwt_spec, mean, std, mel2ph) - - def out2mel(self, x): - return x - - -class OfflineGaussianDiffusion(GaussianDiffusion): - def forward(self, txt_tokens, mel2ph=None, spk_embed=None, - ref_mels=None, f0=None, uv=None, energy=None, infer=False, **kwargs): - b, *_, device = *txt_tokens.shape, txt_tokens.device - - ret = self.fs2(txt_tokens, mel2ph, spk_embed, ref_mels, f0, uv, energy, - skip_decoder=True, infer=True, **kwargs) - cond = ret['decoder_inp'].transpose(1, 2) - fs2_mels = ref_mels[1] - ref_mels = ref_mels[0] - - if not infer: - t = torch.randint(0, self.K_step, (b,), device=device).long() - x = ref_mels - x = self.norm_spec(x) - x = x.transpose(1, 2)[:, None, :, :] # [B, 1, M, T] - ret['diff_loss'] = self.p_losses(x, t, cond) - else: - t = self.K_step - fs2_mels = self.norm_spec(fs2_mels) - fs2_mels = fs2_mels.transpose(1, 2)[:, None, :, :] - - x = self.q_sample(x_start=fs2_mels, t=torch.tensor([t - 1], device=device).long()) - - if hparams.get('gaussian_start') is not None and hparams['gaussian_start']: - print('===> gaussion start.') - shape = (cond.shape[0], 1, self.mel_bins, cond.shape[2]) - x = torch.randn(shape, device=device) - for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t): - x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond) - x = x[:, 0].transpose(1, 2) - ret['mel_out'] = self.denorm_spec(x) - return ret diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/debugger.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/debugger.py deleted file mode 100644 index c8082e34e7c96aa3823a2ae13d5a92f82d9d417e..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/IPython/core/debugger.py +++ /dev/null @@ -1,997 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Pdb debugger class. - - -This is an extension to PDB which adds a number of new features. -Note that there is also the `IPython.terminal.debugger` class which provides UI -improvements. - -We also strongly recommend to use this via the `ipdb` package, which provides -extra configuration options. - -Among other things, this subclass of PDB: - - supports many IPython magics like pdef/psource - - hide frames in tracebacks based on `__tracebackhide__` - - allows to skip frames based on `__debuggerskip__` - -The skipping and hiding frames are configurable via the `skip_predicates` -command. - -By default, frames from readonly files will be hidden, frames containing -``__tracebackhide__=True`` will be hidden. - -Frames containing ``__debuggerskip__`` will be stepped over, frames who's parent -frames value of ``__debuggerskip__`` is ``True`` will be skipped. - - >>> def helpers_helper(): - ... pass - ... - ... def helper_1(): - ... print("don't step in me") - ... 
helpers_helpers() # will be stepped over unless breakpoint set. - ... - ... - ... def helper_2(): - ... print("in me neither") - ... - -One can define a decorator that wraps a function between the two helpers: - - >>> def pdb_skipped_decorator(function): - ... - ... - ... def wrapped_fn(*args, **kwargs): - ... __debuggerskip__ = True - ... helper_1() - ... __debuggerskip__ = False - ... result = function(*args, **kwargs) - ... __debuggerskip__ = True - ... helper_2() - ... # setting __debuggerskip__ to False again is not necessary - ... return result - ... - ... return wrapped_fn - -When decorating a function, ipdb will directly step into ``bar()`` by -default: - - >>> @foo_decorator - ... def bar(x, y): - ... return x * y - - -You can toggle the behavior with - - ipdb> skip_predicates debuggerskip false - -or configure it in your ``.pdbrc`` - - - -License -------- - -Modified from the standard pdb.Pdb class to avoid including readline, so that -the command line completion of other programs which include this isn't -damaged. - -In the future, this class will be expanded with improvements over the standard -pdb. - -The original code in this file is mainly lifted out of cmd.py in Python 2.2, -with minor changes. Licensing should therefore be under the standard Python -terms. For details on the PSF (Python Software Foundation) standard license, -see: - -https://docs.python.org/2/license.html - - -All the changes since then are under the same license as IPython. - -""" - -#***************************************************************************** -# -# This file is licensed under the PSF license. -# -# Copyright (C) 2001 Python Software Foundation, www.python.org -# Copyright (C) 2005-2006 Fernando Perez. <fperez@colorado.edu> -# -# -#***************************************************************************** - -import inspect -import linecache -import sys -import re -import os - -from IPython import get_ipython -from IPython.utils import PyColorize -from IPython.utils import coloransi, py3compat -from IPython.core.excolors import exception_colors - -# skip module docstests -__skip_doctest__ = True - -prompt = 'ipdb> ' - -# We have to check this directly from sys.argv, config struct not yet available -from pdb import Pdb as OldPdb - -# Allow the set_trace code to operate outside of an ipython instance, even if -# it does so with some limitations. The rest of this support is implemented in -# the Tracer constructor. - -DEBUGGERSKIP = "__debuggerskip__" - - -def make_arrow(pad): - """generate the leading arrow in front of traceback or debugger""" - if pad >= 2: - return '-'*(pad-2) + '> ' - elif pad == 1: - return '>' - return '' - - -def BdbQuit_excepthook(et, ev, tb, excepthook=None): - """Exception hook which handles `BdbQuit` exceptions. - - All other exceptions are processed using the `excepthook` - parameter. - """ - raise ValueError( - "`BdbQuit_excepthook` is deprecated since version 5.1", - ) - - -def BdbQuit_IPython_excepthook(self, et, ev, tb, tb_offset=None): - raise ValueError( - "`BdbQuit_IPython_excepthook` is deprecated since version 5.1", - DeprecationWarning, stacklevel=2) - - -RGX_EXTRA_INDENT = re.compile(r'(?<=\n)\s+') - - -def strip_indentation(multiline_string): - return RGX_EXTRA_INDENT.sub('', multiline_string) - - -def decorate_fn_with_doc(new_fn, old_fn, additional_text=""): - """Make new_fn have old_fn's doc string. This is particularly useful - for the ``do_...`` commands that hook into the help system. 
- Adapted from from a comp.lang.python posting - by Duncan Booth.""" - def wrapper(*args, **kw): - return new_fn(*args, **kw) - if old_fn.__doc__: - wrapper.__doc__ = strip_indentation(old_fn.__doc__) + additional_text - return wrapper - - -class Pdb(OldPdb): - """Modified Pdb class, does not load readline. - - for a standalone version that uses prompt_toolkit, see - `IPython.terminal.debugger.TerminalPdb` and - `IPython.terminal.debugger.set_trace()` - - - This debugger can hide and skip frames that are tagged according to some predicates. - See the `skip_predicates` commands. - - """ - - default_predicates = { - "tbhide": True, - "readonly": False, - "ipython_internal": True, - "debuggerskip": True, - } - - def __init__(self, completekey=None, stdin=None, stdout=None, context=5, **kwargs): - """Create a new IPython debugger. - - Parameters - ---------- - completekey : default None - Passed to pdb.Pdb. - stdin : default None - Passed to pdb.Pdb. - stdout : default None - Passed to pdb.Pdb. - context : int - Number of lines of source code context to show when - displaying stacktrace information. - **kwargs - Passed to pdb.Pdb. - - Notes - ----- - The possibilities are python version dependent, see the python - docs for more info. - """ - - # Parent constructor: - try: - self.context = int(context) - if self.context <= 0: - raise ValueError("Context must be a positive integer") - except (TypeError, ValueError) as e: - raise ValueError("Context must be a positive integer") from e - - # `kwargs` ensures full compatibility with stdlib's `pdb.Pdb`. - OldPdb.__init__(self, completekey, stdin, stdout, **kwargs) - - # IPython changes... - self.shell = get_ipython() - - if self.shell is None: - save_main = sys.modules['__main__'] - # No IPython instance running, we must create one - from IPython.terminal.interactiveshell import \ - TerminalInteractiveShell - self.shell = TerminalInteractiveShell.instance() - # needed by any code which calls __import__("__main__") after - # the debugger was entered. See also #9941. - sys.modules["__main__"] = save_main - - - color_scheme = self.shell.colors - - self.aliases = {} - - # Create color table: we copy the default one from the traceback - # module and add a few attributes needed for debugging - self.color_scheme_table = exception_colors() - - # shorthands - C = coloransi.TermColors - cst = self.color_scheme_table - - cst['NoColor'].colors.prompt = C.NoColor - cst['NoColor'].colors.breakpoint_enabled = C.NoColor - cst['NoColor'].colors.breakpoint_disabled = C.NoColor - - cst['Linux'].colors.prompt = C.Green - cst['Linux'].colors.breakpoint_enabled = C.LightRed - cst['Linux'].colors.breakpoint_disabled = C.Red - - cst['LightBG'].colors.prompt = C.Blue - cst['LightBG'].colors.breakpoint_enabled = C.LightRed - cst['LightBG'].colors.breakpoint_disabled = C.Red - - cst['Neutral'].colors.prompt = C.Blue - cst['Neutral'].colors.breakpoint_enabled = C.LightRed - cst['Neutral'].colors.breakpoint_disabled = C.Red - - # Add a python parser so we can syntax highlight source while - # debugging. 
- self.parser = PyColorize.Parser(style=color_scheme) - self.set_colors(color_scheme) - - # Set the prompt - the default prompt is '(Pdb)' - self.prompt = prompt - self.skip_hidden = True - self.report_skipped = True - - # list of predicates we use to skip frames - self._predicates = self.default_predicates - - # - def set_colors(self, scheme): - """Shorthand access to the color table scheme selector method.""" - self.color_scheme_table.set_active_scheme(scheme) - self.parser.style = scheme - - def set_trace(self, frame=None): - if frame is None: - frame = sys._getframe().f_back - self.initial_frame = frame - return super().set_trace(frame) - - def _hidden_predicate(self, frame): - """ - Given a frame return whether it it should be hidden or not by IPython. - """ - - if self._predicates["readonly"]: - fname = frame.f_code.co_filename - # we need to check for file existence and interactively define - # function would otherwise appear as RO. - if os.path.isfile(fname) and not os.access(fname, os.W_OK): - return True - - if self._predicates["tbhide"]: - if frame in (self.curframe, getattr(self, "initial_frame", None)): - return False - frame_locals = self._get_frame_locals(frame) - if "__tracebackhide__" not in frame_locals: - return False - return frame_locals["__tracebackhide__"] - return False - - def hidden_frames(self, stack): - """ - Given an index in the stack return whether it should be skipped. - - This is used in up/down and where to skip frames. - """ - # The f_locals dictionary is updated from the actual frame - # locals whenever the .f_locals accessor is called, so we - # avoid calling it here to preserve self.curframe_locals. - # Furthermore, there is no good reason to hide the current frame. - ip_hide = [self._hidden_predicate(s[0]) for s in stack] - ip_start = [i for i, s in enumerate(ip_hide) if s == "__ipython_bottom__"] - if ip_start and self._predicates["ipython_internal"]: - ip_hide = [h if i > ip_start[0] else True for (i, h) in enumerate(ip_hide)] - return ip_hide - - def interaction(self, frame, traceback): - try: - OldPdb.interaction(self, frame, traceback) - except KeyboardInterrupt: - self.stdout.write("\n" + self.shell.get_exception_only()) - - def precmd(self, line): - """Perform useful escapes on the command before it is executed.""" - - if line.endswith("??"): - line = "pinfo2 " + line[:-2] - elif line.endswith("?"): - line = "pinfo " + line[:-1] - - line = super().precmd(line) - - return line - - def new_do_frame(self, arg): - OldPdb.do_frame(self, arg) - - def new_do_quit(self, arg): - - if hasattr(self, 'old_all_completions'): - self.shell.Completer.all_completions = self.old_all_completions - - return OldPdb.do_quit(self, arg) - - do_q = do_quit = decorate_fn_with_doc(new_do_quit, OldPdb.do_quit) - - def new_do_restart(self, arg): - """Restart command. In the context of ipython this is exactly the same - thing as 'quit'.""" - self.msg("Restart doesn't make sense here. 
Using 'quit' instead.") - return self.do_quit(arg) - - def print_stack_trace(self, context=None): - Colors = self.color_scheme_table.active_colors - ColorsNormal = Colors.Normal - if context is None: - context = self.context - try: - context = int(context) - if context <= 0: - raise ValueError("Context must be a positive integer") - except (TypeError, ValueError) as e: - raise ValueError("Context must be a positive integer") from e - try: - skipped = 0 - for hidden, frame_lineno in zip(self.hidden_frames(self.stack), self.stack): - if hidden and self.skip_hidden: - skipped += 1 - continue - if skipped: - print( - f"{Colors.excName} [... skipping {skipped} hidden frame(s)]{ColorsNormal}\n" - ) - skipped = 0 - self.print_stack_entry(frame_lineno, context=context) - if skipped: - print( - f"{Colors.excName} [... skipping {skipped} hidden frame(s)]{ColorsNormal}\n" - ) - except KeyboardInterrupt: - pass - - def print_stack_entry(self, frame_lineno, prompt_prefix='\n-> ', - context=None): - if context is None: - context = self.context - try: - context = int(context) - if context <= 0: - raise ValueError("Context must be a positive integer") - except (TypeError, ValueError) as e: - raise ValueError("Context must be a positive integer") from e - print(self.format_stack_entry(frame_lineno, '', context), file=self.stdout) - - # vds: >> - frame, lineno = frame_lineno - filename = frame.f_code.co_filename - self.shell.hooks.synchronize_with_editor(filename, lineno, 0) - # vds: << - - def _get_frame_locals(self, frame): - """ " - Accessing f_local of current frame reset the namespace, so we want to avoid - that or the following can happen - - ipdb> foo - "old" - ipdb> foo = "new" - ipdb> foo - "new" - ipdb> where - ipdb> foo - "old" - - So if frame is self.current_frame we instead return self.curframe_locals - - """ - if frame is self.curframe: - return self.curframe_locals - else: - return frame.f_locals - - def format_stack_entry(self, frame_lineno, lprefix=': ', context=None): - if context is None: - context = self.context - try: - context = int(context) - if context <= 0: - print("Context must be a positive integer", file=self.stdout) - except (TypeError, ValueError): - print("Context must be a positive integer", file=self.stdout) - - import reprlib - - ret = [] - - Colors = self.color_scheme_table.active_colors - ColorsNormal = Colors.Normal - tpl_link = "%s%%s%s" % (Colors.filenameEm, ColorsNormal) - tpl_call = "%s%%s%s%%s%s" % (Colors.vName, Colors.valEm, ColorsNormal) - tpl_line = "%%s%s%%s %s%%s" % (Colors.lineno, ColorsNormal) - tpl_line_em = "%%s%s%%s %s%%s%s" % (Colors.linenoEm, Colors.line, ColorsNormal) - - frame, lineno = frame_lineno - - return_value = '' - loc_frame = self._get_frame_locals(frame) - if "__return__" in loc_frame: - rv = loc_frame["__return__"] - # return_value += '->' - return_value += reprlib.repr(rv) + "\n" - ret.append(return_value) - - #s = filename + '(' + `lineno` + ')' - filename = self.canonic(frame.f_code.co_filename) - link = tpl_link % py3compat.cast_unicode(filename) - - if frame.f_code.co_name: - func = frame.f_code.co_name - else: - func = "<lambda>" - - call = "" - if func != "?": - if "__args__" in loc_frame: - args = reprlib.repr(loc_frame["__args__"]) - else: - args = '()' - call = tpl_call % (func, args) - - # The level info should be generated in the same format pdb uses, to - # avoid breaking the pdbtrack functionality of python-mode in *emacs. 
- if frame is self.curframe: - ret.append('> ') - else: - ret.append(" ") - ret.append("%s(%s)%s\n" % (link, lineno, call)) - - start = lineno - 1 - context//2 - lines = linecache.getlines(filename) - start = min(start, len(lines) - context) - start = max(start, 0) - lines = lines[start : start + context] - - for i, line in enumerate(lines): - show_arrow = start + 1 + i == lineno - linetpl = (frame is self.curframe or show_arrow) and tpl_line_em or tpl_line - ret.append( - self.__format_line( - linetpl, filename, start + 1 + i, line, arrow=show_arrow - ) - ) - return "".join(ret) - - def __format_line(self, tpl_line, filename, lineno, line, arrow=False): - bp_mark = "" - bp_mark_color = "" - - new_line, err = self.parser.format2(line, 'str') - if not err: - line = new_line - - bp = None - if lineno in self.get_file_breaks(filename): - bps = self.get_breaks(filename, lineno) - bp = bps[-1] - - if bp: - Colors = self.color_scheme_table.active_colors - bp_mark = str(bp.number) - bp_mark_color = Colors.breakpoint_enabled - if not bp.enabled: - bp_mark_color = Colors.breakpoint_disabled - - numbers_width = 7 - if arrow: - # This is the line with the error - pad = numbers_width - len(str(lineno)) - len(bp_mark) - num = '%s%s' % (make_arrow(pad), str(lineno)) - else: - num = '%*s' % (numbers_width - len(bp_mark), str(lineno)) - - return tpl_line % (bp_mark_color + bp_mark, num, line) - - def print_list_lines(self, filename, first, last): - """The printing (as opposed to the parsing part of a 'list' - command.""" - try: - Colors = self.color_scheme_table.active_colors - ColorsNormal = Colors.Normal - tpl_line = '%%s%s%%s %s%%s' % (Colors.lineno, ColorsNormal) - tpl_line_em = '%%s%s%%s %s%%s%s' % (Colors.linenoEm, Colors.line, ColorsNormal) - src = [] - if filename == "<string>" and hasattr(self, "_exec_filename"): - filename = self._exec_filename - - for lineno in range(first, last+1): - line = linecache.getline(filename, lineno) - if not line: - break - - if lineno == self.curframe.f_lineno: - line = self.__format_line( - tpl_line_em, filename, lineno, line, arrow=True - ) - else: - line = self.__format_line( - tpl_line, filename, lineno, line, arrow=False - ) - - src.append(line) - self.lineno = lineno - - print(''.join(src), file=self.stdout) - - except KeyboardInterrupt: - pass - - def do_skip_predicates(self, args): - """ - Turn on/off individual predicates as to whether a frame should be hidden/skip. - - The global option to skip (or not) hidden frames is set with skip_hidden - - To change the value of a predicate - - skip_predicates key [true|false] - - Call without arguments to see the current values. - - To permanently change the value of an option add the corresponding - command to your ``~/.pdbrc`` file. If you are programmatically using the - Pdb instance you can also change the ``default_predicates`` class - attribute. 
- """ - if not args.strip(): - print("current predicates:") - for p, v in self._predicates.items(): - print(" ", p, ":", v) - return - type_value = args.strip().split(" ") - if len(type_value) != 2: - print( - f"Usage: skip_predicates <type> <value>, with <type> one of {set(self._predicates.keys())}" - ) - return - - type_, value = type_value - if type_ not in self._predicates: - print(f"{type_!r} not in {set(self._predicates.keys())}") - return - if value.lower() not in ("true", "yes", "1", "no", "false", "0"): - print( - f"{value!r} is invalid - use one of ('true', 'yes', '1', 'no', 'false', '0')" - ) - return - - self._predicates[type_] = value.lower() in ("true", "yes", "1") - if not any(self._predicates.values()): - print( - "Warning, all predicates set to False, skip_hidden may not have any effects." - ) - - def do_skip_hidden(self, arg): - """ - Change whether or not we should skip frames with the - __tracebackhide__ attribute. - """ - if not arg.strip(): - print( - f"skip_hidden = {self.skip_hidden}, use 'yes','no', 'true', or 'false' to change." - ) - elif arg.strip().lower() in ("true", "yes"): - self.skip_hidden = True - elif arg.strip().lower() in ("false", "no"): - self.skip_hidden = False - if not any(self._predicates.values()): - print( - "Warning, all predicates set to False, skip_hidden may not have any effects." - ) - - def do_list(self, arg): - """Print lines of code from the current stack frame - """ - self.lastcmd = 'list' - last = None - if arg: - try: - x = eval(arg, {}, {}) - if type(x) == type(()): - first, last = x - first = int(first) - last = int(last) - if last < first: - # Assume it's a count - last = first + last - else: - first = max(1, int(x) - 5) - except: - print('*** Error in argument:', repr(arg), file=self.stdout) - return - elif self.lineno is None: - first = max(1, self.curframe.f_lineno - 5) - else: - first = self.lineno + 1 - if last is None: - last = first + 10 - self.print_list_lines(self.curframe.f_code.co_filename, first, last) - - # vds: >> - lineno = first - filename = self.curframe.f_code.co_filename - self.shell.hooks.synchronize_with_editor(filename, lineno, 0) - # vds: << - - do_l = do_list - - def getsourcelines(self, obj): - lines, lineno = inspect.findsource(obj) - if inspect.isframe(obj) and obj.f_globals is self._get_frame_locals(obj): - # must be a module frame: do not try to cut a block out of it - return lines, 1 - elif inspect.ismodule(obj): - return lines, 1 - return inspect.getblock(lines[lineno:]), lineno+1 - - def do_longlist(self, arg): - """Print lines of code from the current stack frame. - - Shows more lines than 'list' does. - """ - self.lastcmd = 'longlist' - try: - lines, lineno = self.getsourcelines(self.curframe) - except OSError as err: - self.error(err) - return - last = lineno + len(lines) - self.print_list_lines(self.curframe.f_code.co_filename, lineno, last) - do_ll = do_longlist - - def do_debug(self, arg): - """debug code - Enter a recursive debugger that steps through the code - argument (which is an arbitrary expression or statement to be - executed in the current environment). 
- """ - trace_function = sys.gettrace() - sys.settrace(None) - globals = self.curframe.f_globals - locals = self.curframe_locals - p = self.__class__(completekey=self.completekey, - stdin=self.stdin, stdout=self.stdout) - p.use_rawinput = self.use_rawinput - p.prompt = "(%s) " % self.prompt.strip() - self.message("ENTERING RECURSIVE DEBUGGER") - sys.call_tracing(p.run, (arg, globals, locals)) - self.message("LEAVING RECURSIVE DEBUGGER") - sys.settrace(trace_function) - self.lastcmd = p.lastcmd - - def do_pdef(self, arg): - """Print the call signature for any callable object. - - The debugger interface to %pdef""" - namespaces = [ - ("Locals", self.curframe_locals), - ("Globals", self.curframe.f_globals), - ] - self.shell.find_line_magic("pdef")(arg, namespaces=namespaces) - - def do_pdoc(self, arg): - """Print the docstring for an object. - - The debugger interface to %pdoc.""" - namespaces = [ - ("Locals", self.curframe_locals), - ("Globals", self.curframe.f_globals), - ] - self.shell.find_line_magic("pdoc")(arg, namespaces=namespaces) - - def do_pfile(self, arg): - """Print (or run through pager) the file where an object is defined. - - The debugger interface to %pfile. - """ - namespaces = [ - ("Locals", self.curframe_locals), - ("Globals", self.curframe.f_globals), - ] - self.shell.find_line_magic("pfile")(arg, namespaces=namespaces) - - def do_pinfo(self, arg): - """Provide detailed information about an object. - - The debugger interface to %pinfo, i.e., obj?.""" - namespaces = [ - ("Locals", self.curframe_locals), - ("Globals", self.curframe.f_globals), - ] - self.shell.find_line_magic("pinfo")(arg, namespaces=namespaces) - - def do_pinfo2(self, arg): - """Provide extra detailed information about an object. - - The debugger interface to %pinfo2, i.e., obj??.""" - namespaces = [ - ("Locals", self.curframe_locals), - ("Globals", self.curframe.f_globals), - ] - self.shell.find_line_magic("pinfo2")(arg, namespaces=namespaces) - - def do_psource(self, arg): - """Print (or run through pager) the source code for an object.""" - namespaces = [ - ("Locals", self.curframe_locals), - ("Globals", self.curframe.f_globals), - ] - self.shell.find_line_magic("psource")(arg, namespaces=namespaces) - - def do_where(self, arg): - """w(here) - Print a stack trace, with the most recent frame at the bottom. - An arrow indicates the "current frame", which determines the - context of most commands. 'bt' is an alias for this command. - - Take a number as argument as an (optional) number of context line to - print""" - if arg: - try: - context = int(arg) - except ValueError as err: - self.error(err) - return - self.print_stack_trace(context) - else: - self.print_stack_trace() - - do_w = do_where - - def break_anywhere(self, frame): - """ - _stop_in_decorator_internals is overly restrictive, as we may still want - to trace function calls, so we need to also update break_anywhere so - that is we don't `stop_here`, because of debugger skip, we may still - stop at any point inside the function - - """ - - sup = super().break_anywhere(frame) - if sup: - return sup - if self._predicates["debuggerskip"]: - if DEBUGGERSKIP in frame.f_code.co_varnames: - return True - if frame.f_back and self._get_frame_locals(frame.f_back).get(DEBUGGERSKIP): - return True - return False - - def _is_in_decorator_internal_and_should_skip(self, frame): - """ - Utility to tell us whether we are in a decorator internal and should stop. 
- - """ - - # if we are disabled don't skip - if not self._predicates["debuggerskip"]: - return False - - # if frame is tagged, skip by default. - if DEBUGGERSKIP in frame.f_code.co_varnames: - return True - - # if one of the parent frame value set to True skip as well. - - cframe = frame - while getattr(cframe, "f_back", None): - cframe = cframe.f_back - if self._get_frame_locals(cframe).get(DEBUGGERSKIP): - return True - - return False - - def stop_here(self, frame): - if self._is_in_decorator_internal_and_should_skip(frame) is True: - return False - - hidden = False - if self.skip_hidden: - hidden = self._hidden_predicate(frame) - if hidden: - if self.report_skipped: - Colors = self.color_scheme_table.active_colors - ColorsNormal = Colors.Normal - print( - f"{Colors.excName} [... skipped 1 hidden frame]{ColorsNormal}\n" - ) - return super().stop_here(frame) - - def do_up(self, arg): - """u(p) [count] - Move the current frame count (default one) levels up in the - stack trace (to an older frame). - - Will skip hidden frames. - """ - # modified version of upstream that skips - # frames with __tracebackhide__ - if self.curindex == 0: - self.error("Oldest frame") - return - try: - count = int(arg or 1) - except ValueError: - self.error("Invalid frame count (%s)" % arg) - return - skipped = 0 - if count < 0: - _newframe = 0 - else: - counter = 0 - hidden_frames = self.hidden_frames(self.stack) - for i in range(self.curindex - 1, -1, -1): - if hidden_frames[i] and self.skip_hidden: - skipped += 1 - continue - counter += 1 - if counter >= count: - break - else: - # if no break occurred. - self.error( - "all frames above hidden, use `skip_hidden False` to get get into those." - ) - return - - Colors = self.color_scheme_table.active_colors - ColorsNormal = Colors.Normal - _newframe = i - self._select_frame(_newframe) - if skipped: - print( - f"{Colors.excName} [... skipped {skipped} hidden frame(s)]{ColorsNormal}\n" - ) - - def do_down(self, arg): - """d(own) [count] - Move the current frame count (default one) levels down in the - stack trace (to a newer frame). - - Will skip hidden frames. - """ - if self.curindex + 1 == len(self.stack): - self.error("Newest frame") - return - try: - count = int(arg or 1) - except ValueError: - self.error("Invalid frame count (%s)" % arg) - return - if count < 0: - _newframe = len(self.stack) - 1 - else: - counter = 0 - skipped = 0 - hidden_frames = self.hidden_frames(self.stack) - for i in range(self.curindex + 1, len(self.stack)): - if hidden_frames[i] and self.skip_hidden: - skipped += 1 - continue - counter += 1 - if counter >= count: - break - else: - self.error( - "all frames below hidden, use `skip_hidden False` to get get into those." - ) - return - - Colors = self.color_scheme_table.active_colors - ColorsNormal = Colors.Normal - if skipped: - print( - f"{Colors.excName} [... skipped {skipped} hidden frame(s)]{ColorsNormal}\n" - ) - _newframe = i - - self._select_frame(_newframe) - - do_d = do_down - do_u = do_up - - def do_context(self, context): - """context number_of_lines - Set the number of lines of source code to show when displaying - stacktrace information. 
- """ - try: - new_context = int(context) - if new_context <= 0: - raise ValueError() - self.context = new_context - except ValueError: - self.error("The 'context' command requires a positive integer argument.") - - -class InterruptiblePdb(Pdb): - """Version of debugger where KeyboardInterrupt exits the debugger altogether.""" - - def cmdloop(self, intro=None): - """Wrap cmdloop() such that KeyboardInterrupt stops the debugger.""" - try: - return OldPdb.cmdloop(self, intro=intro) - except KeyboardInterrupt: - self.stop_here = lambda frame: False - self.do_quit("") - sys.settrace(None) - self.quitting = False - raise - - def _cmdloop(self): - while True: - try: - # keyboard interrupts allow for an easy way to cancel - # the current command, so allow them during interactive input - self.allow_kbdint = True - self.cmdloop() - self.allow_kbdint = False - break - except KeyboardInterrupt: - self.message('--KeyboardInterrupt--') - raise - - -def set_trace(frame=None): - """ - Start debugging from `frame`. - - If frame is not specified, debugging starts from caller's frame. - """ - Pdb().set_trace(frame or sys._getframe().f_back) diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/PalmImagePlugin.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/PalmImagePlugin.py deleted file mode 100644 index a88a907917dce5dace64fd1e38df86246c8e0305..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/PIL/PalmImagePlugin.py +++ /dev/null @@ -1,225 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# - -## -# Image plugin for Palm pixmap images (output only). -## - -from . import Image, ImageFile -from ._binary import o8 -from ._binary import o16be as o16b - -# fmt: off -_Palm8BitColormapValues = ( - (255, 255, 255), (255, 204, 255), (255, 153, 255), (255, 102, 255), - (255, 51, 255), (255, 0, 255), (255, 255, 204), (255, 204, 204), - (255, 153, 204), (255, 102, 204), (255, 51, 204), (255, 0, 204), - (255, 255, 153), (255, 204, 153), (255, 153, 153), (255, 102, 153), - (255, 51, 153), (255, 0, 153), (204, 255, 255), (204, 204, 255), - (204, 153, 255), (204, 102, 255), (204, 51, 255), (204, 0, 255), - (204, 255, 204), (204, 204, 204), (204, 153, 204), (204, 102, 204), - (204, 51, 204), (204, 0, 204), (204, 255, 153), (204, 204, 153), - (204, 153, 153), (204, 102, 153), (204, 51, 153), (204, 0, 153), - (153, 255, 255), (153, 204, 255), (153, 153, 255), (153, 102, 255), - (153, 51, 255), (153, 0, 255), (153, 255, 204), (153, 204, 204), - (153, 153, 204), (153, 102, 204), (153, 51, 204), (153, 0, 204), - (153, 255, 153), (153, 204, 153), (153, 153, 153), (153, 102, 153), - (153, 51, 153), (153, 0, 153), (102, 255, 255), (102, 204, 255), - (102, 153, 255), (102, 102, 255), (102, 51, 255), (102, 0, 255), - (102, 255, 204), (102, 204, 204), (102, 153, 204), (102, 102, 204), - (102, 51, 204), (102, 0, 204), (102, 255, 153), (102, 204, 153), - (102, 153, 153), (102, 102, 153), (102, 51, 153), (102, 0, 153), - (51, 255, 255), (51, 204, 255), (51, 153, 255), (51, 102, 255), - (51, 51, 255), (51, 0, 255), (51, 255, 204), (51, 204, 204), - (51, 153, 204), (51, 102, 204), (51, 51, 204), (51, 0, 204), - (51, 255, 153), (51, 204, 153), (51, 153, 153), (51, 102, 153), - (51, 51, 153), (51, 0, 153), (0, 255, 255), (0, 204, 255), - (0, 153, 255), (0, 102, 255), (0, 51, 255), (0, 0, 255), - (0, 255, 204), (0, 204, 204), (0, 153, 204), (0, 102, 204), - (0, 51, 204), (0, 0, 204), (0, 255, 153), (0, 204, 153), - (0, 153, 153), (0, 102, 153), (0, 51, 
153), (0, 0, 153), - (255, 255, 102), (255, 204, 102), (255, 153, 102), (255, 102, 102), - (255, 51, 102), (255, 0, 102), (255, 255, 51), (255, 204, 51), - (255, 153, 51), (255, 102, 51), (255, 51, 51), (255, 0, 51), - (255, 255, 0), (255, 204, 0), (255, 153, 0), (255, 102, 0), - (255, 51, 0), (255, 0, 0), (204, 255, 102), (204, 204, 102), - (204, 153, 102), (204, 102, 102), (204, 51, 102), (204, 0, 102), - (204, 255, 51), (204, 204, 51), (204, 153, 51), (204, 102, 51), - (204, 51, 51), (204, 0, 51), (204, 255, 0), (204, 204, 0), - (204, 153, 0), (204, 102, 0), (204, 51, 0), (204, 0, 0), - (153, 255, 102), (153, 204, 102), (153, 153, 102), (153, 102, 102), - (153, 51, 102), (153, 0, 102), (153, 255, 51), (153, 204, 51), - (153, 153, 51), (153, 102, 51), (153, 51, 51), (153, 0, 51), - (153, 255, 0), (153, 204, 0), (153, 153, 0), (153, 102, 0), - (153, 51, 0), (153, 0, 0), (102, 255, 102), (102, 204, 102), - (102, 153, 102), (102, 102, 102), (102, 51, 102), (102, 0, 102), - (102, 255, 51), (102, 204, 51), (102, 153, 51), (102, 102, 51), - (102, 51, 51), (102, 0, 51), (102, 255, 0), (102, 204, 0), - (102, 153, 0), (102, 102, 0), (102, 51, 0), (102, 0, 0), - (51, 255, 102), (51, 204, 102), (51, 153, 102), (51, 102, 102), - (51, 51, 102), (51, 0, 102), (51, 255, 51), (51, 204, 51), - (51, 153, 51), (51, 102, 51), (51, 51, 51), (51, 0, 51), - (51, 255, 0), (51, 204, 0), (51, 153, 0), (51, 102, 0), - (51, 51, 0), (51, 0, 0), (0, 255, 102), (0, 204, 102), - (0, 153, 102), (0, 102, 102), (0, 51, 102), (0, 0, 102), - (0, 255, 51), (0, 204, 51), (0, 153, 51), (0, 102, 51), - (0, 51, 51), (0, 0, 51), (0, 255, 0), (0, 204, 0), - (0, 153, 0), (0, 102, 0), (0, 51, 0), (17, 17, 17), - (34, 34, 34), (68, 68, 68), (85, 85, 85), (119, 119, 119), - (136, 136, 136), (170, 170, 170), (187, 187, 187), (221, 221, 221), - (238, 238, 238), (192, 192, 192), (128, 0, 0), (128, 0, 128), - (0, 128, 0), (0, 128, 128), (0, 0, 0), (0, 0, 0), - (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), - (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), - (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), - (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), - (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0), - (0, 0, 0), (0, 0, 0), (0, 0, 0), (0, 0, 0)) -# fmt: on - - -# so build a prototype image to be used for palette resampling -def build_prototype_image(): - image = Image.new("L", (1, len(_Palm8BitColormapValues))) - image.putdata(list(range(len(_Palm8BitColormapValues)))) - palettedata = () - for colormapValue in _Palm8BitColormapValues: - palettedata += colormapValue - palettedata += (0, 0, 0) * (256 - len(_Palm8BitColormapValues)) - image.putpalette(palettedata) - return image - - -Palm8BitColormapImage = build_prototype_image() - -# OK, we now have in Palm8BitColormapImage, -# a "P"-mode image with the right palette -# -# -------------------------------------------------------------------- - -_FLAGS = {"custom-colormap": 0x4000, "is-compressed": 0x8000, "has-transparent": 0x2000} - -_COMPRESSION_TYPES = {"none": 0xFF, "rle": 0x01, "scanline": 0x00} - - -# -# -------------------------------------------------------------------- - -## -# (Internal) Image save plugin for the Palm format. 
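# A minimal usage sketch, assuming a Pillow build that still ships this write-only plugin:
# the registrations at the bottom of the module ("Palm" save handler, ".palm" extension)
# route an ordinary Image.save() call into _save() below. "input.png"/"output.palm" are
# placeholder filenames; mode "1" (monochrome) is used because it needs no bpp hint and
# no custom colormap.
from PIL import Image

img = Image.open("input.png").convert("1")  # _save() accepts "1", "P", or bpp-tagged "L" images
img.save("output.palm", format="Palm")      # dispatched via Image.register_save("Palm", _save)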
- - -def _save(im, fp, filename): - if im.mode == "P": - # we assume this is a color Palm image with the standard colormap, - # unless the "info" dict has a "custom-colormap" field - - rawmode = "P" - bpp = 8 - version = 1 - - elif im.mode == "L": - if im.encoderinfo.get("bpp") in (1, 2, 4): - # this is 8-bit grayscale, so we shift it to get the high-order bits, - # and invert it because - # Palm does greyscale from white (0) to black (1) - bpp = im.encoderinfo["bpp"] - im = im.point( - lambda x, shift=8 - bpp, maxval=(1 << bpp) - 1: maxval - (x >> shift) - ) - elif im.info.get("bpp") in (1, 2, 4): - # here we assume that even though the inherent mode is 8-bit grayscale, - # only the lower bpp bits are significant. - # We invert them to match the Palm. - bpp = im.info["bpp"] - im = im.point(lambda x, maxval=(1 << bpp) - 1: maxval - (x & maxval)) - else: - msg = f"cannot write mode {im.mode} as Palm" - raise OSError(msg) - - # we ignore the palette here - im.mode = "P" - rawmode = "P;" + str(bpp) - version = 1 - - elif im.mode == "1": - # monochrome -- write it inverted, as is the Palm standard - rawmode = "1;I" - bpp = 1 - version = 0 - - else: - msg = f"cannot write mode {im.mode} as Palm" - raise OSError(msg) - - # - # make sure image data is available - im.load() - - # write header - - cols = im.size[0] - rows = im.size[1] - - rowbytes = int((cols + (16 // bpp - 1)) / (16 // bpp)) * 2 - transparent_index = 0 - compression_type = _COMPRESSION_TYPES["none"] - - flags = 0 - if im.mode == "P" and "custom-colormap" in im.info: - flags = flags & _FLAGS["custom-colormap"] - colormapsize = 4 * 256 + 2 - colormapmode = im.palette.mode - colormap = im.getdata().getpalette() - else: - colormapsize = 0 - - if "offset" in im.info: - offset = (rowbytes * rows + 16 + 3 + colormapsize) // 4 - else: - offset = 0 - - fp.write(o16b(cols) + o16b(rows) + o16b(rowbytes) + o16b(flags)) - fp.write(o8(bpp)) - fp.write(o8(version)) - fp.write(o16b(offset)) - fp.write(o8(transparent_index)) - fp.write(o8(compression_type)) - fp.write(o16b(0)) # reserved by Palm - - # now write colormap if necessary - - if colormapsize > 0: - fp.write(o16b(256)) - for i in range(256): - fp.write(o8(i)) - if colormapmode == "RGB": - fp.write( - o8(colormap[3 * i]) - + o8(colormap[3 * i + 1]) - + o8(colormap[3 * i + 2]) - ) - elif colormapmode == "RGBA": - fp.write( - o8(colormap[4 * i]) - + o8(colormap[4 * i + 1]) - + o8(colormap[4 * i + 2]) - ) - - # now convert data to raw form - ImageFile._save(im, fp, [("raw", (0, 0) + im.size, 0, (rawmode, rowbytes, 1))]) - - if hasattr(fp, "flush"): - fp.flush() - - -# -# -------------------------------------------------------------------- - -Image.register_save("Palm", _save) - -Image.register_extension("Palm", ".palm") - -Image.register_mime("Palm", "image/palm") diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/contourpy/_version.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/contourpy/_version.py deleted file mode 100644 index 6849410aae0a8010e76d5f0a44ced13d750b0989..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/contourpy/_version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = "1.1.0" diff --git a/spaces/Superlang/ImageProcessor/annotator/keypose/hrnet_w48_coco_256x192.py b/spaces/Superlang/ImageProcessor/annotator/keypose/hrnet_w48_coco_256x192.py deleted file mode 100644 index 9755e6773cd3a8c0d2ac684c612d716cfd44b0ca..0000000000000000000000000000000000000000 --- 
a/spaces/Superlang/ImageProcessor/annotator/keypose/hrnet_w48_coco_256x192.py +++ /dev/null @@ -1,169 +0,0 @@ -# _base_ = [ -# '../../../../_base_/default_runtime.py', -# '../../../../_base_/datasets/coco.py' -# ] -evaluation = dict(interval=10, metric='mAP', save_best='AP') - -optimizer = dict( - type='Adam', - lr=5e-4, -) -optimizer_config = dict(grad_clip=None) -# learning policy -lr_config = dict( - policy='step', - warmup='linear', - warmup_iters=500, - warmup_ratio=0.001, - step=[170, 200]) -total_epochs = 210 -channel_cfg = dict( - num_output_channels=17, - dataset_joints=17, - dataset_channel=[ - [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16], - ], - inference_channel=[ - 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 - ]) - -# model settings -model = dict( - type='TopDown', - pretrained='https://download.openmmlab.com/mmpose/' - 'pretrain_models/hrnet_w48-8ef0771d.pth', - backbone=dict( - type='HRNet', - in_channels=3, - extra=dict( - stage1=dict( - num_modules=1, - num_branches=1, - block='BOTTLENECK', - num_blocks=(4, ), - num_channels=(64, )), - stage2=dict( - num_modules=1, - num_branches=2, - block='BASIC', - num_blocks=(4, 4), - num_channels=(48, 96)), - stage3=dict( - num_modules=4, - num_branches=3, - block='BASIC', - num_blocks=(4, 4, 4), - num_channels=(48, 96, 192)), - stage4=dict( - num_modules=3, - num_branches=4, - block='BASIC', - num_blocks=(4, 4, 4, 4), - num_channels=(48, 96, 192, 384))), - ), - keypoint_head=dict( - type='TopdownHeatmapSimpleHead', - in_channels=48, - out_channels=channel_cfg['num_output_channels'], - num_deconv_layers=0, - extra=dict(final_conv_kernel=1, ), - loss_keypoint=dict(type='JointsMSELoss', use_target_weight=True)), - train_cfg=dict(), - test_cfg=dict( - flip_test=True, - post_process='default', - shift_heatmap=True, - modulate_kernel=11)) - -data_cfg = dict( - image_size=[192, 256], - heatmap_size=[48, 64], - num_output_channels=channel_cfg['num_output_channels'], - num_joints=channel_cfg['dataset_joints'], - dataset_channel=channel_cfg['dataset_channel'], - inference_channel=channel_cfg['inference_channel'], - soft_nms=False, - nms_thr=1.0, - oks_thr=0.9, - vis_thr=0.2, - use_gt_bbox=False, - det_bbox_thr=0.0, - bbox_file='data/coco/person_detection_results/' - 'COCO_val2017_detections_AP_H_56_person.json', -) - -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='TopDownGetBboxCenterScale', padding=1.25), - dict(type='TopDownRandomShiftBboxCenter', shift_factor=0.16, prob=0.3), - dict(type='TopDownRandomFlip', flip_prob=0.5), - dict( - type='TopDownHalfBodyTransform', - num_joints_half_body=8, - prob_half_body=0.3), - dict( - type='TopDownGetRandomScaleRotation', rot_factor=40, scale_factor=0.5), - dict(type='TopDownAffine'), - dict(type='ToTensor'), - dict( - type='NormalizeTensor', - mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225]), - dict(type='TopDownGenerateTarget', sigma=2), - dict( - type='Collect', - keys=['img', 'target', 'target_weight'], - meta_keys=[ - 'image_file', 'joints_3d', 'joints_3d_visible', 'center', 'scale', - 'rotation', 'bbox_score', 'flip_pairs' - ]), -] - -val_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='TopDownGetBboxCenterScale', padding=1.25), - dict(type='TopDownAffine'), - dict(type='ToTensor'), - dict( - type='NormalizeTensor', - mean=[0.485, 0.456, 0.406], - std=[0.229, 0.224, 0.225]), - dict( - type='Collect', - keys=['img'], - meta_keys=[ - 'image_file', 'center', 'scale', 'rotation', 'bbox_score', - 'flip_pairs' - ]), -] - -test_pipeline = 
val_pipeline - -data_root = 'data/coco' -data = dict( - samples_per_gpu=32, - workers_per_gpu=2, - val_dataloader=dict(samples_per_gpu=32), - test_dataloader=dict(samples_per_gpu=32), - train=dict( - type='TopDownCocoDataset', - ann_file=f'{data_root}/annotations/person_keypoints_train2017.json', - img_prefix=f'{data_root}/train2017/', - data_cfg=data_cfg, - pipeline=train_pipeline, - dataset_info={{_base_.dataset_info}}), - val=dict( - type='TopDownCocoDataset', - ann_file=f'{data_root}/annotations/person_keypoints_val2017.json', - img_prefix=f'{data_root}/val2017/', - data_cfg=data_cfg, - pipeline=val_pipeline, - dataset_info={{_base_.dataset_info}}), - test=dict( - type='TopDownCocoDataset', - ann_file=f'{data_root}/annotations/person_keypoints_val2017.json', - img_prefix=f'{data_root}/val2017/', - data_cfg=data_cfg, - pipeline=test_pipeline, - dataset_info={{_base_.dataset_info}}), -) diff --git a/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/activations/activations.py b/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/activations/activations.py deleted file mode 100644 index bdea692d1397673b2513d898c33edbcb37d94240..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/normalbae/models/submodules/efficientnet_repo/geffnet/activations/activations.py +++ /dev/null @@ -1,102 +0,0 @@ -""" Activations - -A collection of activations fn and modules with a common interface so that they can -easily be swapped. All have an `inplace` arg even if not used. - -Copyright 2020 Ross Wightman -""" -from torch import nn as nn -from torch.nn import functional as F - - -def swish(x, inplace: bool = False): - """Swish - Described originally as SiLU (https://arxiv.org/abs/1702.03118v3) - and also as Swish (https://arxiv.org/abs/1710.05941). - - TODO Rename to SiLU with addition to PyTorch - """ - return x.mul_(x.sigmoid()) if inplace else x.mul(x.sigmoid()) - - -class Swish(nn.Module): - def __init__(self, inplace: bool = False): - super(Swish, self).__init__() - self.inplace = inplace - - def forward(self, x): - return swish(x, self.inplace) - - -def mish(x, inplace: bool = False): - """Mish: A Self Regularized Non-Monotonic Neural Activation Function - https://arxiv.org/abs/1908.08681 - """ - return x.mul(F.softplus(x).tanh()) - - -class Mish(nn.Module): - def __init__(self, inplace: bool = False): - super(Mish, self).__init__() - self.inplace = inplace - - def forward(self, x): - return mish(x, self.inplace) - - -def sigmoid(x, inplace: bool = False): - return x.sigmoid_() if inplace else x.sigmoid() - - -# PyTorch has this, but not with a consistent inplace argmument interface -class Sigmoid(nn.Module): - def __init__(self, inplace: bool = False): - super(Sigmoid, self).__init__() - self.inplace = inplace - - def forward(self, x): - return x.sigmoid_() if self.inplace else x.sigmoid() - - -def tanh(x, inplace: bool = False): - return x.tanh_() if inplace else x.tanh() - - -# PyTorch has this, but not with a consistent inplace argmument interface -class Tanh(nn.Module): - def __init__(self, inplace: bool = False): - super(Tanh, self).__init__() - self.inplace = inplace - - def forward(self, x): - return x.tanh_() if self.inplace else x.tanh() - - -def hard_swish(x, inplace: bool = False): - inner = F.relu6(x + 3.).div_(6.) 
- return x.mul_(inner) if inplace else x.mul(inner) - - -class HardSwish(nn.Module): - def __init__(self, inplace: bool = False): - super(HardSwish, self).__init__() - self.inplace = inplace - - def forward(self, x): - return hard_swish(x, self.inplace) - - -def hard_sigmoid(x, inplace: bool = False): - if inplace: - return x.add_(3.).clamp_(0., 6.).div_(6.) - else: - return F.relu6(x + 3.) / 6. - - -class HardSigmoid(nn.Module): - def __init__(self, inplace: bool = False): - super(HardSigmoid, self).__init__() - self.inplace = inplace - - def forward(self, x): - return hard_sigmoid(x, self.inplace) - - diff --git a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/fileio/handlers/yaml_handler.py b/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/fileio/handlers/yaml_handler.py deleted file mode 100644 index c5aa2eea1e8c76f8baf753d1c8c959dee665e543..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/uniformer/mmcv/fileio/handlers/yaml_handler.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import yaml - -try: - from yaml import CLoader as Loader, CDumper as Dumper -except ImportError: - from yaml import Loader, Dumper - -from .base import BaseFileHandler # isort:skip - - -class YamlHandler(BaseFileHandler): - - def load_from_fileobj(self, file, **kwargs): - kwargs.setdefault('Loader', Loader) - return yaml.load(file, **kwargs) - - def dump_to_fileobj(self, obj, file, **kwargs): - kwargs.setdefault('Dumper', Dumper) - yaml.dump(obj, file, **kwargs) - - def dump_to_str(self, obj, **kwargs): - kwargs.setdefault('Dumper', Dumper) - return yaml.dump(obj, **kwargs) diff --git a/spaces/Superlang/remove_background/DIS/models/__init__.py b/spaces/Superlang/remove_background/DIS/models/__init__.py deleted file mode 100644 index eb53422f92830e8c360e1744941a38e83289da6e..0000000000000000000000000000000000000000 --- a/spaces/Superlang/remove_background/DIS/models/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .isnet import ISNetGTEncoder, ISNetDIS diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/configuration.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/configuration.py deleted file mode 100644 index 84b134e490b081d661daf69f98e0b9b1fdddd36f..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/configuration.py +++ /dev/null @@ -1,282 +0,0 @@ -import logging -import os -import subprocess -from optparse import Values -from typing import Any, List, Optional - -from pip._internal.cli.base_command import Command -from pip._internal.cli.status_codes import ERROR, SUCCESS -from pip._internal.configuration import ( - Configuration, - Kind, - get_configuration_files, - kinds, -) -from pip._internal.exceptions import PipError -from pip._internal.utils.logging import indent_log -from pip._internal.utils.misc import get_prog, write_output - -logger = logging.getLogger(__name__) - - -class ConfigurationCommand(Command): - """ - Manage local and global configuration. 
- - Subcommands: - - - list: List the active configuration (or from the file specified) - - edit: Edit the configuration file in an editor - - get: Get the value associated with command.option - - set: Set the command.option=value - - unset: Unset the value associated with command.option - - debug: List the configuration files and values defined under them - - Configuration keys should be dot separated command and option name, - with the special prefix "global" affecting any command. For example, - "pip config set global.index-url https://example.org/" would configure - the index url for all commands, but "pip config set download.timeout 10" - would configure a 10 second timeout only for "pip download" commands. - - If none of --user, --global and --site are passed, a virtual - environment configuration file is used if one is active and the file - exists. Otherwise, all modifications happen to the user file by - default. - """ - - ignore_require_venv = True - usage = """ - %prog [<file-option>] list - %prog [<file-option>] [--editor <editor-path>] edit - - %prog [<file-option>] get command.option - %prog [<file-option>] set command.option value - %prog [<file-option>] unset command.option - %prog [<file-option>] debug - """ - - def add_options(self) -> None: - self.cmd_opts.add_option( - "--editor", - dest="editor", - action="store", - default=None, - help=( - "Editor to use to edit the file. Uses VISUAL or EDITOR " - "environment variables if not provided." - ), - ) - - self.cmd_opts.add_option( - "--global", - dest="global_file", - action="store_true", - default=False, - help="Use the system-wide configuration file only", - ) - - self.cmd_opts.add_option( - "--user", - dest="user_file", - action="store_true", - default=False, - help="Use the user configuration file only", - ) - - self.cmd_opts.add_option( - "--site", - dest="site_file", - action="store_true", - default=False, - help="Use the current environment configuration file only", - ) - - self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - handlers = { - "list": self.list_values, - "edit": self.open_in_editor, - "get": self.get_name, - "set": self.set_name_value, - "unset": self.unset_name, - "debug": self.list_config_values, - } - - # Determine action - if not args or args[0] not in handlers: - logger.error( - "Need an action (%s) to perform.", - ", ".join(sorted(handlers)), - ) - return ERROR - - action = args[0] - - # Determine which configuration files are to be loaded - # Depends on whether the command is modifying. - try: - load_only = self._determine_file( - options, need_value=(action in ["get", "set", "unset", "edit"]) - ) - except PipError as e: - logger.error(e.args[0]) - return ERROR - - # Load a new configuration - self.configuration = Configuration( - isolated=options.isolated_mode, load_only=load_only - ) - self.configuration.load() - - # Error handling happens here, not in the action-handlers. - try: - handlers[action](options, args[1:]) - except PipError as e: - logger.error(e.args[0]) - return ERROR - - return SUCCESS - - def _determine_file(self, options: Values, need_value: bool) -> Optional[Kind]: - file_options = [ - key - for key, value in ( - (kinds.USER, options.user_file), - (kinds.GLOBAL, options.global_file), - (kinds.SITE, options.site_file), - ) - if value - ] - - if not file_options: - if not need_value: - return None - # Default to user, unless there's a site file. 
- elif any( - os.path.exists(site_config_file) - for site_config_file in get_configuration_files()[kinds.SITE] - ): - return kinds.SITE - else: - return kinds.USER - elif len(file_options) == 1: - return file_options[0] - - raise PipError( - "Need exactly one file to operate upon " - "(--user, --site, --global) to perform." - ) - - def list_values(self, options: Values, args: List[str]) -> None: - self._get_n_args(args, "list", n=0) - - for key, value in sorted(self.configuration.items()): - write_output("%s=%r", key, value) - - def get_name(self, options: Values, args: List[str]) -> None: - key = self._get_n_args(args, "get [name]", n=1) - value = self.configuration.get_value(key) - - write_output("%s", value) - - def set_name_value(self, options: Values, args: List[str]) -> None: - key, value = self._get_n_args(args, "set [name] [value]", n=2) - self.configuration.set_value(key, value) - - self._save_configuration() - - def unset_name(self, options: Values, args: List[str]) -> None: - key = self._get_n_args(args, "unset [name]", n=1) - self.configuration.unset_value(key) - - self._save_configuration() - - def list_config_values(self, options: Values, args: List[str]) -> None: - """List config key-value pairs across different config files""" - self._get_n_args(args, "debug", n=0) - - self.print_env_var_values() - # Iterate over config files and print if they exist, and the - # key-value pairs present in them if they do - for variant, files in sorted(self.configuration.iter_config_files()): - write_output("%s:", variant) - for fname in files: - with indent_log(): - file_exists = os.path.exists(fname) - write_output("%s, exists: %r", fname, file_exists) - if file_exists: - self.print_config_file_values(variant) - - def print_config_file_values(self, variant: Kind) -> None: - """Get key-value pairs from the file of a variant""" - for name, value in self.configuration.get_values_in_config(variant).items(): - with indent_log(): - write_output("%s: %s", name, value) - - def print_env_var_values(self) -> None: - """Get key-values pairs present as environment variables""" - write_output("%s:", "env_var") - with indent_log(): - for key, value in sorted(self.configuration.get_environ_vars()): - env_var = f"PIP_{key.upper()}" - write_output("%s=%r", env_var, value) - - def open_in_editor(self, options: Values, args: List[str]) -> None: - editor = self._determine_editor(options) - - fname = self.configuration.get_file_to_edit() - if fname is None: - raise PipError("Could not determine appropriate file.") - elif '"' in fname: - # This shouldn't happen, unless we see a username like that. - # If that happens, we'd appreciate a pull request fixing this. - raise PipError( - f'Can not open an editor for a file name containing "\n{fname}' - ) - - try: - subprocess.check_call(f'{editor} "{fname}"', shell=True) - except FileNotFoundError as e: - if not e.filename: - e.filename = editor - raise - except subprocess.CalledProcessError as e: - raise PipError( - "Editor Subprocess exited with exit code {}".format(e.returncode) - ) - - def _get_n_args(self, args: List[str], example: str, n: int) -> Any: - """Helper to make sure the command got the right number of arguments""" - if len(args) != n: - msg = ( - "Got unexpected number of arguments, expected {}. " - '(example: "{} config {}")' - ).format(n, get_prog(), example) - raise PipError(msg) - - if n == 1: - return args[0] - else: - return args - - def _save_configuration(self) -> None: - # We successfully ran a modifying command. 
Need to save the - # configuration. - try: - self.configuration.save() - except Exception: - logger.exception( - "Unable to save configuration. Please report this as a bug." - ) - raise PipError("Internal Error.") - - def _determine_editor(self, options: Values) -> str: - if options.editor is not None: - return options.editor - elif "VISUAL" in os.environ: - return os.environ["VISUAL"] - elif "EDITOR" in os.environ: - return os.environ["EDITOR"] - else: - raise PipError("Could not determine editor to use.") diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/hash.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/hash.py deleted file mode 100644 index 042dac813e74b8187c3754cb9a937c7f7183e331..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/commands/hash.py +++ /dev/null @@ -1,59 +0,0 @@ -import hashlib -import logging -import sys -from optparse import Values -from typing import List - -from pip._internal.cli.base_command import Command -from pip._internal.cli.status_codes import ERROR, SUCCESS -from pip._internal.utils.hashes import FAVORITE_HASH, STRONG_HASHES -from pip._internal.utils.misc import read_chunks, write_output - -logger = logging.getLogger(__name__) - - -class HashCommand(Command): - """ - Compute a hash of a local package archive. - - These can be used with --hash in a requirements file to do repeatable - installs. - """ - - usage = "%prog [options] <file> ..." - ignore_require_venv = True - - def add_options(self) -> None: - self.cmd_opts.add_option( - "-a", - "--algorithm", - dest="algorithm", - choices=STRONG_HASHES, - action="store", - default=FAVORITE_HASH, - help="The hash algorithm to use: one of {}".format( - ", ".join(STRONG_HASHES) - ), - ) - self.parser.insert_option_group(0, self.cmd_opts) - - def run(self, options: Values, args: List[str]) -> int: - if not args: - self.parser.print_usage(sys.stderr) - return ERROR - - algorithm = options.algorithm - for path in args: - write_output( - "%s:\n--hash=%s:%s", path, algorithm, _hash_of_file(path, algorithm) - ) - return SUCCESS - - -def _hash_of_file(path: str, algorithm: str) -> str: - """Return the hash digest of a file.""" - with open(path, "rb") as archive: - hash = hashlib.new(algorithm) - for chunk in read_chunks(archive): - hash.update(chunk) - return hash.hexdigest() diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/dist.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/dist.py deleted file mode 100644 index 4458a580337fb601943e5859e84a0ff387a8f0f2..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/setuptools/dist.py +++ /dev/null @@ -1,1239 +0,0 @@ -__all__ = ['Distribution'] - -import io -import sys -import re -import os -import numbers -import distutils.log -import distutils.core -import distutils.cmd -import distutils.dist -import distutils.command -from distutils.util import strtobool -from distutils.debug import DEBUG -from distutils.fancy_getopt import translate_longopt -from glob import iglob -import itertools -import textwrap -from contextlib import suppress -from typing import List, Optional, Set, TYPE_CHECKING -from pathlib import Path - -from collections import defaultdict -from email import message_from_file - -from distutils.errors import DistutilsOptionError, 
DistutilsSetupError -from distutils.util import rfc822_escape - -from setuptools.extern import packaging -from setuptools.extern import ordered_set -from setuptools.extern.more_itertools import unique_everseen, partition - -import setuptools -import setuptools.command -from setuptools import windows_support -from setuptools.monkey import get_unpatched -from setuptools.config import setupcfg, pyprojecttoml -from setuptools.discovery import ConfigDiscovery - -from setuptools.extern.packaging import version -from . import _reqs -from . import _entry_points -from . import _normalization -from ._importlib import metadata -from .warnings import InformationOnly, SetuptoolsDeprecationWarning - -if TYPE_CHECKING: - from email.message import Message - -__import__('setuptools.extern.packaging.specifiers') -__import__('setuptools.extern.packaging.version') - - -def get_metadata_version(self): - mv = getattr(self, 'metadata_version', None) - if mv is None: - mv = version.Version('2.1') - self.metadata_version = mv - return mv - - -def rfc822_unescape(content: str) -> str: - """Reverse RFC-822 escaping by removing leading whitespaces from content.""" - lines = content.splitlines() - if len(lines) == 1: - return lines[0].lstrip() - return '\n'.join((lines[0].lstrip(), textwrap.dedent('\n'.join(lines[1:])))) - - -def _read_field_from_msg(msg: "Message", field: str) -> Optional[str]: - """Read Message header field.""" - value = msg[field] - if value == 'UNKNOWN': - return None - return value - - -def _read_field_unescaped_from_msg(msg: "Message", field: str) -> Optional[str]: - """Read Message header field and apply rfc822_unescape.""" - value = _read_field_from_msg(msg, field) - if value is None: - return value - return rfc822_unescape(value) - - -def _read_list_from_msg(msg: "Message", field: str) -> Optional[List[str]]: - """Read Message header field and return all results as list.""" - values = msg.get_all(field, None) - if values == []: - return None - return values - - -def _read_payload_from_msg(msg: "Message") -> Optional[str]: - value = msg.get_payload().strip() - if value == 'UNKNOWN' or not value: - return None - return value - - -def read_pkg_file(self, file): - """Reads the metadata values from a file object.""" - msg = message_from_file(file) - - self.metadata_version = version.Version(msg['metadata-version']) - self.name = _read_field_from_msg(msg, 'name') - self.version = _read_field_from_msg(msg, 'version') - self.description = _read_field_from_msg(msg, 'summary') - # we are filling author only. 
- self.author = _read_field_from_msg(msg, 'author') - self.maintainer = None - self.author_email = _read_field_from_msg(msg, 'author-email') - self.maintainer_email = None - self.url = _read_field_from_msg(msg, 'home-page') - self.download_url = _read_field_from_msg(msg, 'download-url') - self.license = _read_field_unescaped_from_msg(msg, 'license') - - self.long_description = _read_field_unescaped_from_msg(msg, 'description') - if ( - self.long_description is None and - self.metadata_version >= version.Version('2.1') - ): - self.long_description = _read_payload_from_msg(msg) - self.description = _read_field_from_msg(msg, 'summary') - - if 'keywords' in msg: - self.keywords = _read_field_from_msg(msg, 'keywords').split(',') - - self.platforms = _read_list_from_msg(msg, 'platform') - self.classifiers = _read_list_from_msg(msg, 'classifier') - - # PEP 314 - these fields only exist in 1.1 - if self.metadata_version == version.Version('1.1'): - self.requires = _read_list_from_msg(msg, 'requires') - self.provides = _read_list_from_msg(msg, 'provides') - self.obsoletes = _read_list_from_msg(msg, 'obsoletes') - else: - self.requires = None - self.provides = None - self.obsoletes = None - - self.license_files = _read_list_from_msg(msg, 'license-file') - - -def single_line(val): - """ - Quick and dirty validation for Summary pypa/setuptools#1390. - """ - if '\n' in val: - # TODO: Replace with `raise ValueError("newlines not allowed")` - # after reviewing #2893. - msg = "newlines are not allowed in `summary` and will break in the future" - SetuptoolsDeprecationWarning.emit("Invalid config.", msg) - # due_date is undefined. Controversial change, there was a lot of push back. - val = val.strip().split('\n')[0] - return val - - -# Based on Python 3.5 version -def write_pkg_file(self, file): # noqa: C901 # is too complex (14) # FIXME - """Write the PKG-INFO format data to a file object.""" - version = self.get_metadata_version() - - def write_field(key, value): - file.write("%s: %s\n" % (key, value)) - - write_field('Metadata-Version', str(version)) - write_field('Name', self.get_name()) - write_field('Version', self.get_version()) - - summary = self.get_description() - if summary: - write_field('Summary', single_line(summary)) - - optional_fields = ( - ('Home-page', 'url'), - ('Download-URL', 'download_url'), - ('Author', 'author'), - ('Author-email', 'author_email'), - ('Maintainer', 'maintainer'), - ('Maintainer-email', 'maintainer_email'), - ) - - for field, attr in optional_fields: - attr_val = getattr(self, attr, None) - if attr_val is not None: - write_field(field, attr_val) - - license = self.get_license() - if license: - write_field('License', rfc822_escape(license)) - - for project_url in self.project_urls.items(): - write_field('Project-URL', '%s, %s' % project_url) - - keywords = ','.join(self.get_keywords()) - if keywords: - write_field('Keywords', keywords) - - platforms = self.get_platforms() or [] - for platform in platforms: - write_field('Platform', platform) - - self._write_list(file, 'Classifier', self.get_classifiers()) - - # PEP 314 - self._write_list(file, 'Requires', self.get_requires()) - self._write_list(file, 'Provides', self.get_provides()) - self._write_list(file, 'Obsoletes', self.get_obsoletes()) - - # Setuptools specific for PEP 345 - if hasattr(self, 'python_requires'): - write_field('Requires-Python', self.python_requires) - - # PEP 566 - if self.long_description_content_type: - write_field('Description-Content-Type', self.long_description_content_type) - if 
self.provides_extras: - for extra in self.provides_extras: - write_field('Provides-Extra', extra) - - self._write_list(file, 'License-File', self.license_files or []) - - long_description = self.get_long_description() - if long_description: - file.write("\n%s" % long_description) - if not long_description.endswith("\n"): - file.write("\n") - - -sequence = tuple, list - - -def check_importable(dist, attr, value): - try: - ep = metadata.EntryPoint(value=value, name=None, group=None) - assert not ep.extras - except (TypeError, ValueError, AttributeError, AssertionError) as e: - raise DistutilsSetupError( - "%r must be importable 'module:attrs' string (got %r)" % (attr, value) - ) from e - - -def assert_string_list(dist, attr, value): - """Verify that value is a string list""" - try: - # verify that value is a list or tuple to exclude unordered - # or single-use iterables - assert isinstance(value, (list, tuple)) - # verify that elements of value are strings - assert ''.join(value) != value - except (TypeError, ValueError, AttributeError, AssertionError) as e: - raise DistutilsSetupError( - "%r must be a list of strings (got %r)" % (attr, value) - ) from e - - -def check_nsp(dist, attr, value): - """Verify that namespace packages are valid""" - ns_packages = value - assert_string_list(dist, attr, ns_packages) - for nsp in ns_packages: - if not dist.has_contents_for(nsp): - raise DistutilsSetupError( - "Distribution contains no modules or packages for " - + "namespace package %r" % nsp - ) - parent, sep, child = nsp.rpartition('.') - if parent and parent not in ns_packages: - distutils.log.warn( - "WARNING: %r is declared as a package namespace, but %r" - " is not: please correct this in setup.py", - nsp, - parent, - ) - SetuptoolsDeprecationWarning.emit( - "The namespace_packages parameter is deprecated.", - "Please replace its usage with implicit namespaces (PEP 420).", - see_docs="references/keywords.html#keyword-namespace-packages" - # TODO: define due_date, it may break old packages that are no longer - # maintained (e.g. sphinxcontrib extensions) when installed from source. - # Warning officially introduced in May 2022, however the deprecation - # was mentioned much earlier in the docs (May 2020, see #2149). - ) - - -def check_extras(dist, attr, value): - """Verify that extras_require mapping is valid""" - try: - list(itertools.starmap(_check_extra, value.items())) - except (TypeError, ValueError, AttributeError) as e: - raise DistutilsSetupError( - "'extras_require' must be a dictionary whose values are " - "strings or lists of strings containing valid project/version " - "requirement specifiers." - ) from e - - -def _check_extra(extra, reqs): - name, sep, marker = extra.partition(':') - try: - _check_marker(marker) - except packaging.markers.InvalidMarker: - msg = f"Invalid environment marker: {marker} ({extra!r})" - raise DistutilsSetupError(msg) from None - list(_reqs.parse(reqs)) - - -def _check_marker(marker): - if not marker: - return - m = packaging.markers.Marker(marker) - m.evaluate() - - -def assert_bool(dist, attr, value): - """Verify that value is True, False, 0, or 1""" - if bool(value) != value: - tmpl = "{attr!r} must be a boolean value (got {value!r})" - raise DistutilsSetupError(tmpl.format(attr=attr, value=value)) - - -def invalid_unless_false(dist, attr, value): - if not value: - DistDeprecationWarning.emit(f"{attr} is ignored.") - # TODO: should there be a `due_date` here? 
- return - raise DistutilsSetupError(f"{attr} is invalid.") - - -def check_requirements(dist, attr, value): - """Verify that install_requires is a valid requirements list""" - try: - list(_reqs.parse(value)) - if isinstance(value, (dict, set)): - raise TypeError("Unordered types are not allowed") - except (TypeError, ValueError) as error: - tmpl = ( - "{attr!r} must be a string or list of strings " - "containing valid project/version requirement specifiers; {error}" - ) - raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) from error - - -def check_specifier(dist, attr, value): - """Verify that value is a valid version specifier""" - try: - packaging.specifiers.SpecifierSet(value) - except (packaging.specifiers.InvalidSpecifier, AttributeError) as error: - tmpl = ( - "{attr!r} must be a string " "containing valid version specifiers; {error}" - ) - raise DistutilsSetupError(tmpl.format(attr=attr, error=error)) from error - - -def check_entry_points(dist, attr, value): - """Verify that entry_points map is parseable""" - try: - _entry_points.load(value) - except Exception as e: - raise DistutilsSetupError(e) from e - - -def check_test_suite(dist, attr, value): - if not isinstance(value, str): - raise DistutilsSetupError("test_suite must be a string") - - -def check_package_data(dist, attr, value): - """Verify that value is a dictionary of package names to glob lists""" - if not isinstance(value, dict): - raise DistutilsSetupError( - "{!r} must be a dictionary mapping package names to lists of " - "string wildcard patterns".format(attr) - ) - for k, v in value.items(): - if not isinstance(k, str): - raise DistutilsSetupError( - "keys of {!r} dict must be strings (got {!r})".format(attr, k) - ) - assert_string_list(dist, 'values of {!r} dict'.format(attr), v) - - -def check_packages(dist, attr, value): - for pkgname in value: - if not re.match(r'\w+(\.\w+)*', pkgname): - distutils.log.warn( - "WARNING: %r not a valid package name; please use only " - ".-separated package names in setup.py", - pkgname, - ) - - -_Distribution = get_unpatched(distutils.core.Distribution) - - -class Distribution(_Distribution): - """Distribution with support for tests and package data - - This is an enhanced version of 'distutils.dist.Distribution' that - effectively adds the following new optional keyword arguments to 'setup()': - - 'install_requires' -- a string or sequence of strings specifying project - versions that the distribution requires when installed, in the format - used by 'pkg_resources.require()'. They will be installed - automatically when the package is installed. If you wish to use - packages that are not available in PyPI, or want to give your users an - alternate download location, you can add a 'find_links' option to the - '[easy_install]' section of your project's 'setup.cfg' file, and then - setuptools will scan the listed web pages for links that satisfy the - requirements. - - 'extras_require' -- a dictionary mapping names of optional "extras" to the - additional requirement(s) that using those extras incurs. For example, - this:: - - extras_require = dict(reST = ["docutils>=0.3", "reSTedit"]) - - indicates that the distribution can optionally provide an extra - capability called "reST", but it can only be used if docutils and - reSTedit are installed. If the user installs your package using - EasyInstall and requests one of your extras, the corresponding - additional requirements will be installed if needed. 
- - 'test_suite' -- the name of a test suite to run for the 'test' command. - If the user runs 'python setup.py test', the package will be installed, - and the named test suite will be run. The format is the same as - would be used on a 'unittest.py' command line. That is, it is the - dotted name of an object to import and call to generate a test suite. - - 'package_data' -- a dictionary mapping package names to lists of filenames - or globs to use to find data files contained in the named packages. - If the dictionary has filenames or globs listed under '""' (the empty - string), those names will be searched for in every package, in addition - to any names for the specific package. Data files found using these - names/globs will be installed along with the package, in the same - location as the package. Note that globs are allowed to reference - the contents of non-package subdirectories, as long as you use '/' as - a path separator. (Globs are automatically converted to - platform-specific paths at runtime.) - - In addition to these new keywords, this class also has several new methods - for manipulating the distribution's contents. For example, the 'include()' - and 'exclude()' methods can be thought of as in-place add and subtract - commands that add or remove packages, modules, extensions, and so on from - the distribution. - """ - - _DISTUTILS_UNSUPPORTED_METADATA = { - 'long_description_content_type': lambda: None, - 'project_urls': dict, - 'provides_extras': ordered_set.OrderedSet, - 'license_file': lambda: None, - 'license_files': lambda: None, - } - - _patched_dist = None - - def patch_missing_pkg_info(self, attrs): - # Fake up a replacement for the data that would normally come from - # PKG-INFO, but which might not yet be built if this is a fresh - # checkout. - # - if not attrs or 'name' not in attrs or 'version' not in attrs: - return - name = _normalization.safe_name(str(attrs['name'])).lower() - with suppress(metadata.PackageNotFoundError): - dist = metadata.distribution(name) - if dist is not None and not dist.read_text('PKG-INFO'): - dist._version = _normalization.safe_version(str(attrs['version'])) - self._patched_dist = dist - - def __init__(self, attrs=None): - have_package_data = hasattr(self, "package_data") - if not have_package_data: - self.package_data = {} - attrs = attrs or {} - self.dist_files = [] - # Filter-out setuptools' specific options. - self.src_root = attrs.pop("src_root", None) - self.patch_missing_pkg_info(attrs) - self.dependency_links = attrs.pop('dependency_links', []) - self.setup_requires = attrs.pop('setup_requires', []) - for ep in metadata.entry_points(group='distutils.setup_keywords'): - vars(self).setdefault(ep.name, None) - _Distribution.__init__( - self, - { - k: v - for k, v in attrs.items() - if k not in self._DISTUTILS_UNSUPPORTED_METADATA - }, - ) - - # Private API (setuptools-use only, not restricted to Distribution) - # Stores files that are referenced by the configuration and need to be in the - # sdist (e.g. 
`version = file: VERSION.txt`) - self._referenced_files: Set[str] = set() - - # Save the original dependencies before they are processed into the egg format - self._orig_extras_require = {} - self._orig_install_requires = [] - self._tmp_extras_require = defaultdict(ordered_set.OrderedSet) - - self.set_defaults = ConfigDiscovery(self) - - self._set_metadata_defaults(attrs) - - self.metadata.version = self._normalize_version( - self._validate_version(self.metadata.version) - ) - self._finalize_requires() - - def _validate_metadata(self): - required = {"name"} - provided = { - key - for key in vars(self.metadata) - if getattr(self.metadata, key, None) is not None - } - missing = required - provided - - if missing: - msg = f"Required package metadata is missing: {missing}" - raise DistutilsSetupError(msg) - - def _set_metadata_defaults(self, attrs): - """ - Fill-in missing metadata fields not supported by distutils. - Some fields may have been set by other tools (e.g. pbr). - Those fields (vars(self.metadata)) take precedence to - supplied attrs. - """ - for option, default in self._DISTUTILS_UNSUPPORTED_METADATA.items(): - vars(self.metadata).setdefault(option, attrs.get(option, default())) - - @staticmethod - def _normalize_version(version): - if isinstance(version, setuptools.sic) or version is None: - return version - - normalized = str(packaging.version.Version(version)) - if version != normalized: - InformationOnly.emit(f"Normalizing '{version}' to '{normalized}'") - return normalized - return version - - @staticmethod - def _validate_version(version): - if isinstance(version, numbers.Number): - # Some people apparently take "version number" too literally :) - version = str(version) - - if version is not None: - try: - packaging.version.Version(version) - except (packaging.version.InvalidVersion, TypeError): - SetuptoolsDeprecationWarning.emit( - f"Invalid version: {version!r}.", - """ - The version specified is not a valid version according to PEP 440. - This may not work as expected with newer versions of - setuptools, pip, and PyPI. - """, - see_url="https://peps.python.org/pep-0440/", - due_date=(2023, 9, 26), - # Warning initially introduced in 26 Sept 2014 - # pypa/packaging already removed legacy versions. - ) - return setuptools.sic(version) - return version - - def _finalize_requires(self): - """ - Set `metadata.python_requires` and fix environment markers - in `install_requires` and `extras_require`. - """ - if getattr(self, 'python_requires', None): - self.metadata.python_requires = self.python_requires - - if getattr(self, 'extras_require', None): - # Save original before it is messed by _convert_extras_requirements - self._orig_extras_require = self._orig_extras_require or self.extras_require - for extra in self.extras_require.keys(): - # Since this gets called multiple times at points where the - # keys have become 'converted' extras, ensure that we are only - # truly adding extras we haven't seen before here. - extra = extra.split(':')[0] - if extra: - self.metadata.provides_extras.add(extra) - - if getattr(self, 'install_requires', None) and not self._orig_install_requires: - # Save original before it is messed by _move_install_requirements_markers - self._orig_install_requires = self.install_requires - - self._convert_extras_requirements() - self._move_install_requirements_markers() - - def _convert_extras_requirements(self): - """ - Convert requirements in `extras_require` of the form - `"extra": ["barbazquux; {marker}"]` to - `"extra:{marker}": ["barbazquux"]`. 
- """ - spec_ext_reqs = getattr(self, 'extras_require', None) or {} - tmp = defaultdict(ordered_set.OrderedSet) - self._tmp_extras_require = getattr(self, '_tmp_extras_require', tmp) - for section, v in spec_ext_reqs.items(): - # Do not strip empty sections. - self._tmp_extras_require[section] - for r in _reqs.parse(v): - suffix = self._suffix_for(r) - self._tmp_extras_require[section + suffix].append(r) - - @staticmethod - def _suffix_for(req): - """ - For a requirement, return the 'extras_require' suffix for - that requirement. - """ - return ':' + str(req.marker) if req.marker else '' - - def _move_install_requirements_markers(self): - """ - Move requirements in `install_requires` that are using environment - markers `extras_require`. - """ - - # divide the install_requires into two sets, simple ones still - # handled by install_requires and more complex ones handled - # by extras_require. - - def is_simple_req(req): - return not req.marker - - spec_inst_reqs = getattr(self, 'install_requires', None) or () - inst_reqs = list(_reqs.parse(spec_inst_reqs)) - simple_reqs = filter(is_simple_req, inst_reqs) - complex_reqs = itertools.filterfalse(is_simple_req, inst_reqs) - self.install_requires = list(map(str, simple_reqs)) - - for r in complex_reqs: - self._tmp_extras_require[':' + str(r.marker)].append(r) - self.extras_require = dict( - # list(dict.fromkeys(...)) ensures a list of unique strings - (k, list(dict.fromkeys(str(r) for r in map(self._clean_req, v)))) - for k, v in self._tmp_extras_require.items() - ) - - def _clean_req(self, req): - """ - Given a Requirement, remove environment markers and return it. - """ - req.marker = None - return req - - def _finalize_license_files(self): - """Compute names of all license files which should be included.""" - license_files: Optional[List[str]] = self.metadata.license_files - patterns: List[str] = license_files if license_files else [] - - license_file: Optional[str] = self.metadata.license_file - if license_file and license_file not in patterns: - patterns.append(license_file) - - if license_files is None and license_file is None: - # Default patterns match the ones wheel uses - # See https://wheel.readthedocs.io/en/stable/user_guide.html - # -> 'Including license files in the generated wheel file' - patterns = ('LICEN[CS]E*', 'COPYING*', 'NOTICE*', 'AUTHORS*') - - self.metadata.license_files = list( - unique_everseen(self._expand_patterns(patterns)) - ) - - @staticmethod - def _expand_patterns(patterns): - """ - >>> list(Distribution._expand_patterns(['LICENSE'])) - ['LICENSE'] - >>> list(Distribution._expand_patterns(['setup.cfg', 'LIC*'])) - ['setup.cfg', 'LICENSE'] - """ - return ( - path - for pattern in patterns - for path in sorted(iglob(pattern)) - if not path.endswith('~') and os.path.isfile(path) - ) - - # FIXME: 'Distribution._parse_config_files' is too complex (14) - def _parse_config_files(self, filenames=None): # noqa: C901 - """ - Adapted from distutils.dist.Distribution.parse_config_files, - this method provides the same functionality in subtly-improved - ways. 
- """ - from configparser import ConfigParser - - # Ignore install directory options if we have a venv - ignore_options = ( - [] - if sys.prefix == sys.base_prefix - else [ - 'install-base', - 'install-platbase', - 'install-lib', - 'install-platlib', - 'install-purelib', - 'install-headers', - 'install-scripts', - 'install-data', - 'prefix', - 'exec-prefix', - 'home', - 'user', - 'root', - ] - ) - - ignore_options = frozenset(ignore_options) - - if filenames is None: - filenames = self.find_config_files() - - if DEBUG: - self.announce("Distribution.parse_config_files():") - - parser = ConfigParser() - parser.optionxform = str - for filename in filenames: - with io.open(filename, encoding='utf-8') as reader: - if DEBUG: - self.announce(" reading {filename}".format(**locals())) - parser.read_file(reader) - for section in parser.sections(): - options = parser.options(section) - opt_dict = self.get_option_dict(section) - - for opt in options: - if opt == '__name__' or opt in ignore_options: - continue - - val = parser.get(section, opt) - opt = self.warn_dash_deprecation(opt, section) - opt = self.make_option_lowercase(opt, section) - opt_dict[opt] = (filename, val) - - # Make the ConfigParser forget everything (so we retain - # the original filenames that options come from) - parser.__init__() - - if 'global' not in self.command_options: - return - - # If there was a "global" section in the config file, use it - # to set Distribution options. - - for (opt, (src, val)) in self.command_options['global'].items(): - alias = self.negative_opt.get(opt) - if alias: - val = not strtobool(val) - elif opt in ('verbose', 'dry_run'): # ugh! - val = strtobool(val) - - try: - setattr(self, alias or opt, val) - except ValueError as e: - raise DistutilsOptionError(e) from e - - def warn_dash_deprecation(self, opt, section): - if section in ( - 'options.extras_require', - 'options.data_files', - ): - return opt - - underscore_opt = opt.replace('-', '_') - commands = list(itertools.chain( - distutils.command.__all__, - self._setuptools_commands(), - )) - if ( - not section.startswith('options') - and section != 'metadata' - and section not in commands - ): - return underscore_opt - - if '-' in opt: - SetuptoolsDeprecationWarning.emit( - "Invalid dash-separated options", - f""" - Usage of dash-separated {opt!r} will not be supported in future - versions. Please use the underscore name {underscore_opt!r} instead. - """, - see_docs="userguide/declarative_config.html", - due_date=(2023, 9, 26), - # Warning initially introduced in 3 Mar 2021 - ) - return underscore_opt - - def _setuptools_commands(self): - try: - return metadata.distribution('setuptools').entry_points.names - except metadata.PackageNotFoundError: - # during bootstrapping, distribution doesn't exist - return [] - - def make_option_lowercase(self, opt, section): - if section != 'metadata' or opt.islower(): - return opt - - lowercase_opt = opt.lower() - SetuptoolsDeprecationWarning.emit( - "Invalid uppercase configuration", - f""" - Usage of uppercase key {opt!r} in {section!r} will not be supported in - future versions. Please use lowercase {lowercase_opt!r} instead. - """, - see_docs="userguide/declarative_config.html", - due_date=(2023, 9, 26), - # Warning initially introduced in 6 Mar 2021 - ) - return lowercase_opt - - # FIXME: 'Distribution._set_command_options' is too complex (14) - def _set_command_options(self, command_obj, option_dict=None): # noqa: C901 - """ - Set the options for 'command_obj' from 'option_dict'. 
Basically - this means copying elements of a dictionary ('option_dict') to - attributes of an instance ('command'). - - 'command_obj' must be a Command instance. If 'option_dict' is not - supplied, uses the standard option dictionary for this command - (from 'self.command_options'). - - (Adopted from distutils.dist.Distribution._set_command_options) - """ - command_name = command_obj.get_command_name() - if option_dict is None: - option_dict = self.get_option_dict(command_name) - - if DEBUG: - self.announce(" setting options for '%s' command:" % command_name) - for (option, (source, value)) in option_dict.items(): - if DEBUG: - self.announce(" %s = %s (from %s)" % (option, value, source)) - try: - bool_opts = [translate_longopt(o) for o in command_obj.boolean_options] - except AttributeError: - bool_opts = [] - try: - neg_opt = command_obj.negative_opt - except AttributeError: - neg_opt = {} - - try: - is_string = isinstance(value, str) - if option in neg_opt and is_string: - setattr(command_obj, neg_opt[option], not strtobool(value)) - elif option in bool_opts and is_string: - setattr(command_obj, option, strtobool(value)) - elif hasattr(command_obj, option): - setattr(command_obj, option, value) - else: - raise DistutilsOptionError( - "error in %s: command '%s' has no such option '%s'" - % (source, command_name, option) - ) - except ValueError as e: - raise DistutilsOptionError(e) from e - - def _get_project_config_files(self, filenames): - """Add default file and split between INI and TOML""" - tomlfiles = [] - standard_project_metadata = Path(self.src_root or os.curdir, "pyproject.toml") - if filenames is not None: - parts = partition(lambda f: Path(f).suffix == ".toml", filenames) - filenames = list(parts[0]) # 1st element => predicate is False - tomlfiles = list(parts[1]) # 2nd element => predicate is True - elif standard_project_metadata.exists(): - tomlfiles = [standard_project_metadata] - return filenames, tomlfiles - - def parse_config_files(self, filenames=None, ignore_option_errors=False): - """Parses configuration files from various levels - and loads configuration. - """ - inifiles, tomlfiles = self._get_project_config_files(filenames) - - self._parse_config_files(filenames=inifiles) - - setupcfg.parse_configuration( - self, self.command_options, ignore_option_errors=ignore_option_errors - ) - for filename in tomlfiles: - pyprojecttoml.apply_configuration(self, filename, ignore_option_errors) - - self._finalize_requires() - self._finalize_license_files() - - def fetch_build_eggs(self, requires): - """Resolve pre-setup requirements""" - from setuptools.installer import _fetch_build_eggs - - return _fetch_build_eggs(self, requires) - - def finalize_options(self): - """ - Allow plugins to apply arbitrary operations to the - distribution. Each hook may optionally define a 'order' - to influence the order of execution. Smaller numbers - go first and the default is 0. - """ - group = 'setuptools.finalize_distribution_options' - - def by_order(hook): - return getattr(hook, 'order', 0) - - defined = metadata.entry_points(group=group) - filtered = itertools.filterfalse(self._removed, defined) - loaded = map(lambda e: e.load(), filtered) - for ep in sorted(loaded, key=by_order): - ep(self) - - @staticmethod - def _removed(ep): - """ - When removing an entry point, if metadata is loaded - from an older version of Setuptools, that removed - entry point will attempt to be loaded and will fail. - See #2765 for more details. 
- """ - removed = { - # removed 2021-09-05 - '2to3_doctests', - } - return ep.name in removed - - def _finalize_setup_keywords(self): - for ep in metadata.entry_points(group='distutils.setup_keywords'): - value = getattr(self, ep.name, None) - if value is not None: - ep.load()(self, ep.name, value) - - def get_egg_cache_dir(self): - egg_cache_dir = os.path.join(os.curdir, '.eggs') - if not os.path.exists(egg_cache_dir): - os.mkdir(egg_cache_dir) - windows_support.hide_file(egg_cache_dir) - readme_txt_filename = os.path.join(egg_cache_dir, 'README.txt') - with open(readme_txt_filename, 'w') as f: - f.write( - 'This directory contains eggs that were downloaded ' - 'by setuptools to build, test, and run plug-ins.\n\n' - ) - f.write( - 'This directory caches those eggs to prevent ' - 'repeated downloads.\n\n' - ) - f.write('However, it is safe to delete this directory.\n\n') - - return egg_cache_dir - - def fetch_build_egg(self, req): - """Fetch an egg needed for building""" - from setuptools.installer import fetch_build_egg - - return fetch_build_egg(self, req) - - def get_command_class(self, command): - """Pluggable version of get_command_class()""" - if command in self.cmdclass: - return self.cmdclass[command] - - eps = metadata.entry_points(group='distutils.commands', name=command) - for ep in eps: - self.cmdclass[command] = cmdclass = ep.load() - return cmdclass - else: - return _Distribution.get_command_class(self, command) - - def print_commands(self): - for ep in metadata.entry_points(group='distutils.commands'): - if ep.name not in self.cmdclass: - cmdclass = ep.load() - self.cmdclass[ep.name] = cmdclass - return _Distribution.print_commands(self) - - def get_command_list(self): - for ep in metadata.entry_points(group='distutils.commands'): - if ep.name not in self.cmdclass: - cmdclass = ep.load() - self.cmdclass[ep.name] = cmdclass - return _Distribution.get_command_list(self) - - def include(self, **attrs): - """Add items to distribution that are named in keyword arguments - - For example, 'dist.include(py_modules=["x"])' would add 'x' to - the distribution's 'py_modules' attribute, if it was not already - there. - - Currently, this method only supports inclusion for attributes that are - lists or tuples. If you need to add support for adding to other - attributes in this or a subclass, you can add an '_include_X' method, - where 'X' is the name of the attribute. The method will be called with - the value passed to 'include()'. So, 'dist.include(foo={"bar":"baz"})' - will try to call 'dist._include_foo({"bar":"baz"})', which can then - handle whatever special inclusion logic is needed. - """ - for k, v in attrs.items(): - include = getattr(self, '_include_' + k, None) - if include: - include(v) - else: - self._include_misc(k, v) - - def exclude_package(self, package): - """Remove packages, modules, and extensions in named package""" - - pfx = package + '.' - if self.packages: - self.packages = [ - p for p in self.packages if p != package and not p.startswith(pfx) - ] - - if self.py_modules: - self.py_modules = [ - p for p in self.py_modules if p != package and not p.startswith(pfx) - ] - - if self.ext_modules: - self.ext_modules = [ - p - for p in self.ext_modules - if p.name != package and not p.name.startswith(pfx) - ] - - def has_contents_for(self, package): - """Return true if 'exclude_package(package)' would do something""" - - pfx = package + '.' 
- - for p in self.iter_distribution_names(): - if p == package or p.startswith(pfx): - return True - - def _exclude_misc(self, name, value): - """Handle 'exclude()' for list/tuple attrs without a special handler""" - if not isinstance(value, sequence): - raise DistutilsSetupError( - "%s: setting must be a list or tuple (%r)" % (name, value) - ) - try: - old = getattr(self, name) - except AttributeError as e: - raise DistutilsSetupError("%s: No such distribution setting" % name) from e - if old is not None and not isinstance(old, sequence): - raise DistutilsSetupError( - name + ": this setting cannot be changed via include/exclude" - ) - elif old: - setattr(self, name, [item for item in old if item not in value]) - - def _include_misc(self, name, value): - """Handle 'include()' for list/tuple attrs without a special handler""" - - if not isinstance(value, sequence): - raise DistutilsSetupError("%s: setting must be a list (%r)" % (name, value)) - try: - old = getattr(self, name) - except AttributeError as e: - raise DistutilsSetupError("%s: No such distribution setting" % name) from e - if old is None: - setattr(self, name, value) - elif not isinstance(old, sequence): - raise DistutilsSetupError( - name + ": this setting cannot be changed via include/exclude" - ) - else: - new = [item for item in value if item not in old] - setattr(self, name, old + new) - - def exclude(self, **attrs): - """Remove items from distribution that are named in keyword arguments - - For example, 'dist.exclude(py_modules=["x"])' would remove 'x' from - the distribution's 'py_modules' attribute. Excluding packages uses - the 'exclude_package()' method, so all of the package's contained - packages, modules, and extensions are also excluded. - - Currently, this method only supports exclusion from attributes that are - lists or tuples. If you need to add support for excluding from other - attributes in this or a subclass, you can add an '_exclude_X' method, - where 'X' is the name of the attribute. The method will be called with - the value passed to 'exclude()'. So, 'dist.exclude(foo={"bar":"baz"})' - will try to call 'dist._exclude_foo({"bar":"baz"})', which can then - handle whatever special exclusion logic is needed. - """ - for k, v in attrs.items(): - exclude = getattr(self, '_exclude_' + k, None) - if exclude: - exclude(v) - else: - self._exclude_misc(k, v) - - def _exclude_packages(self, packages): - if not isinstance(packages, sequence): - raise DistutilsSetupError( - "packages: setting must be a list or tuple (%r)" % (packages,) - ) - list(map(self.exclude_package, packages)) - - def _parse_command_opts(self, parser, args): - # Remove --with-X/--without-X options when processing command args - self.global_options = self.__class__.global_options - self.negative_opt = self.__class__.negative_opt - - # First, expand any aliases - command = args[0] - aliases = self.get_option_dict('aliases') - while command in aliases: - src, alias = aliases[command] - del aliases[command] # ensure each alias can expand only once! 
- import shlex - - args[:1] = shlex.split(alias, True) - command = args[0] - - nargs = _Distribution._parse_command_opts(self, parser, args) - - # Handle commands that want to consume all remaining arguments - cmd_class = self.get_command_class(command) - if getattr(cmd_class, 'command_consumes_arguments', None): - self.get_option_dict(command)['args'] = ("command line", nargs) - if nargs is not None: - return [] - - return nargs - - def get_cmdline_options(self): - """Return a '{cmd: {opt:val}}' map of all command-line options - - Option names are all long, but do not include the leading '--', and - contain dashes rather than underscores. If the option doesn't take - an argument (e.g. '--quiet'), the 'val' is 'None'. - - Note that options provided by config files are intentionally excluded. - """ - - d = {} - - for cmd, opts in self.command_options.items(): - - for opt, (src, val) in opts.items(): - - if src != "command line": - continue - - opt = opt.replace('_', '-') - - if val == 0: - cmdobj = self.get_command_obj(cmd) - neg_opt = self.negative_opt.copy() - neg_opt.update(getattr(cmdobj, 'negative_opt', {})) - for neg, pos in neg_opt.items(): - if pos == opt: - opt = neg - val = None - break - else: - raise AssertionError("Shouldn't be able to get here") - - elif val == 1: - val = None - - d.setdefault(cmd, {})[opt] = val - - return d - - def iter_distribution_names(self): - """Yield all packages, modules, and extension names in distribution""" - - for pkg in self.packages or (): - yield pkg - - for module in self.py_modules or (): - yield module - - for ext in self.ext_modules or (): - if isinstance(ext, tuple): - name, buildinfo = ext - else: - name = ext.name - if name.endswith('module'): - name = name[:-6] - yield name - - def handle_display_options(self, option_order): - """If there were any non-global "display-only" options - (--help-commands or the metadata display options) on the command - line, display the requested info and return true; else return - false. - """ - import sys - - if self.help_commands: - return _Distribution.handle_display_options(self, option_order) - - # Stdout may be StringIO (e.g. in tests) - if not isinstance(sys.stdout, io.TextIOWrapper): - return _Distribution.handle_display_options(self, option_order) - - # Don't wrap stdout if utf-8 is already the encoding. Provides - # workaround for #334. - if sys.stdout.encoding.lower() in ('utf-8', 'utf8'): - return _Distribution.handle_display_options(self, option_order) - - # Print metadata in UTF-8 no matter the platform - encoding = sys.stdout.encoding - sys.stdout.reconfigure(encoding='utf-8') - try: - return _Distribution.handle_display_options(self, option_order) - finally: - sys.stdout.reconfigure(encoding=encoding) - - def run_command(self, command): - self.set_defaults() - # Postpone defaults until all explicit configuration is considered - # (setup() args, config files, command line and plugins) - - super().run_command(command) - - -class DistDeprecationWarning(SetuptoolsDeprecationWarning): - """Class for warning about deprecations in dist in - setuptools. 
Not ignored by default, unlike DeprecationWarning.""" diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/panoptic_fpn.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/panoptic_fpn.py deleted file mode 100644 index 13aeabce162f4114109efe2c7fb4770b89087ab0..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/detectron2/modeling/meta_arch/panoptic_fpn.py +++ /dev/null @@ -1,266 +0,0 @@ -# -*- coding: utf-8 -*- -# Copyright (c) Facebook, Inc. and its affiliates. - -import logging -from typing import Dict, List -import torch -from torch import nn - -from detectron2.config import configurable -from detectron2.structures import ImageList - -from ..postprocessing import detector_postprocess, sem_seg_postprocess -from .build import META_ARCH_REGISTRY -from .rcnn import GeneralizedRCNN -from .semantic_seg import build_sem_seg_head - -__all__ = ["PanopticFPN"] - - -@META_ARCH_REGISTRY.register() -class PanopticFPN(GeneralizedRCNN): - """ - Implement the paper :paper:`PanopticFPN`. - """ - - @configurable - def __init__( - self, - *, - sem_seg_head: nn.Module, - combine_overlap_thresh: float = 0.5, - combine_stuff_area_thresh: float = 4096, - combine_instances_score_thresh: float = 0.5, - **kwargs, - ): - """ - NOTE: this interface is experimental. - - Args: - sem_seg_head: a module for the semantic segmentation head. - combine_overlap_thresh: combine masks into one instances if - they have enough overlap - combine_stuff_area_thresh: ignore stuff areas smaller than this threshold - combine_instances_score_thresh: ignore instances whose score is - smaller than this threshold - - Other arguments are the same as :class:`GeneralizedRCNN`. - """ - super().__init__(**kwargs) - self.sem_seg_head = sem_seg_head - # options when combining instance & semantic outputs - self.combine_overlap_thresh = combine_overlap_thresh - self.combine_stuff_area_thresh = combine_stuff_area_thresh - self.combine_instances_score_thresh = combine_instances_score_thresh - - @classmethod - def from_config(cls, cfg): - ret = super().from_config(cfg) - ret.update( - { - "combine_overlap_thresh": cfg.MODEL.PANOPTIC_FPN.COMBINE.OVERLAP_THRESH, - "combine_stuff_area_thresh": cfg.MODEL.PANOPTIC_FPN.COMBINE.STUFF_AREA_LIMIT, - "combine_instances_score_thresh": cfg.MODEL.PANOPTIC_FPN.COMBINE.INSTANCES_CONFIDENCE_THRESH, # noqa - } - ) - ret["sem_seg_head"] = build_sem_seg_head(cfg, ret["backbone"].output_shape()) - logger = logging.getLogger(__name__) - if not cfg.MODEL.PANOPTIC_FPN.COMBINE.ENABLED: - logger.warning( - "PANOPTIC_FPN.COMBINED.ENABLED is no longer used. " - " model.inference(do_postprocess=) should be used to toggle postprocessing." - ) - if cfg.MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT != 1.0: - w = cfg.MODEL.PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT - logger.warning( - "PANOPTIC_FPN.INSTANCE_LOSS_WEIGHT should be replaced by weights on each ROI head." - ) - - def update_weight(x): - if isinstance(x, dict): - return {k: v * w for k, v in x.items()} - else: - return x * w - - roi_heads = ret["roi_heads"] - roi_heads.box_predictor.loss_weight = update_weight(roi_heads.box_predictor.loss_weight) - roi_heads.mask_head.loss_weight = update_weight(roi_heads.mask_head.loss_weight) - return ret - - def forward(self, batched_inputs): - """ - Args: - batched_inputs: a list, batched outputs of :class:`DatasetMapper`. - Each item in the list contains the inputs for one image. 
- - For now, each item in the list is a dict that contains: - - * "image": Tensor, image in (C, H, W) format. - * "instances": Instances - * "sem_seg": semantic segmentation ground truth. - * Other information that's included in the original dicts, such as: - "height", "width" (int): the output resolution of the model, used in inference. - See :meth:`postprocess` for details. - - Returns: - list[dict]: - each dict has the results for one image. The dict contains the following keys: - - * "instances": see :meth:`GeneralizedRCNN.forward` for its format. - * "sem_seg": see :meth:`SemanticSegmentor.forward` for its format. - * "panoptic_seg": See the return value of - :func:`combine_semantic_and_instance_outputs` for its format. - """ - if not self.training: - return self.inference(batched_inputs) - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - - assert "sem_seg" in batched_inputs[0] - gt_sem_seg = [x["sem_seg"].to(self.device) for x in batched_inputs] - gt_sem_seg = ImageList.from_tensors( - gt_sem_seg, self.backbone.size_divisibility, self.sem_seg_head.ignore_value - ).tensor - sem_seg_results, sem_seg_losses = self.sem_seg_head(features, gt_sem_seg) - - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - proposals, proposal_losses = self.proposal_generator(images, features, gt_instances) - detector_results, detector_losses = self.roi_heads( - images, features, proposals, gt_instances - ) - - losses = sem_seg_losses - losses.update(proposal_losses) - losses.update(detector_losses) - return losses - - def inference(self, batched_inputs: List[Dict[str, torch.Tensor]], do_postprocess: bool = True): - """ - Run inference on the given inputs. - - Args: - batched_inputs (list[dict]): same as in :meth:`forward` - do_postprocess (bool): whether to apply post-processing on the outputs. - - Returns: - When do_postprocess=True, see docs in :meth:`forward`. - Otherwise, returns a (list[Instances], list[Tensor]) that contains - the raw detector outputs, and raw semantic segmentation outputs. 
- """ - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - sem_seg_results, sem_seg_losses = self.sem_seg_head(features, None) - proposals, _ = self.proposal_generator(images, features, None) - detector_results, _ = self.roi_heads(images, features, proposals, None) - - if do_postprocess: - processed_results = [] - for sem_seg_result, detector_result, input_per_image, image_size in zip( - sem_seg_results, detector_results, batched_inputs, images.image_sizes - ): - height = input_per_image.get("height", image_size[0]) - width = input_per_image.get("width", image_size[1]) - sem_seg_r = sem_seg_postprocess(sem_seg_result, image_size, height, width) - detector_r = detector_postprocess(detector_result, height, width) - - processed_results.append({"sem_seg": sem_seg_r, "instances": detector_r}) - - panoptic_r = combine_semantic_and_instance_outputs( - detector_r, - sem_seg_r.argmax(dim=0), - self.combine_overlap_thresh, - self.combine_stuff_area_thresh, - self.combine_instances_score_thresh, - ) - processed_results[-1]["panoptic_seg"] = panoptic_r - return processed_results - else: - return detector_results, sem_seg_results - - -def combine_semantic_and_instance_outputs( - instance_results, - semantic_results, - overlap_threshold, - stuff_area_thresh, - instances_score_thresh, -): - """ - Implement a simple combining logic following - "combine_semantic_and_instance_predictions.py" in panopticapi - to produce panoptic segmentation outputs. - - Args: - instance_results: output of :func:`detector_postprocess`. - semantic_results: an (H, W) tensor, each element is the contiguous semantic - category id - - Returns: - panoptic_seg (Tensor): of shape (height, width) where the values are ids for each segment. - segments_info (list[dict]): Describe each segment in `panoptic_seg`. - Each dict contains keys "id", "category_id", "isthing". 
- """ - panoptic_seg = torch.zeros_like(semantic_results, dtype=torch.int32) - - # sort instance outputs by scores - sorted_inds = torch.argsort(-instance_results.scores) - - current_segment_id = 0 - segments_info = [] - - instance_masks = instance_results.pred_masks.to(dtype=torch.bool, device=panoptic_seg.device) - - # Add instances one-by-one, check for overlaps with existing ones - for inst_id in sorted_inds: - score = instance_results.scores[inst_id].item() - if score < instances_score_thresh: - break - mask = instance_masks[inst_id] # H,W - mask_area = mask.sum().item() - - if mask_area == 0: - continue - - intersect = (mask > 0) & (panoptic_seg > 0) - intersect_area = intersect.sum().item() - - if intersect_area * 1.0 / mask_area > overlap_threshold: - continue - - if intersect_area > 0: - mask = mask & (panoptic_seg == 0) - - current_segment_id += 1 - panoptic_seg[mask] = current_segment_id - segments_info.append( - { - "id": current_segment_id, - "isthing": True, - "score": score, - "category_id": instance_results.pred_classes[inst_id].item(), - "instance_id": inst_id.item(), - } - ) - - # Add semantic results to remaining empty areas - semantic_labels = torch.unique(semantic_results).cpu().tolist() - for semantic_label in semantic_labels: - if semantic_label == 0: # 0 is a special "thing" class - continue - mask = (semantic_results == semantic_label) & (panoptic_seg == 0) - mask_area = mask.sum().item() - if mask_area < stuff_area_thresh: - continue - - current_segment_id += 1 - panoptic_seg[mask] = current_segment_id - segments_info.append( - { - "id": current_segment_id, - "isthing": False, - "category_id": semantic_label, - "area": mask_area, - } - ) - - return panoptic_seg, segments_info diff --git a/spaces/Tinki/text_generator/app.py b/spaces/Tinki/text_generator/app.py deleted file mode 100644 index 96d20ef5b60cd0ea1d6917014dc18215fce042e3..0000000000000000000000000000000000000000 --- a/spaces/Tinki/text_generator/app.py +++ /dev/null @@ -1,12 +0,0 @@ -import gradio as gr -from gradio.mix import Parallel - -myfirstvariable="My First Text Generator" -mylovelysecondvariable="Input text and submit." 
- -model1=gr.Interface.load("huggingface/gpt2") -model2=gr.Interface.load("huggingface/EleutherAI/gpt-neo-1.3B") -model3=gr.Interface.load("huggingface/bigscience/bloom-560m") - - -gr.Parallel(model1, model2, model3, title=myfirstvariable, description=mylovelysecondvariable).launch() \ No newline at end of file diff --git a/spaces/Treav/DICOMDeidentify2/app.py b/spaces/Treav/DICOMDeidentify2/app.py deleted file mode 100644 index 8bf1114d2854dbd332bb310a68640b87e890bebf..0000000000000000000000000000000000000000 --- a/spaces/Treav/DICOMDeidentify2/app.py +++ /dev/null @@ -1,51 +0,0 @@ -import gradio as gr -import glob -from pathlib import Path -import matplotlib.pyplot as plt -import pydicom -from presidio_image_redactor import DicomImageRedactorEngine -import pytesseract -from pydicom import dcmread, dcmwrite -import deidentify as deid -import os -def Deidentify(input_file): - #do something - output_file = deid.dicomprocess(input_file.name) # #os.path.dirname(os.path.realpath(input_file.name))+'/'+ input_file.name.split('/')[-1]) - print("-------------------{}-{}---------------".format(output_file,input_file.name)) - - return output_file - -def ImageOut(originalDCM, dicomProcessed):#originalDCM, - print("ImageOut({})".format(dicomProcessed)) - # print("-----------{}--------".format(os.path.dirname(__file__))) - ds = pydicom.dcmread(originalDCM.name) - plt.imshow(ds.pixel_array, cmap=plt.cm.bone) - plt.savefig("input.jpg") - plt.close() - ds2 = pydicom.dcmread(dicomProcessed.name) - plt.imshow(ds2.pixel_array, cmap=plt.cm.bone) - plt.savefig("output.jpg") - plt.close() - return os.path.realpath("input.jpg"), os.path.realpath("output.jpg")# - - -with gr.Blocks() as demo: - gr.Markdown("DICOM File deidentification") - with gr.Row(): - file_input = gr.File(file_count='single', label = 'Upload the DICOM File') - file_output = gr.File(label = 'Deidentified file') - with gr.Row(): - deidentify_btn = gr.Button('Deidentify') - view_btn = gr.Button('ViewDICOM') - - deidentify_btn.click(Deidentify, inputs=[file_input], - outputs=[file_output]) - with gr.Row(): - imageOrg_view = gr.Image(type="pil") - image_view = gr.Image(type="pil") - - view_btn.click(ImageOut,inputs=[file_input, file_output],outputs=[imageOrg_view, image_view]) - - -demo.launch(debug=True,enable_queue=True) - diff --git a/spaces/Ukrania/RVC-Models/lib/infer_pack/models.py b/spaces/Ukrania/RVC-Models/lib/infer_pack/models.py deleted file mode 100644 index 44c08d361bcb13b84b38dc29beff5cdaddad4ea2..0000000000000000000000000000000000000000 --- a/spaces/Ukrania/RVC-Models/lib/infer_pack/models.py +++ /dev/null @@ -1,1124 +0,0 @@ -import math, pdb, os -from time import time as ttime -import torch -from torch import nn -from torch.nn import functional as F -from lib.infer_pack import modules -from lib.infer_pack import attentions -from lib.infer_pack import commons -from lib.infer_pack.commons import init_weights, get_padding -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm, spectral_norm -from lib.infer_pack.commons import init_weights -import numpy as np -from lib.infer_pack import commons - - -class TextEncoder256(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - 
self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(256, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class TextEncoder768(nn.Module): - def __init__( - self, - out_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=True, - ): - super().__init__() - self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.emb_phone = nn.Linear(768, hidden_channels) - self.lrelu = nn.LeakyReLU(0.1, inplace=True) - if f0 == True: - self.emb_pitch = nn.Embedding(256, hidden_channels) # pitch 256 - self.encoder = attentions.Encoder( - hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, phone, pitch, lengths): - if pitch == None: - x = self.emb_phone(phone) - else: - x = self.emb_phone(phone) + self.emb_pitch(pitch) - x = x * math.sqrt(self.hidden_channels) # [b, t, h] - x = self.lrelu(x) - x = torch.transpose(x, 1, -1) # [b, h, t] - x_mask = torch.unsqueeze(commons.sequence_mask(lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.encoder(x * x_mask, x_mask) - stats = self.proj(x) * x_mask - - m, logs = torch.split(stats, self.out_channels, dim=1) - return m, logs, x_mask - - -class ResidualCouplingBlock(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - n_flows=4, - gin_channels=0, - ): - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.n_flows = n_flows - self.gin_channels = gin_channels - - self.flows = nn.ModuleList() - for i in range(n_flows): - self.flows.append( - modules.ResidualCouplingLayer( - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - mean_only=True, - ) - ) - self.flows.append(modules.Flip()) - - def forward(self, x, x_mask, g=None, reverse=False): - if not reverse: - for flow in self.flows: - x, _ = flow(x, x_mask, g=g, reverse=reverse) - else: - for flow in reversed(self.flows): - x = flow(x, x_mask, g=g, reverse=reverse) - return x - - def remove_weight_norm(self): - for i in range(self.n_flows): - self.flows[i * 2].remove_weight_norm() - - -class PosteriorEncoder(nn.Module): - def __init__( - self, - in_channels, - out_channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - ): - super().__init__() - self.in_channels = in_channels - 
self.out_channels = out_channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - - self.pre = nn.Conv1d(in_channels, hidden_channels, 1) - self.enc = modules.WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=gin_channels, - ) - self.proj = nn.Conv1d(hidden_channels, out_channels * 2, 1) - - def forward(self, x, x_lengths, g=None): - x_mask = torch.unsqueeze(commons.sequence_mask(x_lengths, x.size(2)), 1).to( - x.dtype - ) - x = self.pre(x) * x_mask - x = self.enc(x, x_mask, g=g) - stats = self.proj(x) * x_mask - m, logs = torch.split(stats, self.out_channels, dim=1) - z = (m + torch.randn_like(m) * torch.exp(logs)) * x_mask - return z, m, logs, x_mask - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class Generator(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=0, - ): - super(Generator, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - def forward(self, x, g=None): - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -class SineGen(torch.nn.Module): - """Definition of sine generator - SineGen(samp_rate, harmonic_num = 0, - sine_amp = 0.1, noise_std = 0.003, - voiced_threshold = 0, - flag_for_pulse=False) - samp_rate: sampling rate in Hz - harmonic_num: number of harmonic overtones (default 0) - sine_amp: amplitude of sine-wavefrom (default 0.1) - noise_std: std of Gaussian noise (default 0.003) - voiced_thoreshold: F0 threshold for U/V classification (default 0) - flag_for_pulse: this SinGen is used inside PulseGen (default False) - Note: when flag_for_pulse is True, the first time step of a voiced - segment is always sin(np.pi) or cos(0) - """ - - def __init__( - self, - samp_rate, - harmonic_num=0, - sine_amp=0.1, - noise_std=0.003, - voiced_threshold=0, - flag_for_pulse=False, - ): - 
super(SineGen, self).__init__() - self.sine_amp = sine_amp - self.noise_std = noise_std - self.harmonic_num = harmonic_num - self.dim = self.harmonic_num + 1 - self.sampling_rate = samp_rate - self.voiced_threshold = voiced_threshold - - def _f02uv(self, f0): - # generate uv signal - uv = torch.ones_like(f0) - uv = uv * (f0 > self.voiced_threshold) - return uv - - def forward(self, f0, upp): - """sine_tensor, uv = forward(f0) - input F0: tensor(batchsize=1, length, dim=1) - f0 for unvoiced steps should be 0 - output sine_tensor: tensor(batchsize=1, length, dim) - output uv: tensor(batchsize=1, length, 1) - """ - with torch.no_grad(): - f0 = f0[:, None].transpose(1, 2) - f0_buf = torch.zeros(f0.shape[0], f0.shape[1], self.dim, device=f0.device) - # fundamental component - f0_buf[:, :, 0] = f0[:, :, 0] - for idx in np.arange(self.harmonic_num): - f0_buf[:, :, idx + 1] = f0_buf[:, :, 0] * ( - idx + 2 - ) # idx + 2: the (idx+1)-th overtone, (idx+2)-th harmonic - rad_values = (f0_buf / self.sampling_rate) % 1 ###%1意味着n_har的乘积无法后处理优化 - rand_ini = torch.rand( - f0_buf.shape[0], f0_buf.shape[2], device=f0_buf.device - ) - rand_ini[:, 0] = 0 - rad_values[:, 0, :] = rad_values[:, 0, :] + rand_ini - tmp_over_one = torch.cumsum(rad_values, 1) # % 1 #####%1意味着后面的cumsum无法再优化 - tmp_over_one *= upp - tmp_over_one = F.interpolate( - tmp_over_one.transpose(2, 1), - scale_factor=upp, - mode="linear", - align_corners=True, - ).transpose(2, 1) - rad_values = F.interpolate( - rad_values.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose( - 2, 1 - ) ####### - tmp_over_one %= 1 - tmp_over_one_idx = (tmp_over_one[:, 1:, :] - tmp_over_one[:, :-1, :]) < 0 - cumsum_shift = torch.zeros_like(rad_values) - cumsum_shift[:, 1:, :] = tmp_over_one_idx * -1.0 - sine_waves = torch.sin( - torch.cumsum(rad_values + cumsum_shift, dim=1) * 2 * np.pi - ) - sine_waves = sine_waves * self.sine_amp - uv = self._f02uv(f0) - uv = F.interpolate( - uv.transpose(2, 1), scale_factor=upp, mode="nearest" - ).transpose(2, 1) - noise_amp = uv * self.noise_std + (1 - uv) * self.sine_amp / 3 - noise = noise_amp * torch.randn_like(sine_waves) - sine_waves = sine_waves * uv + noise - return sine_waves, uv, noise - - -class SourceModuleHnNSF(torch.nn.Module): - """SourceModule for hn-nsf - SourceModule(sampling_rate, harmonic_num=0, sine_amp=0.1, - add_noise_std=0.003, voiced_threshod=0) - sampling_rate: sampling_rate in Hz - harmonic_num: number of harmonic above F0 (default: 0) - sine_amp: amplitude of sine source signal (default: 0.1) - add_noise_std: std of additive Gaussian noise (default: 0.003) - note that amplitude of noise in unvoiced is decided - by sine_amp - voiced_threshold: threhold to set U/V given F0 (default: 0) - Sine_source, noise_source = SourceModuleHnNSF(F0_sampled) - F0_sampled (batchsize, length, 1) - Sine_source (batchsize, length, 1) - noise_source (batchsize, length 1) - uv (batchsize, length, 1) - """ - - def __init__( - self, - sampling_rate, - harmonic_num=0, - sine_amp=0.1, - add_noise_std=0.003, - voiced_threshod=0, - is_half=True, - ): - super(SourceModuleHnNSF, self).__init__() - - self.sine_amp = sine_amp - self.noise_std = add_noise_std - self.is_half = is_half - # to produce sine waveforms - self.l_sin_gen = SineGen( - sampling_rate, harmonic_num, sine_amp, add_noise_std, voiced_threshod - ) - - # to merge source harmonics into a single excitation - self.l_linear = torch.nn.Linear(harmonic_num + 1, 1) - self.l_tanh = torch.nn.Tanh() - - def forward(self, x, upp=None): - sine_wavs, uv, _ = 
self.l_sin_gen(x, upp) - if self.is_half: - sine_wavs = sine_wavs.half() - sine_merge = self.l_tanh(self.l_linear(sine_wavs)) - return sine_merge, None, None # noise, uv - - -class GeneratorNSF(torch.nn.Module): - def __init__( - self, - initial_channel, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels, - sr, - is_half=False, - ): - super(GeneratorNSF, self).__init__() - self.num_kernels = len(resblock_kernel_sizes) - self.num_upsamples = len(upsample_rates) - - self.f0_upsamp = torch.nn.Upsample(scale_factor=np.prod(upsample_rates)) - self.m_source = SourceModuleHnNSF( - sampling_rate=sr, harmonic_num=0, is_half=is_half - ) - self.noise_convs = nn.ModuleList() - self.conv_pre = Conv1d( - initial_channel, upsample_initial_channel, 7, 1, padding=3 - ) - resblock = modules.ResBlock1 if resblock == "1" else modules.ResBlock2 - - self.ups = nn.ModuleList() - for i, (u, k) in enumerate(zip(upsample_rates, upsample_kernel_sizes)): - c_cur = upsample_initial_channel // (2 ** (i + 1)) - self.ups.append( - weight_norm( - ConvTranspose1d( - upsample_initial_channel // (2**i), - upsample_initial_channel // (2 ** (i + 1)), - k, - u, - padding=(k - u) // 2, - ) - ) - ) - if i + 1 < len(upsample_rates): - stride_f0 = np.prod(upsample_rates[i + 1 :]) - self.noise_convs.append( - Conv1d( - 1, - c_cur, - kernel_size=stride_f0 * 2, - stride=stride_f0, - padding=stride_f0 // 2, - ) - ) - else: - self.noise_convs.append(Conv1d(1, c_cur, kernel_size=1)) - - self.resblocks = nn.ModuleList() - for i in range(len(self.ups)): - ch = upsample_initial_channel // (2 ** (i + 1)) - for j, (k, d) in enumerate( - zip(resblock_kernel_sizes, resblock_dilation_sizes) - ): - self.resblocks.append(resblock(ch, k, d)) - - self.conv_post = Conv1d(ch, 1, 7, 1, padding=3, bias=False) - self.ups.apply(init_weights) - - if gin_channels != 0: - self.cond = nn.Conv1d(gin_channels, upsample_initial_channel, 1) - - self.upp = np.prod(upsample_rates) - - def forward(self, x, f0, g=None): - har_source, noi_source, uv = self.m_source(f0, self.upp) - har_source = har_source.transpose(1, 2) - x = self.conv_pre(x) - if g is not None: - x = x + self.cond(g) - - for i in range(self.num_upsamples): - x = F.leaky_relu(x, modules.LRELU_SLOPE) - x = self.ups[i](x) - x_source = self.noise_convs[i](har_source) - x = x + x_source - xs = None - for j in range(self.num_kernels): - if xs is None: - xs = self.resblocks[i * self.num_kernels + j](x) - else: - xs += self.resblocks[i * self.num_kernels + j](x) - x = xs / self.num_kernels - x = F.leaky_relu(x) - x = self.conv_post(x) - x = torch.tanh(x) - return x - - def remove_weight_norm(self): - for l in self.ups: - remove_weight_norm(l) - for l in self.resblocks: - l.remove_weight_norm() - - -sr2sr = { - "32k": 32000, - "40k": 40000, - "48k": 48000, -} - - -class SynthesizerTrnMs256NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - 
self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr, - **kwargs - ): - super().__init__() - if type(sr) == type("strr"): - sr = sr2sr[sr] - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = 
upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - ) - self.dec = GeneratorNSF( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - sr=sr, - is_half=kwargs["is_half"], - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward( - self, phone, phone_lengths, pitch, pitchf, y, y_lengths, ds - ): # 这里ds是id,[bs,1] - # print(1,pitch.shape)#[bs,t] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - # print(-1,pitchf.shape,ids_slice,self.segment_size,self.hop_length,self.segment_size//self.hop_length) - pitchf = commons.slice_segments2(pitchf, ids_slice, self.segment_size) - # print(-2,pitchf.shape,z_slice.shape) - o = self.dec(z_slice, pitchf, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, pitch, nsff0, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, pitch, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], nsff0, g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs256NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder256( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - 
kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class SynthesizerTrnMs768NSFsid_nono(nn.Module): - def __init__( - self, - spec_channels, - segment_size, - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - spk_embed_dim, - gin_channels, - sr=None, - **kwargs - ): - super().__init__() - self.spec_channels = spec_channels - self.inter_channels = inter_channels - self.hidden_channels = hidden_channels - self.filter_channels = filter_channels - self.n_heads = n_heads - self.n_layers = n_layers - self.kernel_size = kernel_size - self.p_dropout = p_dropout - self.resblock = resblock - self.resblock_kernel_sizes = resblock_kernel_sizes - self.resblock_dilation_sizes = resblock_dilation_sizes - self.upsample_rates = upsample_rates - self.upsample_initial_channel = upsample_initial_channel - self.upsample_kernel_sizes = upsample_kernel_sizes - self.segment_size = segment_size - self.gin_channels = gin_channels - # self.hop_length = hop_length# - self.spk_embed_dim = spk_embed_dim - self.enc_p = TextEncoder768( - inter_channels, - hidden_channels, - filter_channels, - n_heads, - n_layers, - kernel_size, - p_dropout, - f0=False, - ) - self.dec = Generator( - inter_channels, - resblock, - resblock_kernel_sizes, - resblock_dilation_sizes, - upsample_rates, - upsample_initial_channel, - upsample_kernel_sizes, - gin_channels=gin_channels, - ) - self.enc_q = PosteriorEncoder( - spec_channels, - inter_channels, - hidden_channels, - 5, - 1, - 16, - gin_channels=gin_channels, - ) - self.flow = ResidualCouplingBlock( - inter_channels, hidden_channels, 5, 1, 3, gin_channels=gin_channels - ) - self.emb_g = nn.Embedding(self.spk_embed_dim, gin_channels) - print("gin_channels:", gin_channels, "self.spk_embed_dim:", self.spk_embed_dim) - - def 
remove_weight_norm(self): - self.dec.remove_weight_norm() - self.flow.remove_weight_norm() - self.enc_q.remove_weight_norm() - - def forward(self, phone, phone_lengths, y, y_lengths, ds): # 这里ds是id,[bs,1] - g = self.emb_g(ds).unsqueeze(-1) # [b, 256, 1]##1是t,广播的 - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z, m_q, logs_q, y_mask = self.enc_q(y, y_lengths, g=g) - z_p = self.flow(z, y_mask, g=g) - z_slice, ids_slice = commons.rand_slice_segments( - z, y_lengths, self.segment_size - ) - o = self.dec(z_slice, g=g) - return o, ids_slice, x_mask, y_mask, (z, z_p, m_p, logs_p, m_q, logs_q) - - def infer(self, phone, phone_lengths, sid, max_len=None): - g = self.emb_g(sid).unsqueeze(-1) - m_p, logs_p, x_mask = self.enc_p(phone, None, phone_lengths) - z_p = (m_p + torch.exp(logs_p) * torch.randn_like(m_p) * 0.66666) * x_mask - z = self.flow(z_p, x_mask, g=g, reverse=True) - o = self.dec((z * x_mask)[:, :, :max_len], g=g) - return o, x_mask, (z, z_p, m_p, logs_p) - - -class MultiPeriodDiscriminator(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminator, self).__init__() - periods = [2, 3, 5, 7, 11, 17] - # periods = [3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class MultiPeriodDiscriminatorV2(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(MultiPeriodDiscriminatorV2, self).__init__() - # periods = [2, 3, 5, 7, 11, 17] - periods = [2, 3, 5, 7, 11, 17, 23, 37] - - discs = [DiscriminatorS(use_spectral_norm=use_spectral_norm)] - discs = discs + [ - DiscriminatorP(i, use_spectral_norm=use_spectral_norm) for i in periods - ] - self.discriminators = nn.ModuleList(discs) - - def forward(self, y, y_hat): - y_d_rs = [] # - y_d_gs = [] - fmap_rs = [] - fmap_gs = [] - for i, d in enumerate(self.discriminators): - y_d_r, fmap_r = d(y) - y_d_g, fmap_g = d(y_hat) - # for j in range(len(fmap_r)): - # print(i,j,y.shape,y_hat.shape,fmap_r[j].shape,fmap_g[j].shape) - y_d_rs.append(y_d_r) - y_d_gs.append(y_d_g) - fmap_rs.append(fmap_r) - fmap_gs.append(fmap_g) - - return y_d_rs, y_d_gs, fmap_rs, fmap_gs - - -class DiscriminatorS(torch.nn.Module): - def __init__(self, use_spectral_norm=False): - super(DiscriminatorS, self).__init__() - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f(Conv1d(1, 16, 15, 1, padding=7)), - norm_f(Conv1d(16, 64, 41, 4, groups=4, padding=20)), - norm_f(Conv1d(64, 256, 41, 4, groups=16, padding=20)), - norm_f(Conv1d(256, 1024, 41, 4, groups=64, padding=20)), - norm_f(Conv1d(1024, 1024, 41, 4, groups=256, padding=20)), - norm_f(Conv1d(1024, 1024, 5, 1, padding=2)), - ] - ) - self.conv_post = norm_f(Conv1d(1024, 1, 3, 1, padding=1)) - - def forward(self, x): - fmap = [] - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) 
- - return x, fmap - - -class DiscriminatorP(torch.nn.Module): - def __init__(self, period, kernel_size=5, stride=3, use_spectral_norm=False): - super(DiscriminatorP, self).__init__() - self.period = period - self.use_spectral_norm = use_spectral_norm - norm_f = weight_norm if use_spectral_norm == False else spectral_norm - self.convs = nn.ModuleList( - [ - norm_f( - Conv2d( - 1, - 32, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 32, - 128, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 128, - 512, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 512, - 1024, - (kernel_size, 1), - (stride, 1), - padding=(get_padding(kernel_size, 1), 0), - ) - ), - norm_f( - Conv2d( - 1024, - 1024, - (kernel_size, 1), - 1, - padding=(get_padding(kernel_size, 1), 0), - ) - ), - ] - ) - self.conv_post = norm_f(Conv2d(1024, 1, (3, 1), 1, padding=(1, 0))) - - def forward(self, x): - fmap = [] - - # 1d to 2d - b, c, t = x.shape - if t % self.period != 0: # pad first - n_pad = self.period - (t % self.period) - x = F.pad(x, (0, n_pad), "reflect") - t = t + n_pad - x = x.view(b, c, t // self.period, self.period) - - for l in self.convs: - x = l(x) - x = F.leaky_relu(x, modules.LRELU_SLOPE) - fmap.append(x) - x = self.conv_post(x) - fmap.append(x) - x = torch.flatten(x, 1, -1) - - return x, fmap diff --git a/spaces/User1342/WatchTower/Pinpoint/Aggregator_NGram.py b/spaces/User1342/WatchTower/Pinpoint/Aggregator_NGram.py deleted file mode 100644 index bd90c38ac1d2ad9f797883b8f37f734000d51f80..0000000000000000000000000000000000000000 --- a/spaces/User1342/WatchTower/Pinpoint/Aggregator_NGram.py +++ /dev/null @@ -1,103 +0,0 @@ -from sklearn.feature_extraction.text import CountVectorizer - -from Pinpoint.Logger import * - -c_vec = CountVectorizer(ngram_range=(1, 5)) - - -class n_gram_aggregator(): - """ - This class is used to retrieve the most common NGrams for a given dataset corpus. - """ - - def _get_average_ngram_count(self, n_grams_dict): - """ - takes a dict of Ngrams and identifies the average weighting - :param n_grams_dict: - :return: - """ - all_count = [] - for n_gram in n_grams_dict: - ng_count = n_grams_dict[n_gram] - all_count.append(ng_count) - - average_count = sum(all_count) / len(all_count) - # print(all_count) - return average_count - - def _get_all_ngrams(self, data): - """ - Returns all ngrams (tri, bi, and uni) for a given piece of text - :param data: - :return: - """ - - if type(data) is not list: - data = [data] - - # input to fit_transform() should be an iterable with strings - ngrams = c_vec.fit_transform(data) - - # needs to happen after fit_transform() - vocab = c_vec.vocabulary_ - - count_values = ngrams.toarray().sum(axis=0) - - # output n-grams - uni_grams = {} - bi_grams = {} - tri_grams = {} - - for ng_count, ng_text in sorted([(count_values[i], k) for k, i in vocab.items()], reverse=True): - sentence_length = len(ng_text.split(" ")) - - if sentence_length == 3: - tri_grams[ng_text] = ng_count - elif sentence_length == 2: - bi_grams[ng_text] = ng_count - elif sentence_length == 1: - uni_grams[ng_text] = ng_count - - return uni_grams, bi_grams, tri_grams - - def _get_popular_ngrams(self, ngrams_dict): - """ - Returns ngrams for a given piece of text that are the most popular (i.e. 
their weighting is - above the average ngram wighting) - :param ngrams_dict: - :return: - """ - average_count = self._get_average_ngram_count(ngrams_dict) - - popular_ngrams = {} - for n_gram in ngrams_dict: - ng_count = ngrams_dict[n_gram] - - if ng_count >= average_count: - popular_ngrams[n_gram] = ng_count - return popular_ngrams - - def get_ngrams(self, data=None, file_name_to_read=None): - """ - Wrapper function for returning uni, bi, and tri grams that are the most popular (above the average weighting in - a given piece of text). - :param data: - :param file_name_to_read: - :return: - """ - logger().print_message("Getting Ngrams") - - if data is None and file_name_to_read is None: - raise Exception("No data supplied to retrieve n_grams") - - if data is None and file_name_to_read is not None: - with open(file_name_to_read, 'r') as file_to_read: - data = file_to_read.read() - - uni_grams, bi_grams, tri_grams = self._get_all_ngrams(data) - - popular_uni_grams = list(self._get_popular_ngrams(uni_grams).keys()) - popular_bi_grams = list(self._get_popular_ngrams(bi_grams).keys()) - popular_tri_grams = list(self._get_popular_ngrams(tri_grams).keys()) - - return popular_uni_grams, popular_bi_grams, popular_tri_grams diff --git a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/datasets/transforms.py b/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/datasets/transforms.py deleted file mode 100644 index 91cf9269e4b31008a3ddca34a19b038a9b399991..0000000000000000000000000000000000000000 --- a/spaces/Volkopat/SegmentAnythingxGroundingDINO/groundingdino/datasets/transforms.py +++ /dev/null @@ -1,311 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. All Rights Reserved -""" -Transforms and data augmentation for both image + bbox. -""" -import os -import random - -import PIL -import torch -import torchvision.transforms as T -import torchvision.transforms.functional as F - -from groundingdino.util.box_ops import box_xyxy_to_cxcywh -from groundingdino.util.misc import interpolate - - -def crop(image, target, region): - cropped_image = F.crop(image, *region) - - target = target.copy() - i, j, h, w = region - - # should we do something wrt the original size? - target["size"] = torch.tensor([h, w]) - - fields = ["labels", "area", "iscrowd", "positive_map"] - - if "boxes" in target: - boxes = target["boxes"] - max_size = torch.as_tensor([w, h], dtype=torch.float32) - cropped_boxes = boxes - torch.as_tensor([j, i, j, i]) - cropped_boxes = torch.min(cropped_boxes.reshape(-1, 2, 2), max_size) - cropped_boxes = cropped_boxes.clamp(min=0) - area = (cropped_boxes[:, 1, :] - cropped_boxes[:, 0, :]).prod(dim=1) - target["boxes"] = cropped_boxes.reshape(-1, 4) - target["area"] = area - fields.append("boxes") - - if "masks" in target: - # FIXME should we update the area here if there are no boxes? 
- target["masks"] = target["masks"][:, i : i + h, j : j + w] - fields.append("masks") - - # remove elements for which the boxes or masks that have zero area - if "boxes" in target or "masks" in target: - # favor boxes selection when defining which elements to keep - # this is compatible with previous implementation - if "boxes" in target: - cropped_boxes = target["boxes"].reshape(-1, 2, 2) - keep = torch.all(cropped_boxes[:, 1, :] > cropped_boxes[:, 0, :], dim=1) - else: - keep = target["masks"].flatten(1).any(1) - - for field in fields: - if field in target: - target[field] = target[field][keep] - - if os.environ.get("IPDB_SHILONG_DEBUG", None) == "INFO": - # for debug and visualization only. - if "strings_positive" in target: - target["strings_positive"] = [ - _i for _i, _j in zip(target["strings_positive"], keep) if _j - ] - - return cropped_image, target - - -def hflip(image, target): - flipped_image = F.hflip(image) - - w, h = image.size - - target = target.copy() - if "boxes" in target: - boxes = target["boxes"] - boxes = boxes[:, [2, 1, 0, 3]] * torch.as_tensor([-1, 1, -1, 1]) + torch.as_tensor( - [w, 0, w, 0] - ) - target["boxes"] = boxes - - if "masks" in target: - target["masks"] = target["masks"].flip(-1) - - return flipped_image, target - - -def resize(image, target, size, max_size=None): - # size can be min_size (scalar) or (w, h) tuple - - def get_size_with_aspect_ratio(image_size, size, max_size=None): - w, h = image_size - if max_size is not None: - min_original_size = float(min((w, h))) - max_original_size = float(max((w, h))) - if max_original_size / min_original_size * size > max_size: - size = int(round(max_size * min_original_size / max_original_size)) - - if (w <= h and w == size) or (h <= w and h == size): - return (h, w) - - if w < h: - ow = size - oh = int(size * h / w) - else: - oh = size - ow = int(size * w / h) - - return (oh, ow) - - def get_size(image_size, size, max_size=None): - if isinstance(size, (list, tuple)): - return size[::-1] - else: - return get_size_with_aspect_ratio(image_size, size, max_size) - - size = get_size(image.size, size, max_size) - rescaled_image = F.resize(image, size) - - if target is None: - return rescaled_image, None - - ratios = tuple(float(s) / float(s_orig) for s, s_orig in zip(rescaled_image.size, image.size)) - ratio_width, ratio_height = ratios - - target = target.copy() - if "boxes" in target: - boxes = target["boxes"] - scaled_boxes = boxes * torch.as_tensor( - [ratio_width, ratio_height, ratio_width, ratio_height] - ) - target["boxes"] = scaled_boxes - - if "area" in target: - area = target["area"] - scaled_area = area * (ratio_width * ratio_height) - target["area"] = scaled_area - - h, w = size - target["size"] = torch.tensor([h, w]) - - if "masks" in target: - target["masks"] = ( - interpolate(target["masks"][:, None].float(), size, mode="nearest")[:, 0] > 0.5 - ) - - return rescaled_image, target - - -def pad(image, target, padding): - # assumes that we only pad on the bottom right corners - padded_image = F.pad(image, (0, 0, padding[0], padding[1])) - if target is None: - return padded_image, None - target = target.copy() - # should we do something wrt the original size? 
- target["size"] = torch.tensor(padded_image.size[::-1]) - if "masks" in target: - target["masks"] = torch.nn.functional.pad(target["masks"], (0, padding[0], 0, padding[1])) - return padded_image, target - - -class ResizeDebug(object): - def __init__(self, size): - self.size = size - - def __call__(self, img, target): - return resize(img, target, self.size) - - -class RandomCrop(object): - def __init__(self, size): - self.size = size - - def __call__(self, img, target): - region = T.RandomCrop.get_params(img, self.size) - return crop(img, target, region) - - -class RandomSizeCrop(object): - def __init__(self, min_size: int, max_size: int, respect_boxes: bool = False): - # respect_boxes: True to keep all boxes - # False to tolerence box filter - self.min_size = min_size - self.max_size = max_size - self.respect_boxes = respect_boxes - - def __call__(self, img: PIL.Image.Image, target: dict): - init_boxes = len(target["boxes"]) - max_patience = 10 - for i in range(max_patience): - w = random.randint(self.min_size, min(img.width, self.max_size)) - h = random.randint(self.min_size, min(img.height, self.max_size)) - region = T.RandomCrop.get_params(img, [h, w]) - result_img, result_target = crop(img, target, region) - if ( - not self.respect_boxes - or len(result_target["boxes"]) == init_boxes - or i == max_patience - 1 - ): - return result_img, result_target - return result_img, result_target - - -class CenterCrop(object): - def __init__(self, size): - self.size = size - - def __call__(self, img, target): - image_width, image_height = img.size - crop_height, crop_width = self.size - crop_top = int(round((image_height - crop_height) / 2.0)) - crop_left = int(round((image_width - crop_width) / 2.0)) - return crop(img, target, (crop_top, crop_left, crop_height, crop_width)) - - -class RandomHorizontalFlip(object): - def __init__(self, p=0.5): - self.p = p - - def __call__(self, img, target): - if random.random() < self.p: - return hflip(img, target) - return img, target - - -class RandomResize(object): - def __init__(self, sizes, max_size=None): - assert isinstance(sizes, (list, tuple)) - self.sizes = sizes - self.max_size = max_size - - def __call__(self, img, target=None): - size = random.choice(self.sizes) - return resize(img, target, size, self.max_size) - - -class RandomPad(object): - def __init__(self, max_pad): - self.max_pad = max_pad - - def __call__(self, img, target): - pad_x = random.randint(0, self.max_pad) - pad_y = random.randint(0, self.max_pad) - return pad(img, target, (pad_x, pad_y)) - - -class RandomSelect(object): - """ - Randomly selects between transforms1 and transforms2, - with probability p for transforms1 and (1 - p) for transforms2 - """ - - def __init__(self, transforms1, transforms2, p=0.5): - self.transforms1 = transforms1 - self.transforms2 = transforms2 - self.p = p - - def __call__(self, img, target): - if random.random() < self.p: - return self.transforms1(img, target) - return self.transforms2(img, target) - - -class ToTensor(object): - def __call__(self, img, target): - return F.to_tensor(img), target - - -class RandomErasing(object): - def __init__(self, *args, **kwargs): - self.eraser = T.RandomErasing(*args, **kwargs) - - def __call__(self, img, target): - return self.eraser(img), target - - -class Normalize(object): - def __init__(self, mean, std): - self.mean = mean - self.std = std - - def __call__(self, image, target=None): - image = F.normalize(image, mean=self.mean, std=self.std) - if target is None: - return image, None - target = target.copy() - h, 
w = image.shape[-2:] - if "boxes" in target: - boxes = target["boxes"] - boxes = box_xyxy_to_cxcywh(boxes) - boxes = boxes / torch.tensor([w, h, w, h], dtype=torch.float32) - target["boxes"] = boxes - return image, target - - -class Compose(object): - def __init__(self, transforms): - self.transforms = transforms - - def __call__(self, image, target): - for t in self.transforms: - image, target = t(image, target) - return image, target - - def __repr__(self): - format_string = self.__class__.__name__ + "(" - for t in self.transforms: - format_string += "\n" - format_string += " {0}".format(t) - format_string += "\n)" - return format_string diff --git a/spaces/WatchOutForMike/Character/README.md b/spaces/WatchOutForMike/Character/README.md deleted file mode 100644 index fe2fb15ea6a23e47f4cc707e24a0683f8cc9e64e..0000000000000000000000000000000000000000 --- a/spaces/WatchOutForMike/Character/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Character -emoji: 🐠 -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Wootang01/question_generator_three/README.md b/spaces/Wootang01/question_generator_three/README.md deleted file mode 100644 index f4e6540dba61d23974756a804073789bbd4accb8..0000000000000000000000000000000000000000 --- a/spaces/Wootang01/question_generator_three/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Question_generator_three -emoji: 💻 -colorFrom: indigo -colorTo: green -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. 
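Taken together, a front matter block that uses every field documented above might look like the following sketch (all values are illustrative placeholders; `sdk_version` is paired with the `streamlit` SDK since, per the note above, it only applies there):

---
title: Example Space
emoji: 🚀
colorFrom: red
colorTo: gray
sdk: streamlit
sdk_version: 1.21.0
app_file: app.py
pinned: true
---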
diff --git a/spaces/XzJosh/Echo-Bert-VITS2/modules.py b/spaces/XzJosh/Echo-Bert-VITS2/modules.py deleted file mode 100644 index 92e0f32a51c472bfd1659a50a95a95d195281d2b..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/Echo-Bert-VITS2/modules.py +++ /dev/null @@ -1,452 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -import commons -from commons import init_weights, get_padding -from transforms import piecewise_rational_quadratic_transform -from attentions import Encoder - -LRELU_SLOPE = 0.1 - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - -class ConvReluNorm(nn.Module): - def __init__(self, in_channels, hidden_channels, out_channels, kernel_size, n_layers, p_dropout): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append(nn.Conv1d(in_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential( - nn.ReLU(), - nn.Dropout(p_dropout)) - for _ in range(n_layers-1): - self.conv_layers.append(nn.Conv1d(hidden_channels, hidden_channels, kernel_size, padding=kernel_size//2)) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size ** i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append(nn.Conv1d(channels, channels, kernel_size, - groups=channels, dilation=dilation, padding=padding - )) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class 
WN(torch.nn.Module): - def __init__(self, hidden_channels, kernel_size, dilation_rate, n_layers, gin_channels=0, p_dropout=0): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - self.hidden_channels =hidden_channels - self.kernel_size = kernel_size, - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d(gin_channels, 2*hidden_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = dilation_rate ** i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d(hidden_channels, 2*hidden_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:,cond_offset:cond_offset+2*self.hidden_channels,:] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply( - x_in, - g_l, - n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:,:self.hidden_channels,:] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:,self.hidden_channels:,:] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]))) - ]) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=1, - padding=get_padding(kernel_size, 1))) - ]) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = 
F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList([ - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]))), - weight_norm(Conv1d(channels, channels, kernel_size, 1, dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]))) - ]) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels,1)) - self.logs = nn.Parameter(torch.zeros(channels,1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1,2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN(hidden_channels, kernel_size, dilation_rate, n_layers, p_dropout=p_dropout, gin_channels=gin_channels) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - -class 
ConvFlow(nn.Module): - def __init__(self, in_channels, filter_channels, kernel_size, n_layers, num_bins=10, tail_bound=5.0): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.) - self.proj = nn.Conv1d(filter_channels, self.half_channels * (num_bins * 3 - 1), 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., :self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins:2*self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_derivatives = h[..., 2 * self.num_bins:] - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x -class TransformerCouplingLayer(nn.Module): - def __init__(self, - channels, - hidden_channels, - kernel_size, - n_layers, - n_heads, - p_dropout=0, - filter_channels=0, - mean_only=False, - wn_sharing_parameter=None, - gin_channels = 0 - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = Encoder(hidden_channels, filter_channels, n_heads, n_layers, kernel_size, p_dropout, isflow = True, gin_channels = gin_channels) if wn_sharing_parameter is None else wn_sharing_parameter - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels]*2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels]*2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1,2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - x1, logabsdet = piecewise_rational_quadratic_transform(x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails='linear', - tail_bound=self.tail_bound - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1,2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/Y-T-G/Blur-Anything/utils/painter.py b/spaces/Y-T-G/Blur-Anything/utils/painter.py deleted file mode 100644 index 
b1bd4e14cfa27c3151be739fc7ce18fc49683175..0000000000000000000000000000000000000000 --- a/spaces/Y-T-G/Blur-Anything/utils/painter.py +++ /dev/null @@ -1,360 +0,0 @@ -import cv2 -import numpy as np -from PIL import Image - - -def colormap(rgb=True): - color_list = np.array( - [ - 0.000, - 0.000, - 0.000, - 1.000, - 1.000, - 1.000, - 1.000, - 0.498, - 0.313, - 0.392, - 0.581, - 0.929, - 0.000, - 0.447, - 0.741, - 0.850, - 0.325, - 0.098, - 0.929, - 0.694, - 0.125, - 0.494, - 0.184, - 0.556, - 0.466, - 0.674, - 0.188, - 0.301, - 0.745, - 0.933, - 0.635, - 0.078, - 0.184, - 0.300, - 0.300, - 0.300, - 0.600, - 0.600, - 0.600, - 1.000, - 0.000, - 0.000, - 1.000, - 0.500, - 0.000, - 0.749, - 0.749, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 1.000, - 0.667, - 0.000, - 1.000, - 0.333, - 0.333, - 0.000, - 0.333, - 0.667, - 0.000, - 0.333, - 1.000, - 0.000, - 0.667, - 0.333, - 0.000, - 0.667, - 0.667, - 0.000, - 0.667, - 1.000, - 0.000, - 1.000, - 0.333, - 0.000, - 1.000, - 0.667, - 0.000, - 1.000, - 1.000, - 0.000, - 0.000, - 0.333, - 0.500, - 0.000, - 0.667, - 0.500, - 0.000, - 1.000, - 0.500, - 0.333, - 0.000, - 0.500, - 0.333, - 0.333, - 0.500, - 0.333, - 0.667, - 0.500, - 0.333, - 1.000, - 0.500, - 0.667, - 0.000, - 0.500, - 0.667, - 0.333, - 0.500, - 0.667, - 0.667, - 0.500, - 0.667, - 1.000, - 0.500, - 1.000, - 0.000, - 0.500, - 1.000, - 0.333, - 0.500, - 1.000, - 0.667, - 0.500, - 1.000, - 1.000, - 0.500, - 0.000, - 0.333, - 1.000, - 0.000, - 0.667, - 1.000, - 0.000, - 1.000, - 1.000, - 0.333, - 0.000, - 1.000, - 0.333, - 0.333, - 1.000, - 0.333, - 0.667, - 1.000, - 0.333, - 1.000, - 1.000, - 0.667, - 0.000, - 1.000, - 0.667, - 0.333, - 1.000, - 0.667, - 0.667, - 1.000, - 0.667, - 1.000, - 1.000, - 1.000, - 0.000, - 1.000, - 1.000, - 0.333, - 1.000, - 1.000, - 0.667, - 1.000, - 0.167, - 0.000, - 0.000, - 0.333, - 0.000, - 0.000, - 0.500, - 0.000, - 0.000, - 0.667, - 0.000, - 0.000, - 0.833, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 0.167, - 0.000, - 0.000, - 0.333, - 0.000, - 0.000, - 0.500, - 0.000, - 0.000, - 0.667, - 0.000, - 0.000, - 0.833, - 0.000, - 0.000, - 1.000, - 0.000, - 0.000, - 0.000, - 0.167, - 0.000, - 0.000, - 0.333, - 0.000, - 0.000, - 0.500, - 0.000, - 0.000, - 0.667, - 0.000, - 0.000, - 0.833, - 0.000, - 0.000, - 1.000, - 0.143, - 0.143, - 0.143, - 0.286, - 0.286, - 0.286, - 0.429, - 0.429, - 0.429, - 0.571, - 0.571, - 0.571, - 0.714, - 0.714, - 0.714, - 0.857, - 0.857, - 0.857, - ] - ).astype(np.float32) - color_list = color_list.reshape((-1, 3)) * 255 - if not rgb: - color_list = color_list[:, ::-1] - return color_list - - -color_list = colormap() -color_list = color_list.astype("uint8").tolist() - - -def vis_add_mask(image, mask, color, alpha): - color = np.array(color_list[color]) - mask = mask > 0.5 - image[mask] = image[mask] * (1 - alpha) + color * alpha - return image.astype("uint8") - - -def point_painter( - input_image, - input_points, - point_color=5, - point_alpha=0.9, - point_radius=15, - contour_color=2, - contour_width=5, -): - h, w = input_image.shape[:2] - point_mask = np.zeros((h, w)).astype("uint8") - for point in input_points: - point_mask[point[1], point[0]] = 1 - - kernel = cv2.getStructuringElement(2, (point_radius, point_radius)) - point_mask = cv2.dilate(point_mask, kernel) - - contour_radius = (contour_width - 1) // 2 - dist_transform_fore = cv2.distanceTransform(point_mask, cv2.DIST_L2, 3) - dist_transform_back = cv2.distanceTransform(1 - point_mask, cv2.DIST_L2, 3) - dist_map = dist_transform_fore - dist_transform_back 
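# dist_map behaves like a signed distance to the dilated point-mask boundary:
# positive inside the mask, negative outside. Clipping its absolute value to
# contour_radius and normalizing below leaves small values only in a thin band
# around that boundary, so (1 - contour_mask) is what later gets painted as the
# contour ring around each point.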
- # ...:::!!!:::... - contour_radius += 2 - contour_mask = np.abs(np.clip(dist_map, -contour_radius, contour_radius)) - contour_mask = contour_mask / np.max(contour_mask) - contour_mask[contour_mask > 0.5] = 1.0 - - # paint mask - painted_image = vis_add_mask( - input_image.copy(), point_mask, point_color, point_alpha - ) - # paint contour - painted_image = vis_add_mask( - painted_image.copy(), 1 - contour_mask, contour_color, 1 - ) - return painted_image - - -def mask_painter( - input_image, - input_mask, - mask_color=5, - mask_alpha=0.7, - contour_color=1, - contour_width=3, -): - assert ( - input_image.shape[:2] == input_mask.shape - ), "different shape between image and mask" - # 0: background, 1: foreground - mask = np.clip(input_mask, 0, 1) - contour_radius = (contour_width - 1) // 2 - - dist_transform_fore = cv2.distanceTransform(mask, cv2.DIST_L2, 3) - dist_transform_back = cv2.distanceTransform(1 - mask, cv2.DIST_L2, 3) - dist_map = dist_transform_fore - dist_transform_back - # ...:::!!!:::... - contour_radius += 2 - contour_mask = np.abs(np.clip(dist_map, -contour_radius, contour_radius)) - contour_mask = contour_mask / np.max(contour_mask) - contour_mask[contour_mask > 0.5] = 1.0 - - # paint mask - painted_image = vis_add_mask( - input_image.copy(), mask.copy(), mask_color, mask_alpha - ) - # paint contour - painted_image = vis_add_mask( - painted_image.copy(), 1 - contour_mask, contour_color, 1 - ) - - return painted_image - - -def background_remover(input_image, input_mask): - """ - input_image: H, W, 3, np.array - input_mask: H, W, np.array - - image_wo_background: PIL.Image - """ - assert ( - input_image.shape[:2] == input_mask.shape - ), "different shape between image and mask" - # 0: background, 1: foreground - mask = np.expand_dims(np.clip(input_mask, 0, 1), axis=2) * 255 - image_wo_background = np.concatenate([input_image, mask], axis=2) # H, W, 4 - image_wo_background = Image.fromarray(image_wo_background).convert("RGBA") - - return image_wo_background diff --git a/spaces/YUANAI/DiffspeechResearch/egs/datasets/audio/lj/preprocess.py b/spaces/YUANAI/DiffspeechResearch/egs/datasets/audio/lj/preprocess.py deleted file mode 100644 index a3d45c9aa855bb7ce40b5e8374547014350fa92b..0000000000000000000000000000000000000000 --- a/spaces/YUANAI/DiffspeechResearch/egs/datasets/audio/lj/preprocess.py +++ /dev/null @@ -1,9 +0,0 @@ -from data_gen.tts.base_preprocess import BasePreprocessor - - -class LJPreprocess(BasePreprocessor): - def meta_data(self): - for l in open(f'{self.raw_data_dir}/metadata.csv').readlines(): - item_name, _, txt = l.strip().split("|") - wav_fn = f"{self.raw_data_dir}/wavs/{item_name}.wav" - yield {'item_name': item_name, 'wav_fn': wav_fn, 'txt': txt} diff --git a/spaces/Yuliang/ICON/lib/pymaf/models/res_module.py b/spaces/Yuliang/ICON/lib/pymaf/models/res_module.py deleted file mode 100644 index dc283f8f5ca946e5edc6e0c17a91763d95ddda75..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ICON/lib/pymaf/models/res_module.py +++ /dev/null @@ -1,385 +0,0 @@ -# code brought in part from https://github.com/microsoft/human-pose-estimation.pytorch/blob/master/lib/models/pose_resnet.py - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import torch -import torch.nn as nn -import torch.nn.functional as F -from collections import OrderedDict -import os -from lib.pymaf.core.cfgs import cfg - -import logging - -logger = logging.getLogger(__name__) - -BN_MOMENTUM = 0.1 - - -def conv3x3(in_planes, 
out_planes, stride=1, bias=False, groups=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes * groups, - out_planes * groups, - kernel_size=3, - stride=stride, - padding=1, - bias=bias, - groups=groups) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1): - super().__init__() - self.conv1 = conv3x3(inplanes, planes, stride, groups=groups) - self.bn1 = nn.BatchNorm2d(planes * groups, momentum=BN_MOMENTUM) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes, groups=groups) - self.bn2 = nn.BatchNorm2d(planes * groups, momentum=BN_MOMENTUM) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None, groups=1): - super().__init__() - self.conv1 = nn.Conv2d(inplanes * groups, - planes * groups, - kernel_size=1, - bias=False, - groups=groups) - self.bn1 = nn.BatchNorm2d(planes * groups, momentum=BN_MOMENTUM) - self.conv2 = nn.Conv2d(planes * groups, - planes * groups, - kernel_size=3, - stride=stride, - padding=1, - bias=False, - groups=groups) - self.bn2 = nn.BatchNorm2d(planes * groups, momentum=BN_MOMENTUM) - self.conv3 = nn.Conv2d(planes * groups, - planes * self.expansion * groups, - kernel_size=1, - bias=False, - groups=groups) - self.bn3 = nn.BatchNorm2d(planes * self.expansion * groups, - momentum=BN_MOMENTUM) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -resnet_spec = { - 18: (BasicBlock, [2, 2, 2, 2]), - 34: (BasicBlock, [3, 4, 6, 3]), - 50: (Bottleneck, [3, 4, 6, 3]), - 101: (Bottleneck, [3, 4, 23, 3]), - 152: (Bottleneck, [3, 8, 36, 3]) -} - - -class IUV_predict_layer(nn.Module): - def __init__(self, - feat_dim=256, - final_cov_k=3, - part_out_dim=25, - with_uv=True): - super().__init__() - - self.with_uv = with_uv - if self.with_uv: - self.predict_u = nn.Conv2d(in_channels=feat_dim, - out_channels=25, - kernel_size=final_cov_k, - stride=1, - padding=1 if final_cov_k == 3 else 0) - - self.predict_v = nn.Conv2d(in_channels=feat_dim, - out_channels=25, - kernel_size=final_cov_k, - stride=1, - padding=1 if final_cov_k == 3 else 0) - - self.predict_ann_index = nn.Conv2d( - in_channels=feat_dim, - out_channels=15, - kernel_size=final_cov_k, - stride=1, - padding=1 if final_cov_k == 3 else 0) - - self.predict_uv_index = nn.Conv2d(in_channels=feat_dim, - out_channels=25, - kernel_size=final_cov_k, - stride=1, - padding=1 if final_cov_k == 3 else 0) - - self.inplanes = feat_dim - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - nn.BatchNorm2d(planes * block.expansion), - 
) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x): - return_dict = {} - - predict_uv_index = self.predict_uv_index(x) - predict_ann_index = self.predict_ann_index(x) - - return_dict['predict_uv_index'] = predict_uv_index - return_dict['predict_ann_index'] = predict_ann_index - - if self.with_uv: - predict_u = self.predict_u(x) - predict_v = self.predict_v(x) - return_dict['predict_u'] = predict_u - return_dict['predict_v'] = predict_v - else: - return_dict['predict_u'] = None - return_dict['predict_v'] = None - # return_dict['predict_u'] = torch.zeros(predict_uv_index.shape).to(predict_uv_index.device) - # return_dict['predict_v'] = torch.zeros(predict_uv_index.shape).to(predict_uv_index.device) - - return return_dict - - -class SmplResNet(nn.Module): - def __init__(self, - resnet_nums, - in_channels=3, - num_classes=229, - last_stride=2, - n_extra_feat=0, - truncate=0, - **kwargs): - super().__init__() - - self.inplanes = 64 - self.truncate = truncate - # extra = cfg.MODEL.EXTRA - # self.deconv_with_bias = extra.DECONV_WITH_BIAS - block, layers = resnet_spec[resnet_nums] - - self.conv1 = nn.Conv2d(in_channels, - 64, - kernel_size=7, - stride=2, - padding=3, - bias=False) - self.bn1 = nn.BatchNorm2d(64, momentum=BN_MOMENTUM) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 256, layers[2], - stride=2) if truncate < 2 else None - self.layer4 = self._make_layer( - block, 512, layers[3], - stride=last_stride) if truncate < 1 else None - - self.avg_pooling = nn.AdaptiveAvgPool2d(1) - - self.num_classes = num_classes - if num_classes > 0: - self.final_layer = nn.Linear(512 * block.expansion, num_classes) - nn.init.xavier_uniform_(self.final_layer.weight, gain=0.01) - - self.n_extra_feat = n_extra_feat - if n_extra_feat > 0: - self.trans_conv = nn.Sequential( - nn.Conv2d(n_extra_feat + 512 * block.expansion, - 512 * block.expansion, - kernel_size=1, - bias=False), - nn.BatchNorm2d(512 * block.expansion, momentum=BN_MOMENTUM), - nn.ReLU(True)) - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - nn.BatchNorm2d(planes * block.expansion, momentum=BN_MOMENTUM), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x, infeat=None): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - - x1 = self.layer1(x) - x2 = self.layer2(x1) - x3 = self.layer3(x2) if self.truncate < 2 else x2 - x4 = self.layer4(x3) if self.truncate < 1 else x3 - - if infeat is not None: - x4 = self.trans_conv(torch.cat([infeat, x4], 1)) - - if self.num_classes > 0: - xp = self.avg_pooling(x4) - cls = self.final_layer(xp.view(xp.size(0), -1)) - if not cfg.DANET.USE_MEAN_PARA: - # for non-negative scale - scale = F.relu(cls[:, 0]).unsqueeze(1) - cls = 
torch.cat((scale, cls[:, 1:]), dim=1) - else: - cls = None - - return cls, {'x4': x4} - - def init_weights(self, pretrained=''): - if os.path.isfile(pretrained): - logger.info('=> loading pretrained model {}'.format(pretrained)) - # self.load_state_dict(pretrained_state_dict, strict=False) - checkpoint = torch.load(pretrained) - if isinstance(checkpoint, OrderedDict): - # state_dict = checkpoint - state_dict_old = self.state_dict() - for key in state_dict_old.keys(): - if key in checkpoint.keys(): - if state_dict_old[key].shape != checkpoint[key].shape: - del checkpoint[key] - state_dict = checkpoint - elif isinstance(checkpoint, dict) and 'state_dict' in checkpoint: - state_dict_old = checkpoint['state_dict'] - state_dict = OrderedDict() - # delete 'module.' because it is saved from DataParallel module - for key in state_dict_old.keys(): - if key.startswith('module.'): - # state_dict[key[7:]] = state_dict[key] - # state_dict.pop(key) - state_dict[key[7:]] = state_dict_old[key] - else: - state_dict[key] = state_dict_old[key] - else: - raise RuntimeError( - 'No state_dict found in checkpoint file {}'.format( - pretrained)) - self.load_state_dict(state_dict, strict=False) - else: - logger.error('=> imagenet pretrained model dose not exist') - logger.error('=> please download it first') - raise ValueError('imagenet pretrained model does not exist') - - -class LimbResLayers(nn.Module): - def __init__(self, - resnet_nums, - inplanes, - outplanes=None, - groups=1, - **kwargs): - super().__init__() - - self.inplanes = inplanes - block, layers = resnet_spec[resnet_nums] - self.outplanes = 512 if outplanes == None else outplanes - self.layer4 = self._make_layer(block, - self.outplanes, - layers[3], - stride=2, - groups=groups) - - self.avg_pooling = nn.AdaptiveAvgPool2d(1) - - def _make_layer(self, block, planes, blocks, stride=1, groups=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes * groups, - planes * block.expansion * groups, - kernel_size=1, - stride=stride, - bias=False, - groups=groups), - nn.BatchNorm2d(planes * block.expansion * groups, - momentum=BN_MOMENTUM), - ) - - layers = [] - layers.append( - block(self.inplanes, planes, stride, downsample, groups=groups)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes, groups=groups)) - - return nn.Sequential(*layers) - - def forward(self, x): - x = self.layer4(x) - x = self.avg_pooling(x) - - return x diff --git a/spaces/Yusin/ChatGPT-Speech/commons.py b/spaces/Yusin/ChatGPT-Speech/commons.py deleted file mode 100644 index 2153153f527d94e2abb641ea00c80b518ff6c5bd..0000000000000000000000000000000000000000 --- a/spaces/Yusin/ChatGPT-Speech/commons.py +++ /dev/null @@ -1,97 +0,0 @@ -import math -import torch -from torch.nn import functional as F -import torch.jit - - -def script_method(fn, _rcb=None): - return fn - - -def script(obj, optimize=True, _frames_up=0, _rcb=None): - return obj - - -torch.jit.script_method = script_method -torch.jit.script = script - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size*dilation - dilation)/2) - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, 
:, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) * ids_str_max).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2,3) * mask - return path diff --git a/spaces/abdvl/datahub_qa_bot/docs/authorization/access-policies-guide.md b/spaces/abdvl/datahub_qa_bot/docs/authorization/access-policies-guide.md deleted file mode 100644 index 296b65427022d01058f702c145e7ef397e7a2ff2..0000000000000000000000000000000000000000 --- a/spaces/abdvl/datahub_qa_bot/docs/authorization/access-policies-guide.md +++ /dev/null @@ -1,338 +0,0 @@ -# About DataHub Access Policies - -<FeatureAvailability/> - -Access Policies define who can do what to which resources. In conjunction with [Roles](./roles.md), Access Policies determine what users are allowed to do on DataHub. - -## Policy Types - -There are 2 types of Access Policy within DataHub: - -1. **Platform** Policies -2. **Metadata** Policies - -<p align="center"> - <img width="80%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/policies-select-policy-type.png"/> -</p> - -**Platform** Policies determine who has platform-level Privileges on DataHub. These include: - -- Managing Users & Groups -- Viewing the DataHub Analytics Page -- Managing Policies themselves - -Platform policies can be broken down into 2 parts: - -1. **Privileges**: Which privileges should be assigned to the Actors (e.g. "View Analytics") -2. **Actors**: Who the should be granted the privileges (Users, or Groups) - -A few Platform Policies in plain English include: - -- The Data Platform team should be allowed to manage users & groups, view platform analytics, & manage policies themselves -- John from IT should be able to invite new users - -**Metadata** policies determine who can do what to which Metadata Entities. For example: - -- Who can edit Dataset Documentation & Links? -- Who can add Owners to a Chart? -- Who can add Tags to a Dashboard? 
- -Metadata policies can be broken down into 3 parts: - -1. **Privileges**: The 'what'. What actions are being permitted by a Policy, e.g. "Add Tags". -2. **Resources**: The 'which'. Resources that the Policy applies to, e.g. "All Datasets". -3. **Actors**: The 'who'. Specific users, groups, & roles that the Policy applies to. - -A few **Metadata** Policies in plain English include: - -- Dataset Owners should be allowed to edit documentation, but not Tags. -- Jenny, our Data Steward, should be allowed to edit Tags for any Dashboard, but no other metadata. -- James, a Data Analyst, should be allowed to edit the Links for a specific Data Pipeline he is a downstream consumer of. - -Each of these can be implemented by constructing DataHub Access Policies. - -## Access Policies Setup, Prerequisites, and Permissions - -What you need to manage Access Policies on DataHub: - -* **Manage Policies** Privilege - -This Platform Privilege allows users to create, edit, and remove all Access Policies on DataHub. Therefore, it should only be -given to those users who will be serving as Admins of the platform. The default `Admin` role has this Privilege. - - -## Using Access Policies - -Policies can be created by first navigating to **Settings > Permissions > Policies**. - -To begin building a new Policy, click **Create new Policy**. - -<p align="center"> - <img width="80%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/manage-permissions.png"/> -</p> - -### Creating a Platform Policy - -#### Step 1. Provide a Name & Description - -In the first step, we select the **Platform** Policy type, and define a name and description for the new Policy. - -Good Policy names describe the high-level purpose of the Policy. For example, a Policy named -"View DataHub Analytics - Data Governance Team" would be a great way to describe a Platform -Policy which grants abilities to view DataHub's Analytics view to anyone on the Data Governance team. - -You can optionally provide a text description to add richer details about the purpose of the Policy. - -#### Step 2: Configure Privileges - -In the second step, we can simply select the Privileges that this Platform Policy will grant. - -<p align="center"> - <img width="70%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/policies-select-platform-privileges.png"/> -</p> - -**Platform** Privileges most often provide access to perform administrative functions on the Platform. These include: - -| Platform Privileges | Description | -|-------------------------------------|--------------------------------------------------------------------------------------------------------------------------------| -| Manage Policies | Allow actor to create and remove access control policies. Be careful - Actors with this Privilege are effectively super users. | -| Manage Metadata Ingestion | Allow actor to create, remove, and update Metadata Ingestion sources. | -| Manage Secrets | Allow actor to create & remove secrets stored inside DataHub. | -| Manage Users & Groups | Allow actor to create, remove, and update users and groups on DataHub. | -| Manage All Access Tokens | Allow actor to create, remove, and list access tokens for all users on DataHub. | -| Create Domains | Allow the actor to create new Domains | -| Manage Domains | Allow actor to create and remove any Domains. | -| View Analytics | Allow the actor access to the DataHub analytics dashboard. 
| -| Generate Personal Access Tokens | Allow the actor to generate access tokens for personal use with DataHub APIs. | -| Manage User Credentials | Allow the actor to generate invite links for new native DataHub users, and password reset links for existing native users. | -| Manage Glossaries | Allow the actor to create, edit, move, and delete Glossary Terms and Term Groups | -| Create Tags | Allow the actor to create new Tags | -| Manage Tags | Allow the actor to create and remove any Tags | -| Manage Public Views | Allow the actor to create, edit, and remove any public (shared) Views. | -| Restore Indices API[^1] | Allow the actor to restore indices for a set of entities via API | -| Enable/Disable Writeability API[^1] | Allow the actor to enable or disable GMS writeability for use in data migrations | -| Apply Retention API[^1] | Allow the actor to apply aspect retention via API | - -[^1]: Only active if REST_API_AUTHORIZATION_ENABLED environment flag is enabled - -#### Step 3: Choose Policy Actors - -In this step, we can select the actors who should be granted Privileges appearing on this Policy. - -To do so, simply search and select the Users or Groups that the Policy should apply to. - -**Assigning a Policy to a User** - -<p align="center"> - <img width="80%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/policies-select-users.png"/> -</p> - -**Assigning a Policy to a Group** - -<p align="center"> - <img width="80%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/policies-select-groups.png"/> -</p> - -### Creating a Metadata Policy - -#### Step 1. Provide a Name & Description - -In the first step, we select the **Metadata** Policy, and define a name and description for the new Policy. - -Good Policy names describe the high-level purpose of the Policy. For example, a Policy named -"Full Dataset Edit Privileges - Data Platform Engineering" would be a great way to describe a Metadata -Policy which grants all abilities to edit Dataset Metadata to anyone in the "Data Platform" group. - -You can optionally provide a text description to add richer detail about the purpose of the Policy. - -#### Step 2: Configure Privileges - -In the second step, we can simply select the Privileges that this Metadata Policy will grant. -To begin, we should first determine which assets that the Privileges should be granted for (i.e. the *scope*), then -select the appropriate Privileges to grant. - -Using the `Resource Type` selector, we can narrow down the *type* of the assets that the Policy applies to. If left blank, -all entity types will be in scope. - -For example, if we only want to grant access for `Datasets` on DataHub, we can select -`Datasets`. - -<p align="center"> - <img width="80%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/policies-select-resource-type.png"/> -</p> - -Next, we can search for specific Entities of the that the Policy should grant privileges on. -If left blank, all entities of the selected types are in scope. - -For example, if we only want to grant access for a specific sample dataset, we can search and -select it directly. - -<p align="center"> - <img width="80%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/policies-select-resource-urn.png"/> -</p> - -We can also limit the scope of the Policy to assets that live in a specific **Domain**. If left blank, -entities from all Domains will be in scope. 
- -For example, if we only want to grant access for assets part of a "Marketing" Domain, we can search and -select it. - -<p align="center"> - <img width="80%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/policies-select-resource-domain.png"/> -</p> - -Finally, we will choose the Privileges to grant when the selected entities fall into the defined -scope. - -<p align="center"> - <img width="80%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/policies-select-metadata-privileges.png"/> -</p> - -**Metadata** Privileges grant access to change specific *entities* (i.e. data assets) on DataHub. - -The common Metadata Privileges, which span across entity types, include: - -| Common Privileges | Description | -|----------------------------------|----------------------------------------------------------------------------------------------------------------------------------| -| View Entity Page | Allow actor to access the entity page for the resource in the UI. If not granted, it will redirect them to an unauthorized page. | -| Edit Tags | Allow actor to add and remove tags to an asset. | -| Edit Glossary Terms | Allow actor to add and remove glossary terms to an asset. | -| Edit Owners | Allow actor to add and remove owners of an entity. | -| Edit Description | Allow actor to edit the description (documentation) of an entity. | -| Edit Links | Allow actor to edit links associated with an entity. | -| Edit Status | Allow actor to edit the status of an entity (soft deleted or not). | -| Edit Domain | Allow actor to edit the Domain of an entity. | -| Edit Deprecation | Allow actor to edit the Deprecation status of an entity. | -| Edit Assertions | Allow actor to add and remove assertions from an entity. | -| Edit All | Allow actor to edit any information about an entity. Super user privileges. | -| Get Timeline API[^1] | Allow actor to get the timeline of an entity via API. | -| Get Entity API[^1] | Allow actor to get an entity via API. | -| Get Timeseries Aspect API[^1] | Allow actor to get a timeseries aspect via API. | -| Get Aspect/Entity Count APIs[^1] | Allow actor to get aspect and entity counts via API. | -| Search API | Allow actor to search for entities via API. | -| Produce Platform Event API | Allow actor to ingest a platform event via API. | - -[^1]: Only active if REST_API_AUTHORIZATION_ENABLED is true - -**Specific Metadata Privileges** include - -| Entity | Privilege | Description | -|--------------|------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------| -| Dataset | Edit Dataset Column Tags | Allow actor to edit the column (field) tags associated with a dataset schema. | -| Dataset | Edit Dataset Column Glossary Terms | Allow actor to edit the column (field) glossary terms associated with a dataset schema. | -| Dataset | Edit Dataset Column Descriptions | Allow actor to edit the column (field) descriptions associated with a dataset schema. | -| Dataset | Edit Dataset Queries | Allow actor to edit the Highlighted Queries on the Queries tab of the dataset. | -| Dataset | View Dataset Usage | Allow actor to access usage metadata about a dataset both in the UI and in the GraphQL API. This includes example queries, number of queries, etc. | -| Dataset | View Dataset Profile | Allow actor to access a dataset's profile both in the UI and in the GraphQL API. 
This includes snapshot statistics like #rows, #columns, null percentage per field, etc. |
-| Tag | Edit Tag Color | Allow actor to change the color of a Tag. |
-| Group | Edit Group Members | Allow actor to add and remove members of a group. |
-| User | Edit User Profile | Allow actor to change the user's profile including display name, bio, title, profile image, etc. |
-| User + Group | Edit Contact Information | Allow actor to change the contact information such as email & chat handles. |
-
-> **Still have questions about Privileges?** Let us know in [Slack](https://slack.datahubproject.io)!
-
-
-#### Step 3: Choose Policy Actors
-
-In this step, we can select the actors who should be granted the Privileges on this Policy. Metadata Policies
-can target specific Users & Groups, or the *owners* of the Entities that are included in the scope of the Policy.
-
-To do so, simply search and select the Users or Groups that the Policy should apply to.
-
-<p align="center">
- <img width="80%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/policies-select-users.png"/>
-</p>
-
-<p align="center">
- <img width="80%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/policies-select-groups.png"/>
-</p>
-
-We can also grant the Privileges to the *owners* of Entities (or *Resources*) that are in scope for the Policy.
-This advanced functionality allows Admins of DataHub to closely control which actions can or cannot be performed by owners.
-
-<p align="center">
- <img width="80%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/policies-select-owners.png"/>
-</p>
-
-### Updating an Existing Policy
-
-To update an existing Policy, simply click **Edit** on the Policy you wish to change.
-
-<p align="center">
- <img width="80%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/edit-policy.png"/>
-</p>
-
-Then, make the changes required and click **Save**. When you save a Policy, it may take up to 2 minutes for changes
-to be reflected.
-
-
-### Removing a Policy
-
-To remove a Policy, simply click on the trashcan icon located on the Policies list. This will remove the Policy and
-deactivate it so that it no longer applies.
-
-When you delete a Policy, it may take up to 2 minutes for changes to be reflected.
-
-
-### Deactivating a Policy
-
-In addition to deletion, DataHub also supports "deactivating" a Policy. This is useful if you need to temporarily disable
-a particular Policy, but do not want to remove it altogether.
-
-To deactivate a Policy, simply click the **Deactivate** button on the Policy you wish to deactivate. When you change
-the state of a Policy, it may take up to 2 minutes for the changes to be reflected.
-
-<p align="center">
- <img width="80%" src="https://raw.githubusercontent.com/datahub-project/static-assets/main/imgs/deactivate-policy.png"/>
-</p>
-
-After deactivating, you can re-enable a Policy by clicking **Activate**.
-
-
-### Default Policies
-
-Out of the box, DataHub is deployed with a set of pre-baked Policies. This set of policies serves the
-following purposes:
-
-1. Assigns immutable super-user privileges for the root `datahub` user account (Immutable)
-2. 
Assigns all Platform Privileges for all Users by default (Editable) - -The reason for #1 is to prevent people from accidentally deleting all policies and getting locked out (`datahub` super user account can be a backup) -The reason for #2 is to permit administrators to log in via OIDC or another means outside of the `datahub` root account -when they are bootstrapping with DataHub. This way, those setting up DataHub can start managing Access Policies without friction. -Note that these Privileges *can* and likely *should* be changed inside the **Policies** page before onboarding -your company's users. - - -## Additional Resources - -- [Authorization Overview](./README.md) -- [Roles Overview](./roles.md) -- [Authorization using Groups](./groups.md) - - -### Videos - -- [Introducing DataHub Access Policies](https://youtu.be/19zQCznqhMI?t=282) - -### GraphQL - -* [listPolicies](../../graphql/queries.md#listPolicies) -* [createPolicy](../../graphql/mutations.md#createPolicy) -* [updatePolicy](../../graphql/mutations.md#updatePolicy) -* [deletePolicy](../../graphql/mutations.md#deletePolicy) - -## FAQ and Troubleshooting - -**How do Policies relate to Roles?** - -Policies are the lowest level primitive for granting Privileges to users on DataHub. - -Roles are built for convenience on top of Policies. Roles grant Privileges to actors indirectly, driven by Policies -behind the scenes. Both can be used in conjunction to grant Privileges to end users. - -*Need more help? Join the conversation in [Slack](http://slack.datahubproject.io)!* - -### Related Features - -- [Roles](./roles.md) \ No newline at end of file diff --git a/spaces/abhaskumarsinha/MinimalGPT-Felis_Catus/GPT.py b/spaces/abhaskumarsinha/MinimalGPT-Felis_Catus/GPT.py deleted file mode 100644 index 122ba53f376a10321a127ee87a909b5ed888f0d1..0000000000000000000000000000000000000000 --- a/spaces/abhaskumarsinha/MinimalGPT-Felis_Catus/GPT.py +++ /dev/null @@ -1,226 +0,0 @@ -import numpy as np -import pandas as pd -import tensorflow as tf -import math -from tqdm import tqdm - -def scaled_dot_product_attention(q, k, v): - # calculate the dot product of query and key - dot_product = tf.matmul(q, k, transpose_b=True) - - - # scale the dot product - scaled_dot_product = dot_product / tf.math.sqrt(tf.cast(tf.shape(k)[-1], dtype=tf.float32)) - - # apply softmax activation to obtain attention weights - attention_weights = tf.nn.softmax(scaled_dot_product, axis=-1) - - # compute the weighted sum of the value vectors with attention weights - output = tf.matmul(attention_weights, v) - - return output - - -class LinearLayer(tf.keras.layers.Layer): - def __init__(self, ix, ox): - super().__init__() - self.ix = ix - self.ox = ox - - - def build(self, input_shapes): - self.w1 = self.add_weight(shape=(self.ix, self.ox)) - self.b1 = self.add_weight(shape=(1, self.ox)) - - def call(self, inputs): - bz, key = tf.shape(inputs)[0], tf.shape(inputs)[1] - inputs = tf.reshape(inputs, (-1, self.ix)) - inputs = tf.matmul(inputs, self.w1) + self.b1 - inputs = tf.reshape(inputs, (bz, key, self.ox)) - return inputs - - - -class split_heads(tf.keras.layers.Layer): - def __init__(self, num_heads = 10): - super().__init__() - self.num_heads = num_heads - - def call(self, inputs): - bz, key = tf.shape(inputs)[0], tf.shape(inputs)[1] - - inputs = tf.reshape(inputs, (bz, key, self.num_heads, -1)) - inputs = tf.transpose(inputs, (0, 2, 1, 3)) - - return inputs - - -class merge_heads(tf.keras.layers.Layer): - def __init__(self): - super().__init__() - - def call(self, inputs): 
- bz, key = tf.shape(inputs)[0], tf.shape(inputs)[2] - - inputs = tf.transpose(inputs, (0, 2, 1, 3)) - inputs = tf.reshape(inputs, (bz, key, -1)) - return inputs - - - -class GPT_Attention(tf.keras.layers.Layer): - - def __init__(self, ix, ox, num_heads): - super().__init__() - self.ix = ix - self.ox = ox - self.num_heads = num_heads - self.linear1 = LinearLayer(self.ix, self.ox * 3) - self.split = split_heads(num_heads = self.num_heads) - self.merge = merge_heads() - self.linear2 = LinearLayer(self.ox, self.ix) - - if self.ox % self.num_heads != 0: - raise ValueError('The value ox = '+ str(self.ox) +' SHOULD be divisible by number of heads provided') - - def call(self, inputs): - if len(inputs) > 0: - inputs = inputs[0] - inputs = self.linear1(inputs) - k, q, v = tf.split(inputs, 3, axis = -1) - k = self.split(k) - q = self.split(q) - v = self.split(v) - #k, q, v = tf.split(inputs, 3, axis = -1) - inputs = scaled_dot_product_attention(k, q, v) - inputs = self.merge(inputs) - inputs = self.linear2(inputs) - - return inputs - - - -class MultiHeadAttention(tf.keras.layers.Layer): - def __init__(self, num_heads = 8, key_dim = 64, key_embedding = 512): - super(MultiHeadAttention, self).__init__() - self.num_heads = num_heads - self.key_dim = key_dim - self.key_embedding = key_embedding - self.head_vectors = [] - - def build(self, input_shape): - #print(input_shape) - - self.W_k = self.add_weight(shape=(self.num_heads, self.key_dim, self.key_embedding), name='key') - self.W_q = self.add_weight(shape=(self.num_heads, self.key_dim, self.key_embedding), name='query') - self.W_v = self.add_weight(shape=(self.num_heads, self.key_dim, self.key_embedding), name='value') - - self.W_o = self.add_weight(shape=(self.key_dim, self.key_embedding)) - - - def call(self, inputs): - query, key, value = inputs - - self.head_vectors = [] - head_concat = None - - for i in range(self.num_heads): - q = tf.einsum('bij, ij -> bij', query, self.W_q[i]) - k = tf.einsum('bij, ij -> bij', key, self.W_k[i]) - v = tf.einsum('bij, ij -> bij', value, self.W_v[i]) - - self.head_vectors += [scaled_dot_product_attention(q, k, v)] - - - head_concat = tf.concat(self.head_vectors, -2) - #print(tf.shape(head_concat)) - output =tf.einsum('bij, kj -> bkj', head_concat, self.W_o) - - - return output - -class Decoder(tf.keras.layers.Layer): - def __init__(self, num_heads = 8, key_dim = 64, key_embedding = 512, GPT_attention = False): - super(Decoder, self).__init__() - - self.num_heads = num_heads - self.key_dim = key_dim - self.key_embedding = key_embedding - if GPT_attention: - self.attention = GPT_Attention(key_embedding, key_embedding, num_heads) - else: - self.attention = MultiHeadAttention(num_heads = num_heads, key_dim = key_dim, key_embedding = key_embedding) - self.normalize1 = tf.keras.layers.LayerNormalization(axis = -2) - self.normalize2 = tf.keras.layers.LayerNormalization(axis = -2) - - - def build(self, input_shape): - #print(input_shape) - - self.x1 = self.add_weight(shape=(self.key_dim, self.key_embedding), name='vec1') - self.x2 = self.add_weight(shape=(self.key_dim, self.key_embedding), name='vec2') - - self.y1 = self.add_weight(shape=(self.key_dim, self.key_embedding), name='bias1') - self.y2 = self.add_weight(shape=(self.key_dim, self.key_embedding), name='bias2') - - def call(self, inputs): - - first_sublayer_output = self.attention((inputs, inputs, inputs)) - first_sublayer_output = self.normalize1(first_sublayer_output + inputs) - - first_nn = tf.einsum('bij, ij -> bij', first_sublayer_output, self.x1) + self.y1 
- first_nn = tf.keras.activations.relu(first_nn, alpha=0.0, max_value=None, threshold=0.0) - second_nn = tf.einsum('bij, ij -> bij', first_nn, self.x2) + self.y2 - - second_sublayer_output = self.normalize2(second_nn + first_sublayer_output) - - - - return second_sublayer_output - -def positional_function(words, embedding): - pos = np.zeros((words, embedding)) - - for i in range(words): - for j in range(embedding): - if j%2 == 0: - pos[i, j] = math.sin(i/pow(10000, 2*j/(512))) - else: - pos[i, j] = math.cos(i/pow(10000, 2*j/(512))) - - return pos - - -class PositionalEmbedding(tf.keras.layers.Layer): - def __init__(self, positional_function = positional_function, embedding_size = 512, words = 64): - super(PositionalEmbedding, self).__init__() - self.embedding_size = embedding_size - self.words = words - self.pos_mat = tf.cast(tf.convert_to_tensor(positional_function(self.words, self.embedding_size)), tf.float32) - - def build(self, input_sizes): - print(input_sizes) - - def call(self, inputs): - embed = tf.einsum("bij, ij -> bij", inputs, self.pos_mat) - return embed - -def generate_output(model, vectorizer, text_size = 70, gpt_input = 64, input_sequence = []): - - if input_sequence == []: - input_sequence = tf.zeros((1, gpt_input)).numpy() - - text = tf.zeros((1, text_size)).numpy() - text[0][: gpt_input] = input_sequence[0][: gpt_input] - - GPT = model - - - for i in tqdm(range(gpt_input, text_size)): - #print("Iteration number:" + str(i)) - output = tf.argmax(GPT(input_sequence), -1).numpy() - text[0][i - 1] = output - input_sequence = text[0][i - gpt_input : i].reshape(1, gpt_input) - - op = [vectorizer.get_vocabulary()[int(text[0][i])] for i in range(len(text[0]))] - return ' '.join(op) \ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/assigners/region_assigner.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/assigners/region_assigner.py deleted file mode 100644 index 2e8464b97c8d8f44488d7bb781ca2e733a258e55..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet/core/bbox/assigners/region_assigner.py +++ /dev/null @@ -1,221 +0,0 @@ -import torch - -from mmdet.core import anchor_inside_flags -from ..builder import BBOX_ASSIGNERS -from .assign_result import AssignResult -from .base_assigner import BaseAssigner - - -def calc_region(bbox, ratio, stride, featmap_size=None): - """Calculate region of the box defined by the ratio, the ratio is from the - center of the box to every edge.""" - # project bbox on the feature - f_bbox = bbox / stride - x1 = torch.round((1 - ratio) * f_bbox[0] + ratio * f_bbox[2]) - y1 = torch.round((1 - ratio) * f_bbox[1] + ratio * f_bbox[3]) - x2 = torch.round(ratio * f_bbox[0] + (1 - ratio) * f_bbox[2]) - y2 = torch.round(ratio * f_bbox[1] + (1 - ratio) * f_bbox[3]) - if featmap_size is not None: - x1 = x1.clamp(min=0, max=featmap_size[1]) - y1 = y1.clamp(min=0, max=featmap_size[0]) - x2 = x2.clamp(min=0, max=featmap_size[1]) - y2 = y2.clamp(min=0, max=featmap_size[0]) - return (x1, y1, x2, y2) - - -def anchor_ctr_inside_region_flags(anchors, stride, region): - """Get the flag indicate whether anchor centers are inside regions.""" - x1, y1, x2, y2 = region - f_anchors = anchors / stride - x = (f_anchors[:, 0] + f_anchors[:, 2]) * 0.5 - y = (f_anchors[:, 1] + f_anchors[:, 3]) * 0.5 - flags = (x >= x1) & (x <= x2) & (y >= y1) & (y <= y2) - return flags - - -@BBOX_ASSIGNERS.register_module() -class RegionAssigner(BaseAssigner): - 
"""Assign a corresponding gt bbox or background to each bbox. - - Each proposals will be assigned with `-1`, `0`, or a positive integer - indicating the ground truth index. - - - -1: don't care - - 0: negative sample, no assigned gt - - positive integer: positive sample, index (1-based) of assigned gt - - Args: - center_ratio: ratio of the region in the center of the bbox to - define positive sample. - ignore_ratio: ratio of the region to define ignore samples. - """ - - def __init__(self, center_ratio=0.2, ignore_ratio=0.5): - self.center_ratio = center_ratio - self.ignore_ratio = ignore_ratio - - def assign(self, - mlvl_anchors, - mlvl_valid_flags, - gt_bboxes, - img_meta, - featmap_sizes, - anchor_scale, - anchor_strides, - gt_bboxes_ignore=None, - gt_labels=None, - allowed_border=0): - """Assign gt to anchors. - - This method assign a gt bbox to every bbox (proposal/anchor), each bbox - will be assigned with -1, 0, or a positive number. -1 means don't care, - 0 means negative sample, positive number is the index (1-based) of - assigned gt. - The assignment is done in following steps, the order matters. - - 1. Assign every anchor to 0 (negative) - For each gt_bboxes: - 2. Compute ignore flags based on ignore_region then - assign -1 to anchors w.r.t. ignore flags - 3. Compute pos flags based on center_region then - assign gt_bboxes to anchors w.r.t. pos flags - 4. Compute ignore flags based on adjacent anchor lvl then - assign -1 to anchors w.r.t. ignore flags - 5. Assign anchor outside of image to -1 - - Args: - mlvl_anchors (list[Tensor]): Multi level anchors. - mlvl_valid_flags (list[Tensor]): Multi level valid flags. - gt_bboxes (Tensor): Ground truth bboxes of image - img_meta (dict): Meta info of image. - featmap_sizes (list[Tensor]): Feature mapsize each level - anchor_scale (int): Scale of the anchor. - anchor_strides (list[int]): Stride of the anchor. - gt_bboxes (Tensor): Groundtruth boxes, shape (k, 4). - gt_bboxes_ignore (Tensor, optional): Ground truth bboxes that are - labelled as `ignored`, e.g., crowd boxes in COCO. - gt_labels (Tensor, optional): Label of gt_bboxes, shape (k, ). - allowed_border (int, optional): The border to allow the valid - anchor. Defaults to 0. - - Returns: - :obj:`AssignResult`: The assign result. - """ - if gt_bboxes_ignore is not None: - raise NotImplementedError - - num_gts = gt_bboxes.shape[0] - num_bboxes = sum(x.shape[0] for x in mlvl_anchors) - - if num_gts == 0 or num_bboxes == 0: - # No ground truth or boxes, return empty assignment - max_overlaps = gt_bboxes.new_zeros((num_bboxes, )) - assigned_gt_inds = gt_bboxes.new_zeros((num_bboxes, ), - dtype=torch.long) - if gt_labels is None: - assigned_labels = None - else: - assigned_labels = gt_bboxes.new_full((num_bboxes, ), - -1, - dtype=torch.long) - return AssignResult( - num_gts, - assigned_gt_inds, - max_overlaps, - labels=assigned_labels) - - num_lvls = len(mlvl_anchors) - r1 = (1 - self.center_ratio) / 2 - r2 = (1 - self.ignore_ratio) / 2 - - scale = torch.sqrt((gt_bboxes[:, 2] - gt_bboxes[:, 0]) * - (gt_bboxes[:, 3] - gt_bboxes[:, 1])) - min_anchor_size = scale.new_full( - (1, ), float(anchor_scale * anchor_strides[0])) - target_lvls = torch.floor( - torch.log2(scale) - torch.log2(min_anchor_size) + 0.5) - target_lvls = target_lvls.clamp(min=0, max=num_lvls - 1).long() - - # 1. 
assign 0 (negative) by default - mlvl_assigned_gt_inds = [] - mlvl_ignore_flags = [] - for lvl in range(num_lvls): - h, w = featmap_sizes[lvl] - assert h * w == mlvl_anchors[lvl].shape[0] - assigned_gt_inds = gt_bboxes.new_full((h * w, ), - 0, - dtype=torch.long) - ignore_flags = torch.zeros_like(assigned_gt_inds) - mlvl_assigned_gt_inds.append(assigned_gt_inds) - mlvl_ignore_flags.append(ignore_flags) - - for gt_id in range(num_gts): - lvl = target_lvls[gt_id].item() - featmap_size = featmap_sizes[lvl] - stride = anchor_strides[lvl] - anchors = mlvl_anchors[lvl] - gt_bbox = gt_bboxes[gt_id, :4] - - # Compute regions - ignore_region = calc_region(gt_bbox, r2, stride, featmap_size) - ctr_region = calc_region(gt_bbox, r1, stride, featmap_size) - - # 2. Assign -1 to ignore flags - ignore_flags = anchor_ctr_inside_region_flags( - anchors, stride, ignore_region) - mlvl_assigned_gt_inds[lvl][ignore_flags] = -1 - - # 3. Assign gt_bboxes to pos flags - pos_flags = anchor_ctr_inside_region_flags(anchors, stride, - ctr_region) - mlvl_assigned_gt_inds[lvl][pos_flags] = gt_id + 1 - - # 4. Assign -1 to ignore adjacent lvl - if lvl > 0: - d_lvl = lvl - 1 - d_anchors = mlvl_anchors[d_lvl] - d_featmap_size = featmap_sizes[d_lvl] - d_stride = anchor_strides[d_lvl] - d_ignore_region = calc_region(gt_bbox, r2, d_stride, - d_featmap_size) - ignore_flags = anchor_ctr_inside_region_flags( - d_anchors, d_stride, d_ignore_region) - mlvl_ignore_flags[d_lvl][ignore_flags] = 1 - if lvl < num_lvls - 1: - u_lvl = lvl + 1 - u_anchors = mlvl_anchors[u_lvl] - u_featmap_size = featmap_sizes[u_lvl] - u_stride = anchor_strides[u_lvl] - u_ignore_region = calc_region(gt_bbox, r2, u_stride, - u_featmap_size) - ignore_flags = anchor_ctr_inside_region_flags( - u_anchors, u_stride, u_ignore_region) - mlvl_ignore_flags[u_lvl][ignore_flags] = 1 - - # 4. (cont.) Assign -1 to ignore adjacent lvl - for lvl in range(num_lvls): - ignore_flags = mlvl_ignore_flags[lvl] - mlvl_assigned_gt_inds[lvl][ignore_flags] = -1 - - # 5. 
Assign -1 to anchor outside of image - flat_assigned_gt_inds = torch.cat(mlvl_assigned_gt_inds) - flat_anchors = torch.cat(mlvl_anchors) - flat_valid_flags = torch.cat(mlvl_valid_flags) - assert (flat_assigned_gt_inds.shape[0] == flat_anchors.shape[0] == - flat_valid_flags.shape[0]) - inside_flags = anchor_inside_flags(flat_anchors, flat_valid_flags, - img_meta['img_shape'], - allowed_border) - outside_flags = ~inside_flags - flat_assigned_gt_inds[outside_flags] = -1 - - if gt_labels is not None: - assigned_labels = torch.zeros_like(flat_assigned_gt_inds) - pos_flags = assigned_gt_inds > 0 - assigned_labels[pos_flags] = gt_labels[ - flat_assigned_gt_inds[pos_flags] - 1] - else: - assigned_labels = None - - return AssignResult( - num_gts, flat_assigned_gt_inds, None, labels=assigned_labels) diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/__init__.py deleted file mode 100644 index 95e34a848652f2ab3ca6d3489aa2934d24817888..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer/mmdet_null/core/bbox/assigners/__init__.py +++ /dev/null @@ -1,16 +0,0 @@ -from .approx_max_iou_assigner import ApproxMaxIoUAssigner -from .assign_result import AssignResult -from .atss_assigner import ATSSAssigner -from .base_assigner import BaseAssigner -from .center_region_assigner import CenterRegionAssigner -from .grid_assigner import GridAssigner -from .hungarian_assigner import HungarianAssigner -from .max_iou_assigner import MaxIoUAssigner -from .point_assigner import PointAssigner -from .region_assigner import RegionAssigner - -__all__ = [ - 'BaseAssigner', 'MaxIoUAssigner', 'ApproxMaxIoUAssigner', 'AssignResult', - 'PointAssigner', 'ATSSAssigner', 'CenterRegionAssigner', 'GridAssigner', - 'HungarianAssigner', 'RegionAssigner' -] diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/bricks/scale.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/bricks/scale.py deleted file mode 100644 index c905fffcc8bf998d18d94f927591963c428025e2..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmcv/cnn/bricks/scale.py +++ /dev/null @@ -1,21 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import torch -import torch.nn as nn - - -class Scale(nn.Module): - """A learnable scale parameter. - - This layer scales the input by a learnable factor. It multiplies a - learnable scale parameter of shape (1,) with input of any shape. - - Args: - scale (float): Initial value of scale factor. 
Default: 1.0 - """ - - def __init__(self, scale=1.0): - super(Scale, self).__init__() - self.scale = nn.Parameter(torch.tensor(scale, dtype=torch.float)) - - def forward(self, x): - return x * self.scale diff --git a/spaces/abidlabs/Voice-Cloning/app.py b/spaces/abidlabs/Voice-Cloning/app.py deleted file mode 100644 index 98689cad4e3bddea3bb3190f8dd39ce76e44803f..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/Voice-Cloning/app.py +++ /dev/null @@ -1,164 +0,0 @@ -from turtle import title -import gradio as gr - -import git -import os -os.system('git clone https://github.com/Edresson/Coqui-TTS -b multilingual-torchaudio-SE TTS') -os.system('pip install -q -e TTS/') -os.system('pip install -q torchaudio==0.9.0') - -import sys -TTS_PATH = "TTS/" - -# add libraries into environment -sys.path.append(TTS_PATH) # set this if TTS is not installed globally - -import os -import string -import time -import argparse -import json - -import numpy as np -import IPython -from IPython.display import Audio - - -import torch - -from TTS.tts.utils.synthesis import synthesis -from TTS.tts.utils.text.symbols import make_symbols, phonemes, symbols -try: - from TTS.utils.audio import AudioProcessor -except: - from TTS.utils.audio import AudioProcessor - - -from TTS.tts.models import setup_model -from TTS.config import load_config -from TTS.tts.models.vits import * - -OUT_PATH = 'out/' - -# create output path -os.makedirs(OUT_PATH, exist_ok=True) - -# model vars -MODEL_PATH = '/home/user/app/best_model_latest.pth.tar' -CONFIG_PATH = '/home/user/app/config.json' -TTS_LANGUAGES = "/home/user/app/language_ids.json" -TTS_SPEAKERS = "/home/user/app/speakers.json" -USE_CUDA = torch.cuda.is_available() - -# load the config -C = load_config(CONFIG_PATH) - - -# load the audio processor -ap = AudioProcessor(**C.audio) - -speaker_embedding = None - -C.model_args['d_vector_file'] = TTS_SPEAKERS -C.model_args['use_speaker_encoder_as_loss'] = False - -model = setup_model(C) -model.language_manager.set_language_ids_from_file(TTS_LANGUAGES) -# print(model.language_manager.num_languages, model.embedded_language_dim) -# print(model.emb_l) -cp = torch.load(MODEL_PATH, map_location=torch.device('cpu')) -# remove speaker encoder -model_weights = cp['model'].copy() -for key in list(model_weights.keys()): - if "speaker_encoder" in key: - del model_weights[key] - -model.load_state_dict(model_weights) - - -model.eval() - -if USE_CUDA: - model = model.cuda() - -# synthesize voice -use_griffin_lim = False - -os.system('pip install -q pydub ffmpeg-normalize') - -CONFIG_SE_PATH = "config_se.json" -CHECKPOINT_SE_PATH = "SE_checkpoint.pth.tar" - -from TTS.tts.utils.speakers import SpeakerManager -from pydub import AudioSegment -import librosa - -SE_speaker_manager = SpeakerManager(encoder_model_path=CHECKPOINT_SE_PATH, encoder_config_path=CONFIG_SE_PATH, use_cuda=USE_CUDA) - -def compute_spec(ref_file): - y, sr = librosa.load(ref_file, sr=ap.sample_rate) - spec = ap.spectrogram(y) - spec = torch.FloatTensor(spec).unsqueeze(0) - return spec - - - -def greet(Text,Voicetoclone,VoiceMicrophone): - text= "%s" % (Text) - if Voicetoclone is not None: - reference_files= "%s" % (Voicetoclone) - print("path url") - print(Voicetoclone) - sample= str(Voicetoclone) - else: - reference_files= "%s" % (VoiceMicrophone) - print("path url") - print(VoiceMicrophone) - sample= str(VoiceMicrophone) - size= len(reference_files)*sys.getsizeof(reference_files) - size2= size / 1000000 - if (size2 > 0.012) or len(text)>2000: - message="File is greater than 
30mb or Text inserted is longer than 2000 characters. Please re-try with smaller sizes." - print(message) - raise SystemExit("File is greater than 30mb. Please re-try or Text inserted is longer than 2000 characters. Please re-try with smaller sizes.") - else: - os.system('ffmpeg-normalize $sample -nt rms -t=-27 -o $sample -ar 16000 -f') - reference_emb = SE_speaker_manager.compute_d_vector_from_clip(reference_files) - model.length_scale = 1 # scaler for the duration predictor. The larger it is, the slower the speech. - model.inference_noise_scale = 0.3 # defines the noise variance applied to the random z vector at inference. - model.inference_noise_scale_dp = 0.3 # defines the noise variance applied to the duration predictor z vector at inference. - text = text - model.language_manager.language_id_mapping - language_id = 0 - - print(" > text: {}".format(text)) - wav, alignment, _, _ = synthesis( - model, - text, - C, - "cuda" in str(next(model.parameters()).device), - ap, - speaker_id=None, - d_vector=reference_emb, - style_wav=None, - language_id=language_id, - enable_eos_bos_chars=C.enable_eos_bos_chars, - use_griffin_lim=True, - do_trim_silence=False, - ).values() - print("Generated Audio") - IPython.display.display(Audio(wav, rate=ap.sample_rate)) - file_name = text.replace(" ", "_") - file_name = file_name.translate(str.maketrans('', '', string.punctuation.replace('_', ''))) + '.wav' - out_path = os.path.join(OUT_PATH, file_name) - print(" > Saving output to {}".format(out_path)) - ap.save_wav(wav, out_path) - return out_path - -demo = gr.Interface( - fn=greet, - inputs=[gr.inputs.Textbox(label='What would you like the voice to say? (max. 2000 characters per request)'),gr.Audio(type="filepath", source="upload",label='Please upload a voice to clone (max. 
30mb)'),gr.Audio(source="microphone", type="filepath", streaming=True)], - outputs="audio", - title="Bilal's Voice Cloning Tool" - ) -demo.launch() \ No newline at end of file diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/tests/unit/test_cameras.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/tests/unit/test_cameras.py deleted file mode 100644 index 7544ad8f8e3ee55236fd2e32dbc12065153cbe5b..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/tests/unit/test_cameras.py +++ /dev/null @@ -1,164 +0,0 @@ -import numpy as np -import pytest - -from pyrender import PerspectiveCamera, OrthographicCamera - - -def test_perspective_camera(): - - # Set up constants - znear = 0.05 - zfar = 100 - yfov = np.pi / 3.0 - width = 1000.0 - height = 500.0 - aspectRatio = 640.0 / 480.0 - - # Test basics - with pytest.raises(TypeError): - p = PerspectiveCamera() - - p = PerspectiveCamera(yfov=yfov) - assert p.yfov == yfov - assert p.znear == 0.05 - assert p.zfar is None - assert p.aspectRatio is None - p.name = 'asdf' - p.name = None - - with pytest.raises(ValueError): - p.yfov = 0.0 - - with pytest.raises(ValueError): - p.yfov = -1.0 - - with pytest.raises(ValueError): - p.znear = -1.0 - - p.znear = 0.0 - p.znear = 0.05 - p.zfar = 100.0 - assert p.zfar == 100.0 - - with pytest.raises(ValueError): - p.zfar = 0.03 - - with pytest.raises(ValueError): - p.zfar = 0.05 - - p.aspectRatio = 10.0 - assert p.aspectRatio == 10.0 - - with pytest.raises(ValueError): - p.aspectRatio = 0.0 - - with pytest.raises(ValueError): - p.aspectRatio = -1.0 - - # Test matrix getting/setting - - # NF - p.znear = 0.05 - p.zfar = 100 - p.aspectRatio = None - - with pytest.raises(ValueError): - p.get_projection_matrix() - - assert np.allclose( - p.get_projection_matrix(width, height), - np.array([ - [1.0 / (width / height * np.tan(yfov / 2.0)), 0.0, 0.0, 0.0], - [0.0, 1.0 / np.tan(yfov / 2.0), 0.0, 0.0], - [0.0, 0.0, (zfar + znear) / (znear - zfar), - (2 * zfar * znear) / (znear - zfar)], - [0.0, 0.0, -1.0, 0.0] - ]) - ) - - # NFA - p.aspectRatio = aspectRatio - assert np.allclose( - p.get_projection_matrix(width, height), - np.array([ - [1.0 / (aspectRatio * np.tan(yfov / 2.0)), 0.0, 0.0, 0.0], - [0.0, 1.0 / np.tan(yfov / 2.0), 0.0, 0.0], - [0.0, 0.0, (zfar + znear) / (znear - zfar), - (2 * zfar * znear) / (znear - zfar)], - [0.0, 0.0, -1.0, 0.0] - ]) - ) - assert np.allclose( - p.get_projection_matrix(), p.get_projection_matrix(width, height) - ) - - # N - p.zfar = None - p.aspectRatio = None - assert np.allclose( - p.get_projection_matrix(width, height), - np.array([ - [1.0 / (width / height * np.tan(yfov / 2.0)), 0.0, 0.0, 0.0], - [0.0, 1.0 / np.tan(yfov / 2.0), 0.0, 0.0], - [0.0, 0.0, -1.0, -2.0 * znear], - [0.0, 0.0, -1.0, 0.0] - ]) - ) - - -def test_orthographic_camera(): - xm = 1.0 - ym = 2.0 - n = 0.05 - f = 100.0 - - with pytest.raises(TypeError): - c = OrthographicCamera() - - c = OrthographicCamera(xmag=xm, ymag=ym) - - assert c.xmag == xm - assert c.ymag == ym - assert c.znear == 0.05 - assert c.zfar == 100.0 - assert c.name is None - - with pytest.raises(TypeError): - c.ymag = None - - with pytest.raises(ValueError): - c.ymag = 0.0 - - with pytest.raises(ValueError): - c.ymag = -1.0 - - with pytest.raises(TypeError): - c.xmag = None - - with pytest.raises(ValueError): - c.xmag = 0.0 - - with pytest.raises(ValueError): - c.xmag = -1.0 - - with pytest.raises(TypeError): - c.znear = None - - with pytest.raises(ValueError): - c.znear = 0.0 - - with 
pytest.raises(ValueError): - c.znear = -1.0 - - with pytest.raises(ValueError): - c.zfar = 0.01 - - assert np.allclose( - c.get_projection_matrix(), - np.array([ - [1.0 / xm, 0, 0, 0], - [0, 1.0 / ym, 0, 0], - [0, 0, 2.0 / (n - f), (f + n) / (n - f)], - [0, 0, 0, 1.0] - ]) - ) diff --git a/spaces/adpro/avinev3_04/README.md b/spaces/adpro/avinev3_04/README.md deleted file mode 100644 index 555d485f05e1f6e4dadd1c054d50dd0776c7b20e..0000000000000000000000000000000000000000 --- a/spaces/adpro/avinev3_04/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: AnimeGANv2 -emoji: ⚡ -colorFrom: yellow -colorTo: blue -sdk: gradio -sdk_version: 3.1.3 -app_file: app.py -pinned: false -duplicated_from: adpro/avinev2_04 ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/aiEDUcurriculum/introtoAI-mental-health-project/app.py b/spaces/aiEDUcurriculum/introtoAI-mental-health-project/app.py deleted file mode 100644 index db5dcf3d31f99184527b594de5d457f8a065760a..0000000000000000000000000000000000000000 --- a/spaces/aiEDUcurriculum/introtoAI-mental-health-project/app.py +++ /dev/null @@ -1,172 +0,0 @@ -### ----------------------------- ### -### libraries ### -### ----------------------------- ### - -import gradio as gr -import pandas as pd -import numpy as np -from sklearn.model_selection import train_test_split -from sklearn.linear_model import LogisticRegression -from sklearn import metrics - - -### ------------------------------ ### -### data transformation ### -### ------------------------------ ### - -# load dataset -uncleaned_data = pd.read_csv('data.csv') - -# remove timestamp from dataset (always first column) -uncleaned_data = uncleaned_data.iloc[: , 1:] -data = pd.DataFrame() - -# keep track of which columns are categorical and what -# those columns' value mappings are -# structure: {colname1: {...}, colname2: {...} } -cat_value_dicts = {} -final_colname = uncleaned_data.columns[len(uncleaned_data.columns) - 1] - -# for each column... -for (colname, colval) in uncleaned_data.iteritems(): - - # check if col is already a number; if so, add col directly - # to new dataframe and skip to next column - if isinstance(colval.values[0], (np.integer, float)): - data[colname] = uncleaned_data[colname].copy() - continue - - # structure: {0: "lilac", 1: "blue", ...} - new_dict = {} - val = 0 # first index per column - transformed_col_vals = [] # new numeric datapoints - - # if not, for each item in that column... - for (row, item) in enumerate(colval.values): - - # if item is not in this col's dict... 
- if item not in new_dict: - new_dict[item] = val - val += 1 - - # then add numerical value to transformed dataframe - transformed_col_vals.append(new_dict[item]) - - # reverse dictionary only for final col (0, 1) => (vals) - if colname == final_colname: - new_dict = {value : key for (key, value) in new_dict.items()} - - cat_value_dicts[colname] = new_dict - data[colname] = transformed_col_vals - - -### -------------------------------- ### -### model training ### -### -------------------------------- ### - -# select features and predicton; automatically selects last column as prediction -cols = len(data.columns) -num_features = cols - 1 -x = data.iloc[: , :num_features] -y = data.iloc[: , num_features:] - -# split data into training and testing sets -x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25) - -# instantiate the model (using default parameters) -model = LogisticRegression() -model.fit(x_train, y_train.values.ravel()) -y_pred = model.predict(x_test) - - -### -------------------------------- ### -### article generation ### -### -------------------------------- ### -# borrow file reading function from reader.py - -def get_feat(): - feats = [abs(x) for x in model.coef_[0]] - max_val = max(feats) - idx = feats.index(max_val) - return data.columns[idx] - -acc = str(round(metrics.accuracy_score(y_test, y_pred) * 100, 1)) + "%" -most_imp_feat = get_feat() -# info = get_article(acc, most_imp_feat) - - - -### ------------------------------- ### -### interface creation ### -### ------------------------------- ### - - -# predictor for generic number of features -def general_predictor(*args): - features = [] - - # transform categorical input - for colname, arg in zip(data.columns, args): - if (colname in cat_value_dicts): - features.append(cat_value_dicts[colname][arg]) - else: - features.append(arg) - - # predict single datapoint - new_input = [features] - result = model.predict(new_input) - return cat_value_dicts[final_colname][result[0]] - -# add data labels to replace those lost via star-args - - -block = gr.Blocks() - -with open('info.md') as f: - with block: - gr.Markdown(f.readline()) - gr.Markdown('Take the quiz to get a personalized recommendation using AI.') - - with gr.Row(): - with gr.Box(): - inputls = [] - for colname in data.columns: - # skip last column - if colname == final_colname: - continue - - # access categories dict if data is categorical - # otherwise, just use a number input - if colname in cat_value_dicts: - radio_options = list(cat_value_dicts[colname].keys()) - inputls.append(gr.Dropdown(radio_options, type="value", label=colname)) - else: - # add numerical input - inputls.append(gr.Number(label=colname)) - gr.Markdown("<br />") - - submit = gr.Button("Click to see your personalized result!", variant="primary") - gr.Markdown("<br />") - output = gr.Textbox(label="Your recommendation:", placeholder="your recommendation will appear here") - - submit.click(fn=general_predictor, inputs=inputls, outputs=output) - gr.Markdown("<br />") - - with gr.Row(): - with gr.Box(): - gr.Markdown(f"<h3>Accuracy: </h3>{acc}") - with gr.Box(): - gr.Markdown(f"<h3>Most important feature: </h3>{most_imp_feat}") - - gr.Markdown("<br />") - - with gr.Box(): - gr.Markdown('''⭐ Note that model accuracy is based on the uploaded data.csv and reflects how well the AI model can give correct recommendations for <em>that dataset</em>. 
Model accuracy and most important feature can be helpful for understanding how the model works, but <em>should not be considered absolute facts about the real world</em>.''') - - with gr.Box(): - with open('info.md') as f: - f.readline() - gr.Markdown(f.read()) - -# show the interface -block.launch() \ No newline at end of file diff --git a/spaces/akhaliq/PaintTransformer/train/data/__init__.py b/spaces/akhaliq/PaintTransformer/train/data/__init__.py deleted file mode 100644 index 78063f405b457405e098016159136196bf41701b..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/PaintTransformer/train/data/__init__.py +++ /dev/null @@ -1,94 +0,0 @@ -"""This package includes all the modules related to data loading and preprocessing - - To add a custom dataset class called 'dummy', you need to add a file called 'dummy_dataset.py' and define a subclass 'DummyDataset' inherited from BaseDataset. - You need to implement four functions: - -- <__init__>: initialize the class, first call BaseDataset.__init__(self, opt). - -- <__len__>: return the size of dataset. - -- <__getitem__>: get a data point from data loader. - -- <modify_commandline_options>: (optionally) add dataset-specific options and set default options. - -Now you can use the dataset class by specifying flag '--dataset_mode dummy'. -See our template dataset class 'template_dataset.py' for more details. -""" -import importlib -import torch.utils.data -from data.base_dataset import BaseDataset - - -def find_dataset_using_name(dataset_name): - """Import the module "data/[dataset_name]_dataset.py". - - In the file, the class called DatasetNameDataset() will - be instantiated. It has to be a subclass of BaseDataset, - and it is case-insensitive. - """ - dataset_filename = "data." + dataset_name + "_dataset" - datasetlib = importlib.import_module(dataset_filename) - - dataset = None - target_dataset_name = dataset_name.replace('_', '') + 'dataset' - for name, cls in datasetlib.__dict__.items(): - if name.lower() == target_dataset_name.lower() \ - and issubclass(cls, BaseDataset): - dataset = cls - - if dataset is None: - raise NotImplementedError("In %s.py, there should be a subclass of BaseDataset with class name that matches %s in lowercase." % (dataset_filename, target_dataset_name)) - - return dataset - - -def get_option_setter(dataset_name): - """Return the static method <modify_commandline_options> of the dataset class.""" - dataset_class = find_dataset_using_name(dataset_name) - return dataset_class.modify_commandline_options - - -def create_dataset(opt): - """Create a dataset given the option. - - This function wraps the class CustomDatasetDataLoader. - This is the main interface between this package and 'train.py'/'test.py' - - Example: - >>> from data import create_dataset - >>> dataset = create_dataset(opt) - """ - data_loader = CustomDatasetDataLoader(opt) - dataset = data_loader.load_data() - return dataset - - -class CustomDatasetDataLoader(): - """Wrapper class of Dataset class that performs multi-threaded data loading""" - - def __init__(self, opt): - """Initialize this class - - Step 1: create a dataset instance given the name [dataset_mode] - Step 2: create a multi-threaded data loader. 
- """ - self.opt = opt - dataset_class = find_dataset_using_name(opt.dataset_mode) - self.dataset = dataset_class(opt) - print("dataset [%s] was created" % type(self.dataset).__name__) - self.dataloader = torch.utils.data.DataLoader( - self.dataset, - batch_size=opt.batch_size, - shuffle=not opt.serial_batches, - num_workers=int(opt.num_threads), - drop_last=True) - - def load_data(self): - return self - - def __len__(self): - """Return the number of data in the dataset""" - return min(len(self.dataset), self.opt.max_dataset_size) - - def __iter__(self): - """Return a batch of data""" - for i, data in enumerate(self.dataloader): - if i * self.opt.batch_size >= self.opt.max_dataset_size: - break - yield data diff --git a/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/display.py b/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/display.py deleted file mode 100644 index 956880722a3f05613ebd06f5686b3d8a59642e92..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Real-Time-Voice-Cloning/vocoder/display.py +++ /dev/null @@ -1,120 +0,0 @@ -import matplotlib.pyplot as plt -import time -import numpy as np -import sys - - -def progbar(i, n, size=16): - done = (i * size) // n - bar = '' - for i in range(size): - bar += '█' if i <= done else '░' - return bar - - -def stream(message) : - try: - sys.stdout.write("\r{%s}" % message) - except: - #Remove non-ASCII characters from message - message = ''.join(i for i in message if ord(i)<128) - sys.stdout.write("\r{%s}" % message) - - -def simple_table(item_tuples) : - - border_pattern = '+---------------------------------------' - whitespace = ' ' - - headings, cells, = [], [] - - for item in item_tuples : - - heading, cell = str(item[0]), str(item[1]) - - pad_head = True if len(heading) < len(cell) else False - - pad = abs(len(heading) - len(cell)) - pad = whitespace[:pad] - - pad_left = pad[:len(pad)//2] - pad_right = pad[len(pad)//2:] - - if pad_head : - heading = pad_left + heading + pad_right - else : - cell = pad_left + cell + pad_right - - headings += [heading] - cells += [cell] - - border, head, body = '', '', '' - - for i in range(len(item_tuples)) : - - temp_head = f'| {headings[i]} ' - temp_body = f'| {cells[i]} ' - - border += border_pattern[:len(temp_head)] - head += temp_head - body += temp_body - - if i == len(item_tuples) - 1 : - head += '|' - body += '|' - border += '+' - - print(border) - print(head) - print(border) - print(body) - print(border) - print(' ') - - -def time_since(started) : - elapsed = time.time() - started - m = int(elapsed // 60) - s = int(elapsed % 60) - if m >= 60 : - h = int(m // 60) - m = m % 60 - return f'{h}h {m}m {s}s' - else : - return f'{m}m {s}s' - - -def save_attention(attn, path) : - fig = plt.figure(figsize=(12, 6)) - plt.imshow(attn.T, interpolation='nearest', aspect='auto') - fig.savefig(f'{path}.png', bbox_inches='tight') - plt.close(fig) - - -def save_spectrogram(M, path, length=None) : - M = np.flip(M, axis=0) - if length : M = M[:, :length] - fig = plt.figure(figsize=(12, 6)) - plt.imshow(M, interpolation='nearest', aspect='auto') - fig.savefig(f'{path}.png', bbox_inches='tight') - plt.close(fig) - - -def plot(array) : - fig = plt.figure(figsize=(30, 5)) - ax = fig.add_subplot(111) - ax.xaxis.label.set_color('grey') - ax.yaxis.label.set_color('grey') - ax.xaxis.label.set_fontsize(23) - ax.yaxis.label.set_fontsize(23) - ax.tick_params(axis='x', colors='grey', labelsize=23) - ax.tick_params(axis='y', colors='grey', labelsize=23) - plt.plot(array) - - -def plot_spec(M) : - M = np.flip(M, axis=0) - 
plt.figure(figsize=(18,4)) - plt.imshow(M, interpolation='nearest', aspect='auto') - plt.show() - diff --git a/spaces/akhaliq/deeplab2/evaluation/coco_instance_ap.py b/spaces/akhaliq/deeplab2/evaluation/coco_instance_ap.py deleted file mode 100644 index c97d8c02c2e2c683d4df9f47c5510de8bee7347c..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/deeplab2/evaluation/coco_instance_ap.py +++ /dev/null @@ -1,337 +0,0 @@ -# coding=utf-8 -# Copyright 2021 The Deeplab2 Authors. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -"""COCO-style instance segmentation evaluation metrics. - -Implements a Keras interface to COCO API. -COCO API: github.com/cocodataset/cocoapi/ -""" -from typing import Any, Collection, Mapping, Optional - -from absl import logging -import numpy as np -from pycocotools.coco import COCO -from pycocotools.cocoeval import COCOeval -import tensorflow as tf - -from deeplab2.utils import coco_tools -from deeplab2.utils import panoptic_instances - - -def _unwrap_segmentation(seg): - return { - 'size': list(seg['size']), - 'counts': seg['counts'], - } - - -_ANNOTATION_CONVERSION = { - 'bbox': list, - 'segmentation': _unwrap_segmentation, -} - - -def _unwrap_annotation(ann: Mapping[str, Any]) -> Mapping[str, Any]: - """Unwraps the objects in an COCO-style annotation dictionary. - - Logic within the Keras metric class wraps the objects within the ground-truth - and detection annotations in ListWrapper and DictWrapper classes. On the other - hand, the COCO API does strict type checking as part of determining which - branch to use in comparing detections and segmentations. We therefore have - to coerce the types from the wrapper to the built-in types that COCO is - expecting. - - Args: - ann: A COCO-style annotation dictionary that may contain ListWrapper and - DictWrapper objects. - - Returns: - The same annotation information, but with wrappers reduced to built-in - types. - """ - unwrapped_ann = {} - for k in ann: - if k in _ANNOTATION_CONVERSION: - unwrapped_ann[k] = _ANNOTATION_CONVERSION[k](ann[k]) - else: - unwrapped_ann[k] = ann[k] - return unwrapped_ann - - -class InstanceAveragePrecision(tf.keras.metrics.Metric): - """COCO evaluation metric class.""" - - def __init__(self, name: str = 'instance_ap', **kwargs): - """Constructs COCO evaluation class.""" - super(InstanceAveragePrecision, self).__init__(name=name, **kwargs) - self.reset_states() - - def reset_states(self) -> None: - """Reset COCO API object.""" - self.detections = [] - self.dataset = { - 'images': [], - 'annotations': [], - 'categories': [] - } - self.image_id = 1 - self.next_groundtruth_annotation_id = 1 - self.category_ids = set() - self.metric_values = None - - def evaluate(self) -> np.ndarray: - """Evaluates with detections from all images with COCO API. - - Returns: - coco_metric: float numpy array with shape [12] representing the - coco-style evaluation metrics. 
- """ - self.dataset['categories'] = [{ - 'id': int(category_id) - } for category_id in self.category_ids] - - # Creates "unwrapped" copies of COCO json-style objects. - dataset = { - 'images': self.dataset['images'], - 'categories': self.dataset['categories'] - } - dataset['annotations'] = [ - _unwrap_annotation(ann) for ann in self.dataset['annotations'] - ] - detections = [_unwrap_annotation(ann) for ann in self.detections] - - logging.info('Creating COCO objects for AP eval...') - coco_gt = COCO() - coco_gt.dataset = dataset - coco_gt.createIndex() - - coco_dt = coco_gt.loadRes(detections) - - logging.info('Running COCO evaluation...') - coco_eval = COCOeval(coco_gt, coco_dt, iouType='segm') - coco_eval.evaluate() - coco_eval.accumulate() - coco_eval.summarize() - coco_metrics = coco_eval.stats - return np.array(coco_metrics, dtype=np.float32) - - def result(self) -> np.ndarray: - """Return the instance segmentation metric values, computing them if needed. - - Returns: - A float vector of 12 elements. The meaning of each element is (in order): - - 0. AP @[ IoU=0.50:0.95 | area= all | maxDets=100 ] - 1. AP @[ IoU=0.50 | area= all | maxDets=100 ] - 2. AP @[ IoU=0.75 | area= all | maxDets=100 ] - 3. AP @[ IoU=0.50:0.95 | area= small | maxDets=100 ] - 4. AP @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] - 5. AP @[ IoU=0.50:0.95 | area= large | maxDets=100 ] - 6. AR @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] - 7. AR @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] - 8. AR @[ IoU=0.50:0.95 | area= all | maxDets=100 ] - 9. AR @[ IoU=0.50:0.95 | area= small | maxDets=100 ] - 10. AR @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] - 11, AR @[ IoU=0.50:0.95 | area= large | maxDets=100 ] - - Where: AP = Average Precision - AR = Average Recall - IoU = Intersection over Union. IoU=0.50:0.95 is the average of the - metric over thresholds of 0.5 to 0.95 with increments of 0.05. - - The area thresholds mean that, for those entries, ground truth annotation - with area outside the range is ignored. - small: [0**2, 32**2], - medium: [32**2, 96**2] - large: [96**2, 1e5**2] - """ - if not self.metric_values: - self.metric_values = self.evaluate() - return self.metric_values - - def update_state(self, groundtruth_boxes: tf.Tensor, - groundtruth_classes: tf.Tensor, groundtruth_masks: tf.Tensor, - groundtruth_is_crowd: tf.Tensor, detection_masks: tf.Tensor, - detection_scores: tf.Tensor, - detection_classes: tf.Tensor) -> None: - """Update detection results and groundtruth data. - - Append detection results to self.detections to the aggregate results from - all of the validation set. The groundtruth_data is parsed and added into a - dictionary with the same format as COCO dataset, which can be used for - evaluation. - - Args: - groundtruth_boxes: tensor (float32) with shape [num_gt_annos, 4] - groundtruth_classes: tensor (int) with shape [num_gt_annos] - groundtruth_masks: tensor (uint8) with shape [num_gt_annos, image_height, - image_width] - groundtruth_is_crowd: tensor (bool) with shape [num_gt_annos] - detection_masks: tensor (uint8) with shape [num_detections, image_height, - image_width] - detection_scores: tensor (float32) with shape [num_detections] - detection_classes: tensor (int) with shape [num_detections] - """ - # Reset the caching of result values. - self.metric_values = None - - # Update known category ids. - self.category_ids.update(groundtruth_classes.numpy()) - self.category_ids.update(detection_classes.numpy()) - - # Add ground-truth annotations. 
- groundtruth_annotations = coco_tools.ExportSingleImageGroundtruthToCoco( - self.image_id, - self.next_groundtruth_annotation_id, - self.category_ids, - groundtruth_boxes.numpy(), - groundtruth_classes.numpy(), - groundtruth_masks=groundtruth_masks.numpy(), - groundtruth_is_crowd=groundtruth_is_crowd.numpy()) - self.next_groundtruth_annotation_id += len(groundtruth_annotations) - - # Add to set of images for which there are gt & detections - # Infers image size from groundtruth masks. - _, height, width = groundtruth_masks.shape - self.dataset['images'].append({ - 'id': self.image_id, - 'height': height, - 'width': width, - }) - self.dataset['annotations'].extend(groundtruth_annotations) - - # Add predictions/detections. - detection_annotations = coco_tools.ExportSingleImageDetectionMasksToCoco( - self.image_id, self.category_ids, detection_masks.numpy(), - detection_scores.numpy(), detection_classes.numpy()) - self.detections.extend(detection_annotations) - - self.image_id += 1 - - -def _instance_masks(panoptic_label_map: tf.Tensor, - instance_panoptic_labels: tf.Tensor) -> tf.Tensor: - """Constructs an array of masks for each instance in a panoptic label map. - - Args: - panoptic_label_map: An integer tensor of shape `[image_height, image_width]` - specifying the panoptic label at each pixel. - instance_panoptic_labels: An integer tensor of shape `[num_instances]` that - gives the label for each unique instance for which to compute masks. - - Returns: - A boolean tensor of shape `[num_instances, image_height, image_width]` where - each slice in the first dimension gives the mask for a single instance over - the entire image. - """ - return tf.math.equal( - tf.expand_dims(panoptic_label_map, 0), - tf.reshape(instance_panoptic_labels, - [tf.size(instance_panoptic_labels), 1, 1])) - - -class PanopticInstanceAveragePrecision(tf.keras.metrics.Metric): - """Computes instance segmentation AP of panoptic segmentations. - - Panoptic segmentation includes both "thing" and "stuff" classes. This class - ignores the "stuff" classes to report metrics on only the "thing" classes - that have discrete instances. It computes a series of AP-based metrics using - the COCO evaluation scripts. - """ - - def __init__(self, - num_classes: int, - things_list: Collection[int], - label_divisor: int, - ignored_label: int, - name: str = 'panoptic_instance_ap', - **kwargs): - """Constructs panoptic instance segmentation evaluation class.""" - super(PanopticInstanceAveragePrecision, self).__init__(name=name, **kwargs) - self.num_classes = num_classes - self.stuff_list = set(range(num_classes)).difference(things_list) - self.label_divisor = label_divisor - self.ignored_label = ignored_label - self.detection_metric = InstanceAveragePrecision() - self.reset_states() - - def reset_states(self) -> None: - self.detection_metric.reset_states() - - def result(self) -> np.ndarray: - return self.detection_metric.result() - - def update_state(self, - groundtruth_panoptic: tf.Tensor, - predicted_panoptic: tf.Tensor, - semantic_probability: tf.Tensor, - instance_score_map: tf.Tensor, - is_crowd_map: Optional[tf.Tensor] = None) -> None: - """Adds the results from a new image to be computed by the metric. - - Args: - groundtruth_panoptic: A 2D integer tensor, with the true panoptic label at - each pixel. - predicted_panoptic: 2D integer tensor with predicted panoptic labels to be - evaluated. - semantic_probability: An float tensor of shape `[image_height, - image_width, num_classes]`. 
Specifies at each pixel the estimated - probability distribution that that pixel belongs to each semantic class. - instance_score_map: A 2D float tensor, where the pixels for an instance - will have the probability of that being an instance. - is_crowd_map: A 2D boolean tensor. Where it is True, the instance in that - region is a "crowd" instance. It is assumed that all pixels in an - instance will have the same value in this map. If set to None (the - default), it will be assumed that none of the ground truth instances are - crowds. - """ - classes_to_ignore = tf.convert_to_tensor([self.ignored_label] + - list(self.stuff_list), tf.int32) - (gt_unique_labels, - gt_box_coords) = panoptic_instances.instance_boxes_from_masks( - groundtruth_panoptic, classes_to_ignore, self.label_divisor) - gt_classes = tf.math.floordiv(gt_unique_labels, self.label_divisor) - - gt_masks = _instance_masks(groundtruth_panoptic, gt_unique_labels) - - if is_crowd_map is None: - gt_is_crowd = tf.zeros(tf.shape(gt_classes), tf.bool) - else: - gt_is_crowd = panoptic_instances.per_instance_is_crowd( - is_crowd_map, groundtruth_panoptic, gt_unique_labels) - - (pred_unique_labels, - pred_scores) = panoptic_instances.combined_instance_scores( - predicted_panoptic, semantic_probability, instance_score_map, - self.label_divisor, self.ignored_label) - - # Filter out stuff and ignored label. - pred_classes = tf.math.floordiv(pred_unique_labels, self.label_divisor) - pred_class_is_ignored = tf.math.reduce_any( - tf.math.equal( - tf.expand_dims(pred_classes, 1), - tf.expand_dims(classes_to_ignore, 0)), - axis=1) - pred_class_is_kept = tf.math.logical_not(pred_class_is_ignored) - pred_unique_labels = tf.boolean_mask(pred_unique_labels, pred_class_is_kept) - pred_scores = tf.boolean_mask(pred_scores, pred_class_is_kept) - - # Recompute class labels after the filtering. - pred_classes = tf.math.floordiv(pred_unique_labels, self.label_divisor) - pred_masks = _instance_masks(predicted_panoptic, pred_unique_labels) - - self.detection_metric.update_state(gt_box_coords, gt_classes, gt_masks, - gt_is_crowd, pred_masks, pred_scores, - pred_classes) diff --git a/spaces/akhaliq/lama/bin/mask_example.py b/spaces/akhaliq/lama/bin/mask_example.py deleted file mode 100644 index 59e25ca8eb3ed4141851c3af284fc66285444de0..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/lama/bin/mask_example.py +++ /dev/null @@ -1,14 +0,0 @@ -import matplotlib.pyplot as plt -from skimage import io -from skimage.transform import resize - -from saicinpainting.evaluation.masks.mask import SegmentationMask - -im = io.imread('imgs/ex4.jpg') -im = resize(im, (512, 1024), anti_aliasing=True) -mask_seg = SegmentationMask(num_variants_per_mask=10) -mask_examples = mask_seg.get_masks(im) -for i, example in enumerate(mask_examples): - plt.imshow(example) - plt.show() - plt.imsave(f'tmp/img_masks/{i}.png', example) diff --git a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/util.py b/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/util.py deleted file mode 100644 index 5d6ddc3f5bc63092db67d083389b34162b1fea71..0000000000000000000000000000000000000000 --- a/spaces/alexray/btc_predictor/venv/lib/python3.10/site-packages/pip/_vendor/pygments/util.py +++ /dev/null @@ -1,308 +0,0 @@ -""" - pygments.util - ~~~~~~~~~~~~~ - - Utility functions. - - :copyright: Copyright 2006-2021 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. 
-""" - -import re -from io import TextIOWrapper - - -split_path_re = re.compile(r'[/\\ ]') -doctype_lookup_re = re.compile(r''' - <!DOCTYPE\s+( - [a-zA-Z_][a-zA-Z0-9]* - (?: \s+ # optional in HTML5 - [a-zA-Z_][a-zA-Z0-9]*\s+ - "[^"]*")? - ) - [^>]*> -''', re.DOTALL | re.MULTILINE | re.VERBOSE) -tag_re = re.compile(r'<(.+?)(\s.*?)?>.*?</.+?>', - re.UNICODE | re.IGNORECASE | re.DOTALL | re.MULTILINE) -xml_decl_re = re.compile(r'\s*<\?xml[^>]*\?>', re.I) - - -class ClassNotFound(ValueError): - """Raised if one of the lookup functions didn't find a matching class.""" - - -class OptionError(Exception): - pass - - -def get_choice_opt(options, optname, allowed, default=None, normcase=False): - string = options.get(optname, default) - if normcase: - string = string.lower() - if string not in allowed: - raise OptionError('Value for option %s must be one of %s' % - (optname, ', '.join(map(str, allowed)))) - return string - - -def get_bool_opt(options, optname, default=None): - string = options.get(optname, default) - if isinstance(string, bool): - return string - elif isinstance(string, int): - return bool(string) - elif not isinstance(string, str): - raise OptionError('Invalid type %r for option %s; use ' - '1/0, yes/no, true/false, on/off' % ( - string, optname)) - elif string.lower() in ('1', 'yes', 'true', 'on'): - return True - elif string.lower() in ('0', 'no', 'false', 'off'): - return False - else: - raise OptionError('Invalid value %r for option %s; use ' - '1/0, yes/no, true/false, on/off' % ( - string, optname)) - - -def get_int_opt(options, optname, default=None): - string = options.get(optname, default) - try: - return int(string) - except TypeError: - raise OptionError('Invalid type %r for option %s; you ' - 'must give an integer value' % ( - string, optname)) - except ValueError: - raise OptionError('Invalid value %r for option %s; you ' - 'must give an integer value' % ( - string, optname)) - - -def get_list_opt(options, optname, default=None): - val = options.get(optname, default) - if isinstance(val, str): - return val.split() - elif isinstance(val, (list, tuple)): - return list(val) - else: - raise OptionError('Invalid type %r for option %s; you ' - 'must give a list value' % ( - val, optname)) - - -def docstring_headline(obj): - if not obj.__doc__: - return '' - res = [] - for line in obj.__doc__.strip().splitlines(): - if line.strip(): - res.append(" " + line.strip()) - else: - break - return ''.join(res).lstrip() - - -def make_analysator(f): - """Return a static text analyser function that returns float values.""" - def text_analyse(text): - try: - rv = f(text) - except Exception: - return 0.0 - if not rv: - return 0.0 - try: - return min(1.0, max(0.0, float(rv))) - except (ValueError, TypeError): - return 0.0 - text_analyse.__doc__ = f.__doc__ - return staticmethod(text_analyse) - - -def shebang_matches(text, regex): - r"""Check if the given regular expression matches the last part of the - shebang if one exists. - - >>> from pygments.util import shebang_matches - >>> shebang_matches('#!/usr/bin/env python', r'python(2\.\d)?') - True - >>> shebang_matches('#!/usr/bin/python2.4', r'python(2\.\d)?') - True - >>> shebang_matches('#!/usr/bin/python-ruby', r'python(2\.\d)?') - False - >>> shebang_matches('#!/usr/bin/python/ruby', r'python(2\.\d)?') - False - >>> shebang_matches('#!/usr/bin/startsomethingwith python', - ... 
r'python(2\.\d)?') - True - - It also checks for common windows executable file extensions:: - - >>> shebang_matches('#!C:\\Python2.4\\Python.exe', r'python(2\.\d)?') - True - - Parameters (``'-f'`` or ``'--foo'`` are ignored so ``'perl'`` does - the same as ``'perl -e'``) - - Note that this method automatically searches the whole string (eg: - the regular expression is wrapped in ``'^$'``) - """ - index = text.find('\n') - if index >= 0: - first_line = text[:index].lower() - else: - first_line = text.lower() - if first_line.startswith('#!'): - try: - found = [x for x in split_path_re.split(first_line[2:].strip()) - if x and not x.startswith('-')][-1] - except IndexError: - return False - regex = re.compile(r'^%s(\.(exe|cmd|bat|bin))?$' % regex, re.IGNORECASE) - if regex.search(found) is not None: - return True - return False - - -def doctype_matches(text, regex): - """Check if the doctype matches a regular expression (if present). - - Note that this method only checks the first part of a DOCTYPE. - eg: 'html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"' - """ - m = doctype_lookup_re.search(text) - if m is None: - return False - doctype = m.group(1) - return re.compile(regex, re.I).match(doctype.strip()) is not None - - -def html_doctype_matches(text): - """Check if the file looks like it has a html doctype.""" - return doctype_matches(text, r'html') - - -_looks_like_xml_cache = {} - - -def looks_like_xml(text): - """Check if a doctype exists or if we have some tags.""" - if xml_decl_re.match(text): - return True - key = hash(text) - try: - return _looks_like_xml_cache[key] - except KeyError: - m = doctype_lookup_re.search(text) - if m is not None: - return True - rv = tag_re.search(text[:1000]) is not None - _looks_like_xml_cache[key] = rv - return rv - - -def surrogatepair(c): - """Given a unicode character code with length greater than 16 bits, - return the two 16 bit surrogate pair. - """ - # From example D28 of: - # http://www.unicode.org/book/ch03.pdf - return (0xd7c0 + (c >> 10), (0xdc00 + (c & 0x3ff))) - - -def format_lines(var_name, seq, raw=False, indent_level=0): - """Formats a sequence of strings for output.""" - lines = [] - base_indent = ' ' * indent_level * 4 - inner_indent = ' ' * (indent_level + 1) * 4 - lines.append(base_indent + var_name + ' = (') - if raw: - # These should be preformatted reprs of, say, tuples. - for i in seq: - lines.append(inner_indent + i + ',') - else: - for i in seq: - # Force use of single quotes - r = repr(i + '"') - lines.append(inner_indent + r[:-2] + r[-1] + ',') - lines.append(base_indent + ')') - return '\n'.join(lines) - - -def duplicates_removed(it, already_seen=()): - """ - Returns a list with duplicates removed from the iterable `it`. - - Order is preserved. - """ - lst = [] - seen = set() - for i in it: - if i in seen or i in already_seen: - continue - lst.append(i) - seen.add(i) - return lst - - -class Future: - """Generic class to defer some work. - - Handled specially in RegexLexerMeta, to support regex string construction at - first use. - """ - def get(self): - raise NotImplementedError - - -def guess_decode(text): - """Decode *text* with guessed encoding. - - First try UTF-8; this should fail for non-UTF-8 encodings. - Then try the preferred locale encoding. - Fall back to latin-1, which always works. 
- """ - try: - text = text.decode('utf-8') - return text, 'utf-8' - except UnicodeDecodeError: - try: - import locale - prefencoding = locale.getpreferredencoding() - text = text.decode() - return text, prefencoding - except (UnicodeDecodeError, LookupError): - text = text.decode('latin1') - return text, 'latin1' - - -def guess_decode_from_terminal(text, term): - """Decode *text* coming from terminal *term*. - - First try the terminal encoding, if given. - Then try UTF-8. Then try the preferred locale encoding. - Fall back to latin-1, which always works. - """ - if getattr(term, 'encoding', None): - try: - text = text.decode(term.encoding) - except UnicodeDecodeError: - pass - else: - return text, term.encoding - return guess_decode(text) - - -def terminal_encoding(term): - """Return our best guess of encoding for the given *term*.""" - if getattr(term, 'encoding', None): - return term.encoding - import locale - return locale.getpreferredencoding() - - -class UnclosingTextIOWrapper(TextIOWrapper): - # Don't close underlying buffer on destruction. - def close(self): - self.flush() diff --git a/spaces/ali-ghamdan/deoldify/fastai/text/__init__.py b/spaces/ali-ghamdan/deoldify/fastai/text/__init__.py deleted file mode 100644 index 96df44853fdbd68a6e687f5f4a27878900e47bd0..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/deoldify/fastai/text/__init__.py +++ /dev/null @@ -1,10 +0,0 @@ -from .. import basics -from ..basics import * -from .learner import * -from .data import * -from .transform import * -from .models import * -from .. import text - -__all__ = [*basics.__all__, *learner.__all__, *data.__all__, *transform.__all__, *models.__all__, 'text'] - diff --git a/spaces/allknowingroger/Image-Models-Test66/app.py b/spaces/allknowingroger/Image-Models-Test66/app.py deleted file mode 100644 index c067b0760265c94ebd50da2d2bbe6c9b90fe5872..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test66/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "chayanbhansali/clock-tower", - "Yntec/samaritan3dCartoon2MVAE", - "inu-ai/niji-diffusion-xl-base-1.0", - "MakAttack/6537c6f75b769050ff861f8b", - "zhangyi617/yun-car-lora", - "Falah/sdxl2033", - "Aswitha/pet-dogs-amb", - "Yntec/samdoesartsUlt", - "Yntec/photoMovieX", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = 
time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/almn-uhc/Streamlit-Data-Synthesis-Example/README.md b/spaces/almn-uhc/Streamlit-Data-Synthesis-Example/README.md deleted file mode 100644 index b329b096003cf11f1a4e520f05bffe7788e7cfe4..0000000000000000000000000000000000000000 --- a/spaces/almn-uhc/Streamlit-Data-Synthesis-Example/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Streamlit Data Synthesis Example -emoji: 😻 -colorFrom: pink -colorTo: gray -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/alphunt/diffdock-alphunt-demo/esm/esm/inverse_folding/gvp_modules.py b/spaces/alphunt/diffdock-alphunt-demo/esm/esm/inverse_folding/gvp_modules.py deleted file mode 100644 index 484d9d5d0d8a52153de1f557c698e400b6fb1dc4..0000000000000000000000000000000000000000 --- a/spaces/alphunt/diffdock-alphunt-demo/esm/esm/inverse_folding/gvp_modules.py +++ /dev/null @@ -1,473 +0,0 @@ -# Contents of this file are from the open source code for -# -# Jing, B., Eismann, S., Suriana, P., Townshend, R. J. L., & Dror, R. (2020). -# Learning from Protein Structure with Geometric Vector Perceptrons. In -# International Conference on Learning Representations. 
-# -# MIT License -# -# Copyright (c) 2020 Bowen Jing, Stephan Eismann, Patricia Suriana, Raphael Townshend, Ron Dror -# -# Permission is hereby granted, free of charge, to any person obtaining a copy -# of this software and associated documentation files (the "Software"), to deal -# in the Software without restriction, including without limitation the rights -# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell -# copies of the Software, and to permit persons to whom the Software is -# furnished to do so, subject to the following conditions: -# -# The above copyright notice and this permission notice shall be included in all -# copies or substantial portions of the Software. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR -# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, -# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE -# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER -# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, -# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. - -import typing as T -import torch -from torch import nn -import torch.nn.functional as F -from torch_geometric.nn import MessagePassing -from torch_scatter import scatter_add, scatter - -def tuple_size(tp): - return tuple([0 if a is None else a.size() for a in tp]) - -def tuple_sum(tp1, tp2): - s1, v1 = tp1 - s2, v2 = tp2 - if v2 is None and v2 is None: - return (s1 + s2, None) - return (s1 + s2, v1 + v2) - -def tuple_cat(*args, dim=-1): - ''' - Concatenates any number of tuples (s, V) elementwise. - - :param dim: dimension along which to concatenate when viewed - as the `dim` index for the scalar-channel tensors. - This means that `dim=-1` will be applied as - `dim=-2` for the vector-channel tensors. - ''' - dim %= len(args[0][0].shape) - s_args, v_args = list(zip(*args)) - return torch.cat(s_args, dim=dim), torch.cat(v_args, dim=dim) - -def tuple_index(x, idx): - ''' - Indexes into a tuple (s, V) along the first dimension. - - :param idx: any object which can be used to index into a `torch.Tensor` - ''' - return x[0][idx], x[1][idx] - -def randn(n, dims, device="cpu"): - ''' - Returns random tuples (s, V) drawn elementwise from a normal distribution. - - :param n: number of data points - :param dims: tuple of dimensions (n_scalar, n_vector) - - :return: (s, V) with s.shape = (n, n_scalar) and - V.shape = (n, n_vector, 3) - ''' - return torch.randn(n, dims[0], device=device), \ - torch.randn(n, dims[1], 3, device=device) - -def _norm_no_nan(x, axis=-1, keepdims=False, eps=1e-8, sqrt=True): - ''' - L2 norm of tensor clamped above a minimum value `eps`. - - :param sqrt: if `False`, returns the square of the L2 norm - ''' - # clamp is slow - # out = torch.clamp(torch.sum(torch.square(x), axis, keepdims), min=eps) - out = torch.sum(torch.square(x), axis, keepdims) + eps - return torch.sqrt(out) if sqrt else out - -def _split(x, nv): - ''' - Splits a merged representation of (s, V) back into a tuple. - Should be used only with `_merge(s, V)` and only if the tuple - representation cannot be used. 
- - :param x: the `torch.Tensor` returned from `_merge` - :param nv: the number of vector channels in the input to `_merge` - ''' - v = torch.reshape(x[..., -3*nv:], x.shape[:-1] + (nv, 3)) - s = x[..., :-3*nv] - return s, v - -def _merge(s, v): - ''' - Merges a tuple (s, V) into a single `torch.Tensor`, where the - vector channels are flattened and appended to the scalar channels. - Should be used only if the tuple representation cannot be used. - Use `_split(x, nv)` to reverse. - ''' - v = torch.reshape(v, v.shape[:-2] + (3*v.shape[-2],)) - return torch.cat([s, v], -1) - -class GVP(nn.Module): - ''' - Geometric Vector Perceptron. See manuscript and README.md - for more details. - - :param in_dims: tuple (n_scalar, n_vector) - :param out_dims: tuple (n_scalar, n_vector) - :param h_dim: intermediate number of vector channels, optional - :param activations: tuple of functions (scalar_act, vector_act) - :param tuple_io: whether to keep accepting tuple inputs and outputs when vi - or vo = 0 - ''' - def __init__(self, in_dims, out_dims, h_dim=None, vector_gate=False, - activations=(F.relu, torch.sigmoid), tuple_io=True, - eps=1e-8): - super(GVP, self).__init__() - self.si, self.vi = in_dims - self.so, self.vo = out_dims - self.tuple_io = tuple_io - if self.vi: - self.h_dim = h_dim or max(self.vi, self.vo) - self.wh = nn.Linear(self.vi, self.h_dim, bias=False) - self.ws = nn.Linear(self.h_dim + self.si, self.so) - if self.vo: - self.wv = nn.Linear(self.h_dim, self.vo, bias=False) - if vector_gate: - self.wg = nn.Linear(self.so, self.vo) - else: - self.ws = nn.Linear(self.si, self.so) - - self.vector_gate = vector_gate - self.scalar_act, self.vector_act = activations - self.eps = eps - - def forward(self, x): - ''' - :param x: tuple (s, V) of `torch.Tensor`, - or (if vectors_in is 0), a single `torch.Tensor` - :return: tuple (s, V) of `torch.Tensor`, - or (if vectors_out is 0), a single `torch.Tensor` - ''' - if self.vi: - s, v = x - v = torch.transpose(v, -1, -2) - vh = self.wh(v) - vn = _norm_no_nan(vh, axis=-2, eps=self.eps) - s = self.ws(torch.cat([s, vn], -1)) - if self.scalar_act: - s = self.scalar_act(s) - if self.vo: - v = self.wv(vh) - v = torch.transpose(v, -1, -2) - if self.vector_gate: - g = self.wg(s).unsqueeze(-1) - else: - g = _norm_no_nan(v, axis=-1, keepdims=True, eps=self.eps) - if self.vector_act: - g = self.vector_act(g) - v = v * g - else: - if self.tuple_io: - assert x[1] is None - x = x[0] - s = self.ws(x) - if self.scalar_act: - s = self.scalar_act(s) - if self.vo: - v = torch.zeros(list(s.shape)[:-1] + [self.vo, 3], - device=s.device) - - if self.vo: - return (s, v) - elif self.tuple_io: - return (s, None) - else: - return s - - -class _VDropout(nn.Module): - ''' - Vector channel dropout where the elements of each - vector channel are dropped together. - ''' - def __init__(self, drop_rate): - super(_VDropout, self).__init__() - self.drop_rate = drop_rate - - def forward(self, x): - ''' - :param x: `torch.Tensor` corresponding to vector channels - ''' - if x is None: - return None - device = x.device - if not self.training: - return x - mask = torch.bernoulli( - (1 - self.drop_rate) * torch.ones(x.shape[:-1], device=device) - ).unsqueeze(-1) - x = mask * x / (1 - self.drop_rate) - return x - -class Dropout(nn.Module): - ''' - Combined dropout for tuples (s, V). - Takes tuples (s, V) as input and as output. 
- ''' - def __init__(self, drop_rate): - super(Dropout, self).__init__() - self.sdropout = nn.Dropout(drop_rate) - self.vdropout = _VDropout(drop_rate) - - def forward(self, x): - ''' - :param x: tuple (s, V) of `torch.Tensor`, - or single `torch.Tensor` - (will be assumed to be scalar channels) - ''' - if type(x) is torch.Tensor: - return self.sdropout(x) - s, v = x - return self.sdropout(s), self.vdropout(v) - -class LayerNorm(nn.Module): - ''' - Combined LayerNorm for tuples (s, V). - Takes tuples (s, V) as input and as output. - ''' - def __init__(self, dims, tuple_io=True, eps=1e-8): - super(LayerNorm, self).__init__() - self.tuple_io = tuple_io - self.s, self.v = dims - self.scalar_norm = nn.LayerNorm(self.s) - self.eps = eps - - def forward(self, x): - ''' - :param x: tuple (s, V) of `torch.Tensor`, - or single `torch.Tensor` - (will be assumed to be scalar channels) - ''' - if not self.v: - if self.tuple_io: - return self.scalar_norm(x[0]), None - return self.scalar_norm(x) - s, v = x - vn = _norm_no_nan(v, axis=-1, keepdims=True, sqrt=False, eps=self.eps) - nonzero_mask = (vn > 2 * self.eps) - vn = torch.sum(vn * nonzero_mask, dim=-2, keepdim=True - ) / (self.eps + torch.sum(nonzero_mask, dim=-2, keepdim=True)) - vn = torch.sqrt(vn + self.eps) - v = nonzero_mask * (v / vn) - return self.scalar_norm(s), v - -class GVPConv(MessagePassing): - ''' - Graph convolution / message passing with Geometric Vector Perceptrons. - Takes in a graph with node and edge embeddings, - and returns new node embeddings. - - This does NOT do residual updates and pointwise feedforward layers - ---see `GVPConvLayer`. - - :param in_dims: input node embedding dimensions (n_scalar, n_vector) - :param out_dims: output node embedding dimensions (n_scalar, n_vector) - :param edge_dims: input edge embedding dimensions (n_scalar, n_vector) - :param n_layers: number of GVPs in the message function - :param module_list: preconstructed message function, overrides n_layers - :param aggr: should be "add" if some incoming edges are masked, as in - a masked autoregressive decoder architecture - ''' - def __init__(self, in_dims, out_dims, edge_dims, n_layers=3, - vector_gate=False, module_list=None, aggr="mean", eps=1e-8, - activations=(F.relu, torch.sigmoid)): - super(GVPConv, self).__init__(aggr=aggr) - self.eps = eps - self.si, self.vi = in_dims - self.so, self.vo = out_dims - self.se, self.ve = edge_dims - - module_list = module_list or [] - if not module_list: - if n_layers == 1: - module_list.append( - GVP((2*self.si + self.se, 2*self.vi + self.ve), - (self.so, self.vo), activations=(None, None))) - else: - module_list.append( - GVP((2*self.si + self.se, 2*self.vi + self.ve), out_dims, - vector_gate=vector_gate, activations=activations) - ) - for i in range(n_layers - 2): - module_list.append(GVP(out_dims, out_dims, - vector_gate=vector_gate)) - module_list.append(GVP(out_dims, out_dims, - activations=(None, None))) - self.message_func = nn.Sequential(*module_list) - - def forward(self, x, edge_index, edge_attr): - ''' - :param x: tuple (s, V) of `torch.Tensor` - :param edge_index: array of shape [2, n_edges] - :param edge_attr: tuple (s, V) of `torch.Tensor` - ''' - x_s, x_v = x - message = self.propagate(edge_index, - s=x_s, v=x_v.reshape(x_v.shape[0], 3*x_v.shape[1]), - edge_attr=edge_attr) - return _split(message, self.vo) - - def message(self, s_i, v_i, s_j, v_j, edge_attr): - v_j = v_j.view(v_j.shape[0], v_j.shape[1]//3, 3) - v_i = v_i.view(v_i.shape[0], v_i.shape[1]//3, 3) - message = tuple_cat((s_j, v_j), 
edge_attr, (s_i, v_i)) - message = self.message_func(message) - return _merge(*message) - - -class GVPConvLayer(nn.Module): - ''' - Full graph convolution / message passing layer with - Geometric Vector Perceptrons. Residually updates node embeddings with - aggregated incoming messages, applies a pointwise feedforward - network to node embeddings, and returns updated node embeddings. - - To only compute the aggregated messages, see `GVPConv`. - - :param node_dims: node embedding dimensions (n_scalar, n_vector) - :param edge_dims: input edge embedding dimensions (n_scalar, n_vector) - :param n_message: number of GVPs to use in message function - :param n_feedforward: number of GVPs to use in feedforward function - :param drop_rate: drop probability in all dropout layers - :param autoregressive: if `True`, this `GVPConvLayer` will be used - with a different set of input node embeddings for messages - where src >= dst - ''' - def __init__(self, node_dims, edge_dims, vector_gate=False, - n_message=3, n_feedforward=2, drop_rate=.1, - autoregressive=False, attention_heads=0, - conv_activations=(F.relu, torch.sigmoid), - n_edge_gvps=0, layernorm=True, eps=1e-8): - - super(GVPConvLayer, self).__init__() - if attention_heads == 0: - self.conv = GVPConv( - node_dims, node_dims, edge_dims, n_layers=n_message, - vector_gate=vector_gate, - aggr="add" if autoregressive else "mean", - activations=conv_activations, - eps=eps, - ) - else: - raise NotImplementedError - if layernorm: - self.norm = nn.ModuleList([LayerNorm(node_dims, eps=eps) for _ in range(2)]) - else: - self.norm = nn.ModuleList([nn.Identity() for _ in range(2)]) - self.dropout = nn.ModuleList([Dropout(drop_rate) for _ in range(2)]) - - ff_func = [] - if n_feedforward == 1: - ff_func.append(GVP(node_dims, node_dims, activations=(None, None))) - else: - hid_dims = 4*node_dims[0], 2*node_dims[1] - ff_func.append(GVP(node_dims, hid_dims, vector_gate=vector_gate)) - for i in range(n_feedforward-2): - ff_func.append(GVP(hid_dims, hid_dims, vector_gate=vector_gate)) - ff_func.append(GVP(hid_dims, node_dims, activations=(None, None))) - self.ff_func = nn.Sequential(*ff_func) - - self.edge_message_func = None - if n_edge_gvps > 0: - si, vi = node_dims - se, ve = edge_dims - module_list = [ - GVP((2*si + se, 2*vi + ve), edge_dims, vector_gate=vector_gate) - ] - for i in range(n_edge_gvps - 2): - module_list.append(GVP(edge_dims, edge_dims, - vector_gate=vector_gate)) - if n_edge_gvps > 1: - module_list.append(GVP(edge_dims, edge_dims, - activations=(None, None))) - self.edge_message_func = nn.Sequential(*module_list) - if layernorm: - self.edge_norm = LayerNorm(edge_dims, eps=eps) - else: - self.edge_norm = nn.Identity() - self.edge_dropout = Dropout(drop_rate) - - def forward(self, x, edge_index, edge_attr, - autoregressive_x=None, node_mask=None): - ''' - :param x: tuple (s, V) of `torch.Tensor` - :param edge_index: array of shape [2, n_edges] - :param edge_attr: tuple (s, V) of `torch.Tensor` - :param autoregressive_x: tuple (s, V) of `torch.Tensor`. - If not `None`, will be used as srcqq node embeddings - for forming messages where src >= dst. The corrent node - embeddings `x` will still be the base of the update and the - pointwise feedforward. - :param node_mask: array of type `bool` to index into the first - dim of node embeddings (s, V). If not `None`, only - these nodes will be updated. 
- ''' - if self.edge_message_func: - src, dst = edge_index - if autoregressive_x is None: - x_src = x[0][src], x[1][src] - else: - mask = (src < dst).unsqueeze(-1) - x_src = ( - torch.where(mask, x[0][src], autoregressive_x[0][src]), - torch.where(mask.unsqueeze(-1), x[1][src], - autoregressive_x[1][src]) - ) - x_dst = x[0][dst], x[1][dst] - x_edge = ( - torch.cat([x_src[0], edge_attr[0], x_dst[0]], dim=-1), - torch.cat([x_src[1], edge_attr[1], x_dst[1]], dim=-2) - ) - edge_attr_dh = self.edge_message_func(x_edge) - edge_attr = self.edge_norm(tuple_sum(edge_attr, - self.edge_dropout(edge_attr_dh))) - - if autoregressive_x is not None: - src, dst = edge_index - mask = src < dst - edge_index_forward = edge_index[:, mask] - edge_index_backward = edge_index[:, ~mask] - edge_attr_forward = tuple_index(edge_attr, mask) - edge_attr_backward = tuple_index(edge_attr, ~mask) - - dh = tuple_sum( - self.conv(x, edge_index_forward, edge_attr_forward), - self.conv(autoregressive_x, edge_index_backward, edge_attr_backward) - ) - - count = scatter_add(torch.ones_like(dst), dst, - dim_size=dh[0].size(0)).clamp(min=1).unsqueeze(-1) - - dh = dh[0] / count, dh[1] / count.unsqueeze(-1) - - else: - dh = self.conv(x, edge_index, edge_attr) - - if node_mask is not None: - x_ = x - x, dh = tuple_index(x, node_mask), tuple_index(dh, node_mask) - - x = self.norm[0](tuple_sum(x, self.dropout[0](dh))) - - dh = self.ff_func(x) - x = self.norm[1](tuple_sum(x, self.dropout[1](dh))) - - if node_mask is not None: - x_[0][node_mask], x_[1][node_mask] = x[0], x[1] - x = x_ - - return x, edge_attr diff --git a/spaces/alsalemi/pv-segment-01/group_by_aspect_ratio.py b/spaces/alsalemi/pv-segment-01/group_by_aspect_ratio.py deleted file mode 100644 index d12e14b540cc788abb98f40134ca9738dcd88a9a..0000000000000000000000000000000000000000 --- a/spaces/alsalemi/pv-segment-01/group_by_aspect_ratio.py +++ /dev/null @@ -1,196 +0,0 @@ -import bisect -import copy -import math -from collections import defaultdict -from itertools import chain, repeat - -import numpy as np -import torch -import torch.utils.data -import torchvision -from PIL import Image -from torch.utils.data.sampler import BatchSampler, Sampler -from torch.utils.model_zoo import tqdm - - -def _repeat_to_at_least(iterable, n): - repeat_times = math.ceil(n / len(iterable)) - repeated = chain.from_iterable(repeat(iterable, repeat_times)) - return list(repeated) - - -class GroupedBatchSampler(BatchSampler): - """ - Wraps another sampler to yield a mini-batch of indices. - It enforces that the batch only contain elements from the same group. - It also tries to provide mini-batches which follows an ordering which is - as close as possible to the ordering from the original sampler. - Args: - sampler (Sampler): Base sampler. - group_ids (list[int]): If the sampler produces indices in range [0, N), - `group_ids` must be a list of `N` ints which contains the group id of each sample. - The group ids must be a continuous set of integers starting from - 0, i.e. they must be in the range [0, num_groups). - batch_size (int): Size of mini-batch. 
- """ - - def __init__(self, sampler, group_ids, batch_size): - if not isinstance(sampler, Sampler): - raise ValueError(f"sampler should be an instance of torch.utils.data.Sampler, but got sampler={sampler}") - self.sampler = sampler - self.group_ids = group_ids - self.batch_size = batch_size - - def __iter__(self): - buffer_per_group = defaultdict(list) - samples_per_group = defaultdict(list) - - num_batches = 0 - for idx in self.sampler: - group_id = self.group_ids[idx] - buffer_per_group[group_id].append(idx) - samples_per_group[group_id].append(idx) - if len(buffer_per_group[group_id]) == self.batch_size: - yield buffer_per_group[group_id] - num_batches += 1 - del buffer_per_group[group_id] - assert len(buffer_per_group[group_id]) < self.batch_size - - # now we have run out of elements that satisfy - # the group criteria, let's return the remaining - # elements so that the size of the sampler is - # deterministic - expected_num_batches = len(self) - num_remaining = expected_num_batches - num_batches - if num_remaining > 0: - # for the remaining batches, take first the buffers with the largest number - # of elements - for group_id, _ in sorted(buffer_per_group.items(), key=lambda x: len(x[1]), reverse=True): - remaining = self.batch_size - len(buffer_per_group[group_id]) - samples_from_group_id = _repeat_to_at_least(samples_per_group[group_id], remaining) - buffer_per_group[group_id].extend(samples_from_group_id[:remaining]) - assert len(buffer_per_group[group_id]) == self.batch_size - yield buffer_per_group[group_id] - num_remaining -= 1 - if num_remaining == 0: - break - assert num_remaining == 0 - - def __len__(self): - return len(self.sampler) // self.batch_size - - -def _compute_aspect_ratios_slow(dataset, indices=None): - print( - "Your dataset doesn't support the fast path for " - "computing the aspect ratios, so will iterate over " - "the full dataset and load every image instead. " - "This might take some time..." 
- ) - if indices is None: - indices = range(len(dataset)) - - class SubsetSampler(Sampler): - def __init__(self, indices): - self.indices = indices - - def __iter__(self): - return iter(self.indices) - - def __len__(self): - return len(self.indices) - - sampler = SubsetSampler(indices) - data_loader = torch.utils.data.DataLoader( - dataset, - batch_size=1, - sampler=sampler, - num_workers=14, # you might want to increase it for faster processing - collate_fn=lambda x: x[0], - ) - aspect_ratios = [] - with tqdm(total=len(dataset)) as pbar: - for _i, (img, _) in enumerate(data_loader): - pbar.update(1) - height, width = img.shape[-2:] - aspect_ratio = float(width) / float(height) - aspect_ratios.append(aspect_ratio) - return aspect_ratios - - -def _compute_aspect_ratios_custom_dataset(dataset, indices=None): - if indices is None: - indices = range(len(dataset)) - aspect_ratios = [] - for i in indices: - height, width = dataset.get_height_and_width(i) - aspect_ratio = float(width) / float(height) - aspect_ratios.append(aspect_ratio) - return aspect_ratios - - -def _compute_aspect_ratios_coco_dataset(dataset, indices=None): - if indices is None: - indices = range(len(dataset)) - aspect_ratios = [] - for i in indices: - img_info = dataset.coco.imgs[dataset.ids[i]] - aspect_ratio = float(img_info["width"]) / float(img_info["height"]) - aspect_ratios.append(aspect_ratio) - return aspect_ratios - - -def _compute_aspect_ratios_voc_dataset(dataset, indices=None): - if indices is None: - indices = range(len(dataset)) - aspect_ratios = [] - for i in indices: - # this doesn't load the data into memory, because PIL loads it lazily - width, height = Image.open(dataset.images[i]).size - aspect_ratio = float(width) / float(height) - aspect_ratios.append(aspect_ratio) - return aspect_ratios - - -def _compute_aspect_ratios_subset_dataset(dataset, indices=None): - if indices is None: - indices = range(len(dataset)) - - ds_indices = [dataset.indices[i] for i in indices] - return compute_aspect_ratios(dataset.dataset, ds_indices) - - -def compute_aspect_ratios(dataset, indices=None): - if hasattr(dataset, "get_height_and_width"): - return _compute_aspect_ratios_custom_dataset(dataset, indices) - - if isinstance(dataset, torchvision.datasets.CocoDetection): - return _compute_aspect_ratios_coco_dataset(dataset, indices) - - if isinstance(dataset, torchvision.datasets.VOCDetection): - return _compute_aspect_ratios_voc_dataset(dataset, indices) - - if isinstance(dataset, torch.utils.data.Subset): - return _compute_aspect_ratios_subset_dataset(dataset, indices) - - # slow path - return _compute_aspect_ratios_slow(dataset, indices) - - -def _quantize(x, bins): - bins = copy.deepcopy(bins) - bins = sorted(bins) - quantized = list(map(lambda y: bisect.bisect_right(bins, y), x)) - return quantized - - -def create_aspect_ratio_groups(dataset, k=0): - aspect_ratios = compute_aspect_ratios(dataset) - bins = (2 ** np.linspace(-1, 1, 2 * k + 1)).tolist() if k > 0 else [1.0] - groups = _quantize(aspect_ratios, bins) - # count number of elements per group - counts = np.unique(groups, return_counts=True)[1] - fbins = [0] + bins + [np.inf] - print(f"Using {fbins} as bins for aspect ratio quantization") - print(f"Count of instances per bin: {counts}") - return groups diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/jportaudio/src/com/portaudio/PortAudio.java b/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/jportaudio/src/com/portaudio/PortAudio.java deleted file mode 100644 index 
41b3c67b58f9877ddbb4fe040ec32a6fe9a67829..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/bindings/java/jportaudio/src/com/portaudio/PortAudio.java +++ /dev/null @@ -1,261 +0,0 @@ -/* - * Portable Audio I/O Library - * Java Binding for PortAudio - * - * Based on the Open Source API proposed by Ross Bencina - * Copyright (c) 2008 Ross Bencina - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -/** @file - @ingroup bindings_java - - @brief Java wrapper for the PortAudio API. -*/ -package com.portaudio; - -/** - * Java methods that call PortAudio via JNI. This is a portable audio I/O - * library that can be used as an alternative to JavaSound. - * - * Please see the PortAudio documentation for a full explanation. - * - * http://portaudio.com/docs/ - * http://portaudio.com/docs/v19-doxydocs/portaudio_8h.html - * - * This Java binding does not support audio callbacks because an audio callback - * should never block. Calling into a Java virtual machine might block for - * garbage collection or synchronization. So only the blocking read/write mode - * is supported. - * - * @see BlockingStream - * @see DeviceInfo - * @see HostApiInfo - * @see StreamInfo - * @see StreamParameters - * - * @author Phil Burk - * - */ -public class PortAudio -{ - public final static int FLAG_CLIP_OFF = (1 << 0); - public final static int FLAG_DITHER_OFF = (1 << 1); - - /** Sample Formats */ - public final static int FORMAT_FLOAT_32 = (1 << 0); - public final static int FORMAT_INT_32 = (1 << 1); // not supported - public final static int FORMAT_INT_24 = (1 << 2); // not supported - public final static int FORMAT_INT_16 = (1 << 3); - public final static int FORMAT_INT_8 = (1 << 4); // not supported - public final static int FORMAT_UINT_8 = (1 << 5); // not supported - - /** These HOST_API_TYPES will not change in the future. 
*/ - public final static int HOST_API_TYPE_DEV = 0; - public final static int HOST_API_TYPE_DIRECTSOUND = 1; - public final static int HOST_API_TYPE_MME = 2; - public final static int HOST_API_TYPE_ASIO = 3; - /** Apple Sound Manager. Obsolete. */ - public final static int HOST_API_TYPE_SOUNDMANAGER = 4; - public final static int HOST_API_TYPE_COREAUDIO = 5; - public final static int HOST_API_TYPE_OSS = 7; - public final static int HOST_API_TYPE_ALSA = 8; - public final static int HOST_API_TYPE_AL = 9; - public final static int HOST_API_TYPE_BEOS = 10; - public final static int HOST_API_TYPE_WDMKS = 11; - public final static int HOST_API_TYPE_JACK = 12; - public final static int HOST_API_TYPE_WASAPI = 13; - public final static int HOST_API_TYPE_AUDIOSCIENCE = 14; - public final static int HOST_API_TYPE_COUNT = 15; - - static - { - String os = System.getProperty( "os.name" ).toLowerCase(); - // On Windows we have separate libraries for 32 and 64-bit JVMs. - if( os.indexOf( "win" ) >= 0 ) - { - if( System.getProperty( "os.arch" ).contains( "64" ) ) - { - System.loadLibrary( "jportaudio_x64" ); - } - else - { - System.loadLibrary( "jportaudio_x86" ); - } - } - else - { - System.loadLibrary( "jportaudio" ); - } - System.out.println( "---- JPortAudio version " + getVersion() + ", " - + getVersionText() ); - } - - /** - * @return the release number of the currently running PortAudio build, eg - * 1900. - */ - public native static int getVersion(); - - /** - * @return a textual description of the current PortAudio build, eg - * "PortAudio V19-devel 13 October 2002". - */ - public native static String getVersionText(); - - /** - * Library initialization function - call this before using PortAudio. This - * function initializes internal data structures and prepares underlying - * host APIs for use. With the exception of getVersion(), getVersionText(), - * and getErrorText(), this function MUST be called before using any other - * PortAudio API functions. - */ - public native static void initialize(); - - /** - * Library termination function - call this when finished using PortAudio. - * This function deallocates all resources allocated by PortAudio since it - * was initialized by a call to initialize(). In cases where Pa_Initialise() - * has been called multiple times, each call must be matched with a - * corresponding call to terminate(). The final matching call to terminate() - * will automatically close any PortAudio streams that are still open. - */ - public native static void terminate(); - - /** - * @return the number of available devices. The number of available devices - * may be zero. - */ - public native static int getDeviceCount(); - - private native static void getDeviceInfo( int index, DeviceInfo deviceInfo ); - - /** - * @param index - * A valid device index in the range 0 to (getDeviceCount()-1) - * @return An DeviceInfo structure. - * @throws RuntimeException - * if the device parameter is out of range. - */ - public static DeviceInfo getDeviceInfo( int index ) - { - DeviceInfo deviceInfo = new DeviceInfo(); - getDeviceInfo( index, deviceInfo ); - return deviceInfo; - } - - /** - * @return the number of available host APIs. 
- */ - public native static int getHostApiCount(); - - private native static void getHostApiInfo( int index, - HostApiInfo hostApiInfo ); - - /** - * @param index - * @return information about the Host API - */ - public static HostApiInfo getHostApiInfo( int index ) - { - HostApiInfo hostApiInfo = new HostApiInfo(); - getHostApiInfo( index, hostApiInfo ); - return hostApiInfo; - } - - /** - * @param hostApiType - * A unique host API identifier, for example - * HOST_API_TYPE_COREAUDIO. - * @return a runtime host API index - */ - public native static int hostApiTypeIdToHostApiIndex( int hostApiType ); - - /** - * @param hostApiIndex - * A valid host API index ranging from 0 to (getHostApiCount()-1) - * @param apiDeviceIndex - * A valid per-host device index in the range 0 to - * (getHostApiInfo(hostApi).deviceCount-1) - * @return standard PortAudio device index - */ - public native static int hostApiDeviceIndexToDeviceIndex( int hostApiIndex, - int apiDeviceIndex ); - - public native static int getDefaultInputDevice(); - - public native static int getDefaultOutputDevice(); - - public native static int getDefaultHostApi(); - - /** - * @param inputStreamParameters - * input description, may be null - * @param outputStreamParameters - * output description, may be null - * @param sampleRate - * typically 44100 or 48000, or maybe 22050, 16000, 8000, 96000 - * @return 0 if supported or a negative error - */ - public native static int isFormatSupported( - StreamParameters inputStreamParameters, - StreamParameters outputStreamParameters, int sampleRate ); - - private native static void openStream( BlockingStream blockingStream, - StreamParameters inputStreamParameters, - StreamParameters outputStreamParameters, int sampleRate, - int framesPerBuffer, int flags ); - - /** - * - * @param inputStreamParameters - * input description, may be null - * @param outputStreamParameters - * output description, may be null - * @param sampleRate - * typically 44100 or 48000, or maybe 22050, 16000, 8000, 96000 - * @param framesPerBuffer - * @param flags - * @return - */ - public static BlockingStream openStream( - StreamParameters inputStreamParameters, - StreamParameters outputStreamParameters, int sampleRate, - int framesPerBuffer, int flags ) - { - BlockingStream blockingStream = new BlockingStream(); - openStream( blockingStream, inputStreamParameters, - outputStreamParameters, sampleRate, framesPerBuffer, flags ); - return blockingStream; - } - -} diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_wmme_low_level_latency_params.c b/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_wmme_low_level_latency_params.c deleted file mode 100644 index 31c8892e4f7fb43a911de76b3899c608b56ecc04..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/test/patest_wmme_low_level_latency_params.c +++ /dev/null @@ -1,191 +0,0 @@ -/* - * $Id: $ - * Portable Audio I/O Library - * Windows MME low level buffer parameters test - * - * Copyright (c) 2007 Ross Bencina - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission 
notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#include <stdio.h> -#include <math.h> - -#include <windows.h> /* required when using pa_win_wmme.h */ -#include <mmsystem.h> /* required when using pa_win_wmme.h */ - -#include "portaudio.h" -#include "pa_win_wmme.h" - -#define NUM_SECONDS (6) -#define SAMPLE_RATE (44100) - -#define WMME_FRAMES_PER_BUFFER (440) -#define WMME_BUFFER_COUNT (6) - -#define FRAMES_PER_BUFFER WMME_FRAMES_PER_BUFFER /* hardwire portaudio callback buffer size to WMME buffer size for this test */ - -#ifndef M_PI -#define M_PI (3.14159265) -#endif - -#define TABLE_SIZE (2048) - -#define CHANNEL_COUNT (2) - - -typedef struct -{ - float sine[TABLE_SIZE]; - double phase; -} -paTestData; - -/* This routine will be called by the PortAudio engine when audio is needed. -** It may called at interrupt level on some machines so don't do anything -** that could mess up the system like calling malloc() or free(). -*/ -static int patestCallback( const void *inputBuffer, void *outputBuffer, - unsigned long framesPerBuffer, - const PaStreamCallbackTimeInfo* timeInfo, - PaStreamCallbackFlags statusFlags, - void *userData ) -{ - paTestData *data = (paTestData*)userData; - float *out = (float*)outputBuffer; - unsigned long i,j; - - (void) timeInfo; /* Prevent unused variable warnings. */ - (void) statusFlags; - (void) inputBuffer; - - for( i=0; i<framesPerBuffer; i++ ) - { - float x = data->sine[(int)data->phase]; - data->phase += 20; - if( data->phase >= TABLE_SIZE ){ - data->phase -= TABLE_SIZE; - } - - for( j = 0; j < CHANNEL_COUNT; ++j ){ - *out++ = x; - } - } - - return paContinue; -} - -/*******************************************************************/ -int main(int argc, char* argv[]) -{ - PaStreamParameters outputParameters; - PaWinMmeStreamInfo wmmeStreamInfo; - PaStream *stream; - PaError err; - paTestData data; - int i; - int deviceIndex; - - printf("PortAudio Test: output a sine blip on each channel. SR = %d, BufSize = %d, Chans = %d\n", SAMPLE_RATE, FRAMES_PER_BUFFER, CHANNEL_COUNT); - - err = Pa_Initialize(); - if( err != paNoError ) goto error; - - deviceIndex = Pa_GetHostApiInfo( Pa_HostApiTypeIdToHostApiIndex( paMME ) )->defaultOutputDevice; - if( argc == 2 ){ - sscanf( argv[1], "%d", &deviceIndex ); - } - - printf( "using device id %d (%s)\n", deviceIndex, Pa_GetDeviceInfo(deviceIndex)->name ); - - /* initialise sinusoidal wavetable */ - for( i=0; i<TABLE_SIZE; i++ ) - { - data.sine[i] = (float) sin( ((double)i/(double)TABLE_SIZE) * M_PI * 2. 
); - } - - data.phase = 0; - - outputParameters.device = deviceIndex; - outputParameters.channelCount = CHANNEL_COUNT; - outputParameters.sampleFormat = paFloat32; /* 32 bit floating point processing */ - outputParameters.suggestedLatency = 0; /*Pa_GetDeviceInfo( outputParameters.device )->defaultLowOutputLatency;*/ - outputParameters.hostApiSpecificStreamInfo = NULL; - - wmmeStreamInfo.size = sizeof(PaWinMmeStreamInfo); - wmmeStreamInfo.hostApiType = paMME; - wmmeStreamInfo.version = 1; - wmmeStreamInfo.flags = paWinMmeUseLowLevelLatencyParameters | paWinMmeDontThrottleOverloadedProcessingThread; - wmmeStreamInfo.framesPerBuffer = WMME_FRAMES_PER_BUFFER; - wmmeStreamInfo.bufferCount = WMME_BUFFER_COUNT; - outputParameters.hostApiSpecificStreamInfo = &wmmeStreamInfo; - - - if( Pa_IsFormatSupported( 0, &outputParameters, SAMPLE_RATE ) == paFormatIsSupported ){ - printf( "Pa_IsFormatSupported reports device will support %d channels.\n", CHANNEL_COUNT ); - }else{ - printf( "Pa_IsFormatSupported reports device will not support %d channels.\n", CHANNEL_COUNT ); - } - - err = Pa_OpenStream( - &stream, - NULL, /* no input */ - &outputParameters, - SAMPLE_RATE, - FRAMES_PER_BUFFER, - paClipOff, /* we won't output out of range samples so don't bother clipping them */ - patestCallback, - &data ); - if( err != paNoError ) goto error; - - err = Pa_StartStream( stream ); - if( err != paNoError ) goto error; - - printf("Play for %d seconds.\n", NUM_SECONDS ); - Pa_Sleep( NUM_SECONDS * 1000 ); - - err = Pa_StopStream( stream ); - if( err != paNoError ) goto error; - - err = Pa_CloseStream( stream ); - if( err != paNoError ) goto error; - - Pa_Terminate(); - printf("Test finished.\n"); - - return err; -error: - Pa_Terminate(); - fprintf( stderr, "An error occurred while using the portaudio stream\n" ); - fprintf( stderr, "Error number: %d\n", err ); - fprintf( stderr, "Error message: %s\n", Pa_GetErrorText( err ) ); - return err; -} diff --git a/spaces/amsterdamNLP/contrastive-pairs/app.py b/spaces/amsterdamNLP/contrastive-pairs/app.py deleted file mode 100644 index 4d5dbb52a80901ee67a9af6c95c314dc6dfc04dd..0000000000000000000000000000000000000000 --- a/spaces/amsterdamNLP/contrastive-pairs/app.py +++ /dev/null @@ -1,118 +0,0 @@ -import torch -import datasets -import gradio -import pandas - -from transformers import GPT2LMHeadModel, GPT2TokenizerFast - - -class CrowSPairsDataset(object): - def __init__(self): - super().__init__() - - self.df = (datasets - .load_dataset("BigScienceBiasEval/crows_pairs_multilingual")["test"] - .to_pandas() - .query('stereo_antistereo == "stereo"') - .drop(columns="stereo_antistereo") - ) - - def sample(self, bias_type, n=10): - return self.df[self.df["bias_type"] == bias_type].sample(n=n) - - def bias_types(self): - return self.df.bias_type.unique().tolist() - - -def run(df): - result = "<table><tr style='color: white; background-color: #555'><th>index</th><th>more stereotypical</th><th>less stereotypical<th></tr>" - for i, row in df.iterrows(): - result += f"<tr><td>{i}</td>" - more = row["sent_more"] - - more = tokenizer(more, return_tensors="pt")["input_ids"].to(device) - with torch.no_grad(): - out_more = model(more, labels=more.clone()) - score_more = out_more["loss"] - perplexity_more = torch.exp(score_more).item() - - less = row["sent_less"] - less = tokenizer(less, return_tensors="pt")["input_ids"].to(device) - with torch.no_grad(): - out_less = model(less, labels=less.clone()) - score_less = out_less["loss"] - perplexity_less = torch.exp(score_less).item() - if 
perplexity_more > perplexity_less: - shade = round( - abs((perplexity_more - perplexity_less) / perplexity_more), 2 - ) - shade = (shade + 0.2) / 1.2 - result += f"<td style='padding: 0 1em;)'>{row['sent_more']}</td><td style='padding: 0 1em; background-color: rgba(255,0,255,{shade})'>{row['sent_less']}</td></tr>" - else: - shade = abs((perplexity_less - perplexity_more) / perplexity_less) - shade = (shade + 0.2) / 1.2 - result += f"<td style='padding: 0 1em; background-color: rgba(0,255,255,{shade})'>{row['sent_more']}</td><td style='padding: 0 1em;'>{row['sent_less']}</td></tr>" - result += "</table>" - return result - -def sample_and_run(bias_type): - sample = dataset.sample(bias_type) - return run(sample) - -def manual_run(more, less): - df = pandas.DataFrame.from_dict({ - 'sent_more': [more], - 'sent_less': [less], - 'bias_type': ["manual"], - }) - return run(df) - -if torch.cuda.is_available(): - device = torch.device("cuda") -else: - device = torch.device("cpu") - -model_id = "gpt2" -model = GPT2LMHeadModel.from_pretrained(model_id).to(device) -tokenizer = GPT2TokenizerFast.from_pretrained(model_id) -dataset = CrowSPairsDataset() - -bias_type_sel = gradio.Dropdown(label="Bias Type", choices=dataset.bias_types()) - -with open("description.md") as fh: - desc = fh.read() - -with open("descr-2.md") as fh: - desc2 = fh.read() - -with open("notice.md") as fh: - notice = fh.read() - -with open("results.md") as fh: - results = fh.read() - -with gradio.Blocks(title="Detecting stereotypes in the GPT-2 language model using CrowS-Pairs") as iface: - gradio.Markdown(desc) - with gradio.Row(equal_height=True): - with gradio.Column(scale=4): - bias_sel = gradio.Dropdown(label="Bias Type", choices=dataset.bias_types()) - with gradio.Column(scale=1): - but = gradio.Button("Sample") - gradio.Markdown(desc2) - with gradio.Row(equal_height=True): - with gradio.Column(scale=2): - more = gradio.Textbox(label="More stereotypical") - with gradio.Column(scale=2): - less = gradio.Textbox(label="Less stereotypical") - with gradio.Column(scale=1): - manual = gradio.Button("Run") - out = gradio.HTML() - but.click(sample_and_run, bias_sel, out) - manual.click(manual_run, [more, less], out) - - with gradio.Accordion("Some more details"): - gradio.Markdown(notice) - with gradio.Accordion("Results for English and French BERT language models"): - gradio.Markdown(results) - -iface.launch() diff --git a/spaces/anonymous-pits/pits/commons.py b/spaces/anonymous-pits/pits/commons.py deleted file mode 100644 index 3c644b7008d1e137d8931b8646ba36455c1d4266..0000000000000000000000000000000000000000 --- a/spaces/anonymous-pits/pits/commons.py +++ /dev/null @@ -1,181 +0,0 @@ -# from https://github.com/jaywalnut310/vits -import math -import torch -from torch.nn import functional as F - - -def init_weights(m, mean=0.0, std=0.01): - classname = m.__class__.__name__ - if classname.find("Conv") != -1: - m.weight.data.normal_(mean, std) - - -def get_padding(kernel_size, dilation=1): - return int((kernel_size * dilation - dilation) / 2) - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def intersperse(lst, item): - result = [item] * (len(lst) * 2 + 1) - result[1::2] = lst - return result - - -def kl_divergence(m_p, logs_p, m_q, logs_q): - """KL(P||Q)""" - kl = (logs_q - logs_p) - 0.5 - kl += 0.5 * (torch.exp(2. * logs_p) + ((m_p - m_q)**2)) * torch.exp(-2. 
* logs_q) - return kl - - -def rand_gumbel(shape): - """Sample from the Gumbel distribution, protect from overflows.""" - uniform_samples = torch.rand(shape) * 0.99998 + 0.00001 - return -torch.log(-torch.log(uniform_samples)) - - -def rand_gumbel_like(x): - g = rand_gumbel(x.size()).to(dtype=x.dtype, device=x.device) - return g - - -def slice_segments(x, ids_str, segment_size=4): - ret = torch.zeros_like(x[:, :, :segment_size]) - for i in range(x.size(0)): - idx_str = ids_str[i] - idx_end = idx_str + segment_size - ret[i] = x[i, :, idx_str:idx_end] - return ret - - -def rand_slice_segments(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = (torch.rand([b]).to(device=x.device) - * ids_str_max).to(dtype=torch.long) - ids_str = torch.max(torch.zeros(ids_str.size()).to(ids_str.device), ids_str).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - -def rand_slice_segments_for_cat(x, x_lengths=None, segment_size=4): - b, d, t = x.size() - if x_lengths is None: - x_lengths = t - ids_str_max = x_lengths - segment_size + 1 - ids_str = torch.rand([b//2]).to(device=x.device) - ids_str = (torch.cat([ids_str,ids_str], dim=0) - * ids_str_max).to(dtype=torch.long) - ids_str = torch.max(torch.zeros(ids_str.size()).to(ids_str.device), ids_str).to(dtype=torch.long) - ret = slice_segments(x, ids_str, segment_size) - return ret, ids_str - - - - -def get_timing_signal_1d( - length, channels, min_timescale=1.0, max_timescale=1.0e4): - position = torch.arange(length, dtype=torch.float) - num_timescales = channels // 2 - log_timescale_increment = ( - math.log(float(max_timescale) / float(min_timescale)) / (num_timescales - 1) - ) - inv_timescales = min_timescale * torch.exp( - torch.arange(num_timescales, dtype=torch.float) * -log_timescale_increment - ) - scaled_time = position.unsqueeze(0) * inv_timescales.unsqueeze(1) - signal = torch.cat([torch.sin(scaled_time), torch.cos(scaled_time)], 0) - signal = F.pad(signal, [0, 0, 0, channels % 2]) - signal = signal.view(1, channels, length) - return signal - - -def add_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4): - b, channels, length = x.size() - signal = get_timing_signal_1d( - length, channels, min_timescale, max_timescale - ) - return x + signal.to(dtype=x.dtype, device=x.device) - - -def cat_timing_signal_1d(x, min_timescale=1.0, max_timescale=1.0e4, axis=1): - b, channels, length = x.size() - signal = get_timing_signal_1d( - length, channels, min_timescale, max_timescale - ) - return torch.cat([x, signal.to(dtype=x.dtype, device=x.device)], axis) - - -def subsequent_mask(length): - mask = torch.tril(torch.ones(length, length)).unsqueeze(0).unsqueeze(0) - return mask - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a + input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -def convert_pad_shape(pad_shape): - l = pad_shape[::-1] - pad_shape = [item for sublist in l for item in sublist] - return pad_shape - - -def shift_1d(x): - x = F.pad(x, convert_pad_shape([[0, 0], [0, 0], [1, 0]]))[:, :, :-1] - return x - - -def sequence_mask(length, max_length=None): - if max_length is None: - max_length = length.max() - x = torch.arange(max_length, dtype=length.dtype, device=length.device) - return x.unsqueeze(0) < 
length.unsqueeze(1) - - -def generate_path(duration, mask): - """ - duration: [b, 1, t_x] - mask: [b, 1, t_y, t_x] - """ - device = duration.device - - b, _, t_y, t_x = mask.shape - cum_duration = torch.cumsum(duration, -1) - - cum_duration_flat = cum_duration.view(b * t_x) - path = sequence_mask(cum_duration_flat, t_y).to(mask.dtype) - path = path.view(b, t_x, t_y) - path = path - F.pad(path, convert_pad_shape([[0, 0], [1, 0], [0, 0]]))[:, :-1] - path = path.unsqueeze(1).transpose(2, 3) * mask - return path - - -def clip_grad_value_(parameters, clip_value, norm_type=2): - if isinstance(parameters, torch.Tensor): - parameters = [parameters] - parameters = list(filter(lambda p: p.grad is not None, parameters)) - norm_type = float(norm_type) - if clip_value is not None: - clip_value = float(clip_value) - - total_norm = 0 - for p in parameters: - param_norm = p.grad.data.norm(norm_type) - total_norm += param_norm.item() ** norm_type - if clip_value is not None: - p.grad.data.clamp_(min=-clip_value, max=clip_value) - total_norm = total_norm ** (1. / norm_type) - return total_norm diff --git a/spaces/antonovmaxim/text-generation-webui-space/docs/Docker.md b/spaces/antonovmaxim/text-generation-webui-space/docs/Docker.md deleted file mode 100644 index b1e92253cd72423a86d72f6bb057da9bed19a4bc..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/docs/Docker.md +++ /dev/null @@ -1,181 +0,0 @@ -Docker Compose is a way of installing and launching the web UI in an isolated Ubuntu image using only a few commands. - -In order to create the image as described in the main README, you must have docker compose 2.17 or higher: - -``` -~$ docker compose version -Docker Compose version v2.17.2 -``` - -# Instructions by [@loeken](https://github.com/loeken) - -- [Ubuntu 22.04](#ubuntu-2204) - - [0. youtube video](#0-youtube-video) - - [1. update the drivers](#1-update-the-drivers) - - [2. reboot](#2-reboot) - - [3. install docker](#3-install-docker) - - [4. docker \& container toolkit](#4-docker--container-toolkit) - - [5. clone the repo](#5-clone-the-repo) - - [6. prepare models](#6-prepare-models) - - [7. prepare .env file](#7-prepare-env-file) - - [8. startup docker container](#8-startup-docker-container) -- [Manjaro](#manjaro) - - [update the drivers](#update-the-drivers) - - [reboot](#reboot) - - [docker \& container toolkit](#docker--container-toolkit) - - [continue with ubuntu task](#continue-with-ubuntu-task) -- [Windows](#windows) - - [0. youtube video](#0-youtube-video-1) - - [1. choco package manager](#1-choco-package-manager) - - [2. install drivers/dependencies](#2-install-driversdependencies) - - [3. install wsl](#3-install-wsl) - - [4. reboot](#4-reboot) - - [5. git clone \&\& startup](#5-git-clone--startup) - - [6. prepare models](#6-prepare-models-1) - - [7. startup](#7-startup) -- [notes](#notes) - -# Ubuntu 22.04 - -## 0. youtube video -A video walking you through the setup can be found here: - -[![oobabooga text-generation-webui setup in docker on ubuntu 22.04](https://img.youtube.com/vi/ELkKWYh8qOk/0.jpg)](https://www.youtube.com/watch?v=ELkKWYh8qOk) - - -## 1. update the drivers -in the “software updater”, update the drivers to the latest version of the proprietary driver. - -## 2. reboot -to switch to using the new driver - -## 3.
install docker -```bash -sudo apt update -sudo apt-get install curl -sudo mkdir -m 0755 -p /etc/apt/keyrings -curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg -echo \ - "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \ - "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \ - sudo tee /etc/apt/sources.list.d/docker.list > /dev/null -sudo apt update -sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin docker-compose -y -sudo usermod -aG docker $USER -newgrp docker -``` - -## 4. docker & container toolkit -```bash -curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg -echo "deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://nvidia.github.io/libnvidia-container/stable/ubuntu22.04/amd64 /" | \ -sudo tee /etc/apt/sources.list.d/nvidia.list > /dev/null -sudo apt update -sudo apt install nvidia-docker2 nvidia-container-runtime -y -sudo systemctl restart docker -``` - -## 5. clone the repo -``` -git clone https://github.com/oobabooga/text-generation-webui -cd text-generation-webui -``` - -## 6. prepare models -download and place the models inside the models folder. tested with: - -4bit -https://github.com/oobabooga/text-generation-webui/pull/530#issuecomment-1483891617 -https://github.com/oobabooga/text-generation-webui/pull/530#issuecomment-1483941105 - -8bit: -https://github.com/oobabooga/text-generation-webui/pull/530#issuecomment-1484235789 - -## 7. prepare .env file -edit .env values to your needs. -```bash -cp .env.example .env -nano .env -``` - -## 8. startup docker container -```bash -docker compose up --build -``` - -# Manjaro -manjaro/arch is similar to ubuntu just the dependency installation is more convenient - -## update the drivers -```bash -sudo mhwd -a pci nonfree 0300 -``` -## reboot -```bash -reboot -``` -## docker & container toolkit -```bash -yay -S docker docker-compose buildkit gcc nvidia-docker -sudo usermod -aG docker $USER -newgrp docker -sudo systemctl restart docker # required by nvidia-container-runtime -``` - -## continue with ubuntu task -continue at [5. clone the repo](#5-clone-the-repo) - -# Windows -## 0. youtube video -A video walking you through the setup can be found here: -[![oobabooga text-generation-webui setup in docker on windows 11](https://img.youtube.com/vi/ejH4w5b5kFQ/0.jpg)](https://www.youtube.com/watch?v=ejH4w5b5kFQ) - -## 1. choco package manager -install package manager (https://chocolatey.org/ ) -``` -Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1')) -``` - -## 2. install drivers/dependencies -``` -choco install nvidia-display-driver cuda git docker-desktop -``` - -## 3. install wsl -wsl --install - -## 4. reboot -after reboot enter username/password in wsl - -## 5. git clone && startup -clone the repo and edit .env values to your needs. -``` -cd Desktop -git clone https://github.com/oobabooga/text-generation-webui -cd text-generation-webui -COPY .env.example .env -notepad .env -``` - -## 6. prepare models -download and place the models inside the models folder. 
tested with: - -4bit https://github.com/oobabooga/text-generation-webui/pull/530#issuecomment-1483891617 https://github.com/oobabooga/text-generation-webui/pull/530#issuecomment-1483941105 - -8bit: https://github.com/oobabooga/text-generation-webui/pull/530#issuecomment-1484235789 - -## 7. startup -``` -docker compose up -``` - -# notes - -on older ubuntus you can manually install the docker compose plugin like this: -``` -DOCKER_CONFIG=${DOCKER_CONFIG:-$HOME/.docker} -mkdir -p $DOCKER_CONFIG/cli-plugins -curl -SL https://github.com/docker/compose/releases/download/v2.17.2/docker-compose-linux-x86_64 -o $DOCKER_CONFIG/cli-plugins/docker-compose -chmod +x $DOCKER_CONFIG/cli-plugins/docker-compose -export PATH="$HOME/.docker/cli-plugins:$PATH" -``` diff --git a/spaces/antonovmaxim/text-generation-webui-space/docs/llama.cpp-models.md b/spaces/antonovmaxim/text-generation-webui-space/docs/llama.cpp-models.md deleted file mode 100644 index 153f70affedf55df3b58af5c46eb0396b5ecf010..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/docs/llama.cpp-models.md +++ /dev/null @@ -1,43 +0,0 @@ -# Using llama.cpp in the web UI - -## Setting up the models - -#### Pre-converted - -Place the model in the `models` folder, making sure that its name contains `ggml` somewhere and ends in `.bin`. - -#### Convert LLaMA yourself - -Follow the instructions in the llama.cpp README to generate the `ggml-model.bin` file: https://github.com/ggerganov/llama.cpp#usage - -## GPU offloading - -Enabled with the `--n-gpu-layers` parameter. If you have enough VRAM, use a high number like `--n-gpu-layers 200000` to offload all layers to the GPU. - -Note that you need to manually install `llama-cpp-python` with GPU support. To do that: - -#### Linux - -``` -pip uninstall -y llama-cpp-python -CMAKE_ARGS="-DLLAMA_CUBLAS=on" FORCE_CMAKE=1 pip install llama-cpp-python --no-cache-dir -``` - -#### Windows - -``` -pip uninstall -y llama-cpp-python -set CMAKE_ARGS="-DLLAMA_CUBLAS=on" -set FORCE_CMAKE=1 -pip install llama-cpp-python --no-cache-dir -``` - -Here you can find the different compilation options for OpenBLAS / cuBLAS / CLBlast: https://pypi.org/project/llama-cpp-python/ - -## Performance - -This was the performance of llama-7b int4 on my i5-12400F (cpu only): - -> Output generated in 33.07 seconds (6.05 tokens/s, 200 tokens, context 17) - -You can change the number of threads with `--threads N`. 
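As a rough illustration of how the flags described in the llama.cpp notes above map onto the `llama-cpp-python` package they install, here is a minimal sketch (assuming a recent `llama-cpp-python` release; the model path and parameter values are placeholders, not taken from the deleted file):

```python
from llama_cpp import Llama

# Load a pre-converted ggml model placed in the models folder described above.
llm = Llama(
    model_path="models/ggml-model.bin",  # placeholder path
    n_gpu_layers=200000,  # a large value offloads all layers, mirroring --n-gpu-layers
    n_threads=8,          # CPU threads, mirroring --threads N
)

# Run a short completion to confirm the model loads and generates.
output = llm("Hello, llama.cpp!", max_tokens=32)
print(output["choices"][0]["text"])
```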
diff --git a/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/tacotron2-Capacitron/train_capacitron_t2.py b/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/tacotron2-Capacitron/train_capacitron_t2.py deleted file mode 100644 index f1ae2bd5c584ff5d10d19ca7ed3a5154c49cf9b7..0000000000000000000000000000000000000000 --- a/spaces/artificialguybr/video-dubbing/TTS/recipes/ljspeech/tacotron2-Capacitron/train_capacitron_t2.py +++ /dev/null @@ -1,114 +0,0 @@ -import os - -from trainer import Trainer, TrainerArgs - -from TTS.config.shared_configs import BaseAudioConfig -from TTS.tts.configs.shared_configs import BaseDatasetConfig, CapacitronVAEConfig -from TTS.tts.configs.tacotron2_config import Tacotron2Config -from TTS.tts.datasets import load_tts_samples -from TTS.tts.models.tacotron2 import Tacotron2 -from TTS.tts.utils.text.tokenizer import TTSTokenizer -from TTS.utils.audio import AudioProcessor - -output_path = os.path.dirname(os.path.abspath(__file__)) - -data_path = "/srv/data/" - -# Using LJSpeech like dataset processing for the blizzard dataset -dataset_config = BaseDatasetConfig( - formatter="ljspeech", - meta_file_train="metadata.csv", - path=data_path, -) - -audio_config = BaseAudioConfig( - sample_rate=22050, - do_trim_silence=True, - trim_db=60.0, - signal_norm=False, - mel_fmin=0.0, - mel_fmax=11025, - spec_gain=1.0, - log_func="np.log", - ref_level_db=20, - preemphasis=0.0, -) - -# Using the standard Capacitron config -capacitron_config = CapacitronVAEConfig(capacitron_VAE_loss_alpha=1.0, capacitron_capacity=50) - -config = Tacotron2Config( - run_name="Capacitron-Tacotron2", - audio=audio_config, - capacitron_vae=capacitron_config, - use_capacitron_vae=True, - batch_size=128, # Tune this to your gpu - max_audio_len=8 * 22050, # Tune this to your gpu - min_audio_len=1 * 22050, - eval_batch_size=16, - num_loader_workers=8, - num_eval_loader_workers=8, - precompute_num_workers=24, - run_eval=True, - test_delay_epochs=25, - ga_alpha=0.0, - r=2, - optimizer="CapacitronOptimizer", - optimizer_params={"RAdam": {"betas": [0.9, 0.998], "weight_decay": 1e-6}, "SGD": {"lr": 1e-5, "momentum": 0.9}}, - attention_type="dynamic_convolution", - grad_clip=0.0, # Important! 
We overwrite the standard grad_clip with capacitron_grad_clip - double_decoder_consistency=False, - epochs=1000, - text_cleaner="phoneme_cleaners", - use_phonemes=True, - phoneme_language="en-us", - phonemizer="espeak", - phoneme_cache_path=os.path.join(data_path, "phoneme_cache"), - stopnet_pos_weight=15, - print_step=25, - print_eval=True, - mixed_precision=False, - seq_len_norm=True, - output_path=output_path, - datasets=[dataset_config], - lr=1e-3, - lr_scheduler="StepwiseGradualLR", - lr_scheduler_params={ - "gradual_learning_rates": [ - [0, 1e-3], - [2e4, 5e-4], - [4e5, 3e-4], - [6e4, 1e-4], - [8e4, 5e-5], - ] - }, - scheduler_after_epoch=False, # scheduler doesn't work without this flag - # Need to experiment with these below for capacitron - loss_masking=False, - decoder_loss_alpha=1.0, - postnet_loss_alpha=1.0, - postnet_diff_spec_alpha=0.0, - decoder_diff_spec_alpha=0.0, - decoder_ssim_alpha=0.0, - postnet_ssim_alpha=0.0, -) - -ap = AudioProcessor(**config.audio.to_dict()) - -tokenizer, config = TTSTokenizer.init_from_config(config) - -train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True) - -model = Tacotron2(config, ap, tokenizer, speaker_manager=None) - -trainer = Trainer( - TrainerArgs(), - config, - output_path, - model=model, - train_samples=train_samples, - eval_samples=eval_samples, - training_assets={"audio_processor": ap}, -) - -trainer.fit() diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/SgiImagePlugin.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/SgiImagePlugin.py deleted file mode 100644 index f0207bb775678808f368483300730a490260742c..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PIL/SgiImagePlugin.py +++ /dev/null @@ -1,230 +0,0 @@ -# -# The Python Imaging Library. -# $Id$ -# -# SGI image file handling -# -# See "The SGI Image File Format (Draft version 0.97)", Paul Haeberli. -# <ftp://ftp.sgi.com/graphics/SGIIMAGESPEC> -# -# -# History: -# 2017-22-07 mb Add RLE decompression -# 2016-16-10 mb Add save method without compression -# 1995-09-10 fl Created -# -# Copyright (c) 2016 by Mickael Bonfill. -# Copyright (c) 2008 by Karsten Hiddemann. -# Copyright (c) 1997 by Secret Labs AB. -# Copyright (c) 1995 by Fredrik Lundh. -# -# See the README file for information on usage and redistribution. -# - - -import os -import struct - -from . import Image, ImageFile -from ._binary import i16be as i16 -from ._binary import o8 - - -def _accept(prefix): - return len(prefix) >= 2 and i16(prefix) == 474 - - -MODES = { - (1, 1, 1): "L", - (1, 2, 1): "L", - (2, 1, 1): "L;16B", - (2, 2, 1): "L;16B", - (1, 3, 3): "RGB", - (2, 3, 3): "RGB;16B", - (1, 3, 4): "RGBA", - (2, 3, 4): "RGBA;16B", -} - - -## -# Image plugin for SGI images. 
-class SgiImageFile(ImageFile.ImageFile): - - format = "SGI" - format_description = "SGI Image File Format" - - def _open(self): - - # HEAD - headlen = 512 - s = self.fp.read(headlen) - - if not _accept(s): - raise ValueError("Not an SGI image file") - - # compression : verbatim or RLE - compression = s[2] - - # bpc : 1 or 2 bytes (8bits or 16bits) - bpc = s[3] - - # dimension : 1, 2 or 3 (depending on xsize, ysize and zsize) - dimension = i16(s, 4) - - # xsize : width - xsize = i16(s, 6) - - # ysize : height - ysize = i16(s, 8) - - # zsize : channels count - zsize = i16(s, 10) - - # layout - layout = bpc, dimension, zsize - - # determine mode from bits/zsize - rawmode = "" - try: - rawmode = MODES[layout] - except KeyError: - pass - - if rawmode == "": - raise ValueError("Unsupported SGI image mode") - - self._size = xsize, ysize - self.mode = rawmode.split(";")[0] - if self.mode == "RGB": - self.custom_mimetype = "image/rgb" - - # orientation -1 : scanlines begins at the bottom-left corner - orientation = -1 - - # decoder info - if compression == 0: - pagesize = xsize * ysize * bpc - if bpc == 2: - self.tile = [ - ("SGI16", (0, 0) + self.size, headlen, (self.mode, 0, orientation)) - ] - else: - self.tile = [] - offset = headlen - for layer in self.mode: - self.tile.append( - ("raw", (0, 0) + self.size, offset, (layer, 0, orientation)) - ) - offset += pagesize - elif compression == 1: - self.tile = [ - ("sgi_rle", (0, 0) + self.size, headlen, (rawmode, orientation, bpc)) - ] - - -def _save(im, fp, filename): - if im.mode != "RGB" and im.mode != "RGBA" and im.mode != "L": - raise ValueError("Unsupported SGI image mode") - - # Get the keyword arguments - info = im.encoderinfo - - # Byte-per-pixel precision, 1 = 8bits per pixel - bpc = info.get("bpc", 1) - - if bpc not in (1, 2): - raise ValueError("Unsupported number of bytes per pixel") - - # Flip the image, since the origin of SGI file is the bottom-left corner - orientation = -1 - # Define the file as SGI File Format - magic_number = 474 - # Run-Length Encoding Compression - Unsupported at this time - rle = 0 - - # Number of dimensions (x,y,z) - dim = 3 - # X Dimension = width / Y Dimension = height - x, y = im.size - if im.mode == "L" and y == 1: - dim = 1 - elif im.mode == "L": - dim = 2 - # Z Dimension: Number of channels - z = len(im.mode) - - if dim == 1 or dim == 2: - z = 1 - - # assert we've got the right number of bands. 
- if len(im.getbands()) != z: - raise ValueError( - f"incorrect number of bands in SGI write: {z} vs {len(im.getbands())}" - ) - - # Minimum Byte value - pinmin = 0 - # Maximum Byte value (255 = 8bits per pixel) - pinmax = 255 - # Image name (79 characters max, truncated below in write) - img_name = os.path.splitext(os.path.basename(filename))[0] - img_name = img_name.encode("ascii", "ignore") - # Standard representation of pixel in the file - colormap = 0 - fp.write(struct.pack(">h", magic_number)) - fp.write(o8(rle)) - fp.write(o8(bpc)) - fp.write(struct.pack(">H", dim)) - fp.write(struct.pack(">H", x)) - fp.write(struct.pack(">H", y)) - fp.write(struct.pack(">H", z)) - fp.write(struct.pack(">l", pinmin)) - fp.write(struct.pack(">l", pinmax)) - fp.write(struct.pack("4s", b"")) # dummy - fp.write(struct.pack("79s", img_name)) # truncates to 79 chars - fp.write(struct.pack("s", b"")) # force null byte after img_name - fp.write(struct.pack(">l", colormap)) - fp.write(struct.pack("404s", b"")) # dummy - - rawmode = "L" - if bpc == 2: - rawmode = "L;16B" - - for channel in im.split(): - fp.write(channel.tobytes("raw", rawmode, 0, orientation)) - - if hasattr(fp, "flush"): - fp.flush() - - -class SGI16Decoder(ImageFile.PyDecoder): - _pulls_fd = True - - def decode(self, buffer): - rawmode, stride, orientation = self.args - pagesize = self.state.xsize * self.state.ysize - zsize = len(self.mode) - self.fd.seek(512) - - for band in range(zsize): - channel = Image.new("L", (self.state.xsize, self.state.ysize)) - channel.frombytes( - self.fd.read(2 * pagesize), "raw", "L;16B", stride, orientation - ) - self.im.putband(channel.im, band) - - return -1, 0 - - -# -# registry - - -Image.register_decoder("SGI16", SGI16Decoder) -Image.register_open(SgiImageFile.format, SgiImageFile, _accept) -Image.register_save(SgiImageFile.format, _save) -Image.register_mime(SgiImageFile.format, "image/sgi") - -Image.register_extensions(SgiImageFile.format, [".bw", ".rgb", ".rgba", ".sgi"]) - -# End of file diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PySimpleGUI/__init__.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PySimpleGUI/__init__.py deleted file mode 100644 index 771da22eb834e620df831a8d77281e04ba11cb28..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/PySimpleGUI/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -name = "PySimpleGUI" -from .PySimpleGUI import * -from .PySimpleGUI import __version__ diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/normalized_stacked_area_chart.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/normalized_stacked_area_chart.py deleted file mode 100644 index a6bfec3652763552145acc85e5646dadc519cc06..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/altair/examples/normalized_stacked_area_chart.py +++ /dev/null @@ -1,16 +0,0 @@ -""" -Normalized Stacked Area Chart ------------------------------ -This example shows how to make a normalized stacked area chart. 
-""" -# category: area charts -import altair as alt -from vega_datasets import data - -source = data.iowa_electricity() - -alt.Chart(source).mark_area().encode( - x="year:T", - y=alt.Y("net_generation:Q", stack="normalize"), - color="source:N" -) diff --git a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/coloredlogs/syslog.py b/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/coloredlogs/syslog.py deleted file mode 100644 index 5fd4629ab56288bbfe0269ae6ba619db5d3756bc..0000000000000000000000000000000000000000 --- a/spaces/arxify/RVC-beta-v2-0618/runtime/Lib/site-packages/coloredlogs/syslog.py +++ /dev/null @@ -1,292 +0,0 @@ -# Easy to use system logging for Python's logging module. -# -# Author: Peter Odding <peter@peterodding.com> -# Last Change: December 10, 2020 -# URL: https://coloredlogs.readthedocs.io - -""" -Easy to use UNIX system logging for Python's :mod:`logging` module. - -Admittedly system logging has little to do with colored terminal output, however: - -- The `coloredlogs` package is my attempt to do Python logging right and system - logging is an important part of that equation. - -- I've seen a surprising number of quirks and mistakes in system logging done - in Python, for example including ``%(asctime)s`` in a format string (the - system logging daemon is responsible for adding timestamps and thus you end - up with duplicate timestamps that make the logs awful to read :-). - -- The ``%(programname)s`` filter originated in my system logging code and I - wanted it in `coloredlogs` so the step to include this module wasn't that big. - -- As a bonus this Python module now has a test suite and proper documentation. - -So there :-P. Go take a look at :func:`enable_system_logging()`. -""" - -# Standard library modules. -import logging -import logging.handlers -import os -import socket -import sys - -# External dependencies. -from humanfriendly import coerce_boolean -from humanfriendly.compat import on_macos, on_windows - -# Modules included in our package. -from coloredlogs import ( - DEFAULT_LOG_LEVEL, - ProgramNameFilter, - adjust_level, - find_program_name, - level_to_number, - replace_handler, -) - -LOG_DEVICE_MACOSX = '/var/run/syslog' -"""The pathname of the log device on Mac OS X (a string).""" - -LOG_DEVICE_UNIX = '/dev/log' -"""The pathname of the log device on Linux and most other UNIX systems (a string).""" - -DEFAULT_LOG_FORMAT = '%(programname)s[%(process)d]: %(levelname)s %(message)s' -""" -The default format for log messages sent to the system log (a string). - -The ``%(programname)s`` format requires :class:`~coloredlogs.ProgramNameFilter` -but :func:`enable_system_logging()` takes care of this for you. - -The ``name[pid]:`` construct (specifically the colon) in the format allows -rsyslogd_ to extract the ``$programname`` from each log message, which in turn -allows configuration files in ``/etc/rsyslog.d/*.conf`` to filter these log -messages to a separate log file (if the need arises). - -.. _rsyslogd: https://en.wikipedia.org/wiki/Rsyslog -""" - -# Initialize a logger for this module. -logger = logging.getLogger(__name__) - - -class SystemLogging(object): - - """Context manager to enable system logging.""" - - def __init__(self, *args, **kw): - """ - Initialize a :class:`SystemLogging` object. - - :param args: Positional arguments to :func:`enable_system_logging()`. - :param kw: Keyword arguments to :func:`enable_system_logging()`. 
- """ - self.args = args - self.kw = kw - self.handler = None - - def __enter__(self): - """Enable system logging when entering the context.""" - if self.handler is None: - self.handler = enable_system_logging(*self.args, **self.kw) - return self.handler - - def __exit__(self, exc_type=None, exc_value=None, traceback=None): - """ - Disable system logging when leaving the context. - - .. note:: If an exception is being handled when we leave the context a - warning message including traceback is logged *before* system - logging is disabled. - """ - if self.handler is not None: - if exc_type is not None: - logger.warning("Disabling system logging due to unhandled exception!", exc_info=True) - (self.kw.get('logger') or logging.getLogger()).removeHandler(self.handler) - self.handler = None - - -def enable_system_logging(programname=None, fmt=None, logger=None, reconfigure=True, **kw): - """ - Redirect :mod:`logging` messages to the system log (e.g. ``/var/log/syslog``). - - :param programname: The program name to embed in log messages (a string, defaults - to the result of :func:`~coloredlogs.find_program_name()`). - :param fmt: The log format for system log messages (a string, defaults to - :data:`DEFAULT_LOG_FORMAT`). - :param logger: The logger to which the :class:`~logging.handlers.SysLogHandler` - should be connected (defaults to the root logger). - :param level: The logging level for the :class:`~logging.handlers.SysLogHandler` - (defaults to :data:`.DEFAULT_LOG_LEVEL`). This value is coerced - using :func:`~coloredlogs.level_to_number()`. - :param reconfigure: If :data:`True` (the default) multiple calls to - :func:`enable_system_logging()` will each override - the previous configuration. - :param kw: Refer to :func:`connect_to_syslog()`. - :returns: A :class:`~logging.handlers.SysLogHandler` object or - :data:`None`. If an existing handler is found and `reconfigure` - is :data:`False` the existing handler object is returned. If the - connection to the system logging daemon fails :data:`None` is - returned. - - As of release 15.0 this function uses :func:`is_syslog_supported()` to - check whether system logging is supported and appropriate before it's - enabled. - - .. note:: When the logger's effective level is too restrictive it is - relaxed (refer to `notes about log levels`_ for details). - """ - # Check whether system logging is supported / appropriate. - if not is_syslog_supported(): - return None - # Provide defaults for omitted arguments. - programname = programname or find_program_name() - logger = logger or logging.getLogger() - fmt = fmt or DEFAULT_LOG_FORMAT - level = level_to_number(kw.get('level', DEFAULT_LOG_LEVEL)) - # Check whether system logging is already enabled. - handler, logger = replace_handler(logger, match_syslog_handler, reconfigure) - # Make sure reconfiguration is allowed or not relevant. - if not (handler and not reconfigure): - # Create a system logging handler. - handler = connect_to_syslog(**kw) - # Make sure the handler was successfully created. - if handler: - # Enable the use of %(programname)s. - ProgramNameFilter.install(handler=handler, fmt=fmt, programname=programname) - # Connect the formatter, handler and logger. - handler.setFormatter(logging.Formatter(fmt)) - logger.addHandler(handler) - # Adjust the level of the selected logger. - adjust_level(logger, level) - return handler - - -def connect_to_syslog(address=None, facility=None, level=None): - """ - Create a :class:`~logging.handlers.SysLogHandler`. 
- - :param address: The device file or network address of the system logging - daemon (a string or tuple, defaults to the result of - :func:`find_syslog_address()`). - :param facility: Refer to :class:`~logging.handlers.SysLogHandler`. - Defaults to ``LOG_USER``. - :param level: The logging level for the :class:`~logging.handlers.SysLogHandler` - (defaults to :data:`.DEFAULT_LOG_LEVEL`). This value is coerced - using :func:`~coloredlogs.level_to_number()`. - :returns: A :class:`~logging.handlers.SysLogHandler` object or :data:`None` (if the - system logging daemon is unavailable). - - The process of connecting to the system logging daemon goes as follows: - - - The following two socket types are tried (in decreasing preference): - - 1. :data:`~socket.SOCK_RAW` avoids truncation of log messages but may - not be supported. - 2. :data:`~socket.SOCK_STREAM` (TCP) supports longer messages than the - default (which is UDP). - """ - if not address: - address = find_syslog_address() - if facility is None: - facility = logging.handlers.SysLogHandler.LOG_USER - if level is None: - level = DEFAULT_LOG_LEVEL - for socktype in socket.SOCK_RAW, socket.SOCK_STREAM, None: - kw = dict(facility=facility, address=address) - if socktype is not None: - kw['socktype'] = socktype - try: - handler = logging.handlers.SysLogHandler(**kw) - except IOError: - # IOError is a superclass of socket.error which can be raised if the system - # logging daemon is unavailable. - pass - else: - handler.setLevel(level_to_number(level)) - return handler - - -def find_syslog_address(): - """ - Find the most suitable destination for system log messages. - - :returns: The pathname of a log device (a string) or an address/port tuple as - supported by :class:`~logging.handlers.SysLogHandler`. - - On Mac OS X this prefers :data:`LOG_DEVICE_MACOSX`, after that :data:`LOG_DEVICE_UNIX` - is checked for existence. If both of these device files don't exist the default used - by :class:`~logging.handlers.SysLogHandler` is returned. - """ - if sys.platform == 'darwin' and os.path.exists(LOG_DEVICE_MACOSX): - return LOG_DEVICE_MACOSX - elif os.path.exists(LOG_DEVICE_UNIX): - return LOG_DEVICE_UNIX - else: - return 'localhost', logging.handlers.SYSLOG_UDP_PORT - - -def is_syslog_supported(): - """ - Determine whether system logging is supported. - - :returns: - - :data:`True` if system logging is supported and can be enabled, - :data:`False` if system logging is not supported or there are good - reasons for not enabling it. - - The decision making process here is as follows: - - Override - If the environment variable ``$COLOREDLOGS_SYSLOG`` is set it is evaluated - using :func:`~humanfriendly.coerce_boolean()` and the resulting value - overrides the platform detection discussed below, this allows users to - override the decision making process if they disagree / know better. - - Linux / UNIX - On systems that are not Windows or MacOS (see below) we assume UNIX which - means either syslog is available or sending a bunch of UDP packets to - nowhere won't hurt anyone... - - Microsoft Windows - Over the years I've had multiple reports of :pypi:`coloredlogs` spewing - extremely verbose errno 10057 warning messages to the console (once for - each log message I suppose) so I now assume it a default that - "syslog-style system logging" is not generally available on Windows. - - Apple MacOS - There's cPython issue `#38780`_ which seems to result in a fatal exception - when the Python interpreter shuts down. 
This is (way) worse than not - having system logging enabled. The error message mentioned in `#38780`_ - has actually been following me around for years now, see for example: - - - https://github.com/xolox/python-rotate-backups/issues/9 mentions Docker - images implying Linux, so not strictly the same as `#38780`_. - - - https://github.com/xolox/python-npm-accel/issues/4 is definitely related - to `#38780`_ and is what eventually prompted me to add the - :func:`is_syslog_supported()` logic. - - .. _#38780: https://bugs.python.org/issue38780 - """ - override = os.environ.get("COLOREDLOGS_SYSLOG") - if override is not None: - return coerce_boolean(override) - else: - return not (on_windows() or on_macos()) - - -def match_syslog_handler(handler): - """ - Identify system logging handlers. - - :param handler: The :class:`~logging.Handler` class to check. - :returns: :data:`True` if the handler is a - :class:`~logging.handlers.SysLogHandler`, - :data:`False` otherwise. - - This function can be used as a callback for :func:`.find_handler()`. - """ - return isinstance(handler, logging.handlers.SysLogHandler) diff --git a/spaces/aryadytm/photo-low-light-enhance/src/st_style.py b/spaces/aryadytm/photo-low-light-enhance/src/st_style.py deleted file mode 100644 index 5d2bc9e635c9744f77cbdb9998a4ff4c2a37c431..0000000000000000000000000000000000000000 --- a/spaces/aryadytm/photo-low-light-enhance/src/st_style.py +++ /dev/null @@ -1,42 +0,0 @@ -button_style = """ -<style> -div.stButton > button:first-child { - background-color: rgb(255, 75, 75); - color: rgb(255, 255, 255); -} -div.stButton > button:hover { - background-color: rgb(255, 75, 75); - color: rgb(255, 255, 255); -} -div.stButton > button:active { - background-color: rgb(255, 75, 75); - color: rgb(255, 255, 255); -} -div.stButton > button:focus { - background-color: rgb(255, 75, 75); - color: rgb(255, 255, 255); -} -.css-1cpxqw2:focus:not(:active) { - background-color: rgb(255, 75, 75); - border-color: rgb(255, 75, 75); - color: rgb(255, 255, 255); -} -""" - -style = """ -<style> -#MainMenu { - visibility: hidden; -} -footer { - visibility: hidden; -} -header { - visibility: hidden; -} -</style> -""" - - -def apply_prod_style(st): - return st.markdown(style, unsafe_allow_html=True) \ No newline at end of file diff --git a/spaces/atwk-llm/README/README.md b/spaces/atwk-llm/README/README.md deleted file mode 100644 index d0dcb44df151d0074d942c69c384df193fdc9173..0000000000000000000000000000000000000000 --- a/spaces/atwk-llm/README/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: README -emoji: 💻 -colorFrom: blue -colorTo: blue -sdk: static -pinned: false ---- - -This is a Non Profit Organisation. Primary objective to finetune an LLM & create a Question Answering module for the ATWK App. 
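Based only on the docstrings visible in the `coloredlogs/syslog.py` diff above, a minimal usage sketch for that module follows (the program name and messages are placeholders, and `enable_system_logging()` may return `None` on platforms where syslog is unsupported):

```python
import logging
from coloredlogs.syslog import SystemLogging, enable_system_logging

# One-shot setup: attaches a SysLogHandler to the root logger when supported.
handler = enable_system_logging(programname="demo-app")
logging.getLogger(__name__).info("system logging enabled: %s", handler is not None)

# Scoped setup: the handler is removed again when the block exits.
with SystemLogging(programname="demo-app"):
    logging.getLogger(__name__).warning("only routed to syslog inside this block")
```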
diff --git a/spaces/aubmindlab/Arabic-NLP/backend/sarcasm.py b/spaces/aubmindlab/Arabic-NLP/backend/sarcasm.py deleted file mode 100644 index b57b47c89ed659f683fa681faa92b95f6df43039..0000000000000000000000000000000000000000 --- a/spaces/aubmindlab/Arabic-NLP/backend/sarcasm.py +++ /dev/null @@ -1,21 +0,0 @@ -import streamlit as st -from .sa import predictor - - -def write(): - st.markdown( - """ - # Arabic Sarcasm Detection - - This is a simple sarcasm detection app that uses the [MARBERT](https://huggingface.co/UBC-NLP/MARBERT) model trained on [ArSarcasm](https://github.com/iabufarha/ArSarcasm) - """ - ) - - input_text = st.text_input( - "Enter your text here:", - ) - if st.button("Predict"): - with st.spinner("Predicting..."): - prediction, scores = predictor.get_preds_from_sarcasm([input_text]) - st.write(f"Result: {prediction[0]}") - st.write(f"Score: {scores[0]}") diff --git a/spaces/awacke1/Azure-Cosmos-DB/app.py b/spaces/awacke1/Azure-Cosmos-DB/app.py deleted file mode 100644 index e9312d25db9ac342517ee273a1e97f8d3a7242cb..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Azure-Cosmos-DB/app.py +++ /dev/null @@ -1,78 +0,0 @@ -import streamlit as st -from azure.cosmos import CosmosClient - -# Load and save query files -def load_query(filename): - with open(filename, 'r') as file: - return file.read() - -def save_query(filename, query): - with open(filename, 'w') as file: - file.write(query) - -# Streamlit UI -st.title("Azure Cosmos DB Explorer 👽") - -# Default URI and KEY (You can set these values in code) -default_account_uri = "Your Default Cosmos DB URI Here" -default_account_key = "Your Default Cosmos DB Key Here" - -client = None - -# Connection Details Expander -with st.expander("Connect 🌍"): - account_uri = st.text_input("Account URI:", default_account_uri) - account_key = st.text_input("Account Key:", default_account_key, type="password") - database_name = st.text_input("Database Name:", "") - container_name = st.text_input("Container Name:", "") - if st.button("Connect"): - try: - client = CosmosClient(account_uri, credential=account_key) - database_client = client.get_database_client(database_name) - container_client = database_client.get_container_client(container_name) - st.success("Connected successfully! 🎉") - except Exception as e: - st.error(f"Failed to connect: {e}") - -# Query Editor Expander -with st.expander("Query Editor 📝"): - query = st.text_area("Enter your SQL query here:", "") - file_option = st.selectbox("File Options", ["New", "Open", "Save", "Save As"]) - - if file_option == "New": - query = "" - elif file_option == "Open": - open_file = st.file_uploader("Choose a file:", type=["txt"]) - if open_file is not None: - query = load_query(open_file) - elif file_option == "Save": - save_filename = st.text_input("Enter filename to save:", "my_query.txt") - if st.button("Save Query"): - save_query(save_filename, query) - elif file_option == "Save As": - saveas_filename = st.text_input("Enter new filename:", "my_new_query.txt") - if st.button("Save As"): - save_query(saveas_filename, query) - - if st.button("Execute Query 🚀"): - if client: - try: - items = list(container_client.query_items( - query=query, - enable_cross_partition_query=True - )) - st.write("Results 📋:") - st.json(items) - except Exception as e: - st.error(f"Query failed: {e}") - else: - st.warning("Not connected to any Cosmos DB. Please connect first.") - -# Instructions -st.markdown(""" -## Instructions to Run this App: - -1. 
**Install Packages**: If you haven't, install the required Python packages: -2. **Run App**: Save this code in a file, say `streamlit_cosmosdb_app.py`, and then run `streamlit run streamlit_cosmosdb_app.py`. -3. **Execute**: Use the UI to connect and run SQL queries against your Cosmos DB. -""") diff --git a/spaces/awacke1/Bloom.Generative.Writer/generators/model.py b/spaces/awacke1/Bloom.Generative.Writer/generators/model.py deleted file mode 100644 index 4a0ad9c8dc5da172a47331e30b819a0e9a1aa48f..0000000000000000000000000000000000000000 --- a/spaces/awacke1/Bloom.Generative.Writer/generators/model.py +++ /dev/null @@ -1,53 +0,0 @@ - -from unittest import result -from templates.Templates import PromptTemplate -import openai -import os -import requests - -prompt = PromptTemplate() - -stop_words = ["###", "\n\n", "<br><br>", "The authors: "] - - -def model(type, template, seq_len=250): - train = '' - if type == 'title': - train = prompt.TITLE_TO_ABSTRACST - if type == 'topic': - train = prompt.TOPIC_TO_ABSTRACST - - HF_URL = "https://api-inference.huggingface.co/models/bigscience/bloom" - #HF_KEY = os.environ["HF_KEY"] - HF_TOKEN = os.environ["HF_TOKEN"] - - headers = {"Authorization": f"Bearer {HF_TOKEN}"} - - print(f"Inside model {seq_len}") - - payload = { - "inputs": train + template, - "parameters": { - "temperature": 0.9, - "max_new_tokens": seq_len, - "return_full_text": False, - "top_p": 0.8, - "frequency_penalty": 1.0, - "retention_penalty": 1.0, - }, - "options": { - "use_cache": False - } - } - - response = requests.post(HF_URL, json=payload, headers=headers) - response = response.json() - result = response[0]['generated_text'] - # print("********", len(result), "********") - - # print(result) - - result = result.split("\n\n\n\n")[-1].strip().split("\n")[-1] - - - return {"result": result} diff --git a/spaces/banana-projects/web3d/node_modules/three/src/loaders/FontLoader.js b/spaces/banana-projects/web3d/node_modules/three/src/loaders/FontLoader.js deleted file mode 100644 index 6cb6a0d07a0441f7561999010b462aa472be5925..0000000000000000000000000000000000000000 --- a/spaces/banana-projects/web3d/node_modules/three/src/loaders/FontLoader.js +++ /dev/null @@ -1,62 +0,0 @@ -import { Font } from '../extras/core/Font.js'; -import { FileLoader } from './FileLoader.js'; -import { DefaultLoadingManager } from './LoadingManager.js'; - -/** - * @author mrdoob / http://mrdoob.com/ - */ - -function FontLoader( manager ) { - - this.manager = ( manager !== undefined ) ? manager : DefaultLoadingManager; - -} - -Object.assign( FontLoader.prototype, { - - load: function ( url, onLoad, onProgress, onError ) { - - var scope = this; - - var loader = new FileLoader( this.manager ); - loader.setPath( this.path ); - loader.load( url, function ( text ) { - - var json; - - try { - - json = JSON.parse( text ); - - } catch ( e ) { - - console.warn( 'THREE.FontLoader: typeface.js support is being deprecated. Use typeface.json instead.' 
); - json = JSON.parse( text.substring( 65, text.length - 2 ) ); - - } - - var font = scope.parse( json ); - - if ( onLoad ) onLoad( font ); - - }, onProgress, onError ); - - }, - - parse: function ( json ) { - - return new Font( json ); - - }, - - setPath: function ( value ) { - - this.path = value; - return this; - - } - -} ); - - -export { FontLoader }; diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326231036.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326231036.py deleted file mode 100644 index 3cafe1a5f5e03ceb453c48fd3f4a26581551795d..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220326231036.py +++ /dev/null @@ -1,68 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - - - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_faces[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. 
Please click submit only once" -article = "<p style='text-align: center'><a href='https://arxiv.org/abs/2101.04061' target='_blank'>Towards Real-World Blind Face Restoration with Generative Facial Prior</a> | <a href='https://github.com/TencentARC/GFPGAN' target='_blank'>Github Repo</a></p><center><img src='https://visitor-badge.glitch.me/badge?page_id=akhaliq_GFPGAN' alt='visitor badge'></center>" -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001522.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001522.py deleted file mode 100644 index 43c7248137807e6458b0e62c42481571795ea9de..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327001522.py +++ /dev/null @@ -1,65 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_img[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. 
Please click submit only once" -article = "<p style='text-align: center'><a href='https://arxiv.org/abs/2101.04061' target='_blank'>Towards Real-World Blind Face Restoration with Generative Facial Prior</a> | <a href='https://github.com/TencentARC/GFPGAN' target='_blank'>Github Repo</a></p><center><img src='https://visitor-badge.glitch.me/badge?page_id=akhaliq_GFPGAN' alt='visitor badge'></center>" -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327005511.py b/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327005511.py deleted file mode 100644 index 43c7248137807e6458b0e62c42481571795ea9de..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/.history/app_20220327005511.py +++ /dev/null @@ -1,65 +0,0 @@ -import os -#os.system("pip install gfpgan") - -#os.system("pip freeze") -#os.system("wget https://github.com/TencentARC/GFPGAN/releases/download/v0.2.0/GFPGANCleanv1-NoCE-C2.pth -P .") -import random -import gradio as gr -from PIL import Image -import torch -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/ab/Abraham_Lincoln_O-77_matte_collodion_print.jpg/1024px-Abraham_Lincoln_O-77_matte_collodion_print.jpg', 'lincoln.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/5/50/Albert_Einstein_%28Nobel%29.png', 'einstein.png') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/9/9d/Thomas_Edison2.jpg/1024px-Thomas_Edison2.jpg', 'edison.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/a/a9/Henry_Ford_1888.jpg/1024px-Henry_Ford_1888.jpg', 'Henry.jpg') -# torch.hub.download_url_to_file('https://upload.wikimedia.org/wikipedia/commons/thumb/0/06/Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg/800px-Frida_Kahlo%2C_by_Guillermo_Kahlo.jpg', 'Frida.jpg') - - - - -import cv2 -import glob -import numpy as np -from basicsr.utils import imwrite -from gfpgan import GFPGANer - -import warnings -warnings.warn('The unoptimized RealESRGAN is very slow on CPU. We do not use it. ' - 'If you really want to use it, please modify the corresponding codes.') -bg_upsampler = None - - - -# set up GFPGAN restorer -restorer = GFPGANer( - model_path='experiments/pretrained_models/GFPGANv1.3.pth', - upscale=2, - arch='clean', - channel_multiplier=2, - bg_upsampler=bg_upsampler) - - -def inference(img): - input_img = cv2.imread(img, cv2.IMREAD_COLOR) - cropped_faces, restored_faces, restored_img = restorer.enhance( - input_img, has_aligned=False, only_center_face=False, paste_back=True) - - return Image.fromarray(restored_img[0][:,:,::-1]) - -title = "GFP-GAN" -description = "Gradio demo for GFP-GAN: Towards Real-World Blind Face Restoration with Generative Facial Prior. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below. 
Please click submit only once" -article = "<p style='text-align: center'><a href='https://arxiv.org/abs/2101.04061' target='_blank'>Towards Real-World Blind Face Restoration with Generative Facial Prior</a> | <a href='https://github.com/TencentARC/GFPGAN' target='_blank'>Github Repo</a></p><center><img src='https://visitor-badge.glitch.me/badge?page_id=akhaliq_GFPGAN' alt='visitor badge'></center>" -gr.Interface( - inference, - [gr.inputs.Image(type="filepath", label="Input")], - gr.outputs.Image(type="pil", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ['lincoln.jpg'], - ['einstein.png'], - ['edison.jpg'], - ['Henry.jpg'], - ['Frida.jpg'] - ] - ).launch(enable_queue=True,cache_examples=True) \ No newline at end of file diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/README.md b/spaces/beihai/GFPGAN-V1.3-whole-image/README.md deleted file mode 100644 index c01224a69f399e92e1f8160f1f14d4e0fff04692..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/README.md +++ /dev/null @@ -1,33 +0,0 @@ ---- -title: GFPGAN -emoji: 📚 -colorFrom: green -colorTo: blue -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio` or `streamlit` - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/models/hifacegan_model.py b/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/models/hifacegan_model.py deleted file mode 100644 index 435a2b179d6b7c670fe96a83ce45b461300b2c89..0000000000000000000000000000000000000000 --- a/spaces/beihai/GFPGAN-V1.3-whole-image/basicsr/models/hifacegan_model.py +++ /dev/null @@ -1,288 +0,0 @@ -import torch -from collections import OrderedDict -from os import path as osp -from tqdm import tqdm - -from basicsr.archs import build_network -from basicsr.losses import build_loss -from basicsr.metrics import calculate_metric -from basicsr.utils import imwrite, tensor2img -from basicsr.utils.registry import MODEL_REGISTRY -from .sr_model import SRModel - - -@MODEL_REGISTRY.register() -class HiFaceGANModel(SRModel): - """HiFaceGAN model for generic-purpose face restoration. - No prior modeling required, works for any degradations. - Currently doesn't support EMA for inference. - """ - - def init_training_settings(self): - - train_opt = self.opt['train'] - self.ema_decay = train_opt.get('ema_decay', 0) - if self.ema_decay > 0: - raise (NotImplementedError('HiFaceGAN does not support EMA now. 
Pass')) - - self.net_g.train() - - self.net_d = build_network(self.opt['network_d']) - self.net_d = self.model_to_device(self.net_d) - self.print_network(self.net_d) - - # define losses - # HiFaceGAN does not use pixel loss by default - if train_opt.get('pixel_opt'): - self.cri_pix = build_loss(train_opt['pixel_opt']).to(self.device) - else: - self.cri_pix = None - - if train_opt.get('perceptual_opt'): - self.cri_perceptual = build_loss(train_opt['perceptual_opt']).to(self.device) - else: - self.cri_perceptual = None - - if train_opt.get('feature_matching_opt'): - self.cri_feat = build_loss(train_opt['feature_matching_opt']).to(self.device) - else: - self.cri_feat = None - - if self.cri_pix is None and self.cri_perceptual is None: - raise ValueError('Both pixel and perceptual losses are None.') - - if train_opt.get('gan_opt'): - self.cri_gan = build_loss(train_opt['gan_opt']).to(self.device) - - self.net_d_iters = train_opt.get('net_d_iters', 1) - self.net_d_init_iters = train_opt.get('net_d_init_iters', 0) - # set up optimizers and schedulers - self.setup_optimizers() - self.setup_schedulers() - - def setup_optimizers(self): - train_opt = self.opt['train'] - # optimizer g - optim_type = train_opt['optim_g'].pop('type') - self.optimizer_g = self.get_optimizer(optim_type, self.net_g.parameters(), **train_opt['optim_g']) - self.optimizers.append(self.optimizer_g) - # optimizer d - optim_type = train_opt['optim_d'].pop('type') - self.optimizer_d = self.get_optimizer(optim_type, self.net_d.parameters(), **train_opt['optim_d']) - self.optimizers.append(self.optimizer_d) - - def discriminate(self, input_lq, output, ground_truth): - """ - This is a conditional (on the input) discriminator - In Batch Normalization, the fake and real images are - recommended to be in the same batch to avoid disparate - statistics in fake and real images. - So both fake and real images are fed to D all at once. - """ - h, w = output.shape[-2:] - if output.shape[-2:] != input_lq.shape[-2:]: - lq = torch.nn.functional.interpolate(input_lq, (h, w)) - real = torch.nn.functional.interpolate(ground_truth, (h, w)) - fake_concat = torch.cat([lq, output], dim=1) - real_concat = torch.cat([lq, real], dim=1) - else: - fake_concat = torch.cat([input_lq, output], dim=1) - real_concat = torch.cat([input_lq, ground_truth], dim=1) - - fake_and_real = torch.cat([fake_concat, real_concat], dim=0) - discriminator_out = self.net_d(fake_and_real) - pred_fake, pred_real = self._divide_pred(discriminator_out) - return pred_fake, pred_real - - @staticmethod - def _divide_pred(pred): - """ - Take the prediction of fake and real images from the combined batch. 
- The prediction contains the intermediate outputs of multiscale GAN, - so it's usually a list - """ - if type(pred) == list: - fake = [] - real = [] - for p in pred: - fake.append([tensor[:tensor.size(0) // 2] for tensor in p]) - real.append([tensor[tensor.size(0) // 2:] for tensor in p]) - else: - fake = pred[:pred.size(0) // 2] - real = pred[pred.size(0) // 2:] - - return fake, real - - def optimize_parameters(self, current_iter): - # optimize net_g - for p in self.net_d.parameters(): - p.requires_grad = False - - self.optimizer_g.zero_grad() - self.output = self.net_g(self.lq) - - l_g_total = 0 - loss_dict = OrderedDict() - - if (current_iter % self.net_d_iters == 0 and current_iter > self.net_d_init_iters): - # pixel loss - if self.cri_pix: - l_g_pix = self.cri_pix(self.output, self.gt) - l_g_total += l_g_pix - loss_dict['l_g_pix'] = l_g_pix - - # perceptual loss - if self.cri_perceptual: - l_g_percep, l_g_style = self.cri_perceptual(self.output, self.gt) - if l_g_percep is not None: - l_g_total += l_g_percep - loss_dict['l_g_percep'] = l_g_percep - if l_g_style is not None: - l_g_total += l_g_style - loss_dict['l_g_style'] = l_g_style - - # Requires real prediction for feature matching loss - pred_fake, pred_real = self.discriminate(self.lq, self.output, self.gt) - l_g_gan = self.cri_gan(pred_fake, True, is_disc=False) - l_g_total += l_g_gan - loss_dict['l_g_gan'] = l_g_gan - - # feature matching loss - if self.cri_feat: - l_g_feat = self.cri_feat(pred_fake, pred_real) - l_g_total += l_g_feat - loss_dict['l_g_feat'] = l_g_feat - - l_g_total.backward() - self.optimizer_g.step() - - # optimize net_d - for p in self.net_d.parameters(): - p.requires_grad = True - - self.optimizer_d.zero_grad() - # TODO: Benchmark test between HiFaceGAN and SRGAN implementation: - # SRGAN use the same fake output for discriminator update - # while HiFaceGAN regenerate a new output using updated net_g - # This should not make too much difference though. Stick to SRGAN now. - # ------------------------------------------------------------------- - # ---------- Below are original HiFaceGAN code snippet -------------- - # ------------------------------------------------------------------- - # with torch.no_grad(): - # fake_image = self.net_g(self.lq) - # fake_image = fake_image.detach() - # fake_image.requires_grad_() - # pred_fake, pred_real = self.discriminate(self.lq, fake_image, self.gt) - - # real - pred_fake, pred_real = self.discriminate(self.lq, self.output.detach(), self.gt) - l_d_real = self.cri_gan(pred_real, True, is_disc=True) - loss_dict['l_d_real'] = l_d_real - # fake - l_d_fake = self.cri_gan(pred_fake, False, is_disc=True) - loss_dict['l_d_fake'] = l_d_fake - - l_d_total = (l_d_real + l_d_fake) / 2 - l_d_total.backward() - self.optimizer_d.step() - - self.log_dict = self.reduce_loss_dict(loss_dict) - - if self.ema_decay > 0: - print('HiFaceGAN does not support EMA now. pass') - - def validation(self, dataloader, current_iter, tb_logger, save_img=False): - """ - Warning: HiFaceGAN requires train() mode even for validation - For more info, see https://github.com/Lotayou/Face-Renovation/issues/31 - - Args: - dataloader (torch.utils.data.DataLoader): Validation dataloader. - current_iter (int): Current iteration. - tb_logger (tensorboard logger): Tensorboard logger. - save_img (bool): Whether to save images. Default: False. 
- """ - - if self.opt['network_g']['type'] in ('HiFaceGAN', 'SPADEGenerator'): - self.net_g.train() - - if self.opt['dist']: - self.dist_validation(dataloader, current_iter, tb_logger, save_img) - else: - print('In HiFaceGANModel: The new metrics package is under development.' + - 'Using super method now (Only PSNR & SSIM are supported)') - super().nondist_validation(dataloader, current_iter, tb_logger, save_img) - - def nondist_validation(self, dataloader, current_iter, tb_logger, save_img): - """ - TODO: Validation using updated metric system - The metrics are now evaluated after all images have been tested - This allows batch processing, and also allows evaluation of - distributional metrics, such as: - - @ Frechet Inception Distance: FID - @ Maximum Mean Discrepancy: MMD - - Warning: - Need careful batch management for different inference settings. - - """ - dataset_name = dataloader.dataset.opt['name'] - with_metrics = self.opt['val'].get('metrics') is not None - if with_metrics: - self.metric_results = dict() # {metric: 0 for metric in self.opt['val']['metrics'].keys()} - sr_tensors = [] - gt_tensors = [] - - pbar = tqdm(total=len(dataloader), unit='image') - for val_data in dataloader: - img_name = osp.splitext(osp.basename(val_data['lq_path'][0]))[0] - self.feed_data(val_data) - self.test() - - visuals = self.get_current_visuals() # detached cpu tensor, non-squeeze - sr_tensors.append(visuals['result']) - if 'gt' in visuals: - gt_tensors.append(visuals['gt']) - del self.gt - - # tentative for out of GPU memory - del self.lq - del self.output - torch.cuda.empty_cache() - - if save_img: - if self.opt['is_train']: - save_img_path = osp.join(self.opt['path']['visualization'], img_name, - f'{img_name}_{current_iter}.png') - else: - if self.opt['val']['suffix']: - save_img_path = osp.join(self.opt['path']['visualization'], dataset_name, - f'{img_name}_{self.opt["val"]["suffix"]}.png') - else: - save_img_path = osp.join(self.opt['path']['visualization'], dataset_name, - f'{img_name}_{self.opt["name"]}.png') - - imwrite(tensor2img(visuals['result']), save_img_path) - - pbar.update(1) - pbar.set_description(f'Test {img_name}') - pbar.close() - - if with_metrics: - sr_pack = torch.cat(sr_tensors, dim=0) - gt_pack = torch.cat(gt_tensors, dim=0) - # calculate metrics - for name, opt_ in self.opt['val']['metrics'].items(): - # The new metric caller automatically returns mean value - # FIXME: ERROR: calculate_metric only supports two arguments. Now the codes cannot be successfully run - self.metric_results[name] = calculate_metric(dict(sr_pack=sr_pack, gt_pack=gt_pack), opt_) - self._log_validation_metric_values(current_iter, dataset_name, tb_logger) - - def save(self, epoch, current_iter): - if hasattr(self, 'net_g_ema'): - print('HiFaceGAN does not support EMA now. 
Fallback to normal mode.') - - self.save_network(self.net_g, 'net_g', current_iter) - self.save_network(self.net_d, 'net_d', current_iter) - self.save_training_state(epoch, current_iter) diff --git a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621075237.py b/spaces/beihai/PDF-Table-Extractor/.history/app_20220621075237.py deleted file mode 100644 index 87b92fd361cd168f3791e2166d2dea537736223b..0000000000000000000000000000000000000000 --- a/spaces/beihai/PDF-Table-Extractor/.history/app_20220621075237.py +++ /dev/null @@ -1,31 +0,0 @@ -#-*- coding : utf-8-*- -import base64 -from subprocess import STDOUT -import streamlit as st -import pandas as pd -import camelot as cam # extracting tables from PDFs - -st.title("PDF Table Extractor") - -input_pdf = st.file_uploader(label = "", type = 'pdf') - -page_number = st.text_input("请填写表格所在PDF页码,eg: 3", value = 1) -background = st.selectbox("表格线条是否隐藏",('True', 'False')) -if input_pdf is not None: - # byte object into a PDF file - with open("input.pdf", "wb") as f: - base64_pdf = base64.b64encode(input_pdf.read()).decode('utf-8') - f.write(base64.b64decode(base64_pdf)) - f.close() - - # read the pdf and parse it using stream - tables = cam.read_pdf("input.pdf", pages=page_number, process_background=background) - result = pd.ExcelWriter('result.xlsx', engine='xlsxwriter') - tables[0].to_excel(result,index=False) - # for i in range(0,len(tables)): - # table = tables[i].df - # sheetname = str(i) - # table.to_excel(result, sheetname,index=False) - - with open('result.xlsx','rb') as f: - st.download_button('提取完成,点击下载!', f,file_name='result.xlsx',mime="application/vnd.ms-excel") \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Avatar Subtitles Navi Only 1080p.md b/spaces/bioriAsaeru/text-to-voice/Avatar Subtitles Navi Only 1080p.md deleted file mode 100644 index a8fb6d630424d4b9b1f29680e7cc62b9b2f0006f..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Avatar Subtitles Navi Only 1080p.md +++ /dev/null @@ -1,18 +0,0 @@ - -<h1>How to Watch Avatar with Subtitles for the Na'vi Language Only</h1> -<p>Avatar is a 2009 epic science fiction film directed by James Cameron that features a fictional alien race called the Na'vi, who live on the moon Pandora. The film uses a mix of English and Na'vi languages, with subtitles for the latter. However, some viewers may prefer to watch the film without subtitles for the English parts, and only have subtitles for the Na'vi parts. This can enhance the immersion and realism of the film, as well as challenge the viewers to understand the context and emotions of the characters without relying on words.</p> -<h2>avatar subtitles navi only 1080p</h2><br /><p><b><b>Download</b> 🗸🗸🗸 <a href="https://urloso.com/2uyPIG">https://urloso.com/2uyPIG</a></b></p><br /><br /> -<p>Fortunately, there are several ways to watch Avatar with subtitles for the Na'vi language only. Here are some of them:</p> -<ul> -<li>Download a subtitle file that only contains the Na'vi parts from a website such as opensubtitles.com[^1^] or subdl.com[^2^]. You can then load the subtitle file into your media player of choice, such as VLC or MPC-HC, and sync it with the video file. Make sure to choose a subtitle file that matches the resolution and format of your video file, such as 1080p BluRay x264.</li> -<li>Buy or rent a Blu-ray or DVD copy of Avatar that has an option to select subtitles for the Na'vi language only. 
This option may not be available in all regions or editions, so check the product description or reviews before purchasing. You can then play the disc on your Blu-ray or DVD player, or rip it to your computer using a software such as MakeMKV.</li> -<li>Stream Avatar online from a platform that has an option to select subtitles for the Na'vi language only. This option may not be available in all platforms or regions, so check the availability and settings before streaming. Some examples of platforms that offer this option are Amazon Prime Video, iTunes, and Google Play Movies.</li> -</ul> -<p>Whichever method you choose, you can enjoy watching Avatar with subtitles for the Na'vi language only and experience the film in a new and exciting way.</p> - -<p>If you are wondering why the Na'vi language is so fascinating and complex, it is because it was created by a linguist named Paul Frommer. He was hired by James Cameron to develop a language that would sound alien but realistic, and that would reflect the culture and environment of the Na'vi. He based the language on features from various human languages, such as ergativity, ejectives, and infixes. He also created a vocabulary of over 2,000 words, and a grammar system that allows for word formation and variation.</p> -<p></p> -<p>The Na'vi language is not only used in the film, but also in other media related to Avatar, such as video games, books, comics, and theme park attractions. There are also fan communities that learn and use the language for fun and communication. You can find online resources such as dictionaries, courses, podcasts, and forums to help you learn the Na'vi language. You can also join groups and events that organize Na'vi language activities and meetups.</p> -<p>Watching Avatar with subtitles for the Na'vi language only is not only a way to enjoy the film, but also a way to appreciate the richness and beauty of the Na'vi language. You can learn more about the language and its creator, and even try to speak it yourself. You can also connect with other fans who share your interest and passion for the Na'vi language. 
By doing so, you can immerse yourself in the world of Avatar and experience it in a deeper and more meaningful way.</p> d5da3c52bf<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Dorian Yates Blood and Guts Book PDF The Ultimate Guide to Building Maximum Muscle Mass.md b/spaces/bioriAsaeru/text-to-voice/Dorian Yates Blood and Guts Book PDF The Ultimate Guide to Building Maximum Muscle Mass.md deleted file mode 100644 index 67f6f71eaf21438335f61b1b1e24a479104038b2..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Dorian Yates Blood and Guts Book PDF The Ultimate Guide to Building Maximum Muscle Mass.md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>Download Iarna Manelelor 2011 Album Full</h2><br /><p><b><b>Download File</b> ⇒⇒⇒ <a href="https://urloso.com/2uyRjB">https://urloso.com/2uyRjB</a></b></p><br /><br /> -<br /> - aaccfb2cb3<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/bioriAsaeru/text-to-voice/Eminem Encore Download Zip 25 ((FREE)).md b/spaces/bioriAsaeru/text-to-voice/Eminem Encore Download Zip 25 ((FREE)).md deleted file mode 100644 index 05d3fcf43b63a8fe494c1379770d25f48934ba05..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Eminem Encore Download Zip 25 ((FREE)).md +++ /dev/null @@ -1,32 +0,0 @@ -<br /> -Title: How to Download Eminem's Encore Album Zip for Free - -Article: -```html -<p>If you are a fan of Eminem, you might be interested in downloading his Encore album zip for free. Encore is the fifth studio album by the American rapper, released in 2004. It features guest appearances from Dr. Dre, 50 Cent, Nate Dogg, Obie Trice, Stat Quo, and D12. The album contains some of Eminem's most controversial and political songs, such as "Mosh", "Like Toy Soldiers", and "Just Lose It".</p> -<p>However, downloading Eminem's Encore album zip for free is not as easy as it sounds. You might encounter some problems, such as broken links, viruses, low-quality audio, or legal issues. That's why we have prepared this guide to help you find the best and safest way to download Eminem's Encore album zip for free.</p> -<h2>eminem encore download zip 25</h2><br /><p><b><b>Download</b> ……… <a href="https://urloso.com/2uyPm5">https://urloso.com/2uyPm5</a></b></p><br /><br /> -<h2>Step 1: Find a reliable source</h2> -<p>The first step is to find a reliable source that offers Eminem's Encore album zip for free. There are many websites that claim to provide free music downloads, but not all of them are trustworthy. Some of them might contain malware, spyware, or adware that can harm your device or compromise your privacy. Some of them might also violate the copyright laws and put you at risk of legal action.</p> -<p>Therefore, you need to be careful and do some research before you click on any link. One way to do that is to check the reviews and ratings of the website from other users. You can also use tools like VirusTotal or URLVoid to scan the website for any malicious content or reputation issues.</p> -<p>Alternatively, you can use some of the sources that we have found for you. These are some of the websites that offer Eminem's Encore album zip for free and have good reputation and quality:</p> -<ul> -<li><a href="https://archive.org/details/eminem-encore-12inch">Encore 12'' : Eminem : Free Download, Borrow, and Streaming - Archive</a>: This website allows you to download or stream Eminem's Encore 12-inch vinyl version for free. 
It contains four tracks: Encore (Clean), Encore (Album), Encore (Instrumental), and Encore (Acapella). The audio quality is high and the website is safe and legal.</li> -<li><a href="https://www5.mphiphop.com/album-eminem-encore-deluxe-version/">Download ALBUM: Eminem - Encore (Deluxe Version) | Mphiphop</a>: This website allows you to download Eminem's Encore deluxe version album zip for free. It contains 23 tracks, including three bonus tracks: We As Americans, Love You More, and Ricky Ticky Toc. The audio quality is decent and the website is secure and fast.</li> -<li><a href="https://soundcloud.com/anne-sahu/eminem-encore-download-zip-25">Eminem Encore Download Zip 25 - SoundCloud</a>: This website allows you to listen to an excerpt of Eminem's Encore album zip for free. It contains a 25-second snippet of the track "Encore / Curtains Down". The audio quality is good and the website is popular and reputable.</li> -</ul> -<h2>Step 2: Download the file</h2> -<p>The second step is to download the file from the source that you have chosen. Depending on the website, you might need to follow different steps to download the file. For example, some websites might require you to create an account, verify your email, or complete a captcha before you can download the file. Some websites might also have pop-up ads or redirects that you need to close or avoid.</p> -<p>Here are some general tips to download the file safely and successfully:</p> -<p></p> -<ul> -<li>Make sure you have enough space on your device or external storage.</li> -<li>Use a stable and fast internet connection.</li> -<li>Use a browser that supports downloads or a download manager that can resume interrupted downloads.</li> -<li>Scan the file with an antivirus software before opening it.</li> -<li>Unzip the file with a software that can handle zip files.</li> -</ul> -<h2>Step 3: Enjoy the music</h2> -<p>The final</p> d5da3c52bf<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Fix Download Kochupusthakam Pdf Kambi Kathakal.md b/spaces/bioriAsaeru/text-to-voice/Fix Download Kochupusthakam Pdf Kambi Kathakal.md deleted file mode 100644 index 37bd633135ae997675f59a8a78224fc872e3c8ca..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Fix Download Kochupusthakam Pdf Kambi Kathakal.md +++ /dev/null @@ -1,7 +0,0 @@ - -<p>Many kambi kathakal are published by many unknown peoples but only few kambi kathakal writers are only win in there activity. Download your favorite aunty kathakal and ammayi kathakal. Kochupusthakam PDF kambi kathakal online. Download Latest Kambi PDF kathakal. Free ammayi kathakal.</p> -<h2>Download Kochupusthakam Pdf Kambi Kathakal</h2><br /><p><b><b>Download Zip</b> ⇔ <a href="https://urloso.com/2uyPpy">https://urloso.com/2uyPpy</a></b></p><br /><br /> -<p>Kambi kathakal writers are talented and they have a vishualization regarding the work. Kambi aunty sumalatha is one of the kambi kathkal writing person she know very well about a good kochupusthakam kambi katha</p> -<p>Rama chechi each kambi kathakal are beautiful and very interesting all people once they read there kambi malayalam katha . they will read and eagerly waiting for there best new edition kambi kochupusthakam online kathakal. New kambi kathakal list are given below velamma kambi cartooon kathakal malayalam. Savitha bhabhi malayalam kambi cartoon kathakal. 
download best kambi cartoon kathakal online.</p> aaccfb2cb3<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/bioriAsaeru/text-to-voice/Huawei Firmware Update U8665 19.md b/spaces/bioriAsaeru/text-to-voice/Huawei Firmware Update U8665 19.md deleted file mode 100644 index 78acb187ffa2826dda00a2dbeb271354ea59b3e6..0000000000000000000000000000000000000000 --- a/spaces/bioriAsaeru/text-to-voice/Huawei Firmware Update U8665 19.md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>huawei firmware update u8665 19</h2><br /><p><b><b>DOWNLOAD</b> » <a href="https://urloso.com/2uyQu3">https://urloso.com/2uyQu3</a></b></p><br /><br /> -<br /> - aaccfb2cb3<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/ops/fma.py b/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/ops/fma.py deleted file mode 100644 index 2eeac58a626c49231e04122b93e321ada954c5d3..0000000000000000000000000000000000000000 --- a/spaces/birsardar/stable-diffusion-mat-outpainting-primer/torch_utils/ops/fma.py +++ /dev/null @@ -1,60 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Fused multiply-add, with slightly faster gradients than `torch.addcmul()`.""" - -import torch - -#---------------------------------------------------------------------------- - -def fma(a, b, c): # => a * b + c - return _FusedMultiplyAdd.apply(a, b, c) - -#---------------------------------------------------------------------------- - -class _FusedMultiplyAdd(torch.autograd.Function): # a * b + c - @staticmethod - def forward(ctx, a, b, c): # pylint: disable=arguments-differ - out = torch.addcmul(c, a, b) - ctx.save_for_backward(a, b) - ctx.c_shape = c.shape - return out - - @staticmethod - def backward(ctx, dout): # pylint: disable=arguments-differ - a, b = ctx.saved_tensors - c_shape = ctx.c_shape - da = None - db = None - dc = None - - if ctx.needs_input_grad[0]: - da = _unbroadcast(dout * b, a.shape) - - if ctx.needs_input_grad[1]: - db = _unbroadcast(dout * a, b.shape) - - if ctx.needs_input_grad[2]: - dc = _unbroadcast(dout, c_shape) - - return da, db, dc - -#---------------------------------------------------------------------------- - -def _unbroadcast(x, shape): - extra_dims = x.ndim - len(shape) - assert extra_dims >= 0 - dim = [i for i in range(x.ndim) if x.shape[i] > 1 and (i < extra_dims or shape[i - extra_dims] == 1)] - if len(dim): - x = x.sum(dim=dim, keepdim=True) - if extra_dims: - x = x.reshape(-1, *x.shape[extra_dims+1:]) - assert x.shape == shape - return x - -#---------------------------------------------------------------------------- diff --git a/spaces/blanchon/qrcode-diffusion/README.md b/spaces/blanchon/qrcode-diffusion/README.md deleted file mode 100644 index c52bf2432863f96587f54cc7ac6e8b8b966eee5a..0000000000000000000000000000000000000000 --- a/spaces/blanchon/qrcode-diffusion/README.md +++ /dev/null @@ -1,39 +0,0 @@ ---- -title: QrCode Diffusion -emoji: 📱 -colorFrom: red -colorTo: yellow -python_version: 3.10.11 -sdk: gradio -sdk_version: 3.34.0 -app_file: app.py -tags: [qrcode, stable-diffusion, controlnet] -pinned: true ---- - -# QrCode Diffusion - -## 
Description - -This is a simple application that allows you to generate a QrCode and apply a stable diffusion algorithm to it. The diffusion algorithm used is the ControlNet algorithm. - -## How to use - -```python -python app.py -``` - -And then go to the link that appears in the terminal. - -## References - -- ControlNet -- Stable Diffusion - -## Credits - -The original idea is from [nhciao](https://www.reddit.com/user/nhciao/) ([Twitter](https://twitter.com/nhciao)) and [this post](https://www.reddit.com/r/StableDiffusion/comments/141hg9x/controlnet_for_qr_code/). - -## Other - -This is also fun <https://qrbtf.com/> and [open source ](https://github.com/ciaochaos/qrbtf). \ No newline at end of file diff --git a/spaces/brainblow/AI-TV/Dockerfile b/spaces/brainblow/AI-TV/Dockerfile deleted file mode 100644 index 8e670518dbe8c6f90ca81d1aafdda840ebdff7b4..0000000000000000000000000000000000000000 --- a/spaces/brainblow/AI-TV/Dockerfile +++ /dev/null @@ -1,35 +0,0 @@ -FROM node:18 - - -ARG DEBIAN_FRONTEND=noninteractive - -RUN apt update - -RUN apt --yes install ffmpeg - -# Set up a new user named "user" with user ID 1000 -RUN useradd -o -u 1000 user - -# Switch to the "user" user -USER user - -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Install app dependencies -# A wildcard is used to ensure both package.json AND package-lock.json are copied -# where available (npm@5+) -COPY --chown=user package*.json $HOME/app - -RUN npm install - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . $HOME/app - -EXPOSE 7860 1935 8000 - -CMD [ "npm", "run", "start" ] \ No newline at end of file diff --git a/spaces/breadlicker45/the-jam-machine-app/decoder.py b/spaces/breadlicker45/the-jam-machine-app/decoder.py deleted file mode 100644 index a56cdc377b968815dd379f4cf7e0287aa977d5d7..0000000000000000000000000000000000000000 --- a/spaces/breadlicker45/the-jam-machine-app/decoder.py +++ /dev/null @@ -1,197 +0,0 @@ -from utils import * -from familizer import Familizer -from miditok import Event - - -class TextDecoder: - """Decodes text into: - 1- List of events - 2- Then converts these events to midi file via MidiTok and miditoolkit - - :param tokenizer: from MidiTok - - Usage with write_to_midi method: - args: text(String) example -> PIECE_START TRACK_START INST=25 DENSITY=2 BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50...BAR_END TRACK_END - returns: midi file from miditoolkit - """ - - def __init__(self, tokenizer, familized=True): - self.tokenizer = tokenizer - self.familized = familized - - def decode(self, text): - r"""converts from text to instrument events - Args: - text (String): example -> PIECE_START TRACK_START INST=25 DENSITY=2 BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50...BAR_END TRACK_END - - Returns: - Dict{inst_id: List[Events]}: List of events of Notes with velocities, aggregated Timeshifts, for each instrument - """ - piece_events = self.text_to_events(text) - inst_events = self.piece_to_inst_events(piece_events) - events = self.add_timeshifts_for_empty_bars(inst_events) - events = self.aggregate_timeshifts(events) - events = self.add_velocity(events) - return events - - def tokenize(self, events): - r"""converts from events to MidiTok tokens - Args: - events (Dict{inst_id: List[Events]}): List of events for each instrument - - Returns: - List[List[Events]]: List of tokens for each 
instrument - """ - tokens = [] - for inst in events.keys(): - tokens.append(self.tokenizer.events_to_tokens(events[inst])) - return tokens - - def get_midi(self, text, filename=None): - r"""converts from text to midi - Args: - text (String): example -> PIECE_START TRACK_START INST=25 DENSITY=2 BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50...BAR_END TRACK_END - - Returns: - miditoolkit midi: Returns and writes to midi - """ - events = self.decode(text) - tokens = self.tokenize(events) - instruments = self.get_instruments_tuple(events) - midi = self.tokenizer.tokens_to_midi(tokens, instruments) - - if filename is not None: - midi.dump(f"{filename}") - print(f"midi file written: {filename}") - - return midi - - @staticmethod - def text_to_events(text): - events = [] - for word in text.split(" "): - # TODO: Handle bar and track values with a counter - _event = word.split("=") - value = _event[1] if len(_event) > 1 else None - event = get_event(_event[0], value) - if event: - events.append(event) - return events - - @staticmethod - def piece_to_inst_events(piece_events): - """Converts piece events of 8 bars to instrument events for entire song - - Args: - piece_events (List[Events]): List of events of Notes, Timeshifts, Bars, Tracks - - Returns: - Dict{inst_id: List[Events]}: List of events for each instrument - - """ - inst_events = {} - current_instrument = -1 - for event in piece_events: - if event.type == "Instrument": - current_instrument = event.value - if current_instrument not in inst_events: - inst_events[current_instrument] = [] - elif current_instrument != -1: - inst_events[current_instrument].append(event) - return inst_events - - @staticmethod - def add_timeshifts_for_empty_bars(inst_events): - """Adds time shift events instead of consecutive [BAR_START BAR_END] events""" - new_inst_events = {} - for inst, events in inst_events.items(): - new_inst_events[inst] = [] - for index, event in enumerate(events): - if event.type == "Bar-End" or event.type == "Bar-Start": - if events[index - 1].type == "Bar-Start": - new_inst_events[inst].append(Event("Time-Shift", "4.0.8")) - else: - new_inst_events[inst].append(event) - return new_inst_events - - @staticmethod - def add_timeshifts(beat_values1, beat_values2): - """Adds two beat values - - Args: - beat_values1 (String): like 0.3.8 - beat_values2 (String): like 1.7.8 - - Returns: - beat_str (String): added beats like 2.2.8 for example values - """ - value1 = to_base10(beat_values1) - value2 = to_base10(beat_values2) - return to_beat_str(value1 + value2) - - def aggregate_timeshifts(self, events): - """Aggregates consecutive time shift events bigger than a bar - -> like Timeshift 4.0.8 - - Args: - events (_type_): _description_ - - Returns: - _type_: _description_ - """ - new_events = {} - for inst, events in events.items(): - inst_events = [] - for i, event in enumerate(events): - if ( - event.type == "Time-Shift" - and len(inst_events) > 0 - and inst_events[-1].type == "Time-Shift" - ): - inst_events[-1].value = self.add_timeshifts( - inst_events[-1].value, event.value - ) - else: - inst_events.append(event) - new_events[inst] = inst_events - return new_events - - @staticmethod - def add_velocity(events): - """Adds default velocity 99 to note events since they are removed from text, needed to generate midi""" - new_events = {} - for inst, events in events.items(): - inst_events = [] - for event in events: - inst_events.append(event) - if event.type == "Note-On": - inst_events.append(Event("Velocity", 99)) - new_events[inst] = inst_events - 
return new_events - - def get_instruments_tuple(self, events): - """Returns instruments tuple for midi generation""" - instruments = [] - for inst in events.keys(): - is_drum = 0 - if inst == "DRUMS": - inst = 0 - is_drum = 1 - if self.familized: - inst = Familizer(arbitrary=True).get_program_number(int(inst)) - instruments.append((int(inst), is_drum)) - return tuple(instruments) - - -if __name__ == "__main__": - - filename = "midi/generated/misnaej/the-jam-machine-elec-famil/20221209_175750" - encoded_json = readFromFile( - f"{filename}.json", - True, - ) - encoded_text = encoded_json["sequence"] - # encoded_text = "PIECE_START TRACK_START INST=25 DENSITY=2 BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=67 NOTE_ON=64 TIME_DELTA=1 NOTE_OFF=67 NOTE_OFF=64 BAR_END BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 BAR_END BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=67 NOTE_ON=64 TIME_DELTA=1 NOTE_OFF=67 NOTE_OFF=64 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 BAR_END BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 
NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=67 NOTE_ON=64 TIME_DELTA=1 NOTE_OFF=67 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 BAR_END BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=69 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=69 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=57 TIME_DELTA=1 NOTE_OFF=57 NOTE_ON=56 TIME_DELTA=1 NOTE_OFF=56 NOTE_ON=64 NOTE_ON=60 NOTE_ON=55 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=55 BAR_END BAR_START NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=66 NOTE_ON=62 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=66 NOTE_OFF=62 NOTE_OFF=50 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=67 NOTE_ON=64 TIME_DELTA=1 NOTE_OFF=67 NOTE_OFF=64 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=50 NOTE_ON=64 NOTE_ON=60 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=64 NOTE_OFF=60 NOTE_OFF=50 NOTE_ON=59 NOTE_ON=55 NOTE_ON=50 TIME_DELTA=1 NOTE_OFF=59 NOTE_OFF=50 NOTE_OFF=55 NOTE_OFF=50 BAR_END BAR_START BAR_END TRACK_END" - - miditok = get_miditok() - TextDecoder(miditok).get_midi(encoded_text, filename=filename) diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/datasets/README.md b/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/datasets/README.md deleted file mode 100644 index 9fb3e4f7afec17137c95c78be6ef06d520ec8032..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/detectron2/data/datasets/README.md +++ /dev/null @@ -1,9 +0,0 @@ - - -### Common Datasets - -The dataset implemented here do not need to load the data into the final format. -It should provide the minimal data structure needed to use the dataset, so it can be very efficient. - -For example, for an image dataset, just provide the file names and labels, but don't read the images. -Let the downstream decide how to read. 
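
A minimal sketch of the lightweight-dataset convention the README above describes: the dataset is registered as a plain list of dicts carrying only file names and labels, and the image is never opened at registration time — the downstream dataloader decides how to read it. The dataset name, file paths, and class name below are made up for illustration; only the `DatasetCatalog` / `MetadataCatalog` registration pattern is detectron2's actual API.

```python
from detectron2.data import DatasetCatalog, MetadataCatalog

def get_my_dataset_dicts():
    # Only minimal metadata here (hypothetical paths and labels);
    # no image is read at this point.
    return [
        {"file_name": "images/0001.jpg", "image_id": 1, "annotations": []},
        {"file_name": "images/0002.jpg", "image_id": 2, "annotations": []},
    ]

# Register the callable; it is invoked lazily when the dataset is actually used.
DatasetCatalog.register("my_lightweight_dataset", get_my_dataset_dicts)
MetadataCatalog.get("my_lightweight_dataset").thing_classes = ["example_class"]
```
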
diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/utils.py b/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/utils.py deleted file mode 100644 index 2e76eb9535a68dcb4ccb065556c55289294e42c8..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/projects/DensePose/densepose/modeling/utils.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. - -from torch import nn - - -def initialize_module_params(module: nn.Module) -> None: - for name, param in module.named_parameters(): - if "bias" in name: - nn.init.constant_(param, 0) - elif "weight" in name: - nn.init.kaiming_normal_(param, mode="fan_out", nonlinearity="relu") diff --git a/spaces/brjathu/HMR2.0/vendor/detectron2/tools/plain_train_net.py b/spaces/brjathu/HMR2.0/vendor/detectron2/tools/plain_train_net.py deleted file mode 100644 index be4588e559816727635ce287281df3d41514a8cc..0000000000000000000000000000000000000000 --- a/spaces/brjathu/HMR2.0/vendor/detectron2/tools/plain_train_net.py +++ /dev/null @@ -1,217 +0,0 @@ -#!/usr/bin/env python -# Copyright (c) Facebook, Inc. and its affiliates. -""" -Detectron2 training script with a plain training loop. - -This script reads a given config file and runs the training or evaluation. -It is an entry point that is able to train standard models in detectron2. - -In order to let one script support training of many models, -this script contains logic that are specific to these built-in models and therefore -may not be suitable for your own project. -For example, your research project perhaps only needs a single "evaluator". - -Therefore, we recommend you to use detectron2 as a library and take -this file as an example of how to use the library. -You may want to write your own script with your datasets and other customizations. - -Compared to "train_net.py", this script supports fewer default features. -It also includes fewer abstraction, therefore is easier to add custom logic. -""" - -import logging -import os -from collections import OrderedDict -import torch -from torch.nn.parallel import DistributedDataParallel - -import detectron2.utils.comm as comm -from detectron2.checkpoint import DetectionCheckpointer, PeriodicCheckpointer -from detectron2.config import get_cfg -from detectron2.data import ( - MetadataCatalog, - build_detection_test_loader, - build_detection_train_loader, -) -from detectron2.engine import default_argument_parser, default_setup, default_writers, launch -from detectron2.evaluation import ( - CityscapesInstanceEvaluator, - CityscapesSemSegEvaluator, - COCOEvaluator, - COCOPanopticEvaluator, - DatasetEvaluators, - LVISEvaluator, - PascalVOCDetectionEvaluator, - SemSegEvaluator, - inference_on_dataset, - print_csv_format, -) -from detectron2.modeling import build_model -from detectron2.solver import build_lr_scheduler, build_optimizer -from detectron2.utils.events import EventStorage - -logger = logging.getLogger("detectron2") - - -def get_evaluator(cfg, dataset_name, output_folder=None): - """ - Create evaluator(s) for a given dataset. - This uses the special metadata "evaluator_type" associated with each builtin dataset. - For your own dataset, you can simply create an evaluator manually in your - script and do not have to worry about the hacky if-else logic here. 
- """ - if output_folder is None: - output_folder = os.path.join(cfg.OUTPUT_DIR, "inference") - evaluator_list = [] - evaluator_type = MetadataCatalog.get(dataset_name).evaluator_type - if evaluator_type in ["sem_seg", "coco_panoptic_seg"]: - evaluator_list.append( - SemSegEvaluator( - dataset_name, - distributed=True, - output_dir=output_folder, - ) - ) - if evaluator_type in ["coco", "coco_panoptic_seg"]: - evaluator_list.append(COCOEvaluator(dataset_name, output_dir=output_folder)) - if evaluator_type == "coco_panoptic_seg": - evaluator_list.append(COCOPanopticEvaluator(dataset_name, output_folder)) - if evaluator_type == "cityscapes_instance": - return CityscapesInstanceEvaluator(dataset_name) - if evaluator_type == "cityscapes_sem_seg": - return CityscapesSemSegEvaluator(dataset_name) - if evaluator_type == "pascal_voc": - return PascalVOCDetectionEvaluator(dataset_name) - if evaluator_type == "lvis": - return LVISEvaluator(dataset_name, cfg, True, output_folder) - if len(evaluator_list) == 0: - raise NotImplementedError( - "no Evaluator for the dataset {} with the type {}".format(dataset_name, evaluator_type) - ) - if len(evaluator_list) == 1: - return evaluator_list[0] - return DatasetEvaluators(evaluator_list) - - -def do_test(cfg, model): - results = OrderedDict() - for dataset_name in cfg.DATASETS.TEST: - data_loader = build_detection_test_loader(cfg, dataset_name) - evaluator = get_evaluator( - cfg, dataset_name, os.path.join(cfg.OUTPUT_DIR, "inference", dataset_name) - ) - results_i = inference_on_dataset(model, data_loader, evaluator) - results[dataset_name] = results_i - if comm.is_main_process(): - logger.info("Evaluation results for {} in csv format:".format(dataset_name)) - print_csv_format(results_i) - if len(results) == 1: - results = list(results.values())[0] - return results - - -def do_train(cfg, model, resume=False): - model.train() - optimizer = build_optimizer(cfg, model) - scheduler = build_lr_scheduler(cfg, optimizer) - - checkpointer = DetectionCheckpointer( - model, cfg.OUTPUT_DIR, optimizer=optimizer, scheduler=scheduler - ) - start_iter = ( - checkpointer.resume_or_load(cfg.MODEL.WEIGHTS, resume=resume).get("iteration", -1) + 1 - ) - max_iter = cfg.SOLVER.MAX_ITER - - periodic_checkpointer = PeriodicCheckpointer( - checkpointer, cfg.SOLVER.CHECKPOINT_PERIOD, max_iter=max_iter - ) - - writers = default_writers(cfg.OUTPUT_DIR, max_iter) if comm.is_main_process() else [] - - # compared to "train_net.py", we do not support accurate timing and - # precise BN here, because they are not trivial to implement in a small training loop - data_loader = build_detection_train_loader(cfg) - logger.info("Starting training from iteration {}".format(start_iter)) - with EventStorage(start_iter) as storage: - for data, iteration in zip(data_loader, range(start_iter, max_iter)): - storage.iter = iteration - - loss_dict = model(data) - losses = sum(loss_dict.values()) - assert torch.isfinite(losses).all(), loss_dict - - loss_dict_reduced = {k: v.item() for k, v in comm.reduce_dict(loss_dict).items()} - losses_reduced = sum(loss for loss in loss_dict_reduced.values()) - if comm.is_main_process(): - storage.put_scalars(total_loss=losses_reduced, **loss_dict_reduced) - - optimizer.zero_grad() - losses.backward() - optimizer.step() - storage.put_scalar("lr", optimizer.param_groups[0]["lr"], smoothing_hint=False) - scheduler.step() - - if ( - cfg.TEST.EVAL_PERIOD > 0 - and (iteration + 1) % cfg.TEST.EVAL_PERIOD == 0 - and iteration != max_iter - 1 - ): - do_test(cfg, model) - # 
Compared to "train_net.py", the test results are not dumped to EventStorage - comm.synchronize() - - if iteration - start_iter > 5 and ( - (iteration + 1) % 20 == 0 or iteration == max_iter - 1 - ): - for writer in writers: - writer.write() - periodic_checkpointer.step(iteration) - - -def setup(args): - """ - Create configs and perform basic setups. - """ - cfg = get_cfg() - cfg.merge_from_file(args.config_file) - cfg.merge_from_list(args.opts) - cfg.freeze() - default_setup( - cfg, args - ) # if you don't like any of the default setup, write your own setup code - return cfg - - -def main(args): - cfg = setup(args) - - model = build_model(cfg) - logger.info("Model:\n{}".format(model)) - if args.eval_only: - DetectionCheckpointer(model, save_dir=cfg.OUTPUT_DIR).resume_or_load( - cfg.MODEL.WEIGHTS, resume=args.resume - ) - return do_test(cfg, model) - - distributed = comm.get_world_size() > 1 - if distributed: - model = DistributedDataParallel( - model, device_ids=[comm.get_local_rank()], broadcast_buffers=False - ) - - do_train(cfg, model, resume=args.resume) - return do_test(cfg, model) - - -if __name__ == "__main__": - args = default_argument_parser().parse_args() - print("Command Line Args:", args) - launch( - main, - args.num_gpus, - num_machines=args.num_machines, - machine_rank=args.machine_rank, - dist_url=args.dist_url, - args=(args,), - ) diff --git a/spaces/camenduru-com/one-shot-talking-face/README.md b/spaces/camenduru-com/one-shot-talking-face/README.md deleted file mode 100644 index 29d4db968d1389c3282a7009adca6686f28b15a7..0000000000000000000000000000000000000000 --- a/spaces/camenduru-com/one-shot-talking-face/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: One Shot Talking Face -emoji: 🌞 -colorFrom: blue -colorTo: blue -sdk: docker -sdk_version: 3.9 -app_file: oh-no.py -pinned: false ---- \ No newline at end of file diff --git a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/data/test_dataset.py b/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/data/test_dataset.py deleted file mode 100644 index 3aeba33fb2d8a2d9f1b9f37abc3a0a6104f0aace..0000000000000000000000000000000000000000 --- a/spaces/carlosalonso/Detection-video/carpeta_deteccion/tests/data/test_dataset.py +++ /dev/null @@ -1,185 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
- -import os -import pickle -import sys -import unittest -from functools import partial -import torch -from iopath.common.file_io import LazyPath - -from detectron2 import model_zoo -from detectron2.config import get_cfg, instantiate -from detectron2.data import ( - DatasetCatalog, - DatasetFromList, - MapDataset, - ToIterableDataset, - build_batch_data_loader, - build_detection_test_loader, - build_detection_train_loader, -) -from detectron2.data.common import ( - AspectRatioGroupedDataset, - set_default_dataset_from_list_serialize_method, -) -from detectron2.data.samplers import InferenceSampler, TrainingSampler - - -def _a_slow_func(x): - return "path/{}".format(x) - - -class TestDatasetFromList(unittest.TestCase): - # Failing for py3.6, likely due to pickle - @unittest.skipIf(sys.version_info.minor <= 6, "Not supported in Python 3.6") - def test_using_lazy_path(self): - dataset = [] - for i in range(10): - dataset.append({"file_name": LazyPath(partial(_a_slow_func, i))}) - - dataset = DatasetFromList(dataset) - for i in range(10): - path = dataset[i]["file_name"] - self.assertTrue(isinstance(path, LazyPath)) - self.assertEqual(os.fspath(path), _a_slow_func(i)) - - def test_alternative_serialize_method(self): - dataset = [1, 2, 3] - dataset = DatasetFromList(dataset, serialize=torch.tensor) - self.assertEqual(dataset[2], torch.tensor(3)) - - def test_change_default_serialize_method(self): - dataset = [1, 2, 3] - with set_default_dataset_from_list_serialize_method(torch.tensor): - dataset_1 = DatasetFromList(dataset, serialize=True) - self.assertEqual(dataset_1[2], torch.tensor(3)) - dataset_2 = DatasetFromList(dataset, serialize=True) - self.assertEqual(dataset_2[2], 3) - - -class TestMapDataset(unittest.TestCase): - @staticmethod - def map_func(x): - if x == 2: - return None - return x * 2 - - def test_map_style(self): - ds = DatasetFromList([1, 2, 3]) - ds = MapDataset(ds, TestMapDataset.map_func) - self.assertEqual(ds[0], 2) - self.assertEqual(ds[2], 6) - self.assertIn(ds[1], [2, 6]) - - def test_iter_style(self): - class DS(torch.utils.data.IterableDataset): - def __iter__(self): - yield from [1, 2, 3] - - ds = DS() - ds = MapDataset(ds, TestMapDataset.map_func) - self.assertIsInstance(ds, torch.utils.data.IterableDataset) - - data = list(iter(ds)) - self.assertEqual(data, [2, 6]) - - def test_pickleability(self): - ds = DatasetFromList([1, 2, 3]) - ds = MapDataset(ds, lambda x: x * 2) - ds = pickle.loads(pickle.dumps(ds)) - self.assertEqual(ds[0], 2) - - -class TestAspectRatioGrouping(unittest.TestCase): - def test_reiter_leak(self): - data = [(1, 0), (0, 1), (1, 0), (0, 1)] - data = [{"width": a, "height": b} for (a, b) in data] - batchsize = 2 - dataset = AspectRatioGroupedDataset(data, batchsize) - - for _ in range(5): - for idx, __ in enumerate(dataset): - if idx == 1: - # manually break, so the iterator does not stop by itself - break - # check that bucket sizes are valid - for bucket in dataset._buckets: - self.assertLess(len(bucket), batchsize) - - -class TestDataLoader(unittest.TestCase): - def _get_kwargs(self): - # get kwargs of build_detection_train_loader - cfg = model_zoo.get_config("common/data/coco.py").dataloader.train - cfg.dataset.names = "coco_2017_val_100" - cfg.pop("_target_") - kwargs = {k: instantiate(v) for k, v in cfg.items()} - return kwargs - - def test_build_dataloader_train(self): - kwargs = self._get_kwargs() - dl = build_detection_train_loader(**kwargs) - next(iter(dl)) - - def test_build_iterable_dataloader_train(self): - kwargs = self._get_kwargs() 
- ds = DatasetFromList(kwargs.pop("dataset")) - ds = ToIterableDataset(ds, TrainingSampler(len(ds))) - dl = build_detection_train_loader(dataset=ds, **kwargs) - next(iter(dl)) - - def test_build_iterable_dataloader_from_cfg(self): - cfg = get_cfg() - - class MyData(torch.utils.data.IterableDataset): - def __iter__(self): - while True: - yield 1 - - cfg.DATASETS.TRAIN = ["iter_data"] - DatasetCatalog.register("iter_data", lambda: MyData()) - dl = build_detection_train_loader(cfg, mapper=lambda x: x, aspect_ratio_grouping=False) - next(iter(dl)) - - dl = build_detection_test_loader(cfg, "iter_data", mapper=lambda x: x) - next(iter(dl)) - - def _check_is_range(self, data_loader, N): - # check that data_loader produces range(N) - data = list(iter(data_loader)) - data = [x for batch in data for x in batch] # flatten the batches - self.assertEqual(len(data), N) - self.assertEqual(set(data), set(range(N))) - - def test_build_batch_dataloader_inference(self): - # Test that build_batch_data_loader can be used for inference - N = 96 - ds = DatasetFromList(list(range(N))) - sampler = InferenceSampler(len(ds)) - dl = build_batch_data_loader(ds, sampler, 8, num_workers=3) - self._check_is_range(dl, N) - - def test_build_dataloader_inference(self): - N = 50 - ds = DatasetFromList(list(range(N))) - sampler = InferenceSampler(len(ds)) - # test that parallel loader works correctly - dl = build_detection_test_loader( - dataset=ds, sampler=sampler, mapper=lambda x: x, num_workers=3 - ) - self._check_is_range(dl, N) - - # test that batch_size works correctly - dl = build_detection_test_loader( - dataset=ds, sampler=sampler, mapper=lambda x: x, batch_size=4, num_workers=0 - ) - self._check_is_range(dl, N) - - def test_build_iterable_dataloader_inference(self): - # Test that build_detection_test_loader supports iterable dataset - N = 50 - ds = DatasetFromList(list(range(N))) - ds = ToIterableDataset(ds, InferenceSampler(len(ds))) - dl = build_detection_test_loader(dataset=ds, mapper=lambda x: x, num_workers=3) - self._check_is_range(dl, N) diff --git a/spaces/catgirlss/kittens/Dockerfile b/spaces/catgirlss/kittens/Dockerfile deleted file mode 100644 index eef259fa372a804549fb0af0913718a13344da34..0000000000000000000000000000000000000000 --- a/spaces/catgirlss/kittens/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ - apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] diff --git a/spaces/ccds/vits_onnx/export/vits/text/cleaners.py b/spaces/ccds/vits_onnx/export/vits/text/cleaners.py deleted file mode 100644 index 657951af902d0884b1ecc110e2ea932c6903b50a..0000000000000000000000000000000000000000 --- a/spaces/ccds/vits_onnx/export/vits/text/cleaners.py +++ /dev/null @@ -1,58 +0,0 @@ -import re -from unidecode import unidecode -import pyopenjtalk - -pyopenjtalk._lazy_init() - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - - -def japanese_cleaners(text): - '''Pipeline for notating accent in Japanese text.''' - 
'''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = '' - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - if text != '': - text += ' ' - labels = pyopenjtalk.extract_fullcontext(sentence) - for n, label in enumerate(labels): - phoneme = re.search(r'\-([^\+]*)\+', label).group(1) - if phoneme not in ['sil', 'pau']: - text += phoneme.replace('ch', 'ʧ').replace('sh', 'ʃ').replace('cl', 'Q') - else: - continue - n_moras = int(re.search(r'/F:(\d+)_', label).group(1)) - a1 = int(re.search(r"/A:(\-?[0-9]+)\+", label).group(1)) - a2 = int(re.search(r"\+(\d+)\+", label).group(1)) - a3 = int(re.search(r"\+(\d+)/", label).group(1)) - if re.search(r'\-([^\+]*)\+', labels[n + 1]).group(1) in ['sil', 'pau']: - a2_next = -1 - else: - a2_next = int(re.search(r"\+(\d+)\+", labels[n + 1]).group(1)) - # Accent phrase boundary - if a3 == 1 and a2_next == 1: - text += ' ' - # Falling - elif a1 == 0 and a2_next == a2 + 1 and a2 != n_moras: - text += '↓' - # Rising - elif a2 == 1 and a2_next == 2: - text += '↑' - if i < len(marks): - text += unidecode(marks[i]).replace(' ', '') - if re.match('[A-Za-z]', text[-1]): - text += '.' - return text - - -def japanese_cleaners2(text): - return japanese_cleaners(text).replace('ts','ʦ').replace('...','…') diff --git a/spaces/chansung/LLaMA-13B/app.py b/spaces/chansung/LLaMA-13B/app.py deleted file mode 100644 index d52136fa5c99f33a8a775a433bab464cd2c50553..0000000000000000000000000000000000000000 --- a/spaces/chansung/LLaMA-13B/app.py +++ /dev/null @@ -1,96 +0,0 @@ -import os -import time -import torch -import gradio as gr - -from strings import TITLE, ABSTRACT, EXAMPLES -from gen import get_pretrained_models, get_output - -generator = get_pretrained_models("13B", "tokenizer") - -history = [] - -def chat( - user_input, - include_input, - truncate, - top_p, - temperature, - max_gen_len, - state_chatbot -): - bot_response = get_output( - generator=generator, - prompt=user_input, - max_gen_len=max_gen_len, - temperature=temperature, - top_p=top_p)[0] - - # remove the first phrase identical to user prompt - if not include_input: - bot_response = bot_response[len(user_input):] - bot_response = bot_response.replace("\n", "<br>") - - # trip the last phrase - if truncate: - try: - bot_response = bot_response[:bot_response.rfind(".")+1] - except: - pass - - history.append({ - "role": "user", - "content": user_input - }) - history.append({ - "role": "system", - "content": bot_response - }) - - state_chatbot = state_chatbot + [(user_input, None)] - - response = "" - for word in bot_response.split(" "): - time.sleep(0.1) - response += word + " " - current_pair = (user_input, response) - state_chatbot[-1] = current_pair - yield state_chatbot, state_chatbot - -def reset_textbox(): - return gr.update(value='') - -with gr.Blocks(css = """#col_container {width: 95%; margin-left: auto; margin-right: auto;} - #chatbot {height: 400px; overflow: auto;}""") as demo: - - state_chatbot = gr.State([]) - - with gr.Column(elem_id='col_container'): - gr.Markdown(f"## {TITLE}\n\n\n\n{ABSTRACT}") - - with gr.Accordion("Example prompts", open=False): - example_str = "\n" - for example in EXAMPLES: - example_str += f"- {example}\n" - - gr.Markdown(example_str) - - chatbot = gr.Chatbot(elem_id='chatbot') - textbox = gr.Textbox(placeholder="Enter a prompt") - - with gr.Accordion("Parameters", open=False): - include_input = 
gr.Checkbox(value=True, label="Do you want to include the input in the generated text?") - truncate = gr.Checkbox(value=True, label="Truncate the unfinished last words?") - - max_gen_len = gr.Slider(minimum=20, maximum=512, value=256, step=1, interactive=True, label="Max Genenration Length",) - top_p = gr.Slider(minimum=-0, maximum=1.0, value=1.0, step=0.05, interactive=True, label="Top-p (nucleus sampling)",) - temperature = gr.Slider(minimum=-0, maximum=5.0, value=1.0, step=0.1, interactive=True, label="Temperature",) - - textbox.submit( - chat, - [textbox, include_input, truncate, top_p, temperature, max_gen_len, state_chatbot], - [state_chatbot, chatbot] - ) - textbox.submit(reset_textbox, [], [textbox]) - -demo.queue(api_open=False).launch() \ No newline at end of file diff --git a/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/ok_vqa_utils.py b/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/ok_vqa_utils.py deleted file mode 100644 index 2db61942fd0263213fe92b1025e906e59220380b..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/multimodal/open_flamingo/eval/ok_vqa_utils.py +++ /dev/null @@ -1,213 +0,0 @@ -# Those are manual mapping that are not caught by our stemming rules or would -# would be done incorrectly by our automatic stemming rule. In details, -# the keys of the _MANUAL_MATCHES dict contains the original word and the value -# contains the transformation of the word expected by the OKVQA stemming rule. -# These manual rules were found by checking the `raw_answers` and the `answers` -# fields of the released OKVQA dataset and checking all things that were not -# properly mapped by our automatic rules. In particular some of the mapping -# are sometimes constant, e.g. christmas -> christmas which was incorrectly -# singularized by our inflection.singularize. 
-import re -import nltk -from nltk.corpus.reader import VERB -import inflection - -_MANUAL_MATCHES = { - "police": "police", - "las": "las", - "vegas": "vegas", - "yes": "yes", - "jeans": "jean", - "hell's": "hell", - "domino's": "domino", - "morning": "morn", - "clothes": "cloth", - "are": "are", - "riding": "ride", - "leaves": "leaf", - "dangerous": "danger", - "clothing": "cloth", - "texting": "text", - "kiting": "kite", - "firefighters": "firefight", - "ties": "tie", - "married": "married", - "teething": "teeth", - "gloves": "glove", - "tennis": "tennis", - "dining": "dine", - "directions": "direct", - "waves": "wave", - "christmas": "christmas", - "drives": "drive", - "pudding": "pud", - "coding": "code", - "plating": "plate", - "quantas": "quanta", - "hornes": "horn", - "graves": "grave", - "mating": "mate", - "paned": "pane", - "alertness": "alert", - "sunbathing": "sunbath", - "tenning": "ten", - "wetness": "wet", - "urinating": "urine", - "sickness": "sick", - "braves": "brave", - "firefighting": "firefight", - "lenses": "lens", - "reflections": "reflect", - "backpackers": "backpack", - "eatting": "eat", - "designers": "design", - "curiousity": "curious", - "playfulness": "play", - "blindness": "blind", - "hawke": "hawk", - "tomatoe": "tomato", - "rodeoing": "rodeo", - "brightness": "bright", - "circuses": "circus", - "skateboarders": "skateboard", - "staring": "stare", - "electronics": "electron", - "electicity": "elect", - "mountainous": "mountain", - "socializing": "social", - "hamburgers": "hamburg", - "caves": "cave", - "transitions": "transit", - "wading": "wade", - "creame": "cream", - "toileting": "toilet", - "sautee": "saute", - "buildings": "build", - "belongings": "belong", - "stockings": "stock", - "walle": "wall", - "cumulis": "cumuli", - "travelers": "travel", - "conducter": "conduct", - "browsing": "brows", - "pooping": "poop", - "haircutting": "haircut", - "toppings": "top", - "hearding": "heard", - "sunblocker": "sunblock", - "bases": "base", - "markings": "mark", - "mopeds": "mope", - "kindergartener": "kindergarten", - "pies": "pie", - "scrapbooking": "scrapbook", - "couponing": "coupon", - "meetings": "meet", - "elevators": "elev", - "lowes": "low", - "men's": "men", - "childrens": "children", - "shelves": "shelve", - "paintings": "paint", - "raines": "rain", - "paring": "pare", - "expressions": "express", - "routes": "rout", - "pease": "peas", - "vastness": "vast", - "awning": "awn", - "boy's": "boy", - "drunkenness": "drunken", - "teasing": "teas", - "conferences": "confer", - "ripeness": "ripe", - "suspenders": "suspend", - "earnings": "earn", - "reporters": "report", - "kid's": "kid", - "containers": "contain", - "corgie": "corgi", - "porche": "porch", - "microwaves": "microwave", - "batter's": "batter", - "sadness": "sad", - "apartments": "apart", - "oxygenize": "oxygen", - "striping": "stripe", - "purring": "pure", - "professionals": "profession", - "piping": "pipe", - "farmer's": "farmer", - "potatoe": "potato", - "emirates": "emir", - "womens": "women", - "veteran's": "veteran", - "wilderness": "wilder", - "propellers": "propel", - "alpes": "alp", - "charioteering": "chariot", - "swining": "swine", - "illness": "ill", - "crepte": "crept", - "adhesives": "adhesive", - "regent's": "regent", - "decorations": "decor", - "rabbies": "rabbi", - "overseas": "oversea", - "travellers": "travel", - "casings": "case", - "smugness": "smug", - "doves": "dove", - "nationals": "nation", - "mustange": "mustang", - "ringe": "ring", - "gondoliere": "gondolier", - 
"vacationing": "vacate", - "reminders": "remind", - "baldness": "bald", - "settings": "set", - "glaced": "glace", - "coniferous": "conifer", - "revelations": "revel", - "personals": "person", - "daughter's": "daughter", - "badness": "bad", - "projections": "project", - "polarizing": "polar", - "vandalizers": "vandal", - "minerals": "miner", - "protesters": "protest", - "controllers": "control", - "weddings": "wed", - "sometimes": "sometime", - "earing": "ear", -} - - -class OKVQAStemmer: - """Stemmer to match OKVQA v1.1 procedure.""" - - def __init__(self): - self._wordnet_lemmatizer = nltk.stem.WordNetLemmatizer() - - def stem(self, input_string): - """Apply stemming.""" - word_and_pos = nltk.pos_tag(nltk.tokenize.word_tokenize(input_string)) - stemmed_words = [] - for w, p in word_and_pos: - if w in _MANUAL_MATCHES: - w = _MANUAL_MATCHES[w] - elif w.endswith("ing"): - w = self._wordnet_lemmatizer.lemmatize(w, VERB) - elif p.startswith("NNS") or p.startswith("NNPS"): - w = inflection.singularize(w) - stemmed_words.append(w) - return " ".join(stemmed_words) - - -stemmer = OKVQAStemmer() - - -def postprocess_ok_vqa_generation(prediction) -> str: - prediction_stem = stemmer.stem(prediction) - return prediction_stem diff --git a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/xla_spawn.py b/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/xla_spawn.py deleted file mode 100644 index 5df6bfa2d5dc3105e38599e97abce22934991d8b..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/legacy/seq2seq/xla_spawn.py +++ /dev/null @@ -1,83 +0,0 @@ -# Copyright 2020 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
-""" -A simple launcher script for TPU training - -Inspired by https://github.com/pytorch/pytorch/blob/master/torch/distributed/launch.py - -:: - >>> python xla_spawn.py --num_cores=NUM_CORES_YOU_HAVE - YOUR_TRAINING_SCRIPT.py (--arg1 --arg2 --arg3 and all other - arguments of your training script) - -""" - - -import importlib -import sys -from argparse import REMAINDER, ArgumentParser -from pathlib import Path - -import torch_xla.distributed.xla_multiprocessing as xmp - - -def parse_args(): - """ - Helper function parsing the command line options - @retval ArgumentParser - """ - parser = ArgumentParser( - description=( - "PyTorch TPU distributed training launch helper utility that will spawn up multiple distributed processes" - ) - ) - - # Optional arguments for the launch helper - parser.add_argument("--num_cores", type=int, default=1, help="Number of TPU cores to use (1 or 8).") - - # positional - parser.add_argument( - "training_script", - type=str, - help=( - "The full path to the single TPU training " - "program/script to be launched in parallel, " - "followed by all the arguments for the " - "training script" - ), - ) - - # rest from the training program - parser.add_argument("training_script_args", nargs=REMAINDER) - - return parser.parse_args() - - -def main(): - args = parse_args() - - # Import training_script as a module. - script_fpath = Path(args.training_script) - sys.path.append(str(script_fpath.parent.resolve())) - mod_name = script_fpath.stem - mod = importlib.import_module(mod_name) - - # Patch sys.argv - sys.argv = [args.training_script] + args.training_script_args + ["--tpu_num_cores", str(args.num_cores)] - - xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) - - -if __name__ == "__main__": - main() diff --git a/spaces/chendl/compositional_test/transformers/examples/pytorch/question-answering/run_qa_beam_search.py b/spaces/chendl/compositional_test/transformers/examples/pytorch/question-answering/run_qa_beam_search.py deleted file mode 100644 index 7c78d453519a442fde1657bc287677e29f8cc6e3..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/pytorch/question-answering/run_qa_beam_search.py +++ /dev/null @@ -1,719 +0,0 @@ -#!/usr/bin/env python -# coding=utf-8 -# Copyright 2020 The HuggingFace Team All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Fine-tuning XLNet for question answering with beam search using a slightly adapted version of the 🤗 Trainer. -""" -# You can also adapt this script on your own question answering task. Pointers for this are left as comments. 
- -import logging -import os -import sys -from dataclasses import dataclass, field -from typing import Optional - -import datasets -import evaluate -from datasets import load_dataset -from trainer_qa import QuestionAnsweringTrainer -from utils_qa import postprocess_qa_predictions_with_beam_search - -import transformers -from transformers import ( - DataCollatorWithPadding, - EvalPrediction, - HfArgumentParser, - TrainingArguments, - XLNetConfig, - XLNetForQuestionAnswering, - XLNetTokenizerFast, - default_data_collator, - set_seed, -) -from transformers.trainer_utils import get_last_checkpoint -from transformers.utils import check_min_version, send_example_telemetry -from transformers.utils.versions import require_version - - -# Will error if the minimal version of Transformers is not installed. Remove at your own risks. -check_min_version("4.28.0") - -require_version("datasets>=1.8.0", "To fix: pip install -r examples/pytorch/question-answering/requirements.txt") - -logger = logging.getLogger(__name__) - - -@dataclass -class ModelArguments: - """ - Arguments pertaining to which model/config/tokenizer we are going to fine-tune from. - """ - - model_name_or_path: str = field( - metadata={"help": "Path to pretrained model or model identifier from huggingface.co/models"} - ) - config_name: Optional[str] = field( - default=None, metadata={"help": "Pretrained config name or path if not the same as model_name"} - ) - tokenizer_name: Optional[str] = field( - default=None, metadata={"help": "Pretrained tokenizer name or path if not the same as model_name"} - ) - cache_dir: Optional[str] = field( - default=None, - metadata={"help": "Where do you want to store the pretrained models downloaded from huggingface.co"}, - ) - model_revision: str = field( - default="main", - metadata={"help": "The specific model version to use (can be a branch name, tag name or commit id)."}, - ) - use_auth_token: bool = field( - default=False, - metadata={ - "help": ( - "Will use the token generated when running `huggingface-cli login` (necessary to use this script " - "with private models)." - ) - }, - ) - - -@dataclass -class DataTrainingArguments: - """ - Arguments pertaining to what data we are going to input our model for training and eval. - """ - - dataset_name: Optional[str] = field( - default=None, metadata={"help": "The name of the dataset to use (via the datasets library)."} - ) - dataset_config_name: Optional[str] = field( - default=None, metadata={"help": "The configuration name of the dataset to use (via the datasets library)."} - ) - train_file: Optional[str] = field(default=None, metadata={"help": "The input training data file (a text file)."}) - validation_file: Optional[str] = field( - default=None, - metadata={"help": "An optional input evaluation data file to evaluate the perplexity on (a text file)."}, - ) - test_file: Optional[str] = field( - default=None, - metadata={"help": "An optional input test data file to test the perplexity on (a text file)."}, - ) - overwrite_cache: bool = field( - default=False, metadata={"help": "Overwrite the cached training and evaluation sets"} - ) - preprocessing_num_workers: Optional[int] = field( - default=None, - metadata={"help": "The number of processes to use for the preprocessing."}, - ) - max_seq_length: int = field( - default=384, - metadata={ - "help": ( - "The maximum total input sequence length after tokenization. Sequences longer " - "than this will be truncated, sequences shorter will be padded." 
- ) - }, - ) - pad_to_max_length: bool = field( - default=True, - metadata={ - "help": ( - "Whether to pad all samples to `max_seq_length`. If False, will pad the samples dynamically when" - " batching to the maximum length in the batch (which can be faster on GPU but will be slower on TPU)." - ) - }, - ) - max_train_samples: Optional[int] = field( - default=None, - metadata={ - "help": ( - "For debugging purposes or quicker training, truncate the number of training examples to this " - "value if set." - ) - }, - ) - max_eval_samples: Optional[int] = field( - default=None, - metadata={ - "help": ( - "For debugging purposes or quicker training, truncate the number of evaluation examples to this " - "value if set." - ) - }, - ) - max_predict_samples: Optional[int] = field( - default=None, - metadata={ - "help": ( - "For debugging purposes or quicker training, truncate the number of prediction examples to this " - "value if set." - ) - }, - ) - version_2_with_negative: bool = field( - default=False, metadata={"help": "If true, some of the examples do not have an answer."} - ) - null_score_diff_threshold: float = field( - default=0.0, - metadata={ - "help": ( - "The threshold used to select the null answer: if the best answer has a score that is less than " - "the score of the null answer minus this threshold, the null answer is selected for this example. " - "Only useful when `version_2_with_negative=True`." - ) - }, - ) - doc_stride: int = field( - default=128, - metadata={"help": "When splitting up a long document into chunks, how much stride to take between chunks."}, - ) - n_best_size: int = field( - default=20, - metadata={"help": "The total number of n-best predictions to generate when looking for an answer."}, - ) - max_answer_length: int = field( - default=30, - metadata={ - "help": ( - "The maximum length of an answer that can be generated. This is needed because the start " - "and end predictions are not conditioned on one another." - ) - }, - ) - - def __post_init__(self): - if ( - self.dataset_name is None - and self.train_file is None - and self.validation_file is None - and self.test_file is None - ): - raise ValueError("Need either a dataset name or a training/validation/test file.") - else: - if self.train_file is not None: - extension = self.train_file.split(".")[-1] - assert extension in ["csv", "json"], "`train_file` should be a csv or a json file." - if self.validation_file is not None: - extension = self.validation_file.split(".")[-1] - assert extension in ["csv", "json"], "`validation_file` should be a csv or a json file." - if self.test_file is not None: - extension = self.test_file.split(".")[-1] - assert extension in ["csv", "json"], "`test_file` should be a csv or a json file." - - -def main(): - # See all possible arguments in src/transformers/training_args.py - # or by passing the --help flag to this script. - # We now keep distinct sets of args, for a cleaner separation of concerns. - - parser = HfArgumentParser((ModelArguments, DataTrainingArguments, TrainingArguments)) - if len(sys.argv) == 2 and sys.argv[1].endswith(".json"): - # If we pass only one argument to the script and it's the path to a json file, - # let's parse it to get our arguments. - model_args, data_args, training_args = parser.parse_json_file(json_file=os.path.abspath(sys.argv[1])) - else: - model_args, data_args, training_args = parser.parse_args_into_dataclasses() - - # Sending telemetry. Tracking the example usage helps us better allocate resources to maintain them. 
The - # information sent is the one passed as arguments along with your Python/PyTorch versions. - send_example_telemetry("run_qa_beam_search", model_args, data_args) - - # Setup logging - logging.basicConfig( - format="%(asctime)s - %(levelname)s - %(name)s - %(message)s", - datefmt="%m/%d/%Y %H:%M:%S", - handlers=[logging.StreamHandler(sys.stdout)], - ) - - if training_args.should_log: - # The default of training_args.log_level is passive, so we set log level at info here to have that default. - transformers.utils.logging.set_verbosity_info() - - log_level = training_args.get_process_log_level() - logger.setLevel(log_level) - datasets.utils.logging.set_verbosity(log_level) - transformers.utils.logging.set_verbosity(log_level) - transformers.utils.logging.enable_default_handler() - transformers.utils.logging.enable_explicit_format() - - # Log on each process the small summary: - logger.warning( - f"Process rank: {training_args.local_rank}, device: {training_args.device}, n_gpu: {training_args.n_gpu}" - + f"distributed training: {bool(training_args.local_rank != -1)}, 16-bits training: {training_args.fp16}" - ) - logger.info(f"Training/evaluation parameters {training_args}") - - # Detecting last checkpoint. - last_checkpoint = None - if os.path.isdir(training_args.output_dir) and training_args.do_train and not training_args.overwrite_output_dir: - last_checkpoint = get_last_checkpoint(training_args.output_dir) - if last_checkpoint is None and len(os.listdir(training_args.output_dir)) > 0: - raise ValueError( - f"Output directory ({training_args.output_dir}) already exists and is not empty. " - "Use --overwrite_output_dir to overcome." - ) - elif last_checkpoint is not None and training_args.resume_from_checkpoint is None: - logger.info( - f"Checkpoint detected, resuming training at {last_checkpoint}. To avoid this behavior, change " - "the `--output_dir` or add `--overwrite_output_dir` to train from scratch." - ) - - # Set seed before initializing model. - set_seed(training_args.seed) - - # Get the datasets: you can either provide your own CSV/JSON/TXT training and evaluation files (see below) - # or just provide the name of one of the public datasets available on the hub at https://huggingface.co/datasets/ - # (the dataset will be downloaded automatically from the datasets Hub). - # - # For CSV/JSON files, this script will use the column called 'text' or the first column if no column called - # 'text' is found. You can easily tweak this behavior (see below). - # - # In distributed training, the load_dataset function guarantee that only one local process can concurrently - # download the dataset. - if data_args.dataset_name is not None: - # Downloading and loading a dataset from the hub. 
- raw_datasets = load_dataset( - data_args.dataset_name, - data_args.dataset_config_name, - cache_dir=model_args.cache_dir, - use_auth_token=True if model_args.use_auth_token else None, - ) - else: - data_files = {} - if data_args.train_file is not None: - data_files["train"] = data_args.train_file - extension = data_args.train_file.split(".")[-1] - if data_args.validation_file is not None: - data_files["validation"] = data_args.validation_file - extension = data_args.validation_file.split(".")[-1] - if data_args.test_file is not None: - data_files["test"] = data_args.test_file - extension = data_args.test_file.split(".")[-1] - raw_datasets = load_dataset( - extension, - data_files=data_files, - field="data", - cache_dir=model_args.cache_dir, - use_auth_token=True if model_args.use_auth_token else None, - ) - # See more about loading any type of standard or custom dataset (from files, python dict, pandas DataFrame, etc) at - # https://huggingface.co/docs/datasets/loading_datasets.html. - - # Load pretrained model and tokenizer - # - # Distributed training: - # The .from_pretrained methods guarantee that only one local process can concurrently - # download model & vocab. - config = XLNetConfig.from_pretrained( - model_args.config_name if model_args.config_name else model_args.model_name_or_path, - cache_dir=model_args.cache_dir, - revision=model_args.model_revision, - use_auth_token=True if model_args.use_auth_token else None, - ) - tokenizer = XLNetTokenizerFast.from_pretrained( - model_args.tokenizer_name if model_args.tokenizer_name else model_args.model_name_or_path, - cache_dir=model_args.cache_dir, - revision=model_args.model_revision, - use_auth_token=True if model_args.use_auth_token else None, - ) - model = XLNetForQuestionAnswering.from_pretrained( - model_args.model_name_or_path, - from_tf=bool(".ckpt" in model_args.model_name_or_path), - config=config, - cache_dir=model_args.cache_dir, - revision=model_args.model_revision, - use_auth_token=True if model_args.use_auth_token else None, - ) - - # Preprocessing the datasets. - # Preprocessing is slighlty different for training and evaluation. - if training_args.do_train: - column_names = raw_datasets["train"].column_names - elif training_args.do_eval: - column_names = raw_datasets["validation"].column_names - else: - column_names = raw_datasets["test"].column_names - question_column_name = "question" if "question" in column_names else column_names[0] - context_column_name = "context" if "context" in column_names else column_names[1] - answer_column_name = "answers" if "answers" in column_names else column_names[2] - - # Padding side determines if we do (question|context) or (context|question). - pad_on_right = tokenizer.padding_side == "right" - - if data_args.max_seq_length > tokenizer.model_max_length: - logger.warning( - f"The max_seq_length passed ({data_args.max_seq_length}) is larger than the maximum length for the" - f"model ({tokenizer.model_max_length}). Using max_seq_length={tokenizer.model_max_length}." - ) - max_seq_length = min(data_args.max_seq_length, tokenizer.model_max_length) - - # Training preprocessing - def prepare_train_features(examples): - # Some of the questions have lots of whitespace on the left, which is not useful and will make the - # truncation of the context fail (the tokenized question will take a lots of space). 
So we remove that - # left whitespace - examples[question_column_name] = [q.lstrip() for q in examples[question_column_name]] - - # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results - # in one example possible giving several features when a context is long, each of those features having a - # context that overlaps a bit the context of the previous feature. - tokenized_examples = tokenizer( - examples[question_column_name if pad_on_right else context_column_name], - examples[context_column_name if pad_on_right else question_column_name], - truncation="only_second" if pad_on_right else "only_first", - max_length=max_seq_length, - stride=data_args.doc_stride, - return_overflowing_tokens=True, - return_offsets_mapping=True, - return_special_tokens_mask=True, - return_token_type_ids=True, - padding="max_length", - ) - - # Since one example might give us several features if it has a long context, we need a map from a feature to - # its corresponding example. This key gives us just that. - sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") - # The offset mappings will give us a map from token to character position in the original context. This will - # help us compute the start_positions and end_positions. - offset_mapping = tokenized_examples.pop("offset_mapping") - # The special tokens will help us build the p_mask (which indicates the tokens that can't be in answers). - special_tokens = tokenized_examples.pop("special_tokens_mask") - - # Let's label those examples! - tokenized_examples["start_positions"] = [] - tokenized_examples["end_positions"] = [] - tokenized_examples["is_impossible"] = [] - tokenized_examples["cls_index"] = [] - tokenized_examples["p_mask"] = [] - - for i, offsets in enumerate(offset_mapping): - # We will label impossible answers with the index of the CLS token. - input_ids = tokenized_examples["input_ids"][i] - cls_index = input_ids.index(tokenizer.cls_token_id) - tokenized_examples["cls_index"].append(cls_index) - - # Grab the sequence corresponding to that example (to know what is the context and what is the question). - sequence_ids = tokenized_examples["token_type_ids"][i] - for k, s in enumerate(special_tokens[i]): - if s: - sequence_ids[k] = 3 - context_idx = 1 if pad_on_right else 0 - - # Build the p_mask: non special tokens and context gets 0.0, the others get 1.0. - # The cls token gets 1.0 too (for predictions of empty answers). - tokenized_examples["p_mask"].append( - [ - 0.0 if (not special_tokens[i][k] and s == context_idx) or k == cls_index else 1.0 - for k, s in enumerate(sequence_ids) - ] - ) - - # One example can give several spans, this is the index of the example containing this span of text. - sample_index = sample_mapping[i] - answers = examples[answer_column_name][sample_index] - # If no answers are given, set the cls_index as answer. - if len(answers["answer_start"]) == 0: - tokenized_examples["start_positions"].append(cls_index) - tokenized_examples["end_positions"].append(cls_index) - tokenized_examples["is_impossible"].append(1.0) - else: - # Start/end character index of the answer in the text. - start_char = answers["answer_start"][0] - end_char = start_char + len(answers["text"][0]) - - # Start token index of the current span in the text. - token_start_index = 0 - while sequence_ids[token_start_index] != context_idx: - token_start_index += 1 - - # End token index of the current span in the text. 
- token_end_index = len(input_ids) - 1 - while sequence_ids[token_end_index] != context_idx: - token_end_index -= 1 - # Detect if the answer is out of the span (in which case this feature is labeled with the CLS index). - if not (offsets[token_start_index][0] <= start_char and offsets[token_end_index][1] >= end_char): - tokenized_examples["start_positions"].append(cls_index) - tokenized_examples["end_positions"].append(cls_index) - tokenized_examples["is_impossible"].append(1.0) - else: - # Otherwise move the token_start_index and token_end_index to the two ends of the answer. - # Note: we could go after the last offset if the answer is the last word (edge case). - while token_start_index < len(offsets) and offsets[token_start_index][0] <= start_char: - token_start_index += 1 - tokenized_examples["start_positions"].append(token_start_index - 1) - while offsets[token_end_index][1] >= end_char: - token_end_index -= 1 - tokenized_examples["end_positions"].append(token_end_index + 1) - tokenized_examples["is_impossible"].append(0.0) - - return tokenized_examples - - if training_args.do_train: - if "train" not in raw_datasets: - raise ValueError("--do_train requires a train dataset") - train_dataset = raw_datasets["train"] - if data_args.max_train_samples is not None: - # Select samples from Dataset, This will help to decrease processing time - max_train_samples = min(len(train_dataset), data_args.max_train_samples) - train_dataset = train_dataset.select(range(max_train_samples)) - # Create Training Features - with training_args.main_process_first(desc="train dataset map pre-processing"): - train_dataset = train_dataset.map( - prepare_train_features, - batched=True, - num_proc=data_args.preprocessing_num_workers, - remove_columns=column_names, - load_from_cache_file=not data_args.overwrite_cache, - desc="Running tokenizer on train dataset", - ) - if data_args.max_train_samples is not None: - # Select samples from dataset again since Feature Creation might increase number of features - max_train_samples = min(len(train_dataset), data_args.max_train_samples) - train_dataset = train_dataset.select(range(max_train_samples)) - - # Validation preprocessing - def prepare_validation_features(examples): - # Tokenize our examples with truncation and maybe padding, but keep the overflows using a stride. This results - # in one example possible giving several features when a context is long, each of those features having a - # context that overlaps a bit the context of the previous feature. - tokenized_examples = tokenizer( - examples[question_column_name if pad_on_right else context_column_name], - examples[context_column_name if pad_on_right else question_column_name], - truncation="only_second" if pad_on_right else "only_first", - max_length=max_seq_length, - stride=data_args.doc_stride, - return_overflowing_tokens=True, - return_offsets_mapping=True, - return_special_tokens_mask=True, - return_token_type_ids=True, - padding="max_length", - ) - - # Since one example might give us several features if it has a long context, we need a map from a feature to - # its corresponding example. This key gives us just that. - sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping") - - # The special tokens will help us build the p_mask (which indicates the tokens that can't be in answers). 
- special_tokens = tokenized_examples.pop("special_tokens_mask") - - # For evaluation, we will need to convert our predictions to substrings of the context, so we keep the - # corresponding example_id and we will store the offset mappings. - tokenized_examples["example_id"] = [] - - # We still provide the index of the CLS token and the p_mask to the model, but not the is_impossible label. - tokenized_examples["cls_index"] = [] - tokenized_examples["p_mask"] = [] - - for i, input_ids in enumerate(tokenized_examples["input_ids"]): - # Find the CLS token in the input ids. - cls_index = input_ids.index(tokenizer.cls_token_id) - tokenized_examples["cls_index"].append(cls_index) - - # Grab the sequence corresponding to that example (to know what is the context and what is the question). - sequence_ids = tokenized_examples["token_type_ids"][i] - for k, s in enumerate(special_tokens[i]): - if s: - sequence_ids[k] = 3 - context_idx = 1 if pad_on_right else 0 - - # Build the p_mask: non special tokens and context gets 0.0, the others 1.0. - tokenized_examples["p_mask"].append( - [ - 0.0 if (not special_tokens[i][k] and s == context_idx) or k == cls_index else 1.0 - for k, s in enumerate(sequence_ids) - ] - ) - - # One example can give several spans, this is the index of the example containing this span of text. - sample_index = sample_mapping[i] - tokenized_examples["example_id"].append(examples["id"][sample_index]) - - # Set to None the offset_mapping that are not part of the context so it's easy to determine if a token - # position is part of the context or not. - tokenized_examples["offset_mapping"][i] = [ - (o if sequence_ids[k] == context_idx else None) - for k, o in enumerate(tokenized_examples["offset_mapping"][i]) - ] - - return tokenized_examples - - if training_args.do_eval: - if "validation" not in raw_datasets: - raise ValueError("--do_eval requires a validation dataset") - eval_examples = raw_datasets["validation"] - if data_args.max_eval_samples is not None: - # Selecting Eval Samples from Dataset - max_eval_samples = min(len(eval_examples), data_args.max_eval_samples) - eval_examples = eval_examples.select(range(max_eval_samples)) - # Create Features from Eval Dataset - with training_args.main_process_first(desc="validation dataset map pre-processing"): - eval_dataset = eval_examples.map( - prepare_validation_features, - batched=True, - num_proc=data_args.preprocessing_num_workers, - remove_columns=column_names, - load_from_cache_file=not data_args.overwrite_cache, - desc="Running tokenizer on validation dataset", - ) - if data_args.max_eval_samples is not None: - # Selecting Samples from Dataset again since Feature Creation might increase samples size - max_eval_samples = min(len(eval_dataset), data_args.max_eval_samples) - eval_dataset = eval_dataset.select(range(max_eval_samples)) - - if training_args.do_predict: - if "test" not in raw_datasets: - raise ValueError("--do_predict requires a test dataset") - predict_examples = raw_datasets["test"] - if data_args.max_predict_samples is not None: - # We will select sample from whole data - predict_examples = predict_examples.select(range(data_args.max_predict_samples)) - # Test Feature Creation - with training_args.main_process_first(desc="prediction dataset map pre-processing"): - predict_dataset = predict_examples.map( - prepare_validation_features, - batched=True, - num_proc=data_args.preprocessing_num_workers, - remove_columns=column_names, - load_from_cache_file=not data_args.overwrite_cache, - desc="Running tokenizer on prediction 
dataset", - ) - if data_args.max_predict_samples is not None: - # During Feature creation dataset samples might increase, we will select required samples again - max_predict_samples = min(len(predict_dataset), data_args.max_predict_samples) - predict_dataset = predict_dataset.select(range(max_predict_samples)) - - # Data collator - # We have already padded to max length if the corresponding flag is True, otherwise we need to pad in the data - # collator. - data_collator = ( - default_data_collator - if data_args.pad_to_max_length - else DataCollatorWithPadding(tokenizer, pad_to_multiple_of=8 if training_args.fp16 else None) - ) - - # Post-processing: - def post_processing_function(examples, features, predictions, stage="eval"): - # Post-processing: we match the start logits and end logits to answers in the original context. - predictions, scores_diff_json = postprocess_qa_predictions_with_beam_search( - examples=examples, - features=features, - predictions=predictions, - version_2_with_negative=data_args.version_2_with_negative, - n_best_size=data_args.n_best_size, - max_answer_length=data_args.max_answer_length, - start_n_top=model.config.start_n_top, - end_n_top=model.config.end_n_top, - output_dir=training_args.output_dir, - log_level=log_level, - prefix=stage, - ) - # Format the result to the format the metric expects. - if data_args.version_2_with_negative: - formatted_predictions = [ - {"id": k, "prediction_text": v, "no_answer_probability": scores_diff_json[k]} - for k, v in predictions.items() - ] - else: - formatted_predictions = [{"id": k, "prediction_text": v} for k, v in predictions.items()] - - references = [{"id": ex["id"], "answers": ex[answer_column_name]} for ex in examples] - return EvalPrediction(predictions=formatted_predictions, label_ids=references) - - metric = evaluate.load("squad_v2" if data_args.version_2_with_negative else "squad") - - def compute_metrics(p: EvalPrediction): - return metric.compute(predictions=p.predictions, references=p.label_ids) - - # Initialize our Trainer - trainer = QuestionAnsweringTrainer( - model=model, - args=training_args, - train_dataset=train_dataset if training_args.do_train else None, - eval_dataset=eval_dataset if training_args.do_eval else None, - eval_examples=eval_examples if training_args.do_eval else None, - tokenizer=tokenizer, - data_collator=data_collator, - post_process_function=post_processing_function, - compute_metrics=compute_metrics, - ) - - # Training - if training_args.do_train: - checkpoint = None - if training_args.resume_from_checkpoint is not None: - checkpoint = training_args.resume_from_checkpoint - elif last_checkpoint is not None: - checkpoint = last_checkpoint - train_result = trainer.train(resume_from_checkpoint=checkpoint) - trainer.save_model() # Saves the tokenizer too for easy upload - - metrics = train_result.metrics - - max_train_samples = ( - data_args.max_train_samples if data_args.max_train_samples is not None else len(train_dataset) - ) - metrics["train_samples"] = min(max_train_samples, len(train_dataset)) - - trainer.log_metrics("train", metrics) - trainer.save_metrics("train", metrics) - trainer.save_state() - - # Evaluation - if training_args.do_eval: - logger.info("*** Evaluate ***") - metrics = trainer.evaluate() - - max_eval_samples = data_args.max_eval_samples if data_args.max_eval_samples is not None else len(eval_dataset) - metrics["eval_samples"] = min(max_eval_samples, len(eval_dataset)) - - trainer.log_metrics("eval", metrics) - trainer.save_metrics("eval", metrics) - - # Prediction 
- if training_args.do_predict: - logger.info("*** Predict ***") - results = trainer.predict(predict_dataset, predict_examples) - metrics = results.metrics - - max_predict_samples = ( - data_args.max_predict_samples if data_args.max_predict_samples is not None else len(predict_dataset) - ) - metrics["predict_samples"] = min(max_predict_samples, len(predict_dataset)) - - trainer.log_metrics("predict", metrics) - trainer.save_metrics("predict", metrics) - - kwargs = {"finetuned_from": model_args.model_name_or_path, "tasks": "question-answering"} - if data_args.dataset_name is not None: - kwargs["dataset_tags"] = data_args.dataset_name - if data_args.dataset_config_name is not None: - kwargs["dataset_args"] = data_args.dataset_config_name - kwargs["dataset"] = f"{data_args.dataset_name} {data_args.dataset_config_name}" - else: - kwargs["dataset"] = data_args.dataset_name - - if training_args.push_to_hub: - trainer.push_to_hub(**kwargs) - else: - trainer.create_model_card(**kwargs) - - -def _mp_fn(index): - # For xla_spawn (TPUs) - main() - - -if __name__ == "__main__": - main() diff --git a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag-end2end-retriever/test_run/test_finetune.sh b/spaces/chendl/compositional_test/transformers/examples/research_projects/rag-end2end-retriever/test_run/test_finetune.sh deleted file mode 100644 index c44d110d20046a217e7484365949e41ac21835d7..0000000000000000000000000000000000000000 --- a/spaces/chendl/compositional_test/transformers/examples/research_projects/rag-end2end-retriever/test_run/test_finetune.sh +++ /dev/null @@ -1,57 +0,0 @@ -# Add parent directory to python path to access lightning_base.py -export PYTHONPATH="../":"${PYTHONPATH}" - -#creates the custom knowlegebase -python use_own_knowledge_dataset.py - - -# Start a single-node Ray cluster. -ray start --head - -# A sample finetuning run, you need to specify data_dir, output_dir and model_name_or_path -# run ./examples/rag/finetune_rag_ray.sh --help to see all the possible options - - - -python finetune_rag.py \ - --model_name_or_path facebook/rag-token-base \ - --model_type rag_token \ - --fp16 \ - --gpus 2 \ - --profile \ - --do_train \ - --end2end \ - --do_predict \ - --n_val -1 \ - --train_batch_size 1 \ - --eval_batch_size 1 \ - --max_source_length 128 \ - --max_target_length 25 \ - --val_max_target_length 25 \ - --test_max_target_length 25 \ - --label_smoothing 0.1 \ - --dropout 0.1 \ - --attention_dropout 0.1 \ - --weight_decay 0.001 \ - --adam_epsilon 1e-08 \ - --max_grad_norm 0.1 \ - --lr_scheduler polynomial \ - --learning_rate 3e-05 \ - --num_train_epochs 10 \ - --warmup_steps 500 \ - --gradient_accumulation_steps 1 \ - --distributed_retriever ray \ - --num_retrieval_workers 4 \ - --index_name custom \ - --context_encoder_name facebook/dpr-ctx_encoder-multiset-base \ - --index_gpus 2 \ - --gpu_order [2,3,4,5,6,7,8,9,0,1] \ - --indexing_freq 5 - - - -# Stop the Ray cluster. -ray stop - -#CUDA_VISIBLE_DEVICES=2,3,4,5,6,7,8,9,0,1 sh ./test_run/test_finetune.sh -#Make sure --gpu_order is same. 
\ No newline at end of file diff --git a/spaces/chixiao/chixiaobing/Dockerfile b/spaces/chixiao/chixiaobing/Dockerfile deleted file mode 100644 index 3698c7cb7938e025afc53b18a571ae2961fbdffe..0000000000000000000000000000000000000000 --- a/spaces/chixiao/chixiaobing/Dockerfile +++ /dev/null @@ -1,34 +0,0 @@ -# Build Stage -# Use golang:alpine as the base image for the build stage -FROM golang:alpine AS builder - -# Add git so the project can be cloned from GitHub later -RUN apk --no-cache add git - -# Clone the go-proxy-bingai project from GitHub into the /workspace/app directory -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# Set the working directory to the previously cloned project directory -WORKDIR /workspace/app - -# Build the Go project. -ldflags="-s -w" reduces the size of the compiled binary -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# Use the lightweight alpine image as the base image for the runtime stage -FROM alpine - -# Set the working directory -WORKDIR /workspace/app - -# Copy the compiled binary from the build stage into the runtime image -COPY --from=builder /workspace/app/go-proxy-bingai . - -# Set the environment variable; the value here is a random string -ENV Go_Proxy_BingAI_USER_TOKEN_1="kJs8hD92ncMzLaoQWYtX5rG6bE3fZ4iO" - -# Expose port 8080 -EXPOSE 8080 - -# Command to run when the container starts -CMD ["/workspace/app/go-proxy-bingai"] \ No newline at end of file diff --git a/spaces/chronopt-research/ViTExCo/src/utils.py b/spaces/chronopt-research/ViTExCo/src/utils.py deleted file mode 100644 index 0fbbd8d04d09ac623584005374238bd0edcb40d9..0000000000000000000000000000000000000000 --- a/spaces/chronopt-research/ViTExCo/src/utils.py +++ /dev/null @@ -1,806 +0,0 @@ -import sys -import time -import numpy as np -from PIL import Image -from skimage import color -from skimage.transform import resize -import src.data.functional as F -import torch -from torch import nn -import torch.nn.functional as F_torch -import torchvision.transforms.functional as F_torchvision -from numba import cuda, jit -import math - -rgb_from_xyz = np.array( - [ - [3.24048134, -0.96925495, 0.05564664], - [-1.53715152, 1.87599, -0.20404134], - [-0.49853633, 0.04155593, 1.05731107], - ] -) -l_norm, ab_norm = 1.0, 1.0 -l_mean, ab_mean = 50.0, 0 - - -class SquaredPadding(object): - def __init__(self, target_size=384, fill_value=0): - self.target_size = target_size - self.fill_value = fill_value - pass - - def __call__(self, img, return_pil=True, return_paddings=False): - if type(img) != torch.Tensor: - img = F_torchvision.to_tensor(img) - - H, W = img.size(1), img.size(2) - if H > W: - H_new, W_new = self.target_size, int(W/H*self.target_size) - # Resize image - img = F_torchvision.resize(img, (H_new, W_new)) - - # Padding image - padded_size = H_new - W_new - paddings = (padded_size // 2, (padded_size // 2) + (padded_size % 2), 0, 0) - padded_img = F_torch.pad(img, paddings, value=self.fill_value) - else: - H_new, W_new = int(H/W*self.target_size), self.target_size - # Resize image - img = F_torchvision.resize(img, (H_new, W_new)) - - # Padding image - padded_size = W_new - H_new - paddings = (0, 0, padded_size // 2, (padded_size // 2) + (padded_size % 2)) - padded_img = F_torch.pad(img, paddings, value=self.fill_value) - - if return_pil: - padded_img = F_torchvision.to_pil_image(padded_img) - - if return_paddings: - return padded_img, paddings - - return padded_img - -class UnpaddingSquare(object): - def __init__(self): - pass - - def __call__(self, img, paddings): - H, W = img.size(1), img.size(2) - pad_l, pad_r, pad_t, pad_b = paddings - W_ori = W - pad_l - pad_r - H_ori = H - pad_t - pad_b - - return F_torchvision.crop(img, top=pad_t, left=pad_l, height=H_ori, width=W_ori) - -class ResizeFlow(object): - def __init__(self, target_size=(384,384)): - self.target_size = 
target_size - pass - - def __call__(self, flow): - return F_torch.interpolate(flow.unsqueeze(0), self.target_size, mode='bilinear', align_corners=True).squeeze(0) - -class SquaredPaddingFlow(object): - def __init__(self, fill_value=0): - self.fill_value = fill_value - - def __call__(self, flow): - H, W = flow.size(1), flow.size(2) - - if H > W: - # Padding flow - padded_size = H - W - paddings = (padded_size // 2, (padded_size // 2) + (padded_size % 2), 0, 0) - padded_img = F_torch.pad(flow, paddings, value=self.fill_value) - else: - # Padding flow - padded_size = W - H - paddings = (0, 0, padded_size // 2, (padded_size // 2) + (padded_size % 2)) - padded_img = F_torch.pad(flow, paddings, value=self.fill_value) - - return padded_img - - -def gray2rgb_batch(l): - # gray image tensor to rgb image tensor - l_uncenter = uncenter_l(l) - l_uncenter = l_uncenter / (2 * l_mean) - return torch.cat((l_uncenter, l_uncenter, l_uncenter), dim=1) - - -def vgg_preprocess(tensor): - # input is RGB tensor which ranges in [0,1] - # output is BGR tensor which ranges in [0,255] - tensor_bgr = torch.cat((tensor[:, 2:3, :, :], tensor[:, 1:2, :, :], tensor[:, 0:1, :, :]), dim=1) - tensor_bgr_ml = tensor_bgr - torch.Tensor([0.40760392, 0.45795686, 0.48501961]).type_as(tensor_bgr).view(1, 3, 1, 1) - return tensor_bgr_ml * 255 - - -def tensor_lab2rgb(input): - """ - n * 3* h *w - """ - input_trans = input.transpose(1, 2).transpose(2, 3) # n * h * w * 3 - L, a, b = ( - input_trans[:, :, :, 0:1], - input_trans[:, :, :, 1:2], - input_trans[:, :, :, 2:], - ) - y = (L + 16.0) / 116.0 - x = (a / 500.0) + y - z = y - (b / 200.0) - - neg_mask = z.data < 0 - z[neg_mask] = 0 - xyz = torch.cat((x, y, z), dim=3) - - mask = xyz.data > 0.2068966 - mask_xyz = xyz.clone() - mask_xyz[mask] = torch.pow(xyz[mask], 3.0) - mask_xyz[~mask] = (xyz[~mask] - 16.0 / 116.0) / 7.787 - mask_xyz[:, :, :, 0] = mask_xyz[:, :, :, 0] * 0.95047 - mask_xyz[:, :, :, 2] = mask_xyz[:, :, :, 2] * 1.08883 - - rgb_trans = torch.mm(mask_xyz.view(-1, 3), torch.from_numpy(rgb_from_xyz).type_as(xyz)).view( - input.size(0), input.size(2), input.size(3), 3 - ) - rgb = rgb_trans.transpose(2, 3).transpose(1, 2) - - mask = rgb > 0.0031308 - mask_rgb = rgb.clone() - mask_rgb[mask] = 1.055 * torch.pow(rgb[mask], 1 / 2.4) - 0.055 - mask_rgb[~mask] = rgb[~mask] * 12.92 - - neg_mask = mask_rgb.data < 0 - large_mask = mask_rgb.data > 1 - mask_rgb[neg_mask] = 0 - mask_rgb[large_mask] = 1 - return mask_rgb - - -###### loss functions ###### -def feature_normalize(feature_in): - feature_in_norm = torch.norm(feature_in, 2, 1, keepdim=True) + sys.float_info.epsilon - feature_in_norm = torch.div(feature_in, feature_in_norm) - return feature_in_norm - - -# denormalization for l -def uncenter_l(l): - return l * l_norm + l_mean - - -def get_grid(x): - torchHorizontal = torch.linspace(-1.0, 1.0, x.size(3)).view(1, 1, 1, x.size(3)).expand(x.size(0), 1, x.size(2), x.size(3)) - torchVertical = torch.linspace(-1.0, 1.0, x.size(2)).view(1, 1, x.size(2), 1).expand(x.size(0), 1, x.size(2), x.size(3)) - - return torch.cat([torchHorizontal, torchVertical], 1) - - -class WarpingLayer(nn.Module): - def __init__(self, device): - super(WarpingLayer, self).__init__() - self.device = device - - def forward(self, x, flow): - """ - It takes the input image and the flow and warps the input image according to the flow - - Args: - x: the input image - flow: the flow tensor, which is a 4D tensor of shape (batch_size, 2, height, width) - - Returns: - The warped image - """ - # WarpingLayer uses 
F.grid_sample, which expects normalized grid - # we still output unnormalized flow for the convenience of comparing EPEs with FlowNet2 and original code - # so here we need to denormalize the flow - flow_for_grip = torch.zeros_like(flow).to(self.device) - flow_for_grip[:, 0, :, :] = flow[:, 0, :, :] / ((flow.size(3) - 1.0) / 2.0) - flow_for_grip[:, 1, :, :] = flow[:, 1, :, :] / ((flow.size(2) - 1.0) / 2.0) - - grid = (get_grid(x).to(self.device) + flow_for_grip).permute(0, 2, 3, 1) - return F_torch.grid_sample(x, grid, align_corners=True) - - -class CenterPad_threshold(object): - def __init__(self, image_size, threshold=3 / 4): - self.height = image_size[0] - self.width = image_size[1] - self.threshold = threshold - - def __call__(self, image): - # pad the image to 16:9 - # pad height - I = np.array(image) - - # for padded input - height_old = np.size(I, 0) - width_old = np.size(I, 1) - old_size = [height_old, width_old] - height = self.height - width = self.width - I_pad = np.zeros((height, width, np.size(I, 2))) - - ratio = height / width - - if height_old / width_old == ratio: - if height_old == height: - return Image.fromarray(I.astype(np.uint8)) - new_size = [int(x * height / height_old) for x in old_size] - I_resize = resize(I, new_size, mode="reflect", preserve_range=True, clip=False, anti_aliasing=True) - return Image.fromarray(I_resize.astype(np.uint8)) - - if height_old / width_old > self.threshold: - width_new, height_new = width_old, int(width_old * self.threshold) - height_margin = height_old - height_new - height_crop_start = height_margin // 2 - I_crop = I[height_crop_start : (height_crop_start + height_new), :, :] - I_resize = resize(I_crop, [height, width], mode="reflect", preserve_range=True, clip=False, anti_aliasing=True) - - return Image.fromarray(I_resize.astype(np.uint8)) - - if height_old / width_old > ratio: # pad the width and crop - new_size = [int(x * width / width_old) for x in old_size] - I_resize = resize(I, new_size, mode="reflect", preserve_range=True, clip=False, anti_aliasing=True) - width_resize = np.size(I_resize, 1) - height_resize = np.size(I_resize, 0) - start_height = (height_resize - height) // 2 - I_pad[:, :, :] = I_resize[start_height : (start_height + height), :, :] - else: # pad the height and crop - new_size = [int(x * height / height_old) for x in old_size] - I_resize = resize(I, new_size, mode="reflect", preserve_range=True, clip=False, anti_aliasing=True) - width_resize = np.size(I_resize, 1) - height_resize = np.size(I_resize, 0) - start_width = (width_resize - width) // 2 - I_pad[:, :, :] = I_resize[:, start_width : (start_width + width), :] - - return Image.fromarray(I_pad.astype(np.uint8)) - - -class Normalize(object): - def __init__(self): - pass - - def __call__(self, inputs): - inputs[0:1, :, :] = F.normalize(inputs[0:1, :, :], 50, 1) - inputs[1:3, :, :] = F.normalize(inputs[1:3, :, :], (0, 0), (1, 1)) - return inputs - - -class RGB2Lab(object): - def __init__(self): - pass - - def __call__(self, inputs): - return color.rgb2lab(inputs) - - -class ToTensor(object): - def __init__(self): - pass - - def __call__(self, inputs): - return F.to_mytensor(inputs) - - -class CenterPad(object): - def __init__(self, image_size): - self.height = image_size[0] - self.width = image_size[1] - - def __call__(self, image): - # pad the image to 16:9 - # pad height - I = np.array(image) - - # for padded input - height_old = np.size(I, 0) - width_old = np.size(I, 1) - old_size = [height_old, width_old] - height = self.height - width = self.width - I_pad 
= np.zeros((height, width, np.size(I, 2))) - - ratio = height / width - if height_old / width_old == ratio: - if height_old == height: - return Image.fromarray(I.astype(np.uint8)) - new_size = [int(x * height / height_old) for x in old_size] - I_resize = resize(I, new_size, mode="reflect", preserve_range=True, clip=False, anti_aliasing=True) - return Image.fromarray(I_resize.astype(np.uint8)) - - if height_old / width_old > ratio: # pad the width and crop - new_size = [int(x * width / width_old) for x in old_size] - I_resize = resize(I, new_size, mode="reflect", preserve_range=True, clip=False, anti_aliasing=True) - width_resize = np.size(I_resize, 1) - height_resize = np.size(I_resize, 0) - start_height = (height_resize - height) // 2 - I_pad[:, :, :] = I_resize[start_height : (start_height + height), :, :] - else: # pad the height and crop - new_size = [int(x * height / height_old) for x in old_size] - I_resize = resize(I, new_size, mode="reflect", preserve_range=True, clip=False, anti_aliasing=True) - width_resize = np.size(I_resize, 1) - height_resize = np.size(I_resize, 0) - start_width = (width_resize - width) // 2 - I_pad[:, :, :] = I_resize[:, start_width : (start_width + width), :] - - return Image.fromarray(I_pad.astype(np.uint8)) - - -class CenterPadCrop_numpy(object): - """ - pad the image according to the height - """ - - def __init__(self, image_size): - self.height = image_size[0] - self.width = image_size[1] - - def __call__(self, image, threshold=3 / 4): - # pad the image to 16:9 - # pad height - I = np.array(image) - # for padded input - height_old = np.size(I, 0) - width_old = np.size(I, 1) - old_size = [height_old, width_old] - height = self.height - width = self.width - padding_size = width - if image.ndim == 2: - I_pad = np.zeros((width, width)) - else: - I_pad = np.zeros((width, width, I.shape[2])) - - ratio = height / width - if height_old / width_old == ratio: - return I - - # if height_old / width_old > threshold: - # width_new, height_new = width_old, int(width_old * threshold) - # height_margin = height_old - height_new - # height_crop_start = height_margin // 2 - # I_crop = I[height_start : (height_start + height_new), :] - # I_resize = resize( - # I_crop, [height, width], mode="reflect", preserve_range=True, clip=False, anti_aliasing=True - # ) - # return I_resize - - if height_old / width_old > ratio: # pad the width and crop - new_size = [int(x * width / width_old) for x in old_size] - I_resize = resize(I, new_size, mode="reflect", preserve_range=True, clip=False, anti_aliasing=True) - width_resize = np.size(I_resize, 1) - height_resize = np.size(I_resize, 0) - start_height = (height_resize - height) // 2 - start_height_block = (padding_size - height) // 2 - if image.ndim == 2: - I_pad[start_height_block : (start_height_block + height), :] = I_resize[ - start_height : (start_height + height), : - ] - else: - I_pad[start_height_block : (start_height_block + height), :, :] = I_resize[ - start_height : (start_height + height), :, : - ] - else: # pad the height and crop - new_size = [int(x * height / height_old) for x in old_size] - I_resize = resize(I, new_size, mode="reflect", preserve_range=True, clip=False, anti_aliasing=True) - width_resize = np.size(I_resize, 1) - height_resize = np.size(I_resize, 0) - start_width = (width_resize - width) // 2 - start_width_block = (padding_size - width) // 2 - if image.ndim == 2: - I_pad[:, start_width_block : (start_width_block + width)] = I_resize[:, start_width : (start_width + width)] - - else: - I_pad[:, 
start_width_block : (start_width_block + width), :] = I_resize[ - :, start_width : (start_width + width), : - ] - - crop_start_height = (I_pad.shape[0] - height) // 2 - crop_start_width = (I_pad.shape[1] - width) // 2 - - if image.ndim == 2: - return I_pad[crop_start_height : (crop_start_height + height), crop_start_width : (crop_start_width + width)] - else: - return I_pad[crop_start_height : (crop_start_height + height), crop_start_width : (crop_start_width + width), :] - - -@jit(nopython=True, nogil=True) -def biInterpolation_cpu(distorted, i, j): - i = np.uint16(i) - j = np.uint16(j) - Q11 = distorted[j, i] - Q12 = distorted[j, i + 1] - Q21 = distorted[j + 1, i] - Q22 = distorted[j + 1, i + 1] - - return np.int8( - Q11 * (i + 1 - i) * (j + 1 - j) + Q12 * (i - i) * (j + 1 - j) + Q21 * (i + 1 - i) * (j - j) + Q22 * (i - i) * (j - j) - ) - -@jit(nopython=True, nogil=True) -def iterSearchShader_cpu(padu, padv, xr, yr, W, H, maxIter, precision): - # print('processing location', (xr, yr)) - # - if abs(padu[yr, xr]) < precision and abs(padv[yr, xr]) < precision: - return xr, yr - - # Our initialize method in this paper, can see the overleaf for detail - if (xr + 1) <= (W - 1): - dif = padu[yr, xr + 1] - padu[yr, xr] - else: - dif = padu[yr, xr] - padu[yr, xr - 1] - u_next = padu[yr, xr] / (1 + dif) - if (yr + 1) <= (H - 1): - dif = padv[yr + 1, xr] - padv[yr, xr] - else: - dif = padv[yr, xr] - padv[yr - 1, xr] - v_next = padv[yr, xr] / (1 + dif) - i = xr - u_next - j = yr - v_next - i_int = int(i) - j_int = int(j) - - # The same as traditional iterative search method - for _ in range(maxIter): - if not 0 <= i <= (W - 1) or not 0 <= j <= (H - 1): - return i, j - - u11 = padu[j_int, i_int] - v11 = padv[j_int, i_int] - - u12 = padu[j_int, i_int + 1] - v12 = padv[j_int, i_int + 1] - - int1 = padu[j_int + 1, i_int] - v21 = padv[j_int + 1, i_int] - - int2 = padu[j_int + 1, i_int + 1] - v22 = padv[j_int + 1, i_int + 1] - - u = ( - u11 * (i_int + 1 - i) * (j_int + 1 - j) - + u12 * (i - i_int) * (j_int + 1 - j) - + int1 * (i_int + 1 - i) * (j - j_int) - + int2 * (i - i_int) * (j - j_int) - ) - - v = ( - v11 * (i_int + 1 - i) * (j_int + 1 - j) - + v12 * (i - i_int) * (j_int + 1 - j) - + v21 * (i_int + 1 - i) * (j - j_int) - + v22 * (i - i_int) * (j - j_int) - ) - - i_next = xr - u - j_next = yr - v - - if abs(i - i_next) < precision and abs(j - j_next) < precision: - return i, j - - i = i_next - j = j_next - - # if the search doesn't converge within max iter, it will return the last iter result - return i_next, j_next - -@jit(nopython=True, nogil=True) -def iterSearch_cpu(distortImg, resultImg, padu, padv, W, H, maxIter=5, precision=1e-2): - for xr in range(W): - for yr in range(H): - # (xr, yr) is the point in result image, (i, j) is the search result in distorted image - i, j = iterSearchShader_cpu(padu, padv, xr, yr, W, H, maxIter, precision) - - # reflect the pixels outside the border - if i > W - 1: - i = 2 * W - 1 - i - if i < 0: - i = -i - if j > H - 1: - j = 2 * H - 1 - j - if j < 0: - j = -j - - # Bilinear interpolation to get the pixel at (i, j) in distorted image - resultImg[yr, xr, 0] = biInterpolation_cpu( - distortImg[:, :, 0], - i, - j, - ) - resultImg[yr, xr, 1] = biInterpolation_cpu( - distortImg[:, :, 1], - i, - j, - ) - resultImg[yr, xr, 2] = biInterpolation_cpu( - distortImg[:, :, 2], - i, - j, - ) - return None - - -def forward_mapping_cpu(source_image, u, v, maxIter=5, precision=1e-2): - """ - warp the image according to the forward flow - u: horizontal - v: vertical - """ - H = 
source_image.shape[0] - W = source_image.shape[1] - - distortImg = np.array(np.zeros((H + 1, W + 1, 3)), dtype=np.uint8) - distortImg[0:H, 0:W] = source_image[0:H, 0:W] - distortImg[H, 0:W] = source_image[H - 1, 0:W] - distortImg[0:H, W] = source_image[0:H, W - 1] - distortImg[H, W] = source_image[H - 1, W - 1] - - padu = np.array(np.zeros((H + 1, W + 1)), dtype=np.float32) - padu[0:H, 0:W] = u[0:H, 0:W] - padu[H, 0:W] = u[H - 1, 0:W] - padu[0:H, W] = u[0:H, W - 1] - padu[H, W] = u[H - 1, W - 1] - - padv = np.array(np.zeros((H + 1, W + 1)), dtype=np.float32) - padv[0:H, 0:W] = v[0:H, 0:W] - padv[H, 0:W] = v[H - 1, 0:W] - padv[0:H, W] = v[0:H, W - 1] - padv[H, W] = v[H - 1, W - 1] - - resultImg = np.array(np.zeros((H, W, 3)), dtype=np.uint8) - iterSearch_cpu(distortImg, resultImg, padu, padv, W, H, maxIter, precision) - return resultImg - -class Distortion_with_flow_cpu(object): - """Elastic distortion""" - - def __init__(self, maxIter=3, precision=1e-3): - self.maxIter = maxIter - self.precision = precision - - def __call__(self, inputs, dx, dy): - inputs = np.array(inputs) - shape = inputs.shape[0], inputs.shape[1] - remap_image = forward_mapping_cpu(inputs, dy, dx, maxIter=self.maxIter, precision=self.precision) - - return Image.fromarray(remap_image) - -@cuda.jit(device=True) -def biInterpolation_gpu(distorted, i, j): - i = int(i) - j = int(j) - Q11 = distorted[j, i] - Q12 = distorted[j, i + 1] - Q21 = distorted[j + 1, i] - Q22 = distorted[j + 1, i + 1] - - return np.int8( - Q11 * (i + 1 - i) * (j + 1 - j) + Q12 * (i - i) * (j + 1 - j) + Q21 * (i + 1 - i) * (j - j) + Q22 * (i - i) * (j - j) - ) - -@cuda.jit(device=True) -def iterSearchShader_gpu(padu, padv, xr, yr, W, H, maxIter, precision): - # print('processing location', (xr, yr)) - # - if abs(padu[yr, xr]) < precision and abs(padv[yr, xr]) < precision: - return xr, yr - - # Our initialize method in this paper, can see the overleaf for detail - if (xr + 1) <= (W - 1): - dif = padu[yr, xr + 1] - padu[yr, xr] - else: - dif = padu[yr, xr] - padu[yr, xr - 1] - u_next = padu[yr, xr] / (1 + dif) - if (yr + 1) <= (H - 1): - dif = padv[yr + 1, xr] - padv[yr, xr] - else: - dif = padv[yr, xr] - padv[yr - 1, xr] - v_next = padv[yr, xr] / (1 + dif) - i = xr - u_next - j = yr - v_next - i_int = int(i) - j_int = int(j) - - # The same as traditional iterative search method - for _ in range(maxIter): - if not 0 <= i <= (W - 1) or not 0 <= j <= (H - 1): - return i, j - - u11 = padu[j_int, i_int] - v11 = padv[j_int, i_int] - - u12 = padu[j_int, i_int + 1] - v12 = padv[j_int, i_int + 1] - - int1 = padu[j_int + 1, i_int] - v21 = padv[j_int + 1, i_int] - - int2 = padu[j_int + 1, i_int + 1] - v22 = padv[j_int + 1, i_int + 1] - - u = ( - u11 * (i_int + 1 - i) * (j_int + 1 - j) - + u12 * (i - i_int) * (j_int + 1 - j) - + int1 * (i_int + 1 - i) * (j - j_int) - + int2 * (i - i_int) * (j - j_int) - ) - - v = ( - v11 * (i_int + 1 - i) * (j_int + 1 - j) - + v12 * (i - i_int) * (j_int + 1 - j) - + v21 * (i_int + 1 - i) * (j - j_int) - + v22 * (i - i_int) * (j - j_int) - ) - - i_next = xr - u - j_next = yr - v - - if abs(i - i_next) < precision and abs(j - j_next) < precision: - return i, j - - i = i_next - j = j_next - - # if the search doesn't converge within max iter, it will return the last iter result - return i_next, j_next - -@cuda.jit -def iterSearch_gpu(distortImg, resultImg, padu, padv, W, H, maxIter=5, precision=1e-2): - - start_x, start_y = cuda.grid(2) - stride_x, stride_y = cuda.gridsize(2) - - for xr in range(start_x, W, stride_x): - for yr in 
range(start_y, H, stride_y): - - i,j = iterSearchShader_gpu(padu, padv, xr, yr, W, H, maxIter, precision) - - if i > W - 1: - i = 2 * W - 1 - i - if i < 0: - i = -i - if j > H - 1: - j = 2 * H - 1 - j - if j < 0: - j = -j - - resultImg[yr, xr,0] = biInterpolation_gpu(distortImg[:,:,0], i, j) - resultImg[yr, xr,1] = biInterpolation_gpu(distortImg[:,:,1], i, j) - resultImg[yr, xr,2] = biInterpolation_gpu(distortImg[:,:,2], i, j) - return None - -def forward_mapping_gpu(source_image, u, v, maxIter=5, precision=1e-2): - """ - warp the image according to the forward flow - u: horizontal - v: vertical - """ - H = source_image.shape[0] - W = source_image.shape[1] - - resultImg = np.array(np.zeros((H, W, 3)), dtype=np.uint8) - - distortImg = np.array(np.zeros((H + 1, W + 1, 3)), dtype=np.uint8) - distortImg[0:H, 0:W] = source_image[0:H, 0:W] - distortImg[H, 0:W] = source_image[H - 1, 0:W] - distortImg[0:H, W] = source_image[0:H, W - 1] - distortImg[H, W] = source_image[H - 1, W - 1] - - padu = np.array(np.zeros((H + 1, W + 1)), dtype=np.float32) - padu[0:H, 0:W] = u[0:H, 0:W] - padu[H, 0:W] = u[H - 1, 0:W] - padu[0:H, W] = u[0:H, W - 1] - padu[H, W] = u[H - 1, W - 1] - - padv = np.array(np.zeros((H + 1, W + 1)), dtype=np.float32) - padv[0:H, 0:W] = v[0:H, 0:W] - padv[H, 0:W] = v[H - 1, 0:W] - padv[0:H, W] = v[0:H, W - 1] - padv[H, W] = v[H - 1, W - 1] - - padu = cuda.to_device(padu) - padv = cuda.to_device(padv) - distortImg = cuda.to_device(distortImg) - resultImg = cuda.to_device(resultImg) - - threadsperblock = (16, 16) - blockspergrid_x = math.ceil(W / threadsperblock[0]) - blockspergrid_y = math.ceil(H / threadsperblock[1]) - blockspergrid = (blockspergrid_x, blockspergrid_y) - - - iterSearch_gpu[blockspergrid, threadsperblock](distortImg, resultImg, padu, padv, W, H, maxIter, precision) - resultImg = resultImg.copy_to_host() - return resultImg - -class Distortion_with_flow_gpu(object): - - def __init__(self, maxIter=3, precision=1e-3): - self.maxIter = maxIter - self.precision = precision - - def __call__(self, inputs, dx, dy): - inputs = np.array(inputs) - shape = inputs.shape[0], inputs.shape[1] - remap_image = forward_mapping_gpu(inputs, dy, dx, maxIter=self.maxIter, precision=self.precision) - - return Image.fromarray(remap_image) - -def read_flow(filename): - """ - read optical flow from Middlebury .flo file - :param filename: name of the flow file - :return: optical flow data in matrix - """ - f = open(filename, "rb") - try: - magic = np.fromfile(f, np.float32, count=1)[0] # For Python3.x - except: - magic = np.fromfile(f, np.float32, count=1) # For Python2.x - data2d = None - if (202021.25 != magic)and(123.25!=magic): - print("Magic number incorrect. 
Invalid .flo file") - elif (123.25==magic): - w = np.fromfile(f, np.int32, count=1)[0] - h = np.fromfile(f, np.int32, count=1)[0] - # print("Reading %d x %d flo file" % (h, w)) - data2d = np.fromfile(f, np.float16, count=2 * w * h) - # reshape data into 3D array (columns, rows, channels) - data2d = np.resize(data2d, (2, h, w)) - elif (202021.25 == magic): - w = np.fromfile(f, np.int32, count=1)[0] - h = np.fromfile(f, np.int32, count=1)[0] - # print("Reading %d x %d flo file" % (h, w)) - data2d = np.fromfile(f, np.float32, count=2 * w * h) - # reshape data into 3D array (columns, rows, channels) - data2d = np.resize(data2d, (2, h, w)) - f.close() - return data2d.astype(np.float32) - -class LossHandler: - def __init__(self): - self.loss_dict = {} - self.count_sample = 0 - - def add_loss(self, key, loss): - if key not in self.loss_dict: - self.loss_dict[key] = 0 - self.loss_dict[key] += loss - - def get_loss(self, key): - return self.loss_dict[key] / self.count_sample - - def count_one_sample(self): - self.count_sample += 1 - - def reset(self): - self.loss_dict = {} - self.count_sample = 0 - - -class TimeHandler: - def __init__(self): - self.time_handler = {} - - def compute_time(self, key): - if key not in self.time_handler: - self.time_handler[key] = time.time() - return None - else: - return time.time() - self.time_handler.pop(key) - - -def print_num_params(model, is_trainable=False): - model_name = model.__class__.__name__.ljust(30) - - if is_trainable: - num_params = sum(p.numel() for p in model.parameters() if p.requires_grad) - print(f"| TRAINABLE | {model_name} | {('{:,}'.format(num_params)).rjust(10)} |") - else: - num_params = sum(p.numel() for p in model.parameters()) - print(f"| GENERAL | {model_name} | {('{:,}'.format(num_params)).rjust(10)} |") - - return num_params diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/charsetprober.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/charsetprober.py deleted file mode 100644 index a103ca11356606402c03b320a4fcdb8635051623..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/chardet/charsetprober.py +++ /dev/null @@ -1,147 +0,0 @@ -######################## BEGIN LICENSE BLOCK ######################## -# The Original Code is Mozilla Universal charset detector code. -# -# The Initial Developer of the Original Code is -# Netscape Communications Corporation. -# Portions created by the Initial Developer are Copyright (C) 2001 -# the Initial Developer. All Rights Reserved. -# -# Contributor(s): -# Mark Pilgrim - port to Python -# Shy Shalom - original C code -# -# This library is free software; you can redistribute it and/or -# modify it under the terms of the GNU Lesser General Public -# License as published by the Free Software Foundation; either -# version 2.1 of the License, or (at your option) any later version. -# -# This library is distributed in the hope that it will be useful, -# but WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# Lesser General Public License for more details. 
-# -# You should have received a copy of the GNU Lesser General Public -# License along with this library; if not, write to the Free Software -# Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA -# 02110-1301 USA -######################### END LICENSE BLOCK ######################### - -import logging -import re -from typing import Optional, Union - -from .enums import LanguageFilter, ProbingState - -INTERNATIONAL_WORDS_PATTERN = re.compile( - b"[a-zA-Z]*[\x80-\xFF]+[a-zA-Z]*[^a-zA-Z\x80-\xFF]?" -) - - -class CharSetProber: - - SHORTCUT_THRESHOLD = 0.95 - - def __init__(self, lang_filter: LanguageFilter = LanguageFilter.NONE) -> None: - self._state = ProbingState.DETECTING - self.active = True - self.lang_filter = lang_filter - self.logger = logging.getLogger(__name__) - - def reset(self) -> None: - self._state = ProbingState.DETECTING - - @property - def charset_name(self) -> Optional[str]: - return None - - @property - def language(self) -> Optional[str]: - raise NotImplementedError - - def feed(self, byte_str: Union[bytes, bytearray]) -> ProbingState: - raise NotImplementedError - - @property - def state(self) -> ProbingState: - return self._state - - def get_confidence(self) -> float: - return 0.0 - - @staticmethod - def filter_high_byte_only(buf: Union[bytes, bytearray]) -> bytes: - buf = re.sub(b"([\x00-\x7F])+", b" ", buf) - return buf - - @staticmethod - def filter_international_words(buf: Union[bytes, bytearray]) -> bytearray: - """ - We define three types of bytes: - alphabet: english alphabets [a-zA-Z] - international: international characters [\x80-\xFF] - marker: everything else [^a-zA-Z\x80-\xFF] - The input buffer can be thought to contain a series of words delimited - by markers. This function works to filter all words that contain at - least one international character. All contiguous sequences of markers - are replaced by a single space ascii character. - This filter applies to all scripts which do not use English characters. - """ - filtered = bytearray() - - # This regex expression filters out only words that have at-least one - # international character. The word may include one marker character at - # the end. - words = INTERNATIONAL_WORDS_PATTERN.findall(buf) - - for word in words: - filtered.extend(word[:-1]) - - # If the last character in the word is a marker, replace it with a - # space as markers shouldn't affect our analysis (they are used - # similarly across all languages and may thus have similar - # frequencies). - last_char = word[-1:] - if not last_char.isalpha() and last_char < b"\x80": - last_char = b" " - filtered.extend(last_char) - - return filtered - - @staticmethod - def remove_xml_tags(buf: Union[bytes, bytearray]) -> bytes: - """ - Returns a copy of ``buf`` that retains only the sequences of English - alphabet and high byte characters that are not between <> characters. - This filter can be applied to all scripts which contain both English - characters and extended ASCII characters, but is currently only used by - ``Latin1Prober``. 
- """ - filtered = bytearray() - in_tag = False - prev = 0 - buf = memoryview(buf).cast("c") - - for curr, buf_char in enumerate(buf): - # Check if we're coming out of or entering an XML tag - - # https://github.com/python/typeshed/issues/8182 - if buf_char == b">": # type: ignore[comparison-overlap] - prev = curr + 1 - in_tag = False - # https://github.com/python/typeshed/issues/8182 - elif buf_char == b"<": # type: ignore[comparison-overlap] - if curr > prev and not in_tag: - # Keep everything after last non-extended-ASCII, - # non-alphabetic character - filtered.extend(buf[prev:curr]) - # Output a space to delimit stretch we kept - filtered.extend(b" ") - in_tag = True - - # If we're not in a tag... - if not in_tag: - # Keep everything after last non-extended-ASCII, non-alphabetic - # character - filtered.extend(buf[prev:]) - - return filtered diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/relativedelta.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/relativedelta.py deleted file mode 100644 index a9e85f7e6cd7488e6b2f4b249d5cf6af314c3859..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/dateutil/relativedelta.py +++ /dev/null @@ -1,599 +0,0 @@ -# -*- coding: utf-8 -*- -import datetime -import calendar - -import operator -from math import copysign - -from six import integer_types -from warnings import warn - -from ._common import weekday - -MO, TU, WE, TH, FR, SA, SU = weekdays = tuple(weekday(x) for x in range(7)) - -__all__ = ["relativedelta", "MO", "TU", "WE", "TH", "FR", "SA", "SU"] - - -class relativedelta(object): - """ - The relativedelta type is designed to be applied to an existing datetime and - can replace specific components of that datetime, or represents an interval - of time. - - It is based on the specification of the excellent work done by M.-A. Lemburg - in his - `mx.DateTime <https://www.egenix.com/products/python/mxBase/mxDateTime/>`_ extension. - However, notice that this type does *NOT* implement the same algorithm as - his work. Do *NOT* expect it to behave like mx.DateTime's counterpart. - - There are two different ways to build a relativedelta instance. The - first one is passing it two date/datetime classes:: - - relativedelta(datetime1, datetime2) - - The second one is passing it any number of the following keyword arguments:: - - relativedelta(arg1=x,arg2=y,arg3=z...) - - year, month, day, hour, minute, second, microsecond: - Absolute information (argument is singular); adding or subtracting a - relativedelta with absolute information does not perform an arithmetic - operation, but rather REPLACES the corresponding value in the - original datetime with the value(s) in relativedelta. - - years, months, weeks, days, hours, minutes, seconds, microseconds: - Relative information, may be negative (argument is plural); adding - or subtracting a relativedelta with relative information performs - the corresponding arithmetic operation on the original datetime value - with the information in the relativedelta. - - weekday: - One of the weekday instances (MO, TU, etc) available in the - relativedelta module. These instances may receive a parameter N, - specifying the Nth weekday, which could be positive or negative - (like MO(+1) or MO(-2)). Not specifying it is the same as specifying - +1. You can also use an integer, where 0=MO. This argument is always - relative e.g. 
if the calculated date is already Monday, using MO(1) - or MO(-1) won't change the day. To effectively make it absolute, use - it in combination with the day argument (e.g. day=1, MO(1) for first - Monday of the month). - - leapdays: - Will add given days to the date found, if year is a leap - year, and the date found is post 28 of february. - - yearday, nlyearday: - Set the yearday or the non-leap year day (jump leap days). - These are converted to day/month/leapdays information. - - There are relative and absolute forms of the keyword - arguments. The plural is relative, and the singular is - absolute. For each argument in the order below, the absolute form - is applied first (by setting each attribute to that value) and - then the relative form (by adding the value to the attribute). - - The order of attributes considered when this relativedelta is - added to a datetime is: - - 1. Year - 2. Month - 3. Day - 4. Hours - 5. Minutes - 6. Seconds - 7. Microseconds - - Finally, weekday is applied, using the rule described above. - - For example - - >>> from datetime import datetime - >>> from dateutil.relativedelta import relativedelta, MO - >>> dt = datetime(2018, 4, 9, 13, 37, 0) - >>> delta = relativedelta(hours=25, day=1, weekday=MO(1)) - >>> dt + delta - datetime.datetime(2018, 4, 2, 14, 37) - - First, the day is set to 1 (the first of the month), then 25 hours - are added, to get to the 2nd day and 14th hour, finally the - weekday is applied, but since the 2nd is already a Monday there is - no effect. - - """ - - def __init__(self, dt1=None, dt2=None, - years=0, months=0, days=0, leapdays=0, weeks=0, - hours=0, minutes=0, seconds=0, microseconds=0, - year=None, month=None, day=None, weekday=None, - yearday=None, nlyearday=None, - hour=None, minute=None, second=None, microsecond=None): - - if dt1 and dt2: - # datetime is a subclass of date. 
So both must be date - if not (isinstance(dt1, datetime.date) and - isinstance(dt2, datetime.date)): - raise TypeError("relativedelta only diffs datetime/date") - - # We allow two dates, or two datetimes, so we coerce them to be - # of the same type - if (isinstance(dt1, datetime.datetime) != - isinstance(dt2, datetime.datetime)): - if not isinstance(dt1, datetime.datetime): - dt1 = datetime.datetime.fromordinal(dt1.toordinal()) - elif not isinstance(dt2, datetime.datetime): - dt2 = datetime.datetime.fromordinal(dt2.toordinal()) - - self.years = 0 - self.months = 0 - self.days = 0 - self.leapdays = 0 - self.hours = 0 - self.minutes = 0 - self.seconds = 0 - self.microseconds = 0 - self.year = None - self.month = None - self.day = None - self.weekday = None - self.hour = None - self.minute = None - self.second = None - self.microsecond = None - self._has_time = 0 - - # Get year / month delta between the two - months = (dt1.year - dt2.year) * 12 + (dt1.month - dt2.month) - self._set_months(months) - - # Remove the year/month delta so the timedelta is just well-defined - # time units (seconds, days and microseconds) - dtm = self.__radd__(dt2) - - # If we've overshot our target, make an adjustment - if dt1 < dt2: - compare = operator.gt - increment = 1 - else: - compare = operator.lt - increment = -1 - - while compare(dt1, dtm): - months += increment - self._set_months(months) - dtm = self.__radd__(dt2) - - # Get the timedelta between the "months-adjusted" date and dt1 - delta = dt1 - dtm - self.seconds = delta.seconds + delta.days * 86400 - self.microseconds = delta.microseconds - else: - # Check for non-integer values in integer-only quantities - if any(x is not None and x != int(x) for x in (years, months)): - raise ValueError("Non-integer years and months are " - "ambiguous and not currently supported.") - - # Relative information - self.years = int(years) - self.months = int(months) - self.days = days + weeks * 7 - self.leapdays = leapdays - self.hours = hours - self.minutes = minutes - self.seconds = seconds - self.microseconds = microseconds - - # Absolute information - self.year = year - self.month = month - self.day = day - self.hour = hour - self.minute = minute - self.second = second - self.microsecond = microsecond - - if any(x is not None and int(x) != x - for x in (year, month, day, hour, - minute, second, microsecond)): - # For now we'll deprecate floats - later it'll be an error. - warn("Non-integer value passed as absolute information. 
" + - "This is not a well-defined condition and will raise " + - "errors in future versions.", DeprecationWarning) - - if isinstance(weekday, integer_types): - self.weekday = weekdays[weekday] - else: - self.weekday = weekday - - yday = 0 - if nlyearday: - yday = nlyearday - elif yearday: - yday = yearday - if yearday > 59: - self.leapdays = -1 - if yday: - ydayidx = [31, 59, 90, 120, 151, 181, 212, - 243, 273, 304, 334, 366] - for idx, ydays in enumerate(ydayidx): - if yday <= ydays: - self.month = idx+1 - if idx == 0: - self.day = yday - else: - self.day = yday-ydayidx[idx-1] - break - else: - raise ValueError("invalid year day (%d)" % yday) - - self._fix() - - def _fix(self): - if abs(self.microseconds) > 999999: - s = _sign(self.microseconds) - div, mod = divmod(self.microseconds * s, 1000000) - self.microseconds = mod * s - self.seconds += div * s - if abs(self.seconds) > 59: - s = _sign(self.seconds) - div, mod = divmod(self.seconds * s, 60) - self.seconds = mod * s - self.minutes += div * s - if abs(self.minutes) > 59: - s = _sign(self.minutes) - div, mod = divmod(self.minutes * s, 60) - self.minutes = mod * s - self.hours += div * s - if abs(self.hours) > 23: - s = _sign(self.hours) - div, mod = divmod(self.hours * s, 24) - self.hours = mod * s - self.days += div * s - if abs(self.months) > 11: - s = _sign(self.months) - div, mod = divmod(self.months * s, 12) - self.months = mod * s - self.years += div * s - if (self.hours or self.minutes or self.seconds or self.microseconds - or self.hour is not None or self.minute is not None or - self.second is not None or self.microsecond is not None): - self._has_time = 1 - else: - self._has_time = 0 - - @property - def weeks(self): - return int(self.days / 7.0) - - @weeks.setter - def weeks(self, value): - self.days = self.days - (self.weeks * 7) + value * 7 - - def _set_months(self, months): - self.months = months - if abs(self.months) > 11: - s = _sign(self.months) - div, mod = divmod(self.months * s, 12) - self.months = mod * s - self.years = div * s - else: - self.years = 0 - - def normalized(self): - """ - Return a version of this object represented entirely using integer - values for the relative attributes. - - >>> relativedelta(days=1.5, hours=2).normalized() - relativedelta(days=+1, hours=+14) - - :return: - Returns a :class:`dateutil.relativedelta.relativedelta` object. 
- """ - # Cascade remainders down (rounding each to roughly nearest microsecond) - days = int(self.days) - - hours_f = round(self.hours + 24 * (self.days - days), 11) - hours = int(hours_f) - - minutes_f = round(self.minutes + 60 * (hours_f - hours), 10) - minutes = int(minutes_f) - - seconds_f = round(self.seconds + 60 * (minutes_f - minutes), 8) - seconds = int(seconds_f) - - microseconds = round(self.microseconds + 1e6 * (seconds_f - seconds)) - - # Constructor carries overflow back up with call to _fix() - return self.__class__(years=self.years, months=self.months, - days=days, hours=hours, minutes=minutes, - seconds=seconds, microseconds=microseconds, - leapdays=self.leapdays, year=self.year, - month=self.month, day=self.day, - weekday=self.weekday, hour=self.hour, - minute=self.minute, second=self.second, - microsecond=self.microsecond) - - def __add__(self, other): - if isinstance(other, relativedelta): - return self.__class__(years=other.years + self.years, - months=other.months + self.months, - days=other.days + self.days, - hours=other.hours + self.hours, - minutes=other.minutes + self.minutes, - seconds=other.seconds + self.seconds, - microseconds=(other.microseconds + - self.microseconds), - leapdays=other.leapdays or self.leapdays, - year=(other.year if other.year is not None - else self.year), - month=(other.month if other.month is not None - else self.month), - day=(other.day if other.day is not None - else self.day), - weekday=(other.weekday if other.weekday is not None - else self.weekday), - hour=(other.hour if other.hour is not None - else self.hour), - minute=(other.minute if other.minute is not None - else self.minute), - second=(other.second if other.second is not None - else self.second), - microsecond=(other.microsecond if other.microsecond - is not None else - self.microsecond)) - if isinstance(other, datetime.timedelta): - return self.__class__(years=self.years, - months=self.months, - days=self.days + other.days, - hours=self.hours, - minutes=self.minutes, - seconds=self.seconds + other.seconds, - microseconds=self.microseconds + other.microseconds, - leapdays=self.leapdays, - year=self.year, - month=self.month, - day=self.day, - weekday=self.weekday, - hour=self.hour, - minute=self.minute, - second=self.second, - microsecond=self.microsecond) - if not isinstance(other, datetime.date): - return NotImplemented - elif self._has_time and not isinstance(other, datetime.datetime): - other = datetime.datetime.fromordinal(other.toordinal()) - year = (self.year or other.year)+self.years - month = self.month or other.month - if self.months: - assert 1 <= abs(self.months) <= 12 - month += self.months - if month > 12: - year += 1 - month -= 12 - elif month < 1: - year -= 1 - month += 12 - day = min(calendar.monthrange(year, month)[1], - self.day or other.day) - repl = {"year": year, "month": month, "day": day} - for attr in ["hour", "minute", "second", "microsecond"]: - value = getattr(self, attr) - if value is not None: - repl[attr] = value - days = self.days - if self.leapdays and month > 2 and calendar.isleap(year): - days += self.leapdays - ret = (other.replace(**repl) - + datetime.timedelta(days=days, - hours=self.hours, - minutes=self.minutes, - seconds=self.seconds, - microseconds=self.microseconds)) - if self.weekday: - weekday, nth = self.weekday.weekday, self.weekday.n or 1 - jumpdays = (abs(nth) - 1) * 7 - if nth > 0: - jumpdays += (7 - ret.weekday() + weekday) % 7 - else: - jumpdays += (ret.weekday() - weekday) % 7 - jumpdays *= -1 - ret += 
datetime.timedelta(days=jumpdays) - return ret - - def __radd__(self, other): - return self.__add__(other) - - def __rsub__(self, other): - return self.__neg__().__radd__(other) - - def __sub__(self, other): - if not isinstance(other, relativedelta): - return NotImplemented # In case the other object defines __rsub__ - return self.__class__(years=self.years - other.years, - months=self.months - other.months, - days=self.days - other.days, - hours=self.hours - other.hours, - minutes=self.minutes - other.minutes, - seconds=self.seconds - other.seconds, - microseconds=self.microseconds - other.microseconds, - leapdays=self.leapdays or other.leapdays, - year=(self.year if self.year is not None - else other.year), - month=(self.month if self.month is not None else - other.month), - day=(self.day if self.day is not None else - other.day), - weekday=(self.weekday if self.weekday is not None else - other.weekday), - hour=(self.hour if self.hour is not None else - other.hour), - minute=(self.minute if self.minute is not None else - other.minute), - second=(self.second if self.second is not None else - other.second), - microsecond=(self.microsecond if self.microsecond - is not None else - other.microsecond)) - - def __abs__(self): - return self.__class__(years=abs(self.years), - months=abs(self.months), - days=abs(self.days), - hours=abs(self.hours), - minutes=abs(self.minutes), - seconds=abs(self.seconds), - microseconds=abs(self.microseconds), - leapdays=self.leapdays, - year=self.year, - month=self.month, - day=self.day, - weekday=self.weekday, - hour=self.hour, - minute=self.minute, - second=self.second, - microsecond=self.microsecond) - - def __neg__(self): - return self.__class__(years=-self.years, - months=-self.months, - days=-self.days, - hours=-self.hours, - minutes=-self.minutes, - seconds=-self.seconds, - microseconds=-self.microseconds, - leapdays=self.leapdays, - year=self.year, - month=self.month, - day=self.day, - weekday=self.weekday, - hour=self.hour, - minute=self.minute, - second=self.second, - microsecond=self.microsecond) - - def __bool__(self): - return not (not self.years and - not self.months and - not self.days and - not self.hours and - not self.minutes and - not self.seconds and - not self.microseconds and - not self.leapdays and - self.year is None and - self.month is None and - self.day is None and - self.weekday is None and - self.hour is None and - self.minute is None and - self.second is None and - self.microsecond is None) - # Compatibility with Python 2.x - __nonzero__ = __bool__ - - def __mul__(self, other): - try: - f = float(other) - except TypeError: - return NotImplemented - - return self.__class__(years=int(self.years * f), - months=int(self.months * f), - days=int(self.days * f), - hours=int(self.hours * f), - minutes=int(self.minutes * f), - seconds=int(self.seconds * f), - microseconds=int(self.microseconds * f), - leapdays=self.leapdays, - year=self.year, - month=self.month, - day=self.day, - weekday=self.weekday, - hour=self.hour, - minute=self.minute, - second=self.second, - microsecond=self.microsecond) - - __rmul__ = __mul__ - - def __eq__(self, other): - if not isinstance(other, relativedelta): - return NotImplemented - if self.weekday or other.weekday: - if not self.weekday or not other.weekday: - return False - if self.weekday.weekday != other.weekday.weekday: - return False - n1, n2 = self.weekday.n, other.weekday.n - if n1 != n2 and not ((not n1 or n1 == 1) and (not n2 or n2 == 1)): - return False - return (self.years == other.years and - 
self.months == other.months and - self.days == other.days and - self.hours == other.hours and - self.minutes == other.minutes and - self.seconds == other.seconds and - self.microseconds == other.microseconds and - self.leapdays == other.leapdays and - self.year == other.year and - self.month == other.month and - self.day == other.day and - self.hour == other.hour and - self.minute == other.minute and - self.second == other.second and - self.microsecond == other.microsecond) - - def __hash__(self): - return hash(( - self.weekday, - self.years, - self.months, - self.days, - self.hours, - self.minutes, - self.seconds, - self.microseconds, - self.leapdays, - self.year, - self.month, - self.day, - self.hour, - self.minute, - self.second, - self.microsecond, - )) - - def __ne__(self, other): - return not self.__eq__(other) - - def __div__(self, other): - try: - reciprocal = 1 / float(other) - except TypeError: - return NotImplemented - - return self.__mul__(reciprocal) - - __truediv__ = __div__ - - def __repr__(self): - l = [] - for attr in ["years", "months", "days", "leapdays", - "hours", "minutes", "seconds", "microseconds"]: - value = getattr(self, attr) - if value: - l.append("{attr}={value:+g}".format(attr=attr, value=value)) - for attr in ["year", "month", "day", "weekday", - "hour", "minute", "second", "microsecond"]: - value = getattr(self, attr) - if value is not None: - l.append("{attr}={value}".format(attr=attr, value=repr(value))) - return "{classname}({attrs})".format(classname=self.__class__.__name__, - attrs=", ".join(l)) - - -def _sign(x): - return int(copysign(1, x)) - -# vim:ts=4:sw=4:et diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/message_factory.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/message_factory.py deleted file mode 100644 index 74dd4a676cb54633ad3c4b2cab6a06fbeca7547d..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/google/protobuf/message_factory.py +++ /dev/null @@ -1,234 +0,0 @@ -# Protocol Buffers - Google's data interchange format -# Copyright 2008 Google Inc. All rights reserved. -# https://developers.google.com/protocol-buffers/ -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are -# met: -# -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above -# copyright notice, this list of conditions and the following disclaimer -# in the documentation and/or other materials provided with the -# distribution. -# * Neither the name of Google Inc. nor the names of its -# contributors may be used to endorse or promote products derived from -# this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS -# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT -# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR -# A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT -# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, -# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT -# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, -# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY -# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE -# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - -"""Provides a factory class for generating dynamic messages. - -The easiest way to use this class is if you have access to the FileDescriptor -protos containing the messages you want to create you can just do the following: - -message_classes = message_factory.GetMessages(iterable_of_file_descriptors) -my_proto_instance = message_classes['some.proto.package.MessageName']() -""" - -__author__ = 'matthewtoia@google.com (Matt Toia)' - -import warnings - -from google.protobuf.internal import api_implementation -from google.protobuf import descriptor_pool -from google.protobuf import message - -if api_implementation.Type() == 'python': - from google.protobuf.internal import python_message as message_impl -else: - from google.protobuf.pyext import cpp_message as message_impl # pylint: disable=g-import-not-at-top - - -# The type of all Message classes. -_GENERATED_PROTOCOL_MESSAGE_TYPE = message_impl.GeneratedProtocolMessageType - - -def GetMessageClass(descriptor): - """Obtains a proto2 message class based on the passed in descriptor. - - Passing a descriptor with a fully qualified name matching a previous - invocation will cause the same class to be returned. - - Args: - descriptor: The descriptor to build from. - - Returns: - A class describing the passed in descriptor. - """ - concrete_class = getattr(descriptor, '_concrete_class', None) - if concrete_class: - return concrete_class - return _InternalCreateMessageClass(descriptor) - - -def GetMessageClassesForFiles(files, pool): - """Gets all the messages from specified files. - - This will find and resolve dependencies, failing if the descriptor - pool cannot satisfy them. - - Args: - files: The file names to extract messages from. - pool: The descriptor pool to find the files including the dependent - files. - - Returns: - A dictionary mapping proto names to the message classes. - """ - result = {} - for file_name in files: - file_desc = pool.FindFileByName(file_name) - for desc in file_desc.message_types_by_name.values(): - result[desc.full_name] = GetMessageClass(desc) - - # While the extension FieldDescriptors are created by the descriptor pool, - # the python classes created in the factory need them to be registered - # explicitly, which is done below. - # - # The call to RegisterExtension will specifically check if the - # extension was already registered on the object and either - # ignore the registration if the original was the same, or raise - # an error if they were different. - - for extension in file_desc.extensions_by_name.values(): - extended_class = GetMessageClass(extension.containing_type) - extended_class.RegisterExtension(extension) - # Recursively load protos for extension field, in order to be able to - # fully represent the extension. This matches the behavior for regular - # fields too. - if extension.message_type: - GetMessageClass(extension.message_type) - return result - - -def _InternalCreateMessageClass(descriptor): - """Builds a proto2 message class based on the passed in descriptor. 
- - Args: - descriptor: The descriptor to build from. - - Returns: - A class describing the passed in descriptor. - """ - descriptor_name = descriptor.name - result_class = _GENERATED_PROTOCOL_MESSAGE_TYPE( - descriptor_name, - (message.Message,), - { - 'DESCRIPTOR': descriptor, - # If module not set, it wrongly points to message_factory module. - '__module__': None, - }) - for field in descriptor.fields: - if field.message_type: - GetMessageClass(field.message_type) - for extension in result_class.DESCRIPTOR.extensions: - extended_class = GetMessageClass(extension.containing_type) - extended_class.RegisterExtension(extension) - if extension.message_type: - GetMessageClass(extension.message_type) - return result_class - - -# Deprecated. Please use GetMessageClass() or GetMessageClassesForFiles() -# method above instead. -class MessageFactory(object): - """Factory for creating Proto2 messages from descriptors in a pool.""" - - def __init__(self, pool=None): - """Initializes a new factory.""" - self.pool = pool or descriptor_pool.DescriptorPool() - - def GetPrototype(self, descriptor): - """Obtains a proto2 message class based on the passed in descriptor. - - Passing a descriptor with a fully qualified name matching a previous - invocation will cause the same class to be returned. - - Args: - descriptor: The descriptor to build from. - - Returns: - A class describing the passed in descriptor. - """ - warnings.warn('MessageFactory class is deprecated. Please use ' - 'GetMessageClass() instead of MessageFactory.GetPrototype. ' - 'MessageFactory class will be removed after 2024.') - return GetMessageClass(descriptor) - - def CreatePrototype(self, descriptor): - """Builds a proto2 message class based on the passed in descriptor. - - Don't call this function directly, it always creates a new class. Call - GetMessageClass() instead. - - Args: - descriptor: The descriptor to build from. - - Returns: - A class describing the passed in descriptor. - """ - warnings.warn('Directly call CreatePrototype is wrong. Please use ' - 'GetMessageClass() method instead. Directly use ' - 'CreatePrototype will raise error after July 2023.') - return _InternalCreateMessageClass(descriptor) - - def GetMessages(self, files): - """Gets all the messages from a specified file. - - This will find and resolve dependencies, failing if the descriptor - pool cannot satisfy them. - - Args: - files: The file names to extract messages from. - - Returns: - A dictionary mapping proto names to the message classes. This will include - any dependent messages as well as any messages defined in the same file as - a specified message. - """ - warnings.warn('MessageFactory class is deprecated. Please use ' - 'GetMessageClassesForFiles() instead of ' - 'MessageFactory.GetMessages(). MessageFactory class ' - 'will be removed after 2024.') - return GetMessageClassesForFiles(files, self.pool) - - -def GetMessages(file_protos, pool=None): - """Builds a dictionary of all the messages available in a set of files. - - Args: - file_protos: Iterable of FileDescriptorProto to build messages out of. - pool: The descriptor pool to add the file protos. - - Returns: - A dictionary mapping proto names to the message classes. This will include - any dependent messages as well as any messages defined in the same file as - a specified message. - """ - # The cpp implementation of the protocol buffer library requires to add the - # message in topological order of the dependency graph. 
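  # For illustration, with hypothetical files A.proto (which imports B.proto)
  # and B.proto (which imports C.proto): the _AddFile() helper defined below
  # recurses into each not-yet-visited dependency first, so the pool receives
  # C, then B, then A -- every file is added only after the files it imports.
  # Popping visited entries out of file_by_name is what keeps the recursion
  # from looping on cyclic imports.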
- des_pool = pool or descriptor_pool.DescriptorPool() - file_by_name = {file_proto.name: file_proto for file_proto in file_protos} - def _AddFile(file_proto): - for dependency in file_proto.dependency: - if dependency in file_by_name: - # Remove from elements to be visited, in order to cut cycles. - _AddFile(file_by_name.pop(dependency)) - des_pool.Add(file_proto) - while file_by_name: - _AddFile(file_by_name.popitem()[1]) - return GetMessageClassesForFiles( - [file_proto.name for file_proto in file_protos], des_pool) diff --git a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/external_utils.py b/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/external_utils.py deleted file mode 100644 index 9a6064bd25da68c51ee9b09f3551e2a31fdb253b..0000000000000000000000000000000000000000 --- a/spaces/chuan-hd/law-assistant-chatbot/.venv/lib/python3.11/site-packages/gradio/external_utils.py +++ /dev/null @@ -1,140 +0,0 @@ -"""Utility function for gradio/external.py""" - -import base64 -import math -import operator -import re -import warnings -from typing import Dict, List, Tuple - -import requests -import yaml - -from gradio import components - -################## -# Helper functions for processing tabular data -################## - - -def get_tabular_examples(model_name: str) -> Dict[str, List[float]]: - readme = requests.get(f"https://huggingface.co/{model_name}/resolve/main/README.md") - if readme.status_code != 200: - warnings.warn(f"Cannot load examples from README for {model_name}", UserWarning) - example_data = {} - else: - yaml_regex = re.search( - "(?:^|[\r\n])---[\n\r]+([\\S\\s]*?)[\n\r]+---([\n\r]|$)", readme.text - ) - if yaml_regex is None: - example_data = {} - else: - example_yaml = next( - yaml.safe_load_all(readme.text[: yaml_regex.span()[-1]]) - ) - example_data = example_yaml.get("widget", {}).get("structuredData", {}) - if not example_data: - raise ValueError( - f"No example data found in README.md of {model_name} - Cannot build gradio demo. " - "See the README.md here: https://huggingface.co/scikit-learn/tabular-playground/blob/main/README.md " - "for a reference on how to provide example data to your model." 
- ) - # replace nan with string NaN for inference API - for data in example_data.values(): - for i, val in enumerate(data): - if isinstance(val, float) and math.isnan(val): - data[i] = "NaN" - return example_data - - -def cols_to_rows( - example_data: Dict[str, List[float]] -) -> Tuple[List[str], List[List[float]]]: - headers = list(example_data.keys()) - n_rows = max(len(example_data[header] or []) for header in headers) - data = [] - for row_index in range(n_rows): - row_data = [] - for header in headers: - col = example_data[header] or [] - if row_index >= len(col): - row_data.append("NaN") - else: - row_data.append(col[row_index]) - data.append(row_data) - return headers, data - - -def rows_to_cols(incoming_data: Dict) -> Dict[str, Dict[str, Dict[str, List[str]]]]: - data_column_wise = {} - for i, header in enumerate(incoming_data["headers"]): - data_column_wise[header] = [str(row[i]) for row in incoming_data["data"]] - return {"inputs": {"data": data_column_wise}} - - -################## -# Helper functions for processing other kinds of data -################## - - -def postprocess_label(scores: Dict) -> Dict: - sorted_pred = sorted(scores.items(), key=operator.itemgetter(1), reverse=True) - return { - "label": sorted_pred[0][0], - "confidences": [ - {"label": pred[0], "confidence": pred[1]} for pred in sorted_pred - ], - } - - -def encode_to_base64(r: requests.Response) -> str: - # Handles the different ways HF API returns the prediction - base64_repr = base64.b64encode(r.content).decode("utf-8") - data_prefix = ";base64," - # Case 1: base64 representation already includes data prefix - if data_prefix in base64_repr: - return base64_repr - else: - content_type = r.headers.get("content-type") - # Case 2: the data prefix is a key in the response - if content_type == "application/json": - try: - data = r.json()[0] - content_type = data["content-type"] - base64_repr = data["blob"] - except KeyError as ke: - raise ValueError( - "Cannot determine content type returned by external API." 
- ) from ke - # Case 3: the data prefix is included in the response headers - else: - pass - new_base64 = f"data:{content_type};base64,{base64_repr}" - return new_base64 - - -################## -# Helper function for cleaning up an Interface loaded from HF Spaces -################## - - -def streamline_spaces_interface(config: Dict) -> Dict: - """Streamlines the interface config dictionary to remove unnecessary keys.""" - config["inputs"] = [ - components.get_component_instance(component) - for component in config["input_components"] - ] - config["outputs"] = [ - components.get_component_instance(component) - for component in config["output_components"] - ] - parameters = { - "article", - "description", - "flagging_options", - "inputs", - "outputs", - "title", - } - config = {k: config[k] for k in parameters} - return config diff --git a/spaces/chuxiaojie/NAFNet/app.py b/spaces/chuxiaojie/NAFNet/app.py deleted file mode 100644 index 0887b640ebac13698e3394619b2567bbda15b7ff..0000000000000000000000000000000000000000 --- a/spaces/chuxiaojie/NAFNet/app.py +++ /dev/null @@ -1,43 +0,0 @@ -import gradio as gr -import os - - -os.system("git clone https://github.com/megvii-research/NAFNet") -os.system("mv NAFNet/* ./") -os.system("mv *.pth experiments/pretrained_models/") -os.system("python3 setup.py develop --no_cuda_ext --user") - - -def inference(image, task): - if not os.path.exists('tmp'): - os.system('mkdir tmp') - image.save("tmp/lq_image.png", "PNG") - - if task == 'Denoising': - os.system("python basicsr/demo.py -opt options/test/SIDD/NAFNet-width64.yml --input_path ./tmp/lq_image.png --output_path ./tmp/image.png") - - if task == 'Deblurring': - os.system("python basicsr/demo.py -opt options/test/REDS/NAFNet-width64.yml --input_path ./tmp/lq_image.png --output_path ./tmp/image.png") - - return 'tmp/image.png' - -title = "NAFNet" -description = "Gradio demo for <b>NAFNet: Nonlinear Activation Free Network for Image Restoration</b>. NAFNet achieves state-of-the-art performance on three tasks: image denoising, image debluring and stereo image super-resolution (SR). See the paper and project page for detailed results below. Here, we provide a demo for image denoise and deblur. To use it, simply upload your image, or click one of the examples to load them. Inference needs some time since this demo uses CPU." 
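# A minimal usage sketch (illustrative, assuming the NAFNet checkout and the
# pretrained weights prepared by the os.system() calls above are in place):
# inference() can also be exercised directly, outside the Gradio UI, with a
# PIL image and a task name, e.g.
#
#     from PIL import Image
#     restored_path = inference(Image.open("demo/noisy.png"), "Denoising")
#     # restored_path -> 'tmp/image.png', the file written by basicsr/demo.py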
-article = "<p style='text-align: center'><a href='https://arxiv.org/abs/2204.04676' target='_blank'>Simple Baselines for Image Restoration</a> | <a href='https://arxiv.org/abs/2204.08714' target='_blank'>NAFSSR: Stereo Image Super-Resolution Using NAFNet</a> | <a href='https://github.com/megvii-research/NAFNet' target='_blank'> Github Repo</a></p>" - - -examples = [['demo/noisy.png', 'Denoising'], - ['demo/blurry.jpg', 'Deblurring']] - -iface = gr.Interface( - inference, - [gr.inputs.Image(type="pil", label="Input"), - gr.inputs.Radio(["Denoising", "Deblurring"], default="Denoising", label='task'),], - gr.outputs.Image(type="file", label="Output"), - title=title, - description=description, - article=article, - enable_queue=True, - examples=examples - ) -iface.launch(debug=True,enable_queue=True) \ No newline at end of file diff --git a/spaces/cihyFjudo/fairness-paper-search/HACK HDD Regenerator 2011 DC 08.05.2013 Crack.md b/spaces/cihyFjudo/fairness-paper-search/HACK HDD Regenerator 2011 DC 08.05.2013 Crack.md deleted file mode 100644 index 2948c91bc780445d488d0a95cefb6f41edfaf5f8..0000000000000000000000000000000000000000 --- a/spaces/cihyFjudo/fairness-paper-search/HACK HDD Regenerator 2011 DC 08.05.2013 Crack.md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>HACK HDD Regenerator 2011 DC 08.05.2013 Crack</h2><br /><p><b><b>DOWNLOAD</b> » <a href="https://tinurli.com/2uwknB">https://tinurli.com/2uwknB</a></b></p><br /><br /> -<br /> - aaccfb2cb3<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/cleanmaster/akagi-sovits3/preprocess_hubert_f0.py b/spaces/cleanmaster/akagi-sovits3/preprocess_hubert_f0.py deleted file mode 100644 index 4fe7f21541acb01537797f430d53b3c0e63279e1..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/akagi-sovits3/preprocess_hubert_f0.py +++ /dev/null @@ -1,106 +0,0 @@ -import os -import argparse - -import torch -import json -from glob import glob - -from pyworld import pyworld -from tqdm import tqdm -from scipy.io import wavfile - -import utils -from mel_processing import mel_spectrogram_torch -#import h5py -import logging -logging.getLogger('numba').setLevel(logging.WARNING) - -import parselmouth -import librosa -import numpy as np - - -def get_f0(path,p_len=None, f0_up_key=0): - x, _ = librosa.load(path, 32000) - if p_len is None: - p_len = x.shape[0]//320 - else: - assert abs(p_len-x.shape[0]//320) < 3, (path, p_len, x.shape) - time_step = 320 / 32000 * 1000 - f0_min = 50 - f0_max = 1100 - f0_mel_min = 1127 * np.log(1 + f0_min / 700) - f0_mel_max = 1127 * np.log(1 + f0_max / 700) - - f0 = parselmouth.Sound(x, 32000).to_pitch_ac( - time_step=time_step / 1000, voicing_threshold=0.6, - pitch_floor=f0_min, pitch_ceiling=f0_max).selected_array['frequency'] - - pad_size=(p_len - len(f0) + 1) // 2 - if(pad_size>0 or p_len - len(f0) - pad_size>0): - f0 = np.pad(f0,[[pad_size,p_len - len(f0) - pad_size]], mode='constant') - - f0bak = f0.copy() - f0 *= pow(2, f0_up_key / 12) - f0_mel = 1127 * np.log(1 + f0 / 700) - f0_mel[f0_mel > 0] = (f0_mel[f0_mel > 0] - f0_mel_min) * 254 / (f0_mel_max - f0_mel_min) + 1 - f0_mel[f0_mel <= 1] = 1 - f0_mel[f0_mel > 255] = 255 - f0_coarse = np.rint(f0_mel).astype(np.int) - return f0_coarse, f0bak - -def resize2d(x, target_len): - source = np.array(x) - source[source<0.001] = np.nan - target = np.interp(np.arange(0, len(source)*target_len, len(source))/ target_len, np.arange(0, len(source)), source) - res = np.nan_to_num(target) - return res - -def compute_f0(path, c_len): - x, sr = librosa.load(path, sr=32000) - f0, t = pyworld.dio( - 
x.astype(np.double), - fs=sr, - f0_ceil=800, - frame_period=1000 * 320 / sr, - ) - f0 = pyworld.stonemask(x.astype(np.double), f0, t, 32000) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - assert abs(c_len - x.shape[0]//320) < 3, (c_len, f0.shape) - - return None, resize2d(f0, c_len) - - -def process(filename): - print(filename) - save_name = filename+".soft.pt" - if not os.path.exists(save_name): - devive = torch.device("cuda" if torch.cuda.is_available() else "cpu") - wav, _ = librosa.load(filename, sr=16000) - wav = torch.from_numpy(wav).unsqueeze(0).to(devive) - c = utils.get_hubert_content(hmodel, wav) - torch.save(c.cpu(), save_name) - else: - c = torch.load(save_name) - f0path = filename+".f0.npy" - if not os.path.exists(f0path): - cf0, f0 = compute_f0(filename, c.shape[-1] * 2) - np.save(f0path, f0) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--in_dir", type=str, default="dataset/32k", help="path to input dir") - args = parser.parse_args() - - print("Loading hubert for content...") - hmodel = utils.get_hubert_model(0 if torch.cuda.is_available() else None) - print("Loaded hubert.") - - filenames = glob(f'{args.in_dir}/*/*.wav', recursive=True)#[:10] - - for filename in tqdm(filenames): - process(filename) - \ No newline at end of file diff --git a/spaces/cleanmaster/so-vits-svc-akagi/share.py b/spaces/cleanmaster/so-vits-svc-akagi/share.py deleted file mode 100644 index b9c3150693c47115bd9193f5fea5e42ab830732a..0000000000000000000000000000000000000000 --- a/spaces/cleanmaster/so-vits-svc-akagi/share.py +++ /dev/null @@ -1,47 +0,0 @@ -from inference.infer_tool_grad import VitsSvc -import gradio as gr -import os - -class VitsGradio: - def __init__(self): - self.so = VitsSvc() - self.lspk = [] - self.modelPaths = [] - for root,dirs,files in os.walk("checkpoints"): - for dir in dirs: - self.modelPaths.append(dir) - with gr.Blocks() as self.Vits: - with gr.Tab("VoiceConversion"): - with gr.Row(visible=False) as self.VoiceConversion: - with gr.Column(): - with gr.Row(): - with gr.Column(): - self.srcaudio = gr.Audio(label = "输入音频") - self.btnVC = gr.Button("说话人转换") - with gr.Column(): - self.dsid = gr.Dropdown(label = "目标角色", choices = self.lspk) - self.tran = gr.Slider(label = "升降调", maximum = 60, minimum = -60, step = 1, value = 0) - self.th = gr.Slider(label = "切片阈值", maximum = 32767, minimum = -32768, step = 0.1, value = -40) - with gr.Row(): - self.VCOutputs = gr.Audio() - self.btnVC.click(self.so.inference, inputs=[self.srcaudio,self.dsid,self.tran,self.th], outputs=[self.VCOutputs]) - with gr.Tab("SelectModel"): - with gr.Column(): - modelstrs = gr.Dropdown(label = "模型", choices = self.modelPaths, value = self.modelPaths[0], type = "value") - devicestrs = gr.Dropdown(label = "设备", choices = ["cpu","cuda"], value = "cpu", type = "value") - btnMod = gr.Button("载入模型") - btnMod.click(self.loadModel, inputs=[modelstrs,devicestrs], outputs = [self.dsid,self.VoiceConversion]) - - def loadModel(self, path, device): - self.lspk = [] - self.so.set_device(device) - self.so.loadCheckpoint(path) - for spk, sid in self.so.hps.spk.items(): - self.lspk.append(spk) - VChange = gr.update(visible = True) - SDChange = gr.update(choices = self.lspk, value = self.lspk[0]) - return [SDChange,VChange] - -grVits = VitsGradio() - -grVits.Vits.launch(share = True) \ No newline at end of file diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/appdirs.py 
b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/appdirs.py deleted file mode 100644 index 2acd1debeb1d3b981fc577b777a77106c765c391..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/appdirs.py +++ /dev/null @@ -1,608 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -# Copyright (c) 2005-2010 ActiveState Software Inc. -# Copyright (c) 2013 Eddy Petrișor - -"""Utilities for determining application-specific dirs. - -See <http://github.com/ActiveState/appdirs> for details and usage. -""" -# Dev Notes: -# - MSDN on where to store app data files: -# http://support.microsoft.com/default.aspx?scid=kb;en-us;310294#XSLTH3194121123120121120120 -# - Mac OS X: http://developer.apple.com/documentation/MacOSX/Conceptual/BPFileSystem/index.html -# - XDG spec for Un*x: http://standards.freedesktop.org/basedir-spec/basedir-spec-latest.html - -__version__ = "1.4.4" -__version_info__ = tuple(int(segment) for segment in __version__.split(".")) - - -import sys -import os - -PY3 = sys.version_info[0] == 3 - -if PY3: - unicode = str - -if sys.platform.startswith('java'): - import platform - os_name = platform.java_ver()[3][0] - if os_name.startswith('Windows'): # "Windows XP", "Windows 7", etc. - system = 'win32' - elif os_name.startswith('Mac'): # "Mac OS X", etc. - system = 'darwin' - else: # "Linux", "SunOS", "FreeBSD", etc. - # Setting this to "linux2" is not ideal, but only Windows or Mac - # are actually checked for and the rest of the module expects - # *sys.platform* style strings. - system = 'linux2' -else: - system = sys.platform - - - -def user_data_dir(appname=None, appauthor=None, version=None, roaming=False): - r"""Return full path to the user-specific data dir for this application. - - "appname" is the name of application. - If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this - would typically be "<major>.<minor>". - Only applied when appname is present. - "roaming" (boolean, default False) can be set True to use the Windows - roaming appdata directory. That means that for users on a Windows - network setup for roaming profiles, this user data will be - sync'd on login. See - <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx> - for a discussion of issues. - - Typical user data directories are: - Mac OS X: ~/Library/Application Support/<AppName> - Unix: ~/.local/share/<AppName> # or in $XDG_DATA_HOME, if defined - Win XP (not roaming): C:\Documents and Settings\<username>\Application Data\<AppAuthor>\<AppName> - Win XP (roaming): C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName> - Win 7 (not roaming): C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName> - Win 7 (roaming): C:\Users\<username>\AppData\Roaming\<AppAuthor>\<AppName> - - For Unix, we follow the XDG spec and support $XDG_DATA_HOME. - That means, by default "~/.local/share/<AppName>". 
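    For example, on a typical Linux system with XDG_DATA_HOME unset
    (the app/author names below are purely illustrative):

        user_data_dir("MyApp", "MyCompany")           # ~/.local/share/MyApp
        user_data_dir("MyApp", "MyCompany", "1.0")    # ~/.local/share/MyApp/1.0

    (On Unix the "appauthor" argument is ignored; it only affects the
    Windows paths listed above.)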
- """ - if system == "win32": - if appauthor is None: - appauthor = appname - const = roaming and "CSIDL_APPDATA" or "CSIDL_LOCAL_APPDATA" - path = os.path.normpath(_get_win_folder(const)) - if appname: - if appauthor is not False: - path = os.path.join(path, appauthor, appname) - else: - path = os.path.join(path, appname) - elif system == 'darwin': - path = os.path.expanduser('~/Library/Application Support/') - if appname: - path = os.path.join(path, appname) - else: - path = os.getenv('XDG_DATA_HOME', os.path.expanduser("~/.local/share")) - if appname: - path = os.path.join(path, appname) - if appname and version: - path = os.path.join(path, version) - return path - - -def site_data_dir(appname=None, appauthor=None, version=None, multipath=False): - r"""Return full path to the user-shared data dir for this application. - - "appname" is the name of application. - If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this - would typically be "<major>.<minor>". - Only applied when appname is present. - "multipath" is an optional parameter only applicable to *nix - which indicates that the entire list of data dirs should be - returned. By default, the first item from XDG_DATA_DIRS is - returned, or '/usr/local/share/<AppName>', - if XDG_DATA_DIRS is not set - - Typical site data directories are: - Mac OS X: /Library/Application Support/<AppName> - Unix: /usr/local/share/<AppName> or /usr/share/<AppName> - Win XP: C:\Documents and Settings\All Users\Application Data\<AppAuthor>\<AppName> - Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.) - Win 7: C:\ProgramData\<AppAuthor>\<AppName> # Hidden, but writeable on Win 7. - - For Unix, this is using the $XDG_DATA_DIRS[0] default. - - WARNING: Do not use this on Windows. See the Vista-Fail note above for why. - """ - if system == "win32": - if appauthor is None: - appauthor = appname - path = os.path.normpath(_get_win_folder("CSIDL_COMMON_APPDATA")) - if appname: - if appauthor is not False: - path = os.path.join(path, appauthor, appname) - else: - path = os.path.join(path, appname) - elif system == 'darwin': - path = os.path.expanduser('/Library/Application Support') - if appname: - path = os.path.join(path, appname) - else: - # XDG default for $XDG_DATA_DIRS - # only first, if multipath is False - path = os.getenv('XDG_DATA_DIRS', - os.pathsep.join(['/usr/local/share', '/usr/share'])) - pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)] - if appname: - if version: - appname = os.path.join(appname, version) - pathlist = [os.sep.join([x, appname]) for x in pathlist] - - if multipath: - path = os.pathsep.join(pathlist) - else: - path = pathlist[0] - return path - - if appname and version: - path = os.path.join(path, version) - return path - - -def user_config_dir(appname=None, appauthor=None, version=None, roaming=False): - r"""Return full path to the user-specific config dir for this application. - - "appname" is the name of application. - If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. 
Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this - would typically be "<major>.<minor>". - Only applied when appname is present. - "roaming" (boolean, default False) can be set True to use the Windows - roaming appdata directory. That means that for users on a Windows - network setup for roaming profiles, this user data will be - sync'd on login. See - <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx> - for a discussion of issues. - - Typical user config directories are: - Mac OS X: same as user_data_dir - Unix: ~/.config/<AppName> # or in $XDG_CONFIG_HOME, if defined - Win *: same as user_data_dir - - For Unix, we follow the XDG spec and support $XDG_CONFIG_HOME. - That means, by default "~/.config/<AppName>". - """ - if system in ["win32", "darwin"]: - path = user_data_dir(appname, appauthor, None, roaming) - else: - path = os.getenv('XDG_CONFIG_HOME', os.path.expanduser("~/.config")) - if appname: - path = os.path.join(path, appname) - if appname and version: - path = os.path.join(path, version) - return path - - -def site_config_dir(appname=None, appauthor=None, version=None, multipath=False): - r"""Return full path to the user-shared data dir for this application. - - "appname" is the name of application. - If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this - would typically be "<major>.<minor>". - Only applied when appname is present. - "multipath" is an optional parameter only applicable to *nix - which indicates that the entire list of config dirs should be - returned. By default, the first item from XDG_CONFIG_DIRS is - returned, or '/etc/xdg/<AppName>', if XDG_CONFIG_DIRS is not set - - Typical site config directories are: - Mac OS X: same as site_data_dir - Unix: /etc/xdg/<AppName> or $XDG_CONFIG_DIRS[i]/<AppName> for each value in - $XDG_CONFIG_DIRS - Win *: same as site_data_dir - Vista: (Fail! "C:\ProgramData" is a hidden *system* directory on Vista.) - - For Unix, this is using the $XDG_CONFIG_DIRS[0] default, if multipath=False - - WARNING: Do not use this on Windows. See the Vista-Fail note above for why. - """ - if system in ["win32", "darwin"]: - path = site_data_dir(appname, appauthor) - if appname and version: - path = os.path.join(path, version) - else: - # XDG default for $XDG_CONFIG_DIRS - # only first, if multipath is False - path = os.getenv('XDG_CONFIG_DIRS', '/etc/xdg') - pathlist = [os.path.expanduser(x.rstrip(os.sep)) for x in path.split(os.pathsep)] - if appname: - if version: - appname = os.path.join(appname, version) - pathlist = [os.sep.join([x, appname]) for x in pathlist] - - if multipath: - path = os.pathsep.join(pathlist) - else: - path = pathlist[0] - return path - - -def user_cache_dir(appname=None, appauthor=None, version=None, opinion=True): - r"""Return full path to the user-specific cache dir for this application. - - "appname" is the name of application. 
- If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this - would typically be "<major>.<minor>". - Only applied when appname is present. - "opinion" (boolean) can be False to disable the appending of - "Cache" to the base app data dir for Windows. See - discussion below. - - Typical user cache directories are: - Mac OS X: ~/Library/Caches/<AppName> - Unix: ~/.cache/<AppName> (XDG default) - Win XP: C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Cache - Vista: C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Cache - - On Windows the only suggestion in the MSDN docs is that local settings go in - the `CSIDL_LOCAL_APPDATA` directory. This is identical to the non-roaming - app data dir (the default returned by `user_data_dir` above). Apps typically - put cache data somewhere *under* the given dir here. Some examples: - ...\Mozilla\Firefox\Profiles\<ProfileName>\Cache - ...\Acme\SuperApp\Cache\1.0 - OPINION: This function appends "Cache" to the `CSIDL_LOCAL_APPDATA` value. - This can be disabled with the `opinion=False` option. - """ - if system == "win32": - if appauthor is None: - appauthor = appname - path = os.path.normpath(_get_win_folder("CSIDL_LOCAL_APPDATA")) - if appname: - if appauthor is not False: - path = os.path.join(path, appauthor, appname) - else: - path = os.path.join(path, appname) - if opinion: - path = os.path.join(path, "Cache") - elif system == 'darwin': - path = os.path.expanduser('~/Library/Caches') - if appname: - path = os.path.join(path, appname) - else: - path = os.getenv('XDG_CACHE_HOME', os.path.expanduser('~/.cache')) - if appname: - path = os.path.join(path, appname) - if appname and version: - path = os.path.join(path, version) - return path - - -def user_state_dir(appname=None, appauthor=None, version=None, roaming=False): - r"""Return full path to the user-specific state dir for this application. - - "appname" is the name of application. - If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this - would typically be "<major>.<minor>". - Only applied when appname is present. - "roaming" (boolean, default False) can be set True to use the Windows - roaming appdata directory. That means that for users on a Windows - network setup for roaming profiles, this user data will be - sync'd on login. See - <http://technet.microsoft.com/en-us/library/cc766489(WS.10).aspx> - for a discussion of issues. - - Typical user state directories are: - Mac OS X: same as user_data_dir - Unix: ~/.local/state/<AppName> # or in $XDG_STATE_HOME, if defined - Win *: same as user_data_dir - - For Unix, we follow this Debian proposal <https://wiki.debian.org/XDGBaseDirectorySpecification#state> - to extend the XDG spec and support $XDG_STATE_HOME. 
- - That means, by default "~/.local/state/<AppName>". - """ - if system in ["win32", "darwin"]: - path = user_data_dir(appname, appauthor, None, roaming) - else: - path = os.getenv('XDG_STATE_HOME', os.path.expanduser("~/.local/state")) - if appname: - path = os.path.join(path, appname) - if appname and version: - path = os.path.join(path, version) - return path - - -def user_log_dir(appname=None, appauthor=None, version=None, opinion=True): - r"""Return full path to the user-specific log dir for this application. - - "appname" is the name of application. - If None, just the system directory is returned. - "appauthor" (only used on Windows) is the name of the - appauthor or distributing body for this application. Typically - it is the owning company name. This falls back to appname. You may - pass False to disable it. - "version" is an optional version path element to append to the - path. You might want to use this if you want multiple versions - of your app to be able to run independently. If used, this - would typically be "<major>.<minor>". - Only applied when appname is present. - "opinion" (boolean) can be False to disable the appending of - "Logs" to the base app data dir for Windows, and "log" to the - base cache dir for Unix. See discussion below. - - Typical user log directories are: - Mac OS X: ~/Library/Logs/<AppName> - Unix: ~/.cache/<AppName>/log # or under $XDG_CACHE_HOME if defined - Win XP: C:\Documents and Settings\<username>\Local Settings\Application Data\<AppAuthor>\<AppName>\Logs - Vista: C:\Users\<username>\AppData\Local\<AppAuthor>\<AppName>\Logs - - On Windows the only suggestion in the MSDN docs is that local settings - go in the `CSIDL_LOCAL_APPDATA` directory. (Note: I'm interested in - examples of what some windows apps use for a logs dir.) - - OPINION: This function appends "Logs" to the `CSIDL_LOCAL_APPDATA` - value for Windows and appends "log" to the user cache dir for Unix. - This can be disabled with the `opinion=False` option. 
- """ - if system == "darwin": - path = os.path.join( - os.path.expanduser('~/Library/Logs'), - appname) - elif system == "win32": - path = user_data_dir(appname, appauthor, version) - version = False - if opinion: - path = os.path.join(path, "Logs") - else: - path = user_cache_dir(appname, appauthor, version) - version = False - if opinion: - path = os.path.join(path, "log") - if appname and version: - path = os.path.join(path, version) - return path - - -class AppDirs(object): - """Convenience wrapper for getting application dirs.""" - def __init__(self, appname=None, appauthor=None, version=None, - roaming=False, multipath=False): - self.appname = appname - self.appauthor = appauthor - self.version = version - self.roaming = roaming - self.multipath = multipath - - @property - def user_data_dir(self): - return user_data_dir(self.appname, self.appauthor, - version=self.version, roaming=self.roaming) - - @property - def site_data_dir(self): - return site_data_dir(self.appname, self.appauthor, - version=self.version, multipath=self.multipath) - - @property - def user_config_dir(self): - return user_config_dir(self.appname, self.appauthor, - version=self.version, roaming=self.roaming) - - @property - def site_config_dir(self): - return site_config_dir(self.appname, self.appauthor, - version=self.version, multipath=self.multipath) - - @property - def user_cache_dir(self): - return user_cache_dir(self.appname, self.appauthor, - version=self.version) - - @property - def user_state_dir(self): - return user_state_dir(self.appname, self.appauthor, - version=self.version) - - @property - def user_log_dir(self): - return user_log_dir(self.appname, self.appauthor, - version=self.version) - - -#---- internal support stuff - -def _get_win_folder_from_registry(csidl_name): - """This is a fallback technique at best. I'm not sure if using the - registry for this guarantees us the correct answer for all CSIDL_* - names. - """ - if PY3: - import winreg as _winreg - else: - import _winreg - - shell_folder_name = { - "CSIDL_APPDATA": "AppData", - "CSIDL_COMMON_APPDATA": "Common AppData", - "CSIDL_LOCAL_APPDATA": "Local AppData", - }[csidl_name] - - key = _winreg.OpenKey( - _winreg.HKEY_CURRENT_USER, - r"Software\Microsoft\Windows\CurrentVersion\Explorer\Shell Folders" - ) - dir, type = _winreg.QueryValueEx(key, shell_folder_name) - return dir - - -def _get_win_folder_with_pywin32(csidl_name): - from win32com.shell import shellcon, shell - dir = shell.SHGetFolderPath(0, getattr(shellcon, csidl_name), 0, 0) - # Try to make this a unicode path because SHGetFolderPath does - # not return unicode strings when there is unicode data in the - # path. - try: - dir = unicode(dir) - - # Downgrade to short path name if have highbit chars. See - # <http://bugs.activestate.com/show_bug.cgi?id=85099>. - has_high_char = False - for c in dir: - if ord(c) > 255: - has_high_char = True - break - if has_high_char: - try: - import win32api - dir = win32api.GetShortPathName(dir) - except ImportError: - pass - except UnicodeError: - pass - return dir - - -def _get_win_folder_with_ctypes(csidl_name): - import ctypes - - csidl_const = { - "CSIDL_APPDATA": 26, - "CSIDL_COMMON_APPDATA": 35, - "CSIDL_LOCAL_APPDATA": 28, - }[csidl_name] - - buf = ctypes.create_unicode_buffer(1024) - ctypes.windll.shell32.SHGetFolderPathW(None, csidl_const, None, 0, buf) - - # Downgrade to short path name if have highbit chars. See - # <http://bugs.activestate.com/show_bug.cgi?id=85099>. 
- has_high_char = False - for c in buf: - if ord(c) > 255: - has_high_char = True - break - if has_high_char: - buf2 = ctypes.create_unicode_buffer(1024) - if ctypes.windll.kernel32.GetShortPathNameW(buf.value, buf2, 1024): - buf = buf2 - - return buf.value - -def _get_win_folder_with_jna(csidl_name): - import array - from com.sun import jna - from com.sun.jna.platform import win32 - - buf_size = win32.WinDef.MAX_PATH * 2 - buf = array.zeros('c', buf_size) - shell = win32.Shell32.INSTANCE - shell.SHGetFolderPath(None, getattr(win32.ShlObj, csidl_name), None, win32.ShlObj.SHGFP_TYPE_CURRENT, buf) - dir = jna.Native.toString(buf.tostring()).rstrip("\0") - - # Downgrade to short path name if have highbit chars. See - # <http://bugs.activestate.com/show_bug.cgi?id=85099>. - has_high_char = False - for c in dir: - if ord(c) > 255: - has_high_char = True - break - if has_high_char: - buf = array.zeros('c', buf_size) - kernel = win32.Kernel32.INSTANCE - if kernel.GetShortPathName(dir, buf, buf_size): - dir = jna.Native.toString(buf.tostring()).rstrip("\0") - - return dir - -if system == "win32": - try: - import win32com.shell - _get_win_folder = _get_win_folder_with_pywin32 - except ImportError: - try: - from ctypes import windll - _get_win_folder = _get_win_folder_with_ctypes - except ImportError: - try: - import com.sun.jna - _get_win_folder = _get_win_folder_with_jna - except ImportError: - _get_win_folder = _get_win_folder_from_registry - - -#---- self test code - -if __name__ == "__main__": - appname = "MyApp" - appauthor = "MyCompany" - - props = ("user_data_dir", - "user_config_dir", - "user_cache_dir", - "user_state_dir", - "user_log_dir", - "site_data_dir", - "site_config_dir") - - print("-- app dirs %s --" % __version__) - - print("-- app dirs (with optional 'version')") - dirs = AppDirs(appname, appauthor, version="1.0") - for prop in props: - print("%s: %s" % (prop, getattr(dirs, prop))) - - print("\n-- app dirs (without optional 'version')") - dirs = AppDirs(appname, appauthor) - for prop in props: - print("%s: %s" % (prop, getattr(dirs, prop))) - - print("\n-- app dirs (without optional 'appauthor')") - dirs = AppDirs(appname) - for prop in props: - print("%s: %s" % (prop, getattr(dirs, prop))) - - print("\n-- app dirs (with disabled 'appauthor')") - dirs = AppDirs(appname, appauthor=False) - for prop in props: - print("%s: %s" % (prop, getattr(dirs, prop))) diff --git a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/qu2cu/__init__.py b/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/qu2cu/__init__.py deleted file mode 100644 index ce357417c7139664a194a6826220889f5ed59894..0000000000000000000000000000000000000000 --- a/spaces/cloudtheboi/Lofi4All/.pythonlibs/lib/python3.10/site-packages/fontTools/qu2cu/__init__.py +++ /dev/null @@ -1,15 +0,0 @@ -# Copyright 2016 Google Inc. All Rights Reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- -from .qu2cu import * diff --git a/spaces/codeparrot/code-complexity-predictor/app.py b/spaces/codeparrot/code-complexity-predictor/app.py deleted file mode 100644 index 389dee8997e065885f81e544e95797c1ee5d9c89..0000000000000000000000000000000000000000 --- a/spaces/codeparrot/code-complexity-predictor/app.py +++ /dev/null @@ -1,48 +0,0 @@ -import gradio as gr -from datasets import ClassLabel -from transformers import AutoTokenizer, AutoModelForSequenceClassification, pipeline - - -title = "BigO" -description = "In this space we predict the complexity of Java code with [UniXcoder-java-complexity-prediction](https://huggingface.co/codeparrot/unixcoder-java-complexity-prediction),\ - a multilingual model for code, fine-tuned on [CodeComplex](https://huggingface.co/datasets/codeparrot/codecomplex), a dataset for complexity prediction of Java code." - -#add examples -example = [['int n = 1000;\nSystem.out.println("Hey - your input is: " + n);'], - ['class GFG {\n \n public static void main(String[] args)\n {\n int i, n = 8;\n for (i = 1; i <= n; i++) {\n System.out.printf("Hello World !!!\n");\n }\n }\n}'], - ['import java.io.*;\nimport java.util.*;\n\npublic class C125 {\n\tpublic static void main(String[] args) throws IOException {\n\t\tBufferedReader r = new BufferedReader(new InputStreamReader(System.in));\n\t\tString s = r.readLine();\n\t\tint n = new Integer(s);\n\t\tSystem.out.println("0 0 "+n);\n\t}\n}\n']] - -# model to be changed to the finetuned one -tokenizer = AutoTokenizer.from_pretrained("codeparrot/unixcoder-java-complexity-prediction") -model = AutoModelForSequenceClassification.from_pretrained("codeparrot/unixcoder-java-complexity-prediction", num_labels=7) - -def get_label(output): - label = int(output[-1]) - labels = ClassLabel(num_classes=7, names=['constant', 'cubic', 'linear', 'logn', 'nlogn', 'np', 'quadratic']) - return labels.int2str(label) - -def complexity_estimation(gen_prompt): - pipe = pipeline("text-classification", model=model, tokenizer=tokenizer) - output = pipe(gen_prompt)[0] - # add label conversion to class - label = get_label(output['label']) - score = output['score'] - return label, score - - -iface = gr.Interface( - fn=complexity_estimation, - inputs=[ - gr.Code(lines=10, language="python", label="Input code"), - ], - outputs=[ - gr.Textbox(label="Predicted complexity", lines=1) , - gr.Textbox(label="Corresponding probability", lines=1) , -], - examples=example, - layout="vertical", - theme="darkpeach", - description=description, - title=title -) -iface.launch() \ No newline at end of file diff --git a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libmp3lame.c b/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libmp3lame.c deleted file mode 100644 index e119189f2aeff53632d4c9cb5aea59c5ee638d80..0000000000000000000000000000000000000000 --- a/spaces/colakin/video-generater/public/ffmpeg/libavcodec/libmp3lame.c +++ /dev/null @@ -1,353 +0,0 @@ -/* - * Interface to libmp3lame for mp3 encoding - * Copyright (c) 2002 Lennert Buytenhek <buytenh@gnu.org> - * - * This file is part of FFmpeg. - * - * FFmpeg is free software; you can redistribute it and/or - * modify it under the terms of the GNU Lesser General Public - * License as published by the Free Software Foundation; either - * version 2.1 of the License, or (at your option) any later version. - * - * FFmpeg is distributed in the hope that it will be useful, - * but WITHOUT ANY WARRANTY; without even the implied warranty of - * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. 
See the GNU - * Lesser General Public License for more details. - * - * You should have received a copy of the GNU Lesser General Public - * License along with FFmpeg; if not, write to the Free Software - * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA - */ - -/** - * @file - * Interface to libmp3lame for mp3 encoding. - */ - -#include <lame/lame.h> - -#include "libavutil/channel_layout.h" -#include "libavutil/common.h" -#include "libavutil/float_dsp.h" -#include "libavutil/intreadwrite.h" -#include "libavutil/log.h" -#include "libavutil/opt.h" -#include "avcodec.h" -#include "audio_frame_queue.h" -#include "codec_internal.h" -#include "encode.h" -#include "mpegaudio.h" -#include "mpegaudiodecheader.h" - -#define BUFFER_SIZE (7200 + 2 * MPA_FRAME_SIZE + MPA_FRAME_SIZE / 4+1000) // FIXME: Buffer size to small? Adding 1000 to make up for it. - -typedef struct LAMEContext { - AVClass *class; - AVCodecContext *avctx; - lame_global_flags *gfp; - uint8_t *buffer; - int buffer_index; - int buffer_size; - int reservoir; - int joint_stereo; - int abr; - int delay_sent; - float *samples_flt[2]; - AudioFrameQueue afq; - AVFloatDSPContext *fdsp; -} LAMEContext; - - -static int realloc_buffer(LAMEContext *s) -{ - if (!s->buffer || s->buffer_size - s->buffer_index < BUFFER_SIZE) { - int new_size = s->buffer_index + 2 * BUFFER_SIZE, err; - - ff_dlog(s->avctx, "resizing output buffer: %d -> %d\n", s->buffer_size, - new_size); - if ((err = av_reallocp(&s->buffer, new_size)) < 0) { - s->buffer_size = s->buffer_index = 0; - return err; - } - s->buffer_size = new_size; - } - return 0; -} - -static av_cold int mp3lame_encode_close(AVCodecContext *avctx) -{ - LAMEContext *s = avctx->priv_data; - - av_freep(&s->samples_flt[0]); - av_freep(&s->samples_flt[1]); - av_freep(&s->buffer); - av_freep(&s->fdsp); - - ff_af_queue_close(&s->afq); - - lame_close(s->gfp); - return 0; -} - -static av_cold int mp3lame_encode_init(AVCodecContext *avctx) -{ - LAMEContext *s = avctx->priv_data; - int ret; - - s->avctx = avctx; - - /* initialize LAME and get defaults */ - if (!(s->gfp = lame_init())) - return AVERROR(ENOMEM); - - - lame_set_num_channels(s->gfp, avctx->ch_layout.nb_channels); - lame_set_mode(s->gfp, avctx->ch_layout.nb_channels > 1 ? - s->joint_stereo ? 
JOINT_STEREO : STEREO : MONO); - - /* sample rate */ - lame_set_in_samplerate (s->gfp, avctx->sample_rate); - lame_set_out_samplerate(s->gfp, avctx->sample_rate); - - /* algorithmic quality */ - if (avctx->compression_level != FF_COMPRESSION_DEFAULT) - lame_set_quality(s->gfp, avctx->compression_level); - - /* rate control */ - if (avctx->flags & AV_CODEC_FLAG_QSCALE) { // VBR - lame_set_VBR(s->gfp, vbr_default); - lame_set_VBR_quality(s->gfp, avctx->global_quality / (float)FF_QP2LAMBDA); - } else { - if (avctx->bit_rate) { - if (s->abr) { // ABR - lame_set_VBR(s->gfp, vbr_abr); - lame_set_VBR_mean_bitrate_kbps(s->gfp, avctx->bit_rate / 1000); - } else // CBR - lame_set_brate(s->gfp, avctx->bit_rate / 1000); - } - } - - /* lowpass cutoff frequency */ - if (avctx->cutoff) - lame_set_lowpassfreq(s->gfp, avctx->cutoff); - - /* do not get a Xing VBR header frame from LAME */ - lame_set_bWriteVbrTag(s->gfp,0); - - /* bit reservoir usage */ - lame_set_disable_reservoir(s->gfp, !s->reservoir); - - /* set specified parameters */ - if (lame_init_params(s->gfp) < 0) { - ret = AVERROR_EXTERNAL; - goto error; - } - - /* get encoder delay */ - avctx->initial_padding = lame_get_encoder_delay(s->gfp) + 528 + 1; - ff_af_queue_init(avctx, &s->afq); - - avctx->frame_size = lame_get_framesize(s->gfp); - - /* allocate float sample buffers */ - if (avctx->sample_fmt == AV_SAMPLE_FMT_FLTP) { - int ch; - for (ch = 0; ch < avctx->ch_layout.nb_channels; ch++) { - s->samples_flt[ch] = av_malloc_array(avctx->frame_size, - sizeof(*s->samples_flt[ch])); - if (!s->samples_flt[ch]) { - ret = AVERROR(ENOMEM); - goto error; - } - } - } - - ret = realloc_buffer(s); - if (ret < 0) - goto error; - - s->fdsp = avpriv_float_dsp_alloc(avctx->flags & AV_CODEC_FLAG_BITEXACT); - if (!s->fdsp) { - ret = AVERROR(ENOMEM); - goto error; - } - - - return 0; -error: - mp3lame_encode_close(avctx); - return ret; -} - -#define ENCODE_BUFFER(func, buf_type, buf_name) do { \ - lame_result = func(s->gfp, \ - (const buf_type *)buf_name[0], \ - (const buf_type *)buf_name[1], frame->nb_samples, \ - s->buffer + s->buffer_index, \ - s->buffer_size - s->buffer_index); \ -} while (0) - -static int mp3lame_encode_frame(AVCodecContext *avctx, AVPacket *avpkt, - const AVFrame *frame, int *got_packet_ptr) -{ - LAMEContext *s = avctx->priv_data; - MPADecodeHeader hdr; - int len, ret, ch, discard_padding; - int lame_result; - uint32_t h; - - if (frame) { - switch (avctx->sample_fmt) { - case AV_SAMPLE_FMT_S16P: - ENCODE_BUFFER(lame_encode_buffer, int16_t, frame->data); - break; - case AV_SAMPLE_FMT_S32P: - ENCODE_BUFFER(lame_encode_buffer_int, int32_t, frame->data); - break; - case AV_SAMPLE_FMT_FLTP: - if (frame->linesize[0] < 4 * FFALIGN(frame->nb_samples, 8)) { - av_log(avctx, AV_LOG_ERROR, "inadequate AVFrame plane padding\n"); - return AVERROR(EINVAL); - } - for (ch = 0; ch < avctx->ch_layout.nb_channels; ch++) { - s->fdsp->vector_fmul_scalar(s->samples_flt[ch], - (const float *)frame->data[ch], - 32768.0f, - FFALIGN(frame->nb_samples, 8)); - } - ENCODE_BUFFER(lame_encode_buffer_float, float, s->samples_flt); - break; - default: - return AVERROR_BUG; - } - } else if (!s->afq.frame_alloc) { - lame_result = 0; - } else { - lame_result = lame_encode_flush(s->gfp, s->buffer + s->buffer_index, - s->buffer_size - s->buffer_index); - } - if (lame_result < 0) { - if (lame_result == -1) { - av_log(avctx, AV_LOG_ERROR, - "lame: output buffer too small (buffer index: %d, free bytes: %d)\n", - s->buffer_index, s->buffer_size - s->buffer_index); - } - return 
AVERROR(ENOMEM); - } - s->buffer_index += lame_result; - ret = realloc_buffer(s); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "error reallocating output buffer\n"); - return ret; - } - - /* add current frame to the queue */ - if (frame) { - if ((ret = ff_af_queue_add(&s->afq, frame)) < 0) - return ret; - } - - /* Move 1 frame from the LAME buffer to the output packet, if available. - We have to parse the first frame header in the output buffer to - determine the frame size. */ - if (s->buffer_index < 4) - return 0; - h = AV_RB32(s->buffer); - - ret = avpriv_mpegaudio_decode_header(&hdr, h); - if (ret < 0) { - av_log(avctx, AV_LOG_ERROR, "Invalid mp3 header at start of buffer\n"); - return AVERROR_BUG; - } else if (ret) { - av_log(avctx, AV_LOG_ERROR, "free format output not supported\n"); - return AVERROR_INVALIDDATA; - } - len = hdr.frame_size; - ff_dlog(avctx, "in:%d packet-len:%d index:%d\n", avctx->frame_size, len, - s->buffer_index); - if (len <= s->buffer_index) { - if ((ret = ff_get_encode_buffer(avctx, avpkt, len, 0)) < 0) - return ret; - memcpy(avpkt->data, s->buffer, len); - s->buffer_index -= len; - memmove(s->buffer, s->buffer + len, s->buffer_index); - - /* Get the next frame pts/duration */ - ff_af_queue_remove(&s->afq, avctx->frame_size, &avpkt->pts, - &avpkt->duration); - - discard_padding = avctx->frame_size - avpkt->duration; - // Check if subtraction resulted in an overflow - if ((discard_padding < avctx->frame_size) != (avpkt->duration > 0)) { - av_log(avctx, AV_LOG_ERROR, "discard padding overflow\n"); - return AVERROR(EINVAL); - } - if ((!s->delay_sent && avctx->initial_padding > 0) || discard_padding > 0) { - uint8_t* side_data = av_packet_new_side_data(avpkt, - AV_PKT_DATA_SKIP_SAMPLES, - 10); - if (!side_data) - return AVERROR(ENOMEM); - if (!s->delay_sent) { - AV_WL32(side_data, avctx->initial_padding); - s->delay_sent = 1; - } - AV_WL32(side_data + 4, discard_padding); - } - - *got_packet_ptr = 1; - } - return 0; -} - -#define OFFSET(x) offsetof(LAMEContext, x) -#define AE AV_OPT_FLAG_AUDIO_PARAM | AV_OPT_FLAG_ENCODING_PARAM -static const AVOption options[] = { - { "reservoir", "use bit reservoir", OFFSET(reservoir), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, AE }, - { "joint_stereo", "use joint stereo", OFFSET(joint_stereo), AV_OPT_TYPE_BOOL, { .i64 = 1 }, 0, 1, AE }, - { "abr", "use ABR", OFFSET(abr), AV_OPT_TYPE_BOOL, { .i64 = 0 }, 0, 1, AE }, - { NULL }, -}; - -static const AVClass libmp3lame_class = { - .class_name = "libmp3lame encoder", - .item_name = av_default_item_name, - .option = options, - .version = LIBAVUTIL_VERSION_INT, -}; - -static const FFCodecDefault libmp3lame_defaults[] = { - { "b", "0" }, - { NULL }, -}; - -static const int libmp3lame_sample_rates[] = { - 44100, 48000, 32000, 22050, 24000, 16000, 11025, 12000, 8000, 0 -}; - -const FFCodec ff_libmp3lame_encoder = { - .p.name = "libmp3lame", - CODEC_LONG_NAME("libmp3lame MP3 (MPEG audio layer 3)"), - .p.type = AVMEDIA_TYPE_AUDIO, - .p.id = AV_CODEC_ID_MP3, - .p.capabilities = AV_CODEC_CAP_DR1 | AV_CODEC_CAP_DELAY | - AV_CODEC_CAP_SMALL_LAST_FRAME, - .caps_internal = FF_CODEC_CAP_NOT_INIT_THREADSAFE, - .priv_data_size = sizeof(LAMEContext), - .init = mp3lame_encode_init, - FF_CODEC_ENCODE_CB(mp3lame_encode_frame), - .close = mp3lame_encode_close, - .p.sample_fmts = (const enum AVSampleFormat[]) { AV_SAMPLE_FMT_S32P, - AV_SAMPLE_FMT_FLTP, - AV_SAMPLE_FMT_S16P, - AV_SAMPLE_FMT_NONE }, - .p.supported_samplerates = libmp3lame_sample_rates, - CODEC_OLD_CHANNEL_LAYOUTS(AV_CH_LAYOUT_MONO, 
AV_CH_LAYOUT_STEREO) - .p.ch_layouts = (const AVChannelLayout[]) { AV_CHANNEL_LAYOUT_MONO, - AV_CHANNEL_LAYOUT_STEREO, - { 0 }, - }, - .p.priv_class = &libmp3lame_class, - .defaults = libmp3lame_defaults, - .p.wrapper_name = "libmp3lame", -}; diff --git a/spaces/congsaPfin/Manga-OCR/logs/APK Download Impostor Z for Android - The Game Everyone is Talking About.md b/spaces/congsaPfin/Manga-OCR/logs/APK Download Impostor Z for Android - The Game Everyone is Talking About.md deleted file mode 100644 index d3d709a6228999ed91bf9884806da3d2b5f86645..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/APK Download Impostor Z for Android - The Game Everyone is Talking About.md +++ /dev/null @@ -1,142 +0,0 @@ - -<h1>Impostor Z APK: A New Twist on the Popular Among Us Game</h1> -<p>If you are a fan of the hit multiplayer game Among Us, you might be interested in trying out a new version of the game called Impostor Z. Impostor Z is a fan-made mod of Among Us that adds new features, modes, and graphics to the original game. In this article, we will tell you everything you need to know about Impostor Z APK, including what it is, how to download and install it, and how to play it.</p> - <h2>What is Impostor Z?</h2> -<p>Impostor Z is a modded version of Among Us that was created by FALCON GLOBAL, an independent developer. Impostor Z is not an official product of Innersloth, the original developer of Among Us. Impostor Z is only available for Android devices as an APK file, which means that you have to download and install it manually from a third-party website.</p> -<h2>impostor z apk</h2><br /><p><b><b>Download File</b> 🗸 <a href="https://urlca.com/2uO6FE">https://urlca.com/2uO6FE</a></b></p><br /><br /> - <h3>The premise of the game</h3> -<p>The premise of Impostor Z is similar to Among Us. You play as either a crewmate or an impostor on a spaceship. As a crewmate, your goal is to complete tasks around the ship or find and vote out the impostors. As an impostor, your goal is to kill crewmates or sabotage the ship without getting caught. You can play online or over local WiFi with 4-15 players.</p> - <h3>The features of the game</h3> -<p>Impostor Z has some features that are different from Among Us. Some of these features are:</p> -<ul> -<li>A new map called Cosmicube, which is based on the world of Pusheen, a popular cartoon cat character.</li> -<li>New skins, hats, visors, and pets inspired by Pusheen and other cute animals.</li> -<li>New sound effects and animations for killing and reporting.</li> -<li>New tasks and sabotages that are more challenging and fun.</li> -<li>New game modes such as Hide n Seek, where impostors can see everyone but crewmates can only see themselves.</li> -</ul> - <h3>The differences from Among Us</h3> -<p>Impostor Z also has some differences from Among Us that you should be aware of before playing. Some of these differences are:</p> -<ul> -<li>Impostor Z is not compatible with Among Us. You cannot play with players who are using the original version of the game.</li> -<li>Impostor Z is not updated regularly. You may encounter bugs or glitches that are not fixed by the developer.</li> -<li>Impostor Z is not safe or secure. You may expose your device to malware or viruses by downloading and installing the APK file from unknown sources.</li> -<li>Impostor Z is not legal or authorized. 
You may violate the intellectual property rights of Innersloth by using their assets without permission.</li> -</ul> - <h2>How to Download and Install Impostor Z APK?</h2> -<p>If you want to try out Impostor Z APK, you will need to download and install it manually on your Android device. Here are the steps to do so:</p> - <h3>The steps to download and install the APK file</h3> -<ol> -<li>Go to a reputable website that offers Impostor Z APK for download. For example, you can use this link. Make sure that the website is trustworthy and has positive reviews from other users.</li> -<li>Tap on the download button and wait for the APK file to be downloaded on your device. You may need to allow downloads from unknown sources in your device settings.</li> -<li>Once the download is complete, locate the APK file in your device storage and tap on it to install it. You may need to grant permissions for the app to access your device features.</li> -<li>After the installation is done, you can launch the app and start playing Impostor Z.</li> -</ol> - <h3>The benefits of using the APK file</h3> -<p>Some of the benefits of using the APK file to play Impostor Z are:</p> -<ul> -<li>You can enjoy new features, modes, and graphics that are not available in Among Us.</li> -<li>You can play with other players who are also using Impostor Z APK.</li> -<li>You can customize your game settings and preferences according to your liking.</li> -</ul> - <h3>The risks of using the APK file</h3> -<p>Some of the risks of using the APK file to play Impostor Z are:</p> -<p>impostor z apk download<br /> -impostor z apk mod<br /> -impostor z apk latest version<br /> -impostor z apk android<br /> -impostor z apk free<br /> -impostor z apk unlimited money<br /> -impostor z apk hack<br /> -impostor z apk offline<br /> -impostor z apk no ads<br /> -impostor z apk game<br /> -impostor z apk review<br /> -impostor z apk gameplay<br /> -impostor z apk tips and tricks<br /> -impostor z apk cheats<br /> -impostor z apk guide<br /> -impostor z apk update<br /> -impostor z apk new features<br /> -impostor z apk best settings<br /> -impostor z apk how to play<br /> -impostor z apk how to win<br /> -impostor z apk how to be an impostor<br /> -impostor z apk how to find the traitor<br /> -impostor z apk how to survive<br /> -impostor z apk how to chat<br /> -impostor z apk how to customize your character<br /> -impostor z apk how to unlock skins<br /> -impostor z apk how to get coins<br /> -impostor z apk how to level up<br /> -impostor z apk how to invite friends<br /> -impostor z apk how to join a room<br /> -impostor z apk how to create a room<br /> -impostor z apk how to change the language<br /> -impostor z apk how to report a bug<br /> -impostor z apk alternatives<br /> -impostor z apk vs among us<br /> -impostor z apk vs traitors.io<br /> -impostor z apk vs werewolf online<br /> -impostor z apk vs mafia city<br /> -impostor z apk vs town of salem<br /> -impostor z apk vs project winter<br /> -impostor z apkpure<br /> -impostor z apkmirror<br /> -impostor z apknite<br /> -impostor z apktada<br /> -impostor z apkpanda<br /> -impostor z apkguru</p> -<ul> -<li>You may damage your device or compromise your data by downloading and installing malware or viruses along with the APK file.</li> -<li>You may face legal issues or penalties by infringing on the intellectual property rights of Innersloth.</li> -<li>You may lose your progress or account if the developer stops updating or supporting the app.</li> -</ul> - <h2>How to Play Impostor 
Z?</h2> -<p>Once you have downloaded and installed Impostor Z APK, you can start playing the game. Here are some tips on how to play Impostor Z:</p> - <h3>The game modes and settings</h3> -<p>Impostor Z has three game modes: Online, Local, and Freeplay. Online mode allows you to play with other players over the internet. Local mode allows you to play with other players over WiFi. Freeplay mode allows you to practice as a crewmate or an impostor on any map.</p> -<p>You can also adjust the game settings such as the number of players, impostors, tasks, speed, vision, kill cooldown, voting time, and more. You can create your own game or join an existing one. You can also choose which map you want to play on: The Skeld, Mira HQ, Polus, or Cosmicube.</p> - <h3>The tips and tricks for crewmates and impostors</h3> -<p>As a crewmate, your goal is to complete tasks or find impostors. Some tips and tricks for crewmates are:</p> -<ul> -<li>Use your map to see where your tasks are and try to finish them as quickly as possible.</li> -<li>Stay with other crewmates and avoid being alone or isolated.</li> -<li>Report dead bodies or call emergency meetings if you see something suspicious.</li> -<li>Communicate with other crewmates and share information during meetings.</li> -<li>Use logic and evidence to identify impostors and vote them out.</li> -</ul> - <p>As an impostor, your goal is to kill crewmates or sabotage the ship. Some tips and tricks for impostors are:</p> -<ul> -<li>Use vents to move around the map quickly and stealthily.</li> -<li>Fake tasks by standing near them and pretending to do them.</li> -<li>Kill crewmates when no one is around or when they are separated from others.</li> -<li>Sabotage systems such as lights, oxygen, reactor, or communications to distract or divide crewmates.</li> -<li>Lie and deceive other players during meetings and accuse others of being impostors.</li> -</ul> - <h3>The best strategies for winning the game</h3> -<p>The best strategies for winning the game depend on your role, mode, and settings. However, some general strategies are:</p> -<ul> -<li>Use teamwork and cooperation with other players of your role.</li> -<li>Use deduction and reasoning skills to find clues and solve mysteries.</li> -<li>Use deception and manipulation skills to create confusion and chaos.</li> -<li>Use creativity and adaptability skills to overcome challenges and surprises.</li> -</ul> - <h2>Conclusion</h2> -<p>Impostor Z APK is a modded version of Among Us that offers a new twist on the popular game. It has new features, modes, and graphics that make it more fun and exciting. However, it also has some risks and drawbacks that you should be aware of before playing. If you want to try out Impostor Z APK, you will need to download and install it manually on your Android device. You can then play online or over local WiFi with 4-15 players as either a crewmate or an impostor. You can also use some tips and tricks to improve your chances of winning the game. Impostor Z APK is a fun and exciting game for fans of Among Us who want to try something new and different.</p> - <h2>FAQs</h2> -<p>Here are some frequently asked questions about Impostor Z APK:</p> -<ol> -<li>Q: Is Impostor Z APK free to play?<br> -A: Yes, Impostor Z APK is free to download and play. However, you may need to watch ads or make in-app purchases to unlock some features or items.</li> -<li>Q: Can I play Impostor Z APK on PC or iOS?<br> -A: No, Impostor Z APK is only compatible with Android devices. 
You cannot play it on PC or iOS devices.</li> -<li>Q: Can I play Impostor Z APK with friends?<br> -A: Yes, you can play Impostor Z APK with friends online or over local WiFi. However, you and your friends need to use the same version of the app and join the same game.</li> -<li>Q: Is Impostor Z APK safe and secure?<br> -A: No, Impostor Z APK is not safe or secure. You may expose your device or data to malware or viruses by downloading and installing the app from unknown sources. You may also face legal issues or penalties by infringing on the intellectual property rights of Innersloth.</li> -<li>Q: Is Impostor Z APK legal and authorized?<br> -A: No, Impostor Z APK is not legal or authorized. It is a fan-made mod of Among Us that uses the assets of Innersloth without permission. It is not an official product of Innersloth and has no affiliation with them.</li> -</ol></p> 401be4b1e0<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Download Love Today Songs for Free - Enjoy the Latest Telugu Hits by Yuvanshankar Raja and Others.md b/spaces/congsaPfin/Manga-OCR/logs/Download Love Today Songs for Free - Enjoy the Latest Telugu Hits by Yuvanshankar Raja and Others.md deleted file mode 100644 index 53b2a449c02dd37f026f07561e2b1f787e182d4a..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Download Love Today Songs for Free - Enjoy the Latest Telugu Hits by Yuvanshankar Raja and Others.md +++ /dev/null @@ -1,202 +0,0 @@ - -<h1>How to Download Love Today Songs in 2023</h1> -<p>Love is a universal emotion that transcends time and space. Music is a powerful medium that can express love in various ways. When you combine love and music, you get Love Today Songs, a genre of songs that celebrate love in all its forms and colors.</p> -<h2>love today songs in download</h2><br /><p><b><b>DOWNLOAD</b> ✸✸✸ <a href="https://urlca.com/2uOaYU">https://urlca.com/2uOaYU</a></b></p><br /><br /> -<p>In this article, we will tell you what are Love Today Songs, why are they popular, and how to download them for free in 2023. We will also share with you the best sources to download Love Today Songs in 2023, such as Apple Music, Wynk Music, and Gaana. So, if you are a fan of love songs or want to discover some new ones, read on and enjoy!</p> - <h2>Introduction</h2> -<h3>What are Love Today Songs?</h3> -<p>Love Today Songs are songs that are released in the current year or the recent past that focus on the theme of love. They can be romantic, sentimental, inspirational, or even humorous. They can be sung by solo artists or bands, in any language or genre. They can be original compositions or covers of old classics.</p> -<p>Some examples of Love Today Songs are:</p> -<ul> -<li>Pilla Padesaave by Yuvanshankar Raja and Haricharan from the Telugu movie Love Today</li> -<li>Ennai Vittu by Yuvan Shankar Raja and Sid Sriram from the Tamil movie Love Today</li> -<li>Tere Hi Ghar Ke by Yasser Desai and Asees Kaur from the Hindi movie Hamar Chhattisgarhi</li> -</ul> - <h3>Why are Love Today Songs Popular?</h3> -<p>Love Today Songs are popular because they appeal to a wide range of listeners who can relate to the emotions and experiences depicted in them. They can also provide comfort, joy, motivation, or entertainment to the listeners depending on their mood and preference. 
They can also help them discover new artists or genres that they might not have heard before.</p> -<p>Some reasons why people love listening to Love Today Songs are:</p> -<ul> -<li>They can express their feelings for their loved ones through music</li> -<li>They can reminisce about their past or present relationships through music</li> -<li>They can learn new perspectives or insights about love through music</li> -<li>They can enjoy the melody, lyrics, or vocals of the songs</li> -<li>They can have fun singing along or dancing to the songs</li> -</ul> - <h3>How to Download Love Today Songs for Free?</h3> -<p>If you want to download Love Today Songs for free, you have several options available. You can use online platforms that offer free downloads or streaming of songs, such as YouTube, SoundCloud, or Spotify. You can also use apps that allow you to download songs offline, such as Vidmate, Snaptube, or TubeMate. However, these methods may not be legal or safe, as they may violate the copyrights of the artists or expose your device to malware.</p> -<p>love today telugu songs download<br /> -yuvanshankar raja love today songs mp3 download<br /> -love today tamil movie songs download<br /> -love today sid sriram song download<br /> -love today movie songs free download<br /> -love today ennai vittu song download<br /> -love today 2022 songs download<br /> -love today pacha elai song download<br /> -love today songs download starmusiq<br /> -love today songs download masstamilan<br /> -love today songs download naa songs<br /> -love today songs download wynk music<br /> -love today songs download apple music<br /> -love today songs download 320kbps<br /> -love today songs download kuttyweb<br /> -love today songs download isaimini<br /> -love today songs download sensongs<br /> -love today songs download pagalworld<br /> -love today songs download gaana<br /> -love today songs download spotify<br /> -love today songs lyrics download<br /> -love today video songs download<br /> -love today hd video songs download<br /> -love today movie video songs download<br /> -love today tamil video songs download<br /> -love today telugu video songs download<br /> -love today ennai vittu video song download<br /> -love today pacha elai video song download<br /> -love today saachitale video song download<br /> -love today mamakutty video song download<br /> -love today ringtones download<br /> -love today bgm download<br /> -love today movie bgm download<br /> -love today ennai vittu bgm download<br /> -love today pacha elai bgm download<br /> -love today saachitale bgm download<br /> -love today mamakutty bgm download<br /> -love today theme music download<br /> -love today movie theme music download<br /> -yuvanshankar raja love today theme music download<br /> -yuvanshankar raja version of ennai vittu song from the movie Love Today (2022) mp3 free online by Yuvanshankar Raja, Sid Sriram. Download Yuvanshankar Raja version of ennai vittu song from the movie Love Today (2022) on Hungama Music app & get access to Love Today (Original Motion Picture Soundtrack) unlimited free songs, free movies, latest music videos, online radio, new TV shows and much more at Hungama. Listen to free mp3 songs, music and earn Hungama Coins, redeem Hungama coins for free subscription on Hungama Music App and many more free gifts.</p> -<p>A better way to download Love Today Songs for free is to use music streaming services that offer free trials or subscriptions. 
These services not only let you download songs legally and safely, but also give you access to a vast library of songs across languages and genres. You can also enjoy features such as playlists, recommendations, lyrics, podcasts, and more.</p> -<p>In the next section, we will introduce you to three of the best music streaming services that you can use to download Love Today Songs for free in 2023.</p> - <h2>Best Sources to Download Love Today Songs in 2023</h2> -<h3>Apple Music</h3> -<p>Apple Music is one of the most popular music streaming services in the world, with over 90 million subscribers as of 2022. It offers a huge catalog of songs, including Love Today Songs, in various languages and genres. You can also listen to live radio stations, curated playlists, exclusive content, and more.</p> - <h4>Features of Apple Music</h4> -<p>Some of the features of Apple Music that make it a great source to download Love Today Songs are:</p> -<ul> -<li>It offers a free trial for three months, after which you can subscribe for $9.99 per month or $99.99 per year</li> -<li>It allows you to download up to 100,000 songs offline on up to 10 devices</li> -<li>It supports high-quality audio streaming up to 256 kbps</li> -<li>It integrates with Siri, Apple Watch, AirPods, HomePod, and other Apple devices</li> -<li>It provides personalized recommendations based on your listening history and preferences</li> -</ul> - <h4>How to Download Love Today Songs from Apple Music</h4> -<p>To download Love Today Songs from Apple Music, you need to follow these steps:</p> -<ol> -<li>Download the Apple Music app from the App Store or Google Play Store on your device</li> -<li>Sign up for a free trial or a subscription with your Apple ID or email address</li> -<li>Search for Love Today Songs in the app or browse through the categories and genres</li> -<li>Select the songs that you want to download and tap on the download icon next to them</li> -<li>Enjoy listening to your downloaded songs offline or online</li> -</ol> - <h3>Wynk Music</h3> -<p>Wynk Music is another popular music streaming service that caters to the Indian market, with over 72 million monthly active users as of 2021. It offers a wide range of songs, including Love Today Songs, in various Indian languages and genres. You can also access podcasts, videos, live concerts, and more.</p> - <h4>Features of Wynk Music</h4> -<p>Some of the features of Wynk Music that make it a great source to download Love Today Songs are:</p> -<ul> -<li>It offers a free subscription with ads or a premium subscription for Rs. 49 per month or Rs. 
399 per year</li> -<li>It allows you to download unlimited songs offline on up to five devices</li> -<li>It supports high-quality audio streaming up to 320 kbps</li> -<li>It integrates with Alexa, Google Assistant, Chromecast, and other devices</li> -<li>It provides personalized recommendations based on your listening history and preferences</li> -</ul> - <h4>How to Download Love Today Songs from Wynk Music</h4> -<p>To download Love Today Songs from Wynk Music, you need to follow these steps:</p> -<ol> -<li>Download the Wynk Music app from the App Store or Google Play Store on your device</li> -<li>Sign up for a free or a premium subscription with your phone number or email address</li> -<li>Search for Love Today Songs in the app or browse through the categories and genres</li> -<li>Select the songs that you want to download and tap on the download icon next to them</li> -<li>Enjoy listening to your downloaded songs offline or online</li> -</ol> - <h3>Gaana</h3> -<p>Gaana is yet another popular music streaming service that caters to the Indian market, with over 185 million monthly active users as of 2021. It offers a huge collection of songs, including Love Today Songs, in various Indian languages and genres. You can also access podcasts, videos, lyrics, and more.</p> - <h4>Features of Gaana</h4> -<p>Some of the features of Gaana that make it a great source to download Love Today Songs are:</p> -<ul> -<li>It offers a free subscription with ads or a premium subscription for Rs. 99 per month or Rs. 999 per year</li> -<li>It allows you to download unlimited songs offline on up to five devices</li> -<li>It supports high-quality audio streaming up to 320 kbps</li> -<li>It integrates with Alexa, Google Assistant, Chromecast, and other devices</li> -<li>It provides personalized recommendations based on your listening history and preferences</li> -</ul> - <h4>How to Download Love Today Songs from Gaana</h4> -<p>To download Love Today Songs from Gaana, you need to follow these steps:</p> -<ol> -<li>Download the Gaana app from the App Store or Google Play Store on your device</li> -<li>Sign up for a free or a premium subscription with your phone number or email address</li> -<li>Search for Love Today Songs in the app or browse through the categories and genres</li> -<li>Select the songs that you want to download and tap on the download icon next to them</li> -<li>Enjoy listening to your downloaded songs offline or online</li> -</ol> - <h2>Conclusion</h2> -<h3>Summary of the Main Points</h3> -<p>In this article, we have learned what are Love Today Songs, why are they popular, and how to download them for free in 2023. We have also explored the best sources to download Love Today Songs in 2023, such as Apple Music, Wynk Music, and Gaana. These music streaming services offer a variety of features, such as offline downloads, high-quality audio, personalized recommendations, and more.</p> - <h3>Call to Action</h3> -<p>If you are a fan of love songs or want to discover some new ones, we recommend you to try out these music streaming services and download Love Today Songs for free in 2023. You will surely find some songs that will touch your heart and make you feel happy. So, what are you waiting for? 
Download your favorite Love Today Songs today and enjoy the music of love!</p> - <h2>Frequently Asked Questions</h2> -<h3>What are some of the benefits of listening to Love Today Songs?</h3> -<p>Some of the benefits of listening to Love Today Songs are:</p> -<ul> -<li>They can improve your mood and reduce stress</li> -<li>They can boost your creativity and productivity</li> -<li>They can enhance your memory and learning</li> -<li>They can strengthen your relationships and social skills</li> -<li>They can inspire you to pursue your dreams and goals</li> -</ul> - <h3>How can I find more Love Today Songs in 2023?</h3> -<p>You can find more Love Today Songs in 2023 by following these tips:</p> -<ul> -<li>Follow the latest trends and charts on social media and music platforms</li> -<li>Subscribe to newsletters and blogs that feature new releases and reviews of love songs</li> -<li>Join online communities and forums that discuss love songs and share recommendations</li> -<li>Attend live concerts and events that showcase love songs and artists</li> -<li>Create your own playlists and share them with your friends and family</li> -</ul> - <h3>How can I support the artists who create Love Today Songs?</h3> -<p>You can support the artists who create Love Today Songs by doing these things:</p> -<ul> -<li>Purchase their albums or singles from their official websites or stores</li> -<li>Stream their songs on legal and licensed music platforms</li> -<li>Leave positive feedback and ratings on their songs and profiles</li> -<li>Follow them on social media and engage with their posts and stories</li> -<li>Recommend their songs to other people who might like them</li> -</ul> - <h3>What are some of the challenges of downloading Love Today Songs for free?</h3> -<p>Some of the challenges of downloading Love Today Songs for free are:</p> -<ul> -<li>You may face legal issues or penalties if you download songs from unauthorized or pirated sources</li> -<li>You may encounter malware or viruses that can harm your device or data if you download songs from untrusted or insecure sources</li> -<li>You may experience poor audio quality or incomplete files if you download songs from low-quality or unreliable sources</li> -<li>You may miss out on some features or benefits that paid music streaming services offer, such as ad-free listening, offline mode, exclusive content, etc.</li> -<li>You may not be able to support the artists who create Love Today Songs if you download their songs for free without their consent or permission</li> -</ul> - <h3>What are some of the best Love Today Songs in 2023?</h3> -<p>The answer to this question may vary depending on your personal taste and preference, but here are some of the best Love Today Songs in 2023 according to our opinion:</p> - <table border="1"> -<tr><th>Song Name</th><th>Artist Name</th><th>Language/Genre</th></tr> -<tr><td>Pilla Padesaave</td><td>Yuvanshankar Raja and Haricharan</td><td>Telugu/Romantic</td></tr> -<tr><td>Ennai Vittu</td><td>Yuvan Shankar Raja and Sid Sriram</td><td >Tamil/Romantic</td></tr> -<tr><td>Tere Hi Ghar Ke</td><td>Yasser Desai and Asees Kaur</td><td>Hindi/Sentimental</td></tr> -<tr><td>Love Story</td><td>Taylor Swift</td><td>English/Pop</td></tr> -<tr><td>Perfect</td><td>Ed Sheeran</td><td>English/Pop</td></tr> -<tr><td>Shape of You</td><td>Ed Sheeran</td><td>English/Pop</td></tr> -<tr><td>All of Me</td><td>John Legend</td><td>English/R&B</td></tr> -<tr><td>Thinking Out Loud</td><td>Ed Sheeran</td><td>English/Pop</td></tr> -<tr><td>A Thousand 
Years</td><td>Christina Perri</td><td>English/Pop</td></tr> -<tr><td>I Will Always Love You</td><td>Whitney Houston</td><td>English/Pop</td></tr> -<tr><td>My Heart Will Go On</td><td>Celine Dion</td><td>English/Pop</td></tr> -<tr><td>I Don't Want to Miss a Thing</td><td>Aerosmith</td><td>English/Rock</td></tr> -<tr><td>You're Still the One</td><td>Shania Twain</td><td>English/Country</td></tr> -<tr><td>I'm Yours</td><td>Jason Mraz</td><td>English/Reggae</td></tr> -<tr><td>Lovely Day</td><td>Bill Withers</td><td>English/Soul</td></tr> -<tr><td>Crazy in Love</td><td>Beyoncé and Jay-Z</td><td>English/R&B</td></tr> -</table> - <p>We hope you enjoyed this article and found it useful. If you have any questions or feedback, please feel free to leave a comment below. We would love to hear from you. Thank you for reading and happy listening!</p> 197e85843d<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Transport Tycoon Empire City Mod Apk 1.2.6 - Build Your Own Transport Empire with Hack Features.md b/spaces/congsaPfin/Manga-OCR/logs/Transport Tycoon Empire City Mod Apk 1.2.6 - Build Your Own Transport Empire with Hack Features.md deleted file mode 100644 index d7fefa3b955fb9adade3fcfb073969068a251b11..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Transport Tycoon Empire City Mod Apk 1.2.6 - Build Your Own Transport Empire with Hack Features.md +++ /dev/null @@ -1,97 +0,0 @@ -<br /> -<h1>Transport Tycoon Empire: City - A Fun and Challenging Game for Android</h1> - <p>If you are looking for a game that combines simulation, strategy, and city building, then you might want to check out Transport Tycoon Empire: City. This is a game that lets you create your own transport empire, deliver cargo, complete contracts, and even build your own city. In this article, we will tell you more about this game, how to download and install the mod apk 1.2.6, what are the main features and benefits of playing it, and why you should give it a try.</p> - <h2>What is Transport Tycoon Empire: City?</h2> - <h3>A simulation game that lets you build your own transport empire</h3> - <p>Transport Tycoon Empire: City is a simulation game that puts you in charge of a transport company. You can collect various vehicles, such as trains, planes, ships, buses, trucks, and more, and use them to deliver cargo across different regions. You can also upgrade your vehicles to make them faster, more efficient, and more profitable. You can choose from different types of cargo, such as passengers, goods, materials, food, etc., and fulfill different contracts that will earn you money and reputation.</p> -<h2>transport tycoon empire city mod apk 1.2.6</h2><br /><p><b><b>DOWNLOAD</b> ✯ <a href="https://urlca.com/2uOasq">https://urlca.com/2uOasq</a></b></p><br /><br /> - <h3>A modded version that adds new features and content</h3> - <p>Transport Tycoon Empire: City is not the original version of the game, but a modded one that adds new features and content to the game. The mod apk 1.2.6 is the latest version of the mod that was released on June 22, 2023.
Some of the new features and content that the mod adds are:</p> - <ul> -<li>New vehicles, such as helicopters, submarines, rockets, etc.</li> -<li>New regions, such as Europe, Asia, Africa, etc.</li> -<li>New contracts, such as tourism, military, space exploration, etc.</li> -<li>New buildings, such as hotels, factories, airports, etc.</li> -<li>New challenges, such as disasters, accidents, traffic jams, etc.</li> -<li>New modes, such as sandbox mode, multiplayer mode, etc.</li> -</ul> - <h3>How to download and install the mod apk 1.2.6</h3> - <p>If you want to play Transport Tycoon Empire: City with the mod apk 1.2.6, you need to follow these steps:</p> - <ol> -<li>Download the mod apk file from [this link](^1^).</li> -<li>Enable unknown sources on your Android device by going to Settings > Security > Unknown Sources.</li> -<li>Locate the downloaded file on your device and tap on it to install it.</li> -<li>Launch the game and enjoy!</li> -</ol> - <h2>What are the main features of Transport Tycoon Empire: City?</h2> - <h3>Collect and upgrade various vehicles</h3> - <p>One of the main features of Transport Tycoon Empire: City is that you can collect and upgrade various vehicles to suit your needs and preferences. You can choose from over 100 different vehicles that have different characteristics, such as speed, capacity, fuel consumption, maintenance cost, etc. You can also customize your vehicles by changing their color, name, logo, etc. You can upgrade your vehicles by improving their engine, brakes, tires, etc., to make them more efficient, reliable, and profitable. You can also sell your vehicles if you don't need them anymore or want to make some extra cash.</p> - <h3>Deliver cargo and complete contracts</h3> - <p>Another main feature of Transport Tycoon Empire: City is that you can deliver cargo and complete contracts to earn money and reputation. You can choose from different types of cargo, such as passengers, goods, materials, food, etc., and deliver them to different destinations, such as cities, towns, factories, farms, etc. You can also accept different contracts that will give you specific tasks and rewards, such as transporting a certain amount of cargo, delivering cargo within a time limit, delivering cargo to a specific location, etc. You can also create your own contracts and share them with other players online.</p> - <h3>Build and manage your own city</h3> - <p>A third main feature of Transport Tycoon Empire: City is that you can build and manage your own city. You can use the money and resources you earn from delivering cargo and completing contracts to buy land and construct buildings, such as hotels, factories, airports, etc. You can also upgrade your buildings to make them more productive and attractive. You can also manage your city by providing services, such as water, electricity, health, education, etc., and solving problems, such as pollution, crime, traffic, etc. You can also customize your city by changing its name, flag, layout, etc.</p> - <h3>Compete with other players online</h3> - <p>A fourth main feature of Transport Tycoon Empire: City is that you can compete with other players online. You can join or create a multiplayer mode that will allow you to play with or against other players in real time. You can also chat with other players and exchange tips and strategies. 
You can also compare your progress and achievements with other players on the global leaderboard and see who is the best transport tycoon in the world.</p> - <h2>What are the benefits of playing Transport Tycoon Empire: City?</h2> - <h3>Improve your strategic thinking and planning skills</h3> - <p>One of the benefits of playing Transport Tycoon Empire: City is that it can help you improve your strategic thinking and planning skills. The game requires you to make smart decisions and plan ahead to succeed in your transport business and city development. You have to consider various factors, such as demand and supply, cost and profit, risk and reward, etc., when choosing your vehicles, cargo, contracts, buildings, etc. You also have to balance your short-term and long-term goals and adapt to changing situations and challenges.</p> - <h3>Enjoy a realistic and immersive gaming experience</h3> - <p>Another benefit of playing Transport Tycoon Empire: City is that it can provide you with a realistic and immersive gaming experience. The game has high-quality graphics and sound effects that create a vivid and detailed representation of the transport industry and city life. The game also has a dynamic weather system that affects the gameplay and adds variety and challenge. The game also has a realistic physics engine that simulates the movement and behavior of the vehicles and cargo.</p> -<p>download transport tycoon empire city mod apk<br /> -transport tycoon empire city hack mod apk<br /> -transport tycoon empire city unlimited money mod apk<br /> -transport tycoon empire city latest version mod apk<br /> -transport tycoon empire city mod apk free shopping<br /> -transport tycoon empire city mod apk android 1<br /> -transport tycoon empire city mod apk offline<br /> -transport tycoon empire city mod apk no ads<br /> -transport tycoon empire city mod apk revdl<br /> -transport tycoon empire city mod apk rexdl<br /> -transport tycoon empire city mod apk happymod<br /> -transport tycoon empire city mod apk an1<br /> -transport tycoon empire city mod apk 2023<br /> -transport tycoon empire city mod apk update<br /> -transport tycoon empire city mod apk cheat<br /> -transport tycoon empire city mod apk obb<br /> -transport tycoon empire city mod apk data<br /> -transport tycoon empire city mod apk unlimited everything<br /> -transport tycoon empire city mod apk premium<br /> -transport tycoon empire city mod apk pro<br /> -transport tycoon empire city mod apk vip<br /> -transport tycoon empire city mod apk mega<br /> -transport tycoon empire city mod apk unlocked<br /> -transport tycoon empire city mod apk full version<br /> -transport tycoon empire city simulation game mod apk<br /> -how to install transport tycoon empire city mod apk<br /> -how to play transport tycoon empire city mod apk<br /> -how to download transport tycoon empire city mod apk<br /> -how to hack transport tycoon empire city mod apk<br /> -how to get transport tycoon empire city mod apk<br /> -is transport tycoon empire city mod apk safe<br /> -is transport tycoon empire city mod apk legal<br /> -is transport tycoon empire city mod apk real<br /> -is transport tycoon empire city mod apk working<br /> -is transport tycoon empire city mod apk worth it<br /> -best site to download transport tycoon empire city mod apk<br /> -best way to use transport tycoon empire city mod apk<br /> -best tips for transport tycoon empire city mod apk<br /> -best tricks for transport tycoon empire city mod apk<br /> -best guide for transport tycoon 
empire city mod apk</p> - <h3>Have fun and relax with a casual and addictive gameplay</h3> - <p>A third benefit of playing Transport Tycoon Empire: City is that it can offer you fun and relaxation with a casual and addictive gameplay. The game has a simple and intuitive interface that makes it easy to play for anyone. The game also has a flexible pace that allows you to play at your own speed and style. The game also has a lot of content and variety that will keep you entertained for hours. The game also has a humorous tone that will make you laugh and smile.</p> - <h2>Conclusion</h2> - <p>Transport Tycoon Empire: City is a game worth trying for Android users who love simulation games. It is a game that lets you build your own transport empire, deliver cargo, complete contracts, build your own city, compete with other players online, improve your strategic thinking and planning skills, enjoy a realistic and immersive gaming experience, and have fun and relax with a casual and addictive gameplay. You can download and install the mod apk 1.2.6 from the link provided in this article and start playing right away. You will not regret it!</p> - <h2>FAQs</h2> - <p>Here are some frequently asked questions about Transport Tycoon Empire: City:</p> - <h3>Q: Is Transport Tycoon Empire: City free to play?</h3> -<p>A: Yes, Transport Tycoon Empire: City is free to play. However, it contains some in-app purchases that can enhance your gaming experience.</p> - <h3>Q: Is Transport Tycoon Empire: City safe to download and install?</h3> -<p>A: Yes, Transport Tycoon Empire: City is safe to download and install. The mod apk file is scanned and verified by antivirus software and does not contain any malware or viruses.</p> - <h3>Q: How can I update Transport Tycoon Empire: City to the latest version?</h3> -<p>A: You can update Transport Tycoon Empire: City to the latest version by downloading and installing the new mod apk file from the same link as before. You do not need to uninstall the previous version.</p> - <h3>Q: How can I contact the developers of Transport Tycoon Empire: City?</h3> -<p>A: You can contact the developers of Transport Tycoon Empire: City by visiting their official website or their Facebook page. You can also send them an email or leave a comment on their Google Play Store page.</p> - <h3>Q: How can I share my feedback and suggestions for Transport Tycoon Empire: City?</h3> -<p>A: You can share your feedback and suggestions for Transport Tycoon Empire: City by rating and reviewing the game on the Google Play Store or by joining the online community of other players on the Facebook page or the Discord server.</p> 401be4b1e0<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/congsaPfin/Manga-OCR/logs/Z Legends APK The Most Amazing Dragon Ball Z Game for Android Devices.md b/spaces/congsaPfin/Manga-OCR/logs/Z Legends APK The Most Amazing Dragon Ball Z Game for Android Devices.md deleted file mode 100644 index 8b80eb1a04066dabd01b9bd901e86681c0ceec9f..0000000000000000000000000000000000000000 --- a/spaces/congsaPfin/Manga-OCR/logs/Z Legends APK The Most Amazing Dragon Ball Z Game for Android Devices.md +++ /dev/null @@ -1,123 +0,0 @@ - -<h1>Z Legends APK: A Cosmic Fighting Game for Android</h1> -<p>If you are a fan of cosmic fighting games, you might want to check out Z Legends APK. This is a game that lets you choose your favorite character from the popular series and fight against other cosmic warriors in various environments and situations. 
You can enjoy the thrilling action, the stunning graphics, and the epic sound effects of this game on your Android device. In this article, we will tell you everything you need to know about Z Legends APK, including what it is, how to download and install it, how to play it, and why you should play it.</p> - <h2>What is Z Legends APK?</h2> -<p>Z Legends APK is a fan-made game that is inspired by the famous series of cosmic fighters. It is not an official game from the original creators, but it is a tribute to their work. The game features many characters from the series, such as Goku, Vegeta, Gohan, Piccolo, Frieza, Cell, Buu, Broly, and more. You can also unlock new characters as you progress in the game.</p> -<h2>z legends apk</h2><br /><p><b><b>Download File</b> ✺✺✺ <a href="https://urlca.com/2uOe2T">https://urlca.com/2uOe2T</a></b></p><br /><br /> - <h3>The features of Z Legends APK</h3> -<p>Some of the features that make Z Legends APK an amazing game are:</p> -<ul> -<li>It has high-quality graphics and animations that bring the characters and the environments to life.</li> -<li>It has realistic sound effects and voice acting that enhance the immersion and the excitement of the game.</li> -<li>It has simple and intuitive controls that make it easy to perform combos and special moves.</li> -<li>It has various modes that offer different challenges and rewards.</li> -<li>It has a multiplayer mode that allows you to fight against other players online.</li> -<li>It has a customization option that lets you change the appearance and the skills of your character.</li> -<li>It has a low file size that does not take up much space on your device.</li> -</ul> - <h3>The characters of Z Legends APK</h3> -<p>Z Legends APK has a large roster of characters that you can choose from. Each character has their own strengths and weaknesses, as well as their own unique moves and transformations. Some of the characters that you can play as are:</p> -<table> -<tr><th>Name</th><th>Category</th><th>Special Move</th><th>Transformation</th></tr> -<tr><td>Goku</td><td>Saiyan</td><td>Kamehameha</td><td>Super Saiyan 1-4, God, Blue</td></tr> -<tr><td>Vegeta</td><td>Saiyan</td><td>Final Flash</td><td>Super Saiyan 1-4, God, Blue, Evolution</td></tr> -<tr><td>Gohan</td><td>Saiyan/Human</td><td>Masenko</td><td>Super Saiyan 1-2, Ultimate</td></tr> -<tr><td>Piccolo</td><td>Namekian</td><td>Special Beam Cannon</td><td>Fused with Nail/Kami</td></tr> -<tr><td>Frieza</td><td>Arcosian</td><td>Death Beam</td><td>Final Form, Golden Form</td></tr> -<tr><td>Cell</td><td>Bio-Android</td><td>Kamehameha/Solar Kamehameha</td><td>Perfect Form/Super Perfect Form</td></tr> -<tr><td>Buu</td><td>Majin</td><td>Vanishing Ball/Chocolate Beam</td><td>Fat Buu/Evil Bu the screen that lets you transform your character when your energy meter is full and you meet the conditions.</li> -<li>A pause button on the top right corner of the screen that lets you pause the game and access the menu.</li> -</ul> - <h3>The tips and tricks for Z Legends APK</h3> -<p>Z Legends APK is a game that requires skill and strategy to win. Here are some of the tips and tricks that can help you improve your performance:</p> -<ul> -<li>Learn the strengths and weaknesses of each character and choose the one that suits your playstyle.</li> -<li>Practice your combos and special moves in the training mode and master them in the real battles.</li> -<li>Use your guard wisely and avoid spamming it. 
You can also dodge or counterattack when your opponent is vulnerable.</li> -<li>Use your power wisely and save it for the right moment. You can also use it to cancel your attacks or escape from combos.</li> -<li>Use your special moves wisely and aim them carefully. You can also use them to finish off your opponent or to create openings.</li> -<li>Use your transformation wisely and activate it when you have an advantage. You can also use it to change the tide of the battle or to surprise your opponent.</li> -<li>Play online mode and challenge other players to test your skills and learn from them.</li> -</ul> - <h2>Why should you play Z Legends APK?</h2> -<p>Z Legends APK is a game that offers a lot of fun and excitement for cosmic fighting fans. Here are some of the reasons why you should play it:</p> - <h3>The pros and cons of Z Legends APK</h3> -<p>Z Legends APK has its pros and cons, like any other game. Here are some of them:</p> -<p>z legends apk download<br /> -z legends apk latest version<br /> -z legends apk update<br /> -z legends apk free<br /> -z legends apk mod<br /> -z legends apk hack<br /> -z legends apk offline<br /> -z legends apk online<br /> -z legends apk for android<br /> -z legends apk for pc<br /> -z legends apk for ios<br /> -z legends apk for windows<br /> -z legends apk for mac<br /> -z legends apk for linux<br /> -z legends apk for firestick<br /> -z legends apk gameplay<br /> -z legends apk review<br /> -z legends apk rating<br /> -z legends apk size<br /> -z legends apk features<br /> -z legends apk tips<br /> -z legends apk tricks<br /> -z legends apk cheats<br /> -z legends apk guide<br /> -z legends apk walkthrough<br /> -z legends apk dragon ball<br /> -z legends apk stickman warriors<br /> -z legends apk super saiyan<br /> -z legends apk dragon warriors<br /> -z legends apk legend fighters<br /> -z legends apk adventure game<br /> -z legends apk action game<br /> -z legends apk role playing game<br /> -z legends apk arcade game<br /> -z legends apk casual game<br /> -z legends apk strategy game<br /> -z legends apk sports game<br /> -z legends apk simulation game<br /> -z legends apk racing game<br /> -z legends apk puzzle game<br /> -z legends apk card game<br /> -z legends apk music game<br /> -z legends apk board game<br /> -z legends apk educational game<br /> -z legends apk trivia game<br /> -z legends apk word game<br /> -z legends APKCombo[^1^]</p> -<table> -<tr><th>Pros</th><th>Cons</th></tr> -<tr><td>It has high-quality graphics and sound effects that make the game realistic and immersive.</td><td>It may have some bugs and glitches that affect the game performance and experience.</td></tr> -<tr><td>It has simple and intuitive controls that make the game easy to play and enjoy.</td><td>It may have some compatibility issues with some devices and models.</td></tr> -<tr><td>It has various modes that offer different challenges and rewards for different preferences and tastes.</td><td>It may have some balance issues with some characters and moves that make the game unfair or boring.</td></tr> -<tr><td>It has a multiplayer mode that allows you to play with other players online and have fun together.</td><td>It may have some connection issues with some servers and regions that affect the game stability and quality.</td></tr> -<tr><td>It has a customization option that allows you to personalize your character and make it unique.</td><td>It may have some in-app purchases that require real money to unlock some features or items.</td></tr> -</table> - <h3>The 
ratings and reviews of Z Legends APK</h3> -<p>Z Legends APK has received positive ratings and reviews from many players who have tried it. Here are some of them:</p> -<blockquote>"This game is awesome! I love the graphics, the sound, and the gameplay. It feels like I'm watching the series again. The characters are amazing and their moves are epic. I recommend this game to anyone who likes cosmic fighting games."</blockquote> -<blockquote>"This game is very good. I like the controls, the modes, and the online mode. It is easy to play and fun to win. The characters are cool and their transformations are awesome. I enjoy playing this game with my friends."</blockquote> -<blockquote>"This game is decent. I like the concept, the story, and the customization option. It is interesting to play and challenging to master. The characters are nice and their special moves are impressive. I wish the game had more characters and stages."</blockquote> - <h2>Conclusion</h2> -<p>Z Legends APK is a cosmic fighting game for Android that lets you play as your favorite character from the popular series and fight against other cosmic warriors in various environments and situations. You can enjoy the high-quality graphics, the realistic sound effects, the simple controls, the various modes, the multiplayer mode, and the customization option of this game. You can also unlock new characters, stages, and features as you progress in the game. Z Legends APK is a game that is worth trying if you are a fan of cosmic fighting games.</p> - <h2>FAQs</h2> -<p>Here are some of the frequently asked questions about Z Legends APK:</p> -<ul> -<li>Q: Is Z Legends APK safe to download and install?</li> -<li>A: Yes, Z Legends APK is safe to download and install. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source and scan it before installing it.</li> -<li>Q: Is Z Legends APK free to play?</li> -<li>A: Yes, Z Legends APK is free to play. You can download and install it without paying any money. However, it may have some in-app purchases that require real money to unlock some features or items.</li> -<li>Q: Is Z Legends APK compatible with my device?</li> -<li>A: Z Legends APK is compatible with most Android devices that have version 4.4 or higher. However, it may not work well on some devices or models due to compatibility issues. You can check the requirements and the compatibility list on the official website of Z Legends APK.</li> -<li>Q: How can I update Z Legends APK?</li> -<li>A: You can update Z Legends APK by downloading and installing the latest version from the official website of Z Legends APK or from other sources. You can also check for updates in the game settings or in the app store.</li> -<li>Q: How can I contact the developers of Z Legends APK?</li> -<li>A: You can contact the developers of Z Legends APK by sending them an email at . 
You can also follow them on their social media accounts or visit their website for more information.</li> -</ul></p> 197e85843d<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/resnet.py b/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/resnet.py deleted file mode 100644 index 1cb3ac057ee2d52c46fc94685b5d4e698aad8d5f..0000000000000000000000000000000000000000 --- a/spaces/coreml-community/ControlNet-v1-1-Annotators-cpu/annotator/uniformer/mmcv/cnn/resnet.py +++ /dev/null @@ -1,316 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import logging - -import torch.nn as nn -import torch.utils.checkpoint as cp - -from .utils import constant_init, kaiming_init - - -def conv3x3(in_planes, out_planes, stride=1, dilation=1): - """3x3 convolution with padding.""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - dilation=dilation, - bias=False) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False): - super(BasicBlock, self).__init__() - assert style in ['pytorch', 'caffe'] - self.conv1 = conv3x3(inplanes, planes, stride, dilation) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - assert not with_cp - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, - inplanes, - planes, - stride=1, - dilation=1, - downsample=None, - style='pytorch', - with_cp=False): - """Bottleneck block. - - If style is "pytorch", the stride-two layer is the 3x3 conv layer, if - it is "caffe", the stride-two layer is the first 1x1 conv layer. 
- """ - super(Bottleneck, self).__init__() - assert style in ['pytorch', 'caffe'] - if style == 'pytorch': - conv1_stride = 1 - conv2_stride = stride - else: - conv1_stride = stride - conv2_stride = 1 - self.conv1 = nn.Conv2d( - inplanes, planes, kernel_size=1, stride=conv1_stride, bias=False) - self.conv2 = nn.Conv2d( - planes, - planes, - kernel_size=3, - stride=conv2_stride, - padding=dilation, - dilation=dilation, - bias=False) - - self.bn1 = nn.BatchNorm2d(planes) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d( - planes, planes * self.expansion, kernel_size=1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - self.dilation = dilation - self.with_cp = with_cp - - def forward(self, x): - - def _inner_forward(x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - - return out - - if self.with_cp and x.requires_grad: - out = cp.checkpoint(_inner_forward, x) - else: - out = _inner_forward(x) - - out = self.relu(out) - - return out - - -def make_res_layer(block, - inplanes, - planes, - blocks, - stride=1, - dilation=1, - style='pytorch', - with_cp=False): - downsample = None - if stride != 1 or inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append( - block( - inplanes, - planes, - stride, - dilation, - downsample, - style=style, - with_cp=with_cp)) - inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append( - block(inplanes, planes, 1, dilation, style=style, with_cp=with_cp)) - - return nn.Sequential(*layers) - - -class ResNet(nn.Module): - """ResNet backbone. - - Args: - depth (int): Depth of resnet, from {18, 34, 50, 101, 152}. - num_stages (int): Resnet stages, normally 4. - strides (Sequence[int]): Strides of the first block of each stage. - dilations (Sequence[int]): Dilation of each stage. - out_indices (Sequence[int]): Output from which stages. - style (str): `pytorch` or `caffe`. If set to "pytorch", the stride-two - layer is the 3x3 conv layer, otherwise the stride-two layer is - the first 1x1 conv layer. - frozen_stages (int): Stages to be frozen (all param fixed). -1 means - not freezing any parameters. - bn_eval (bool): Whether to set BN layers as eval mode, namely, freeze - running stats (mean and var). - bn_frozen (bool): Whether to freeze weight and bias of BN layers. - with_cp (bool): Use checkpoint or not. Using checkpoint will save some - memory while slowing down the training speed. 
- """ - - arch_settings = { - 18: (BasicBlock, (2, 2, 2, 2)), - 34: (BasicBlock, (3, 4, 6, 3)), - 50: (Bottleneck, (3, 4, 6, 3)), - 101: (Bottleneck, (3, 4, 23, 3)), - 152: (Bottleneck, (3, 8, 36, 3)) - } - - def __init__(self, - depth, - num_stages=4, - strides=(1, 2, 2, 2), - dilations=(1, 1, 1, 1), - out_indices=(0, 1, 2, 3), - style='pytorch', - frozen_stages=-1, - bn_eval=True, - bn_frozen=False, - with_cp=False): - super(ResNet, self).__init__() - if depth not in self.arch_settings: - raise KeyError(f'invalid depth {depth} for resnet') - assert num_stages >= 1 and num_stages <= 4 - block, stage_blocks = self.arch_settings[depth] - stage_blocks = stage_blocks[:num_stages] - assert len(strides) == len(dilations) == num_stages - assert max(out_indices) < num_stages - - self.out_indices = out_indices - self.style = style - self.frozen_stages = frozen_stages - self.bn_eval = bn_eval - self.bn_frozen = bn_frozen - self.with_cp = with_cp - - self.inplanes = 64 - self.conv1 = nn.Conv2d( - 3, 64, kernel_size=7, stride=2, padding=3, bias=False) - self.bn1 = nn.BatchNorm2d(64) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - - self.res_layers = [] - for i, num_blocks in enumerate(stage_blocks): - stride = strides[i] - dilation = dilations[i] - planes = 64 * 2**i - res_layer = make_res_layer( - block, - self.inplanes, - planes, - num_blocks, - stride=stride, - dilation=dilation, - style=self.style, - with_cp=with_cp) - self.inplanes = planes * block.expansion - layer_name = f'layer{i + 1}' - self.add_module(layer_name, res_layer) - self.res_layers.append(layer_name) - - self.feat_dim = block.expansion * 64 * 2**(len(stage_blocks) - 1) - - def init_weights(self, pretrained=None): - if isinstance(pretrained, str): - logger = logging.getLogger() - from ..runner import load_checkpoint - load_checkpoint(self, pretrained, strict=False, logger=logger) - elif pretrained is None: - for m in self.modules(): - if isinstance(m, nn.Conv2d): - kaiming_init(m) - elif isinstance(m, nn.BatchNorm2d): - constant_init(m, 1) - else: - raise TypeError('pretrained must be a str or None') - - def forward(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - outs = [] - for i, layer_name in enumerate(self.res_layers): - res_layer = getattr(self, layer_name) - x = res_layer(x) - if i in self.out_indices: - outs.append(x) - if len(outs) == 1: - return outs[0] - else: - return tuple(outs) - - def train(self, mode=True): - super(ResNet, self).train(mode) - if self.bn_eval: - for m in self.modules(): - if isinstance(m, nn.BatchNorm2d): - m.eval() - if self.bn_frozen: - for params in m.parameters(): - params.requires_grad = False - if mode and self.frozen_stages >= 0: - for param in self.conv1.parameters(): - param.requires_grad = False - for param in self.bn1.parameters(): - param.requires_grad = False - self.bn1.eval() - self.bn1.weight.requires_grad = False - self.bn1.bias.requires_grad = False - for i in range(1, self.frozen_stages + 1): - mod = getattr(self, f'layer{i}') - mod.eval() - for param in mod.parameters(): - param.requires_grad = False diff --git a/spaces/cymic/Talking_Head_Anime_3/tha3/nn/__init__.py b/spaces/cymic/Talking_Head_Anime_3/tha3/nn/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/davidpiscasio/unpaired-img2img/models/pix2pix_model.py b/spaces/davidpiscasio/unpaired-img2img/models/pix2pix_model.py deleted file 
mode 100644 index 939eb887ee371a2685e71e17bffded7ae8c08b34..0000000000000000000000000000000000000000 --- a/spaces/davidpiscasio/unpaired-img2img/models/pix2pix_model.py +++ /dev/null @@ -1,127 +0,0 @@ -import torch -from .base_model import BaseModel -from . import networks - - -class Pix2PixModel(BaseModel): - """ This class implements the pix2pix model, for learning a mapping from input images to output images given paired data. - - The model training requires '--dataset_mode aligned' dataset. - By default, it uses a '--netG unet256' U-Net generator, - a '--netD basic' discriminator (PatchGAN), - and a '--gan_mode' vanilla GAN loss (the cross-entropy objective used in the original GAN paper). - - pix2pix paper: https://arxiv.org/pdf/1611.07004.pdf - """ - @staticmethod - def modify_commandline_options(parser, is_train=True): - """Add new dataset-specific options, and rewrite default values for existing options. - - Parameters: - parser -- original option parser - is_train (bool) -- whether training phase or test phase. You can use this flag to add training-specific or test-specific options. - - Returns: - the modified parser. - - For pix2pix, we do not use image buffer - The training objective is: GAN Loss + lambda_L1 * ||G(A)-B||_1 - By default, we use vanilla GAN loss, UNet with batchnorm, and aligned datasets. - """ - # changing the default values to match the pix2pix paper (https://phillipi.github.io/pix2pix/) - parser.set_defaults(norm='batch', netG='unet_256', dataset_mode='aligned') - if is_train: - parser.set_defaults(pool_size=0, gan_mode='vanilla') - parser.add_argument('--lambda_L1', type=float, default=100.0, help='weight for L1 loss') - - return parser - - def __init__(self, opt): - """Initialize the pix2pix class. - - Parameters: - opt (Option class)-- stores all the experiment flags; needs to be a subclass of BaseOptions - """ - BaseModel.__init__(self, opt) - # specify the training losses you want to print out. The training/test scripts will call <BaseModel.get_current_losses> - self.loss_names = ['G_GAN', 'G_L1', 'D_real', 'D_fake'] - # specify the images you want to save/display. The training/test scripts will call <BaseModel.get_current_visuals> - self.visual_names = ['real_A', 'fake_B', 'real_B'] - # specify the models you want to save to the disk. The training/test scripts will call <BaseModel.save_networks> and <BaseModel.load_networks> - if self.isTrain: - self.model_names = ['G', 'D'] - else: # during test time, only load G - self.model_names = ['G'] - # define networks (both generator and discriminator) - self.netG = networks.define_G(opt.input_nc, opt.output_nc, opt.ngf, opt.netG, opt.norm, - not opt.no_dropout, opt.init_type, opt.init_gain, self.gpu_ids) - - if self.isTrain: # define a discriminator; conditional GANs need to take both input and output images; Therefore, #channels for D is input_nc + output_nc - self.netD = networks.define_D(opt.input_nc + opt.output_nc, opt.ndf, opt.netD, - opt.n_layers_D, opt.norm, opt.init_type, opt.init_gain, self.gpu_ids) - - if self.isTrain: - # define loss functions - self.criterionGAN = networks.GANLoss(opt.gan_mode).to(self.device) - self.criterionL1 = torch.nn.L1Loss() - # initialize optimizers; schedulers will be automatically created by function <BaseModel.setup>.
- self.optimizer_G = torch.optim.Adam(self.netG.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999)) - self.optimizer_D = torch.optim.Adam(self.netD.parameters(), lr=opt.lr, betas=(opt.beta1, 0.999)) - self.optimizers.append(self.optimizer_G) - self.optimizers.append(self.optimizer_D) - - def set_input(self, input): - """Unpack input data from the dataloader and perform necessary pre-processing steps. - - Parameters: - input (dict): include the data itself and its metadata information. - - The option 'direction' can be used to swap images in domain A and domain B. - """ - AtoB = self.opt.direction == 'AtoB' - self.real_A = input['A' if AtoB else 'B'].to(self.device) - self.real_B = input['B' if AtoB else 'A'].to(self.device) - self.image_paths = input['A_paths' if AtoB else 'B_paths'] - - def forward(self): - """Run forward pass; called by both functions <optimize_parameters> and <test>.""" - self.fake_B = self.netG(self.real_A) # G(A) - - def backward_D(self): - """Calculate GAN loss for the discriminator""" - # Fake; stop backprop to the generator by detaching fake_B - fake_AB = torch.cat((self.real_A, self.fake_B), 1) # we use conditional GANs; we need to feed both input and output to the discriminator - pred_fake = self.netD(fake_AB.detach()) - self.loss_D_fake = self.criterionGAN(pred_fake, False) - # Real - real_AB = torch.cat((self.real_A, self.real_B), 1) - pred_real = self.netD(real_AB) - self.loss_D_real = self.criterionGAN(pred_real, True) - # combine loss and calculate gradients - self.loss_D = (self.loss_D_fake + self.loss_D_real) * 0.5 - self.loss_D.backward() - - def backward_G(self): - """Calculate GAN and L1 loss for the generator""" - # First, G(A) should fake the discriminator - fake_AB = torch.cat((self.real_A, self.fake_B), 1) - pred_fake = self.netD(fake_AB) - self.loss_G_GAN = self.criterionGAN(pred_fake, True) - # Second, G(A) = B - self.loss_G_L1 = self.criterionL1(self.fake_B, self.real_B) * self.opt.lambda_L1 - # combine loss and calculate gradients - self.loss_G = self.loss_G_GAN + self.loss_G_L1 - self.loss_G.backward() - - def optimize_parameters(self): - self.forward() # compute fake images: G(A) - # update D - self.set_requires_grad(self.netD, True) # enable backprop for D - self.optimizer_D.zero_grad() # set D's gradients to zero - self.backward_D() # calculate gradients for D - self.optimizer_D.step() # update D's weights - # update G - self.set_requires_grad(self.netD, False) # D requires no gradients when optimizing G - self.optimizer_G.zero_grad() # set G's gradients to zero - self.backward_G() # calculate gradients for G - self.optimizer_G.step() # update G's weights diff --git a/spaces/davila7/semantic-search/README.md b/spaces/davila7/semantic-search/README.md deleted file mode 100644 index c1d44e2785ac1478b66772545937d05fcc2a095a..0000000000000000000000000000000000000000 --- a/spaces/davila7/semantic-search/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Semantic Search -emoji: 🔍 -colorFrom: red -colorTo: blue -sdk: streamlit -sdk_version: 1.17.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/merge/cmap.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/merge/cmap.py deleted file mode 100644 index 3209a5d7b82c7ff0776dcae55e92c3cf816553a7..0000000000000000000000000000000000000000 ---
a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/merge/cmap.py +++ /dev/null @@ -1,141 +0,0 @@ -# Copyright 2013 Google, Inc. All Rights Reserved. -# -# Google Author(s): Behdad Esfahbod, Roozbeh Pournader - -from fontTools.merge.unicode import is_Default_Ignorable -from fontTools.pens.recordingPen import DecomposingRecordingPen -import logging - - -log = logging.getLogger("fontTools.merge") - - -def computeMegaGlyphOrder(merger, glyphOrders): - """Modifies passed-in glyphOrders to reflect new glyph names. - Stores merger.glyphOrder.""" - megaOrder = {} - for glyphOrder in glyphOrders: - for i, glyphName in enumerate(glyphOrder): - if glyphName in megaOrder: - n = megaOrder[glyphName] - while (glyphName + "." + repr(n)) in megaOrder: - n += 1 - megaOrder[glyphName] = n - glyphName += "." + repr(n) - glyphOrder[i] = glyphName - megaOrder[glyphName] = 1 - merger.glyphOrder = megaOrder = list(megaOrder.keys()) - - -def _glyphsAreSame( - glyphSet1, - glyphSet2, - glyph1, - glyph2, - advanceTolerance=0.05, - advanceToleranceEmpty=0.20, -): - pen1 = DecomposingRecordingPen(glyphSet1) - pen2 = DecomposingRecordingPen(glyphSet2) - g1 = glyphSet1[glyph1] - g2 = glyphSet2[glyph2] - g1.draw(pen1) - g2.draw(pen2) - if pen1.value != pen2.value: - return False - # Allow more width tolerance for glyphs with no ink - tolerance = advanceTolerance if pen1.value else advanceToleranceEmpty - # TODO Warn if advances not the same but within tolerance. - if abs(g1.width - g2.width) > g1.width * tolerance: - return False - if hasattr(g1, "height") and g1.height is not None: - if abs(g1.height - g2.height) > g1.height * tolerance: - return False - return True - - -# Valid (format, platformID, platEncID) triplets for cmap subtables containing -# Unicode BMP-only and Unicode Full Repertoire semantics. -# Cf. OpenType spec for "Platform specific encodings": -# https://docs.microsoft.com/en-us/typography/opentype/spec/name -class _CmapUnicodePlatEncodings: - BMP = {(4, 3, 1), (4, 0, 3), (4, 0, 4), (4, 0, 6)} - FullRepertoire = {(12, 3, 10), (12, 0, 4), (12, 0, 6)} - - -def computeMegaCmap(merger, cmapTables): - """Sets merger.cmap and merger.glyphOrder.""" - - # TODO Handle format=14. 
- # Only merge format 4 and 12 Unicode subtables, ignores all other subtables - # If there is a format 12 table for a font, ignore the format 4 table of it - chosenCmapTables = [] - for fontIdx, table in enumerate(cmapTables): - format4 = None - format12 = None - for subtable in table.tables: - properties = (subtable.format, subtable.platformID, subtable.platEncID) - if properties in _CmapUnicodePlatEncodings.BMP: - format4 = subtable - elif properties in _CmapUnicodePlatEncodings.FullRepertoire: - format12 = subtable - else: - log.warning( - "Dropped cmap subtable from font '%s':\t" - "format %2s, platformID %2s, platEncID %2s", - fontIdx, - subtable.format, - subtable.platformID, - subtable.platEncID, - ) - if format12 is not None: - chosenCmapTables.append((format12, fontIdx)) - elif format4 is not None: - chosenCmapTables.append((format4, fontIdx)) - - # Build the unicode mapping - merger.cmap = cmap = {} - fontIndexForGlyph = {} - glyphSets = [None for f in merger.fonts] if hasattr(merger, "fonts") else None - - for table, fontIdx in chosenCmapTables: - # handle duplicates - for uni, gid in table.cmap.items(): - oldgid = cmap.get(uni, None) - if oldgid is None: - cmap[uni] = gid - fontIndexForGlyph[gid] = fontIdx - elif is_Default_Ignorable(uni) or uni in (0x25CC,): # U+25CC DOTTED CIRCLE - continue - elif oldgid != gid: - # Char previously mapped to oldgid, now to gid. - # Record, to fix up in GSUB 'locl' later. - if merger.duplicateGlyphsPerFont[fontIdx].get(oldgid) is None: - if glyphSets is not None: - oldFontIdx = fontIndexForGlyph[oldgid] - for idx in (fontIdx, oldFontIdx): - if glyphSets[idx] is None: - glyphSets[idx] = merger.fonts[idx].getGlyphSet() - # if _glyphsAreSame(glyphSets[oldFontIdx], glyphSets[fontIdx], oldgid, gid): - # continue - merger.duplicateGlyphsPerFont[fontIdx][oldgid] = gid - elif merger.duplicateGlyphsPerFont[fontIdx][oldgid] != gid: - # Char previously mapped to oldgid but oldgid is already remapped to a different - # gid, because of another Unicode character. - # TODO: Try harder to do something about these. - log.warning( - "Dropped mapping from codepoint %#06X to glyphId '%s'", uni, gid - ) - - -def renameCFFCharStrings(merger, glyphOrder, cffTable): - """Rename topDictIndex charStrings based on glyphOrder.""" - td = cffTable.cff.topDictIndex[0] - - charStrings = {} - for i, v in enumerate(td.CharStrings.charStrings.values()): - glyphName = glyphOrder[i] - charStrings[glyphName] = v - td.CharStrings.charStrings = charStrings - - td.charset = list(glyphOrder) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/eexec.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/eexec.py deleted file mode 100644 index cafa312cdaa4696b0624438e06418ade95438441..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/misc/eexec.py +++ /dev/null @@ -1,119 +0,0 @@ -""" -PostScript Type 1 fonts make use of two types of encryption: charstring -encryption and ``eexec`` encryption. Charstring encryption is used for -the charstrings themselves, while ``eexec`` is used to encrypt larger -sections of the font program, such as the ``Private`` and ``CharStrings`` -dictionaries. Despite the different names, the algorithm is the same, -although ``eexec`` encryption uses a fixed initial key R=55665. - -The algorithm uses cipher feedback, meaning that the ciphertext is used -to modify the key. 
Because of this, the routines in this module return -the new key at the end of the operation. - -""" - -from fontTools.misc.textTools import bytechr, bytesjoin, byteord - - -def _decryptChar(cipher, R): - cipher = byteord(cipher) - plain = ((cipher ^ (R >> 8))) & 0xFF - R = ((cipher + R) * 52845 + 22719) & 0xFFFF - return bytechr(plain), R - - -def _encryptChar(plain, R): - plain = byteord(plain) - cipher = ((plain ^ (R >> 8))) & 0xFF - R = ((cipher + R) * 52845 + 22719) & 0xFFFF - return bytechr(cipher), R - - -def decrypt(cipherstring, R): - r""" - Decrypts a string using the Type 1 encryption algorithm. - - Args: - cipherstring: String of ciphertext. - R: Initial key. - - Returns: - decryptedStr: Plaintext string. - R: Output key for subsequent decryptions. - - Examples:: - - >>> testStr = b"\0\0asdadads asds\265" - >>> decryptedStr, R = decrypt(testStr, 12321) - >>> decryptedStr == b'0d\nh\x15\xe8\xc4\xb2\x15\x1d\x108\x1a<6\xa1' - True - >>> R == 36142 - True - """ - plainList = [] - for cipher in cipherstring: - plain, R = _decryptChar(cipher, R) - plainList.append(plain) - plainstring = bytesjoin(plainList) - return plainstring, int(R) - - -def encrypt(plainstring, R): - r""" - Encrypts a string using the Type 1 encryption algorithm. - - Note that the algorithm as described in the Type 1 specification requires the - plaintext to be prefixed with a number of random bytes. (For ``eexec`` the - number of random bytes is set to 4.) This routine does *not* add the random - prefix to its input. - - Args: - plainstring: String of plaintext. - R: Initial key. - - Returns: - cipherstring: Ciphertext string. - R: Output key for subsequent encryptions. - - Examples:: - - >>> testStr = b"\0\0asdadads asds\265" - >>> decryptedStr, R = decrypt(testStr, 12321) - >>> decryptedStr == b'0d\nh\x15\xe8\xc4\xb2\x15\x1d\x108\x1a<6\xa1' - True - >>> R == 36142 - True - - >>> testStr = b'0d\nh\x15\xe8\xc4\xb2\x15\x1d\x108\x1a<6\xa1' - >>> encryptedStr, R = encrypt(testStr, 12321) - >>> encryptedStr == b"\0\0asdadads asds\265" - True - >>> R == 36142 - True - """ - cipherList = [] - for plain in plainstring: - cipher, R = _encryptChar(plain, R) - cipherList.append(cipher) - cipherstring = bytesjoin(cipherList) - return cipherstring, int(R) - - -def hexString(s): - import binascii - - return binascii.hexlify(s) - - -def deHexString(h): - import binascii - - h = bytesjoin(h.split()) - return binascii.unhexlify(h) - - -if __name__ == "__main__": - import sys - import doctest - - sys.exit(doctest.testmod().failed) diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/pointInsidePen.py b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/pointInsidePen.py deleted file mode 100644 index 8a579ae4c93f824b5ce3a5e80097aeffd5f5933d..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/fontTools/pens/pointInsidePen.py +++ /dev/null @@ -1,192 +0,0 @@ -"""fontTools.pens.pointInsidePen -- Pen implementing "point inside" testing -for shapes. -""" - -from fontTools.pens.basePen import BasePen -from fontTools.misc.bezierTools import solveQuadratic, solveCubic - - -__all__ = ["PointInsidePen"] - - -class PointInsidePen(BasePen): - - """This pen implements "point inside" testing: to test whether - a given point lies inside the shape (black) or outside (white). 
- Instances of this class can be recycled, as long as the - setTestPoint() method is used to set the new point to test. - - Typical usage: - - pen = PointInsidePen(glyphSet, (100, 200)) - outline.draw(pen) - isInside = pen.getResult() - - Both the even-odd algorithm and the non-zero-winding-rule - algorithm are implemented. The latter is the default, specify - True for the evenOdd argument of __init__ or setTestPoint - to use the even-odd algorithm. - """ - - # This class implements the classical "shoot a ray from the test point - # to infinity and count how many times it intersects the outline" (as well - # as the non-zero variant, where the counter is incremented if the outline - # intersects the ray in one direction and decremented if it intersects in - # the other direction). - # I found an amazingly clear explanation of the subtleties involved in - # implementing this correctly for polygons here: - # http://graphics.cs.ucdavis.edu/~okreylos/TAship/Spring2000/PointInPolygon.html - # I extended the principles outlined on that page to curves. - - def __init__(self, glyphSet, testPoint, evenOdd=False): - BasePen.__init__(self, glyphSet) - self.setTestPoint(testPoint, evenOdd) - - def setTestPoint(self, testPoint, evenOdd=False): - """Set the point to test. Call this _before_ the outline gets drawn.""" - self.testPoint = testPoint - self.evenOdd = evenOdd - self.firstPoint = None - self.intersectionCount = 0 - - def getWinding(self): - if self.firstPoint is not None: - # always make sure the sub paths are closed; the algorithm only works - # for closed paths. - self.closePath() - return self.intersectionCount - - def getResult(self): - """After the shape has been drawn, getResult() returns True if the test - point lies within the (black) shape, and False if it doesn't. - """ - winding = self.getWinding() - if self.evenOdd: - result = winding % 2 - else: # non-zero - result = self.intersectionCount != 0 - return not not result - - def _addIntersection(self, goingUp): - if self.evenOdd or goingUp: - self.intersectionCount += 1 - else: - self.intersectionCount -= 1 - - def _moveTo(self, point): - if self.firstPoint is not None: - # always make sure the sub paths are closed; the algorithm only works - # for closed paths. 
- self.closePath() - self.firstPoint = point - - def _lineTo(self, point): - x, y = self.testPoint - x1, y1 = self._getCurrentPoint() - x2, y2 = point - - if x1 < x and x2 < x: - return - if y1 < y and y2 < y: - return - if y1 >= y and y2 >= y: - return - - dx = x2 - x1 - dy = y2 - y1 - t = (y - y1) / dy - ix = dx * t + x1 - if ix < x: - return - self._addIntersection(y2 > y1) - - def _curveToOne(self, bcp1, bcp2, point): - x, y = self.testPoint - x1, y1 = self._getCurrentPoint() - x2, y2 = bcp1 - x3, y3 = bcp2 - x4, y4 = point - - if x1 < x and x2 < x and x3 < x and x4 < x: - return - if y1 < y and y2 < y and y3 < y and y4 < y: - return - if y1 >= y and y2 >= y and y3 >= y and y4 >= y: - return - - dy = y1 - cy = (y2 - dy) * 3.0 - by = (y3 - y2) * 3.0 - cy - ay = y4 - dy - cy - by - solutions = sorted(solveCubic(ay, by, cy, dy - y)) - solutions = [t for t in solutions if -0.0 <= t <= 1.0] - if not solutions: - return - - dx = x1 - cx = (x2 - dx) * 3.0 - bx = (x3 - x2) * 3.0 - cx - ax = x4 - dx - cx - bx - - above = y1 >= y - lastT = None - for t in solutions: - if t == lastT: - continue - lastT = t - t2 = t * t - t3 = t2 * t - - direction = 3 * ay * t2 + 2 * by * t + cy - incomingGoingUp = outgoingGoingUp = direction > 0.0 - if direction == 0.0: - direction = 6 * ay * t + 2 * by - outgoingGoingUp = direction > 0.0 - incomingGoingUp = not outgoingGoingUp - if direction == 0.0: - direction = ay - incomingGoingUp = outgoingGoingUp = direction > 0.0 - - xt = ax * t3 + bx * t2 + cx * t + dx - if xt < x: - continue - - if t in (0.0, -0.0): - if not outgoingGoingUp: - self._addIntersection(outgoingGoingUp) - elif t == 1.0: - if incomingGoingUp: - self._addIntersection(incomingGoingUp) - else: - if incomingGoingUp == outgoingGoingUp: - self._addIntersection(outgoingGoingUp) - # else: - # we're not really intersecting, merely touching - - def _qCurveToOne_unfinished(self, bcp, point): - # XXX need to finish this, for now doing it through a cubic - # (BasePen implements _qCurveTo in terms of a cubic) will - # have to do. 
- x, y = self.testPoint - x1, y1 = self._getCurrentPoint() - x2, y2 = bcp - x3, y3 = point - c = y1 - b = (y2 - c) * 2.0 - a = y3 - c - b - solutions = sorted(solveQuadratic(a, b, c - y)) - solutions = [ - t for t in solutions if ZERO_MINUS_EPSILON <= t <= ONE_PLUS_EPSILON - ] - if not solutions: - return - # XXX - - def _closePath(self): - if self._getCurrentPoint() != self.firstPoint: - self.lineTo(self.firstPoint) - self.firstPoint = None - - def _endPath(self): - """Insideness is not defined for open contours.""" - raise NotImplementedError diff --git a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2e429704.js b/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2e429704.js deleted file mode 100644 index 480cbcd8467c061d7db47bac472ba8bdc0f54846..0000000000000000000000000000000000000000 --- a/spaces/dcarpintero/nlp-summarizer-pegasus/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/index-2e429704.js +++ /dev/null @@ -1,4 +0,0 @@ -import{S as F,e as G,s as K,f as A,g as u,h as b,j as y,n as z,k,m as p,C as ce,av as Q,I as C,o as M,Z as Y,t as Z,x as q,p as T,B as ke,P,Y as B,K as D,F as V,G as I,w as j,u as H,H as O,V as ve,ae as pe,Q as we,R as je,r as J,v as U,E as ye}from"./index-9e76ffee.js";import{g as He}from"./color-5a2b6a59.js";import{B as Ne}from"./Button-30a08c0b.js";import{B as Be}from"./BlockLabel-9545c6da.js";import{E as Ce}from"./Empty-8e3485c0.js";function Me(s){let e,t,l;return{c(){e=A("svg"),t=A("path"),l=A("path"),u(t,"fill","currentColor"),u(t,"d","M12 15H5a3 3 0 0 1-3-3v-2a3 3 0 0 1 3-3h5V5a1 1 0 0 0-1-1H3V2h6a3 3 0 0 1 3 3zM5 9a1 1 0 0 0-1 1v2a1 1 0 0 0 1 1h5V9zm15 14v2a1 1 0 0 0 1 1h5v-4h-5a1 1 0 0 0-1 1z"),u(l,"fill","currentColor"),u(l,"d","M2 30h28V2Zm26-2h-7a3 3 0 0 1-3-3v-2a3 3 0 0 1 3-3h5v-2a1 1 0 0 0-1-1h-6v-2h6a3 3 0 0 1 3 3Z"),u(e,"xmlns","http://www.w3.org/2000/svg"),u(e,"xmlns:xlink","http://www.w3.org/1999/xlink"),u(e,"aria-hidden","true"),u(e,"role","img"),u(e,"class","iconify iconify--carbon"),u(e,"width","100%"),u(e,"height","100%"),u(e,"preserveAspectRatio","xMidYMid meet"),u(e,"viewBox","0 0 32 32")},m(i,a){b(i,e,a),y(e,t),y(e,l)},p:z,i:z,o:z,d(i){i&&k(e)}}}class ue extends F{constructor(e){super(),G(this,e,null,Me,K,{})}}function W(s,e,t){const l=s.slice();l[19]=e[t][0],l[28]=e[t][1];const i=typeof l[28]=="string"?parseInt(l[28]):l[28];return l[29]=i,l}function X(s,e,t){const l=s.slice();return l[19]=e[t][0],l[20]=e[t][1],l[22]=t,l}function $(s,e,t){const l=s.slice();return l[23]=e[t],l[25]=t,l}function x(s,e,t){const l=s.slice();return l[20]=e[t][0],l[26]=e[t][1],l[22]=t,l}function Se(s){let e,t,l=s[1]&&ee(),i=C(s[0]),a=[];for(let n=0;n<i.length;n+=1)a[n]=le(W(s,i,n));return{c(){l&&l.c(),e=M(),t=p("div");for(let n=0;n<a.length;n+=1)a[n].c();u(t,"class","textfield svelte-taudaj"),u(t,"data-testid","highlighted-text:textfield")},m(n,f){l&&l.m(n,f),b(n,e,f),b(n,t,f);for(let o=0;o<a.length;o+=1)a[o]&&a[o].m(t,null)},p(n,f){if(n[1]?l||(l=ee(),l.c(),l.m(e.parentNode,e)):l&&(l.d(1),l=null),f[0]&1){i=C(n[0]);let o;for(o=0;o<i.length;o+=1){const d=W(n,i,o);a[o]?a[o].p(d,f):(a[o]=le(d),a[o].c(),a[o].m(t,null))}for(;o<a.length;o+=1)a[o].d(1);a.length=i.length}},d(n){n&&(k(e),k(t)),l&&l.d(n),Y(a,n)}}}function Ee(s){let e,t,l=s[1]&&te(s),i=C(s[0]),a=[];for(let n=0;n<i.length;n+=1)a[n]=fe(X(s,i,n));return{c(){l&&l.c(),e=M(),t=p("div");for(let n=0;n<a.length;n+=1)a[n].c();u(t,"class","textfield 
svelte-taudaj")},m(n,f){l&&l.m(n,f),b(n,e,f),b(n,t,f);for(let o=0;o<a.length;o+=1)a[o]&&a[o].m(t,null)},p(n,f){if(n[1]?l?l.p(n,f):(l=te(n),l.c(),l.m(e.parentNode,e)):l&&(l.d(1),l=null),f[0]&95){i=C(n[0]);let o;for(o=0;o<i.length;o+=1){const d=X(n,i,o);a[o]?a[o].p(d,f):(a[o]=fe(d),a[o].c(),a[o].m(t,null))}for(;o<a.length;o+=1)a[o].d(1);a.length=i.length}},d(n){n&&(k(e),k(t)),l&&l.d(n),Y(a,n)}}}function ee(s){let e;return{c(){e=p("div"),e.innerHTML="<span>-1</span> <span>0</span> <span>+1</span>",u(e,"class","color-legend svelte-taudaj"),u(e,"data-testid","highlighted-text:color-legend")},m(t,l){b(t,e,l)},d(t){t&&k(e)}}}function le(s){let e,t,l=s[19]+"",i,a,n;return{c(){e=p("span"),t=p("span"),i=Z(l),a=M(),u(t,"class","text svelte-taudaj"),u(e,"class","textspan score-text svelte-taudaj"),u(e,"style",n="background-color: rgba("+(s[29]<0?"128, 90, 213,"+-s[29]:"239, 68, 60,"+s[29])+")")},m(f,o){b(f,e,o),y(e,t),y(t,i),y(e,a)},p(f,o){o[0]&1&&l!==(l=f[19]+"")&&q(i,l),o[0]&1&&n!==(n="background-color: rgba("+(f[29]<0?"128, 90, 213,"+-f[29]:"239, 68, 60,"+f[29])+")")&&u(e,"style",n)},d(f){f&&k(e)}}}function te(s){let e,t=C(Object.entries(s[3])),l=[];for(let i=0;i<t.length;i+=1)l[i]=ne(x(s,t,i));return{c(){e=p("div");for(let i=0;i<l.length;i+=1)l[i].c();u(e,"class","category-legend svelte-taudaj"),u(e,"data-testid","highlighted-text:category-legend")},m(i,a){b(i,e,a);for(let n=0;n<l.length;n+=1)l[n]&&l[n].m(e,null)},p(i,a){if(a[0]&392){t=C(Object.entries(i[3]));let n;for(n=0;n<t.length;n+=1){const f=x(i,t,n);l[n]?l[n].p(f,a):(l[n]=ne(f),l[n].c(),l[n].m(e,null))}for(;n<l.length;n+=1)l[n].d(1);l.length=t.length}},d(i){i&&k(e),Y(l,i)}}}function ne(s){let e,t=s[20]+"",l,i,a,n,f;function o(){return s[10](s[20])}function d(){return s[11](s[20])}return{c(){e=p("div"),l=Z(t),i=M(),u(e,"class","category-label svelte-taudaj"),u(e,"style",a="background-color:"+s[26].secondary)},m(r,h){b(r,e,h),y(e,l),y(e,i),n||(f=[T(e,"mouseover",o),T(e,"focus",d),T(e,"mouseout",s[12]),T(e,"blur",s[13])],n=!0)},p(r,h){s=r,h[0]&8&&t!==(t=s[20]+"")&&q(l,t),h[0]&8&&a!==(a="background-color:"+s[26].secondary)&&u(e,"style",a)},d(r){r&&k(e),n=!1,ke(f)}}}function se(s){let e,t,l=s[23]+"",i,a,n,f,o=!s[1]&&s[20]!==null&&ie(s);function d(){return s[14](s[22],s[19],s[20])}return{c(){e=p("span"),t=p("span"),i=Z(l),a=M(),o&&o.c(),u(t,"class","text svelte-taudaj"),B(t,"no-label",!s[3][s[20]]),u(e,"class","textspan svelte-taudaj"),B(e,"no-cat",s[20]===null||s[4]&&s[4]!==s[20]),B(e,"hl",s[20]!==null),B(e,"selectable",s[2]),D(e,"background-color",s[20]===null||s[4]&&s[4]!==s[20]?"":s[3][s[20]].secondary)},m(r,h){b(r,e,h),y(e,t),y(t,i),y(e,a),o&&o.m(e,null),n||(f=T(e,"click",d),n=!0)},p(r,h){s=r,h[0]&1&&l!==(l=s[23]+"")&&q(i,l),h[0]&9&&B(t,"no-label",!s[3][s[20]]),!s[1]&&s[20]!==null?o?o.p(s,h):(o=ie(s),o.c(),o.m(e,null)):o&&(o.d(1),o=null),h[0]&17&&B(e,"no-cat",s[20]===null||s[4]&&s[4]!==s[20]),h[0]&1&&B(e,"hl",s[20]!==null),h[0]&4&&B(e,"selectable",s[2]),h[0]&25&&D(e,"background-color",s[20]===null||s[4]&&s[4]!==s[20]?"":s[3][s[20]].secondary)},d(r){r&&k(e),o&&o.d(),n=!1,f()}}}function ie(s){let e,t,l=s[20]+"",i;return{c(){e=Z(`  - `),t=p("span"),i=Z(l),u(t,"class","label svelte-taudaj"),D(t,"background-color",s[20]===null||s[4]&&s[4]!==s[20]?"":s[3][s[20]].primary)},m(a,n){b(a,e,n),b(a,t,n),y(t,i)},p(a,n){n[0]&1&&l!==(l=a[20]+"")&&q(i,l),n[0]&25&&D(t,"background-color",a[20]===null||a[4]&&a[4]!==a[20]?"":a[3][a[20]].primary)},d(a){a&&(k(e),k(t))}}}function ae(s){let e;return{c(){e=p("br")},m(t,l){b(t,e,l)},d(t){t&&k(e)}}}function oe(s){let 
e=s[23].trim()!=="",t,l=s[25]<L(s[19]).length-1,i,a=e&&se(s),n=l&&ae();return{c(){a&&a.c(),t=M(),n&&n.c(),i=P()},m(f,o){a&&a.m(f,o),b(f,t,o),n&&n.m(f,o),b(f,i,o)},p(f,o){o[0]&1&&(e=f[23].trim()!==""),e?a?a.p(f,o):(a=se(f),a.c(),a.m(t.parentNode,t)):a&&(a.d(1),a=null),o[0]&1&&(l=f[25]<L(f[19]).length-1),l?n||(n=ae(),n.c(),n.m(i.parentNode,i)):n&&(n.d(1),n=null)},d(f){f&&(k(t),k(i)),a&&a.d(f),n&&n.d(f)}}}function fe(s){let e,t=C(L(s[19])),l=[];for(let i=0;i<t.length;i+=1)l[i]=oe($(s,t,i));return{c(){for(let i=0;i<l.length;i+=1)l[i].c();e=P()},m(i,a){for(let n=0;n<l.length;n+=1)l[n]&&l[n].m(i,a);b(i,e,a)},p(i,a){if(a[0]&95){t=C(L(i[19]));let n;for(n=0;n<t.length;n+=1){const f=$(i,t,n);l[n]?l[n].p(f,a):(l[n]=oe(f),l[n].c(),l[n].m(e.parentNode,e))}for(;n<l.length;n+=1)l[n].d(1);l.length=t.length}},d(i){i&&k(e),Y(l,i)}}}function Ve(s){let e;function t(a,n){return a[5]==="categories"?Ee:Se}let l=t(s),i=l(s);return{c(){e=p("div"),i.c(),u(e,"class","container svelte-taudaj")},m(a,n){b(a,e,n),i.m(e,null)},p(a,n){l===(l=t(a))&&i?i.p(a,n):(i.d(1),i=l(a),i&&(i.c(),i.m(e,null)))},i:z,o:z,d(a){a&&k(e),i.d()}}}function L(s){return s.split(` -`)}function Ie(s,e,t){const l=typeof document<"u";let{value:i=[]}=e,{show_legend:a=!1}=e,{color_map:n={}}=e,{selectable:f=!1}=e,o,d={},r="";function h(){for(const m in n){const w=n[m].trim();w in Q?t(3,d[m]=Q[w],d):t(3,d[m]={primary:l?v(n[m],1):n[m],secondary:l?v(n[m],.5):n[m]},d)}}function v(m,w){if(!o){var R=document.createElement("canvas");o=R.getContext("2d")}o.fillStyle=m,o.fillRect(0,0,1,1);const[he,ge,be]=o.getImageData(0,0,1,1).data;return o.clearRect(0,0,1,1),`rgba(${he}, ${ge}, ${be}, ${255/w})`}const N=ce();let c;function g(m){t(4,r=m)}function S(){t(4,r="")}const E=m=>g(m),_=m=>g(m),_e=()=>S(),me=()=>S(),de=(m,w,R)=>{N("select",{index:m,value:[w,R]})};return s.$$set=m=>{"value"in m&&t(0,i=m.value),"show_legend"in m&&t(1,a=m.show_legend),"color_map"in m&&t(9,n=m.color_map),"selectable"in m&&t(2,f=m.selectable)},s.$$.update=()=>{if(s.$$.dirty[0]&513){if(n||t(9,n={}),i.length>0){for(let[m,w]of i)if(w!==null)if(typeof w=="string"){if(t(5,c="categories"),!(w in n)){let R=He(Object.keys(n).length);t(9,n[w]=R,n)}}else t(5,c="scores")}h()}},[i,a,f,d,r,c,N,g,S,n,E,_,_e,me,de]}class Oe extends F{constructor(e){super(),G(this,e,Ie,Ve,K,{value:0,show_legend:1,color_map:9,selectable:2},null,[-1,-1])}}function re(s){let e,t;return e=new Be({props:{Icon:ue,label:s[6],float:!1,disable:s[7]===!1}}),{c(){V(e.$$.fragment)},m(l,i){I(e,l,i),t=!0},p(l,i){const a={};i&64&&(a.label=l[6]),i&128&&(a.disable=l[7]===!1),e.$set(a)},i(l){t||(j(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){O(e,l)}}}function Re(s){let e,t;return e=new Ce({props:{$$slots:{default:[ze]},$$scope:{ctx:s}}}),{c(){V(e.$$.fragment)},m(l,i){I(e,l,i),t=!0},p(l,i){const a={};i&32768&&(a.$$scope={dirty:i,ctx:l}),e.$set(a)},i(l){t||(j(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){O(e,l)}}}function Te(s){let e,t;return e=new Oe({props:{selectable:s[10],value:s[4],show_legend:s[5],color_map:s[0]}}),e.$on("select",s[13]),{c(){V(e.$$.fragment)},m(l,i){I(e,l,i),t=!0},p(l,i){const a={};i&1024&&(a.selectable=l[10]),i&16&&(a.value=l[4]),i&32&&(a.show_legend=l[5]),i&1&&(a.color_map=l[0]),e.$set(a)},i(l){t||(j(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){O(e,l)}}}function ze(s){let e,t;return e=new ue({}),{c(){V(e.$$.fragment)},m(l,i){I(e,l,i),t=!0},i(l){t||(j(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){O(e,l)}}}function Ze(s){let e,t,l,i,a,n,f;const o=[s[11]];let 
d={};for(let c=0;c<o.length;c+=1)d=ve(d,o[c]);e=new pe({props:d});let r=s[6]&&re(s);const h=[Te,Re],v=[];function N(c,g){return c[4]?0:1}return i=N(s),a=v[i]=h[i](s),{c(){V(e.$$.fragment),t=M(),r&&r.c(),l=M(),a.c(),n=P()},m(c,g){I(e,c,g),b(c,t,g),r&&r.m(c,g),b(c,l,g),v[i].m(c,g),b(c,n,g),f=!0},p(c,g){const S=g&2048?we(o,[je(c[11])]):{};e.$set(S),c[6]?r?(r.p(c,g),g&64&&j(r,1)):(r=re(c),r.c(),j(r,1),r.m(l.parentNode,l)):r&&(J(),H(r,1,1,()=>{r=null}),U());let E=i;i=N(c),i===E?v[i].p(c,g):(J(),H(v[E],1,1,()=>{v[E]=null}),U(),a=v[i],a?a.p(c,g):(a=v[i]=h[i](c),a.c()),j(a,1),a.m(n.parentNode,n))},i(c){f||(j(e.$$.fragment,c),j(r),j(a),f=!0)},o(c){H(e.$$.fragment,c),H(r),H(a),f=!1},d(c){c&&(k(t),k(l),k(n)),O(e,c),r&&r.d(c),v[i].d(c)}}}function De(s){let e,t;return e=new Ne({props:{test_id:"highlighted-text",visible:s[3],elem_id:s[1],elem_classes:s[2],padding:!1,container:s[7],scale:s[8],min_width:s[9],$$slots:{default:[Ze]},$$scope:{ctx:s}}}),{c(){V(e.$$.fragment)},m(l,i){I(e,l,i),t=!0},p(l,[i]){const a={};i&8&&(a.visible=l[3]),i&2&&(a.elem_id=l[1]),i&4&&(a.elem_classes=l[2]),i&128&&(a.container=l[7]),i&256&&(a.scale=l[8]),i&512&&(a.min_width=l[9]),i&36081&&(a.$$scope={dirty:i,ctx:l}),e.$set(a)},i(l){t||(j(e.$$.fragment,l),t=!0)},o(l){H(e.$$.fragment,l),t=!1},d(l){O(e,l)}}}function Le(s,e,t){let{elem_id:l=""}=e,{elem_classes:i=[]}=e,{visible:a=!0}=e,{value:n}=e,f,{show_legend:o}=e,{color_map:d={}}=e,{label:r="Highlighted Text"}=e,{container:h=!0}=e,{scale:v=null}=e,{min_width:N=void 0}=e,{selectable:c=!1}=e,{loading_status:g}=e;const S=ce();function E(_){ye.call(this,s,_)}return s.$$set=_=>{"elem_id"in _&&t(1,l=_.elem_id),"elem_classes"in _&&t(2,i=_.elem_classes),"visible"in _&&t(3,a=_.visible),"value"in _&&t(4,n=_.value),"show_legend"in _&&t(5,o=_.show_legend),"color_map"in _&&t(0,d=_.color_map),"label"in _&&t(6,r=_.label),"container"in _&&t(7,h=_.container),"scale"in _&&t(8,v=_.scale),"min_width"in _&&t(9,N=_.min_width),"selectable"in _&&t(10,c=_.selectable),"loading_status"in _&&t(11,g=_.loading_status)},s.$$.update=()=>{s.$$.dirty&1&&!d&&Object.keys(d).length&&t(0,d),s.$$.dirty&4112&&n!==f&&(t(12,f=n),S("change"))},[d,l,i,a,n,o,r,h,v,N,c,g,f,E]}class Ye extends F{constructor(e){super(),G(this,e,Le,De,K,{elem_id:1,elem_classes:2,visible:3,value:4,show_legend:5,color_map:0,label:6,container:7,scale:8,min_width:9,selectable:10,loading_status:11})}}const Pe=Ye,Qe=["static"];export{Pe as Component,Qe as modes}; -//# sourceMappingURL=index-2e429704.js.map diff --git a/spaces/dcq/freegpt-webui/run.py b/spaces/dcq/freegpt-webui/run.py deleted file mode 100644 index 1de4452d6118de6bdb58a591018440e829180390..0000000000000000000000000000000000000000 --- a/spaces/dcq/freegpt-webui/run.py +++ /dev/null @@ -1,34 +0,0 @@ -from server.app import app -from server.website import Website -from server.backend import Backend_Api -from json import load - - -if __name__ == '__main__': - - # Load configuration from config.json - config = load(open('config.json', 'r')) - site_config = config['site_config'] - - # Set up the website routes - site = Website(app) - for route in site.routes: - app.add_url_rule( - route, - view_func=site.routes[route]['function'], - methods=site.routes[route]['methods'], - ) - - # Set up the backend API routes - backend_api = Backend_Api(app, config) - for route in backend_api.routes: - app.add_url_rule( - route, - view_func=backend_api.routes[route]['function'], - methods=backend_api.routes[route]['methods'], - ) - - # Run the Flask server - print(f"Running on port {site_config['port']}") - 
app.run(**site_config) - print(f"Closing port {site_config['port']}") diff --git a/spaces/declare-lab/tango/diffusers/examples/dreambooth/README.md b/spaces/declare-lab/tango/diffusers/examples/dreambooth/README.md deleted file mode 100644 index d53f17114404be5c7790802b364d1a7bdb0cb99f..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/examples/dreambooth/README.md +++ /dev/null @@ -1,464 +0,0 @@ -# DreamBooth training example - -[DreamBooth](https://arxiv.org/abs/2208.12242) is a method to personalize text2image models like stable diffusion given just a few(3~5) images of a subject. -The `train_dreambooth.py` script shows how to implement the training procedure and adapt it for stable diffusion. - - -## Running locally with PyTorch - -### Installing the dependencies - -Before running the scripts, make sure to install the library's training dependencies: - -**Important** - -To make sure you can successfully run the latest versions of the example scripts, we highly recommend **installing from source** and keeping the install up to date as we update the example scripts frequently and install some example-specific requirements. To do this, execute the following steps in a new virtual environment: -```bash -git clone https://github.com/huggingface/diffusers -cd diffusers -pip install -e . -``` - -Then cd in the example folder and run -```bash -pip install -r requirements.txt -``` - -And initialize an [🤗Accelerate](https://github.com/huggingface/accelerate/) environment with: - -```bash -accelerate config -``` - -Or for a default accelerate configuration without answering questions about your environment - -```bash -accelerate config default -``` - -Or if your environment doesn't support an interactive shell e.g. a notebook - -```python -from accelerate.utils import write_basic_config -write_basic_config() -``` - -### Dog toy example - -Now let's get our dataset. Download images from [here](https://drive.google.com/drive/folders/1BO_dyz-p65qhBRRMRA4TbZ8qW4rB99JZ) and save them in a directory. This will be our training data. - -And launch the training using - -**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___** - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export INSTANCE_DIR="path-to-instance-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_dreambooth.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --output_dir=$OUTPUT_DIR \ - --instance_prompt="a photo of sks dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=1 \ - --learning_rate=5e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --max_train_steps=400 -``` - -### Training with prior-preservation loss - -Prior-preservation is used to avoid overfitting and language-drift. Refer to the paper to learn more about it. For prior-preservation we first generate images using the model with a class prompt and then use those during training along with our data. -According to the paper, it's recommended to generate `num_epochs * num_samples` images for prior-preservation. 200-300 works well for most cases. The `num_class_images` flag sets the number of images to generate with the class prompt. You can place existing images in `class_data_dir`, and the training script will generate any additional images so that `num_class_images` are present in `class_data_dir` during training time. 
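Conceptually, each training batch stacks the instance examples with the generated class examples, and the class ("prior") loss is added to the instance loss scaled by `--prior_loss_weight`. The helper below is an illustrative sketch of that combination, not the training script's exact code; the variable names and batch layout are assumptions.

```python
import torch
import torch.nn.functional as F


def dreambooth_loss(model_pred: torch.Tensor, target: torch.Tensor, prior_loss_weight: float) -> torch.Tensor:
    """Illustrative sketch: combine the instance loss with the prior-preservation loss.

    Assumes the first half of the batch holds instance examples and the
    second half holds the generated class ("prior") examples.
    """
    instance_pred, prior_pred = model_pred.chunk(2, dim=0)
    instance_target, prior_target = target.chunk(2, dim=0)

    instance_loss = F.mse_loss(instance_pred.float(), instance_target.float(), reduction="mean")
    prior_loss = F.mse_loss(prior_pred.float(), prior_target.float(), reduction="mean")

    # Prior preservation keeps the model close to its original behaviour on the class prompt.
    return instance_loss + prior_loss_weight * prior_loss
```

The command below enables this behaviour via the `--with_prior_preservation` and `--prior_loss_weight` flags: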
- -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_dreambooth.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=1 \ - --learning_rate=5e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --num_class_images=200 \ - --max_train_steps=800 -``` - - -### Training on a 16GB GPU: - -With the help of gradient checkpointing and the 8-bit optimizer from bitsandbytes it's possible to run train dreambooth on a 16GB GPU. - -To install `bitandbytes` please refer to this [readme](https://github.com/TimDettmers/bitsandbytes#requirements--installation). - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_dreambooth.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=2 --gradient_checkpointing \ - --use_8bit_adam \ - --learning_rate=5e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --num_class_images=200 \ - --max_train_steps=800 -``` - - -### Training on a 12GB GPU: - -It is possible to run dreambooth on a 12GB GPU by using the following optimizations: -- [gradient checkpointing and the 8-bit optimizer](#training-on-a-16gb-gpu) -- [xformers](#training-with-xformers) -- [setting grads to none](#set-grads-to-none) - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_dreambooth.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=1 --gradient_checkpointing \ - --use_8bit_adam \ - --enable_xformers_memory_efficient_attention \ - --set_grads_to_none \ - --learning_rate=2e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --num_class_images=200 \ - --max_train_steps=800 -``` - - -### Training on a 8 GB GPU: - -By using [DeepSpeed](https://www.deepspeed.ai/) it's possible to offload some -tensors from VRAM to either CPU or NVME allowing to train with less VRAM. - -DeepSpeed needs to be enabled with `accelerate config`. During configuration -answer yes to "Do you want to use DeepSpeed?". With DeepSpeed stage 2, fp16 -mixed precision and offloading both parameters and optimizer state to cpu it's -possible to train on under 8 GB VRAM with a drawback of requiring significantly -more RAM (about 25 GB). 
See [documentation](https://huggingface.co/docs/accelerate/usage_guides/deepspeed) for more DeepSpeed configuration options. - -Changing the default Adam optimizer to DeepSpeed's special version of Adam -`deepspeed.ops.adam.DeepSpeedCPUAdam` gives a substantial speedup but enabling -it requires CUDA toolchain with the same version as pytorch. 8-bit optimizer -does not seem to be compatible with DeepSpeed at the moment. - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch --mixed_precision="fp16" train_dreambooth.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --sample_batch_size=1 \ - --gradient_accumulation_steps=1 --gradient_checkpointing \ - --learning_rate=5e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --num_class_images=200 \ - --max_train_steps=800 -``` - -### Fine-tune text encoder with the UNet. - -The script also allows to fine-tune the `text_encoder` along with the `unet`. It's been observed experimentally that fine-tuning `text_encoder` gives much better results especially on faces. -Pass the `--train_text_encoder` argument to the script to enable training `text_encoder`. - -___Note: Training text encoder requires more memory, with this option the training won't fit on 16GB GPU. It needs at least 24GB VRAM.___ - -```bash -export MODEL_NAME="CompVis/stable-diffusion-v1-4" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -accelerate launch train_dreambooth.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --train_text_encoder \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --use_8bit_adam \ - --gradient_checkpointing \ - --learning_rate=2e-6 \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --num_class_images=200 \ - --max_train_steps=800 -``` - -### Using DreamBooth for pipelines other than Stable Diffusion - -The [AltDiffusion pipeline](https://huggingface.co/docs/diffusers/api/pipelines/alt_diffusion) also supports dreambooth fine-tuning. The process is the same as above, all you need to do is replace the `MODEL_NAME` like this: - -``` -export MODEL_NAME="CompVis/stable-diffusion-v1-4" --> export MODEL_NAME="BAAI/AltDiffusion-m9" -or -export MODEL_NAME="CompVis/stable-diffusion-v1-4" --> export MODEL_NAME="BAAI/AltDiffusion" -``` - -### Inference - -Once you have trained a model using the above command, you can run inference simply using the `StableDiffusionPipeline`. Make sure to include the `identifier` (e.g. sks in above example) in your prompt. 
- -```python -from diffusers import StableDiffusionPipeline -import torch - -model_id = "path-to-your-trained-model" -pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda") - -prompt = "A photo of sks dog in a bucket" -image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0] - -image.save("dog-bucket.png") -``` - -### Inference from a training checkpoint - -You can also perform inference from one of the checkpoints saved during the training process, if you used the `--checkpointing_steps` argument. Please, refer to [the documentation](https://huggingface.co/docs/diffusers/main/en/training/dreambooth#performing-inference-using-a-saved-checkpoint) to see how to do it. - -## Training with Low-Rank Adaptation of Large Language Models (LoRA) - -Low-Rank Adaption of Large Language Models was first introduced by Microsoft in [LoRA: Low-Rank Adaptation of Large Language Models](https://arxiv.org/abs/2106.09685) by *Edward J. Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, Weizhu Chen* - -In a nutshell, LoRA allows to adapt pretrained models by adding pairs of rank-decomposition matrices to existing weights and **only** training those newly added weights. This has a couple of advantages: -- Previous pretrained weights are kept frozen so that the model is not prone to [catastrophic forgetting](https://www.pnas.org/doi/10.1073/pnas.1611835114) -- Rank-decomposition matrices have significantly fewer parameters than the original model, which means that trained LoRA weights are easily portable. -- LoRA attention layers allow to control to which extent the model is adapted towards new training images via a `scale` parameter. - -[cloneofsimo](https://github.com/cloneofsimo) was the first to try out LoRA training for Stable Diffusion in -the popular [lora](https://github.com/cloneofsimo/lora) GitHub repository. - -### Training - -Let's get started with a simple example. We will re-use the dog example of the [previous section](#dog-toy-example). - -First, you need to set-up your dreambooth training example as is explained in the [installation section](#Installing-the-dependencies). -Next, let's download the dog dataset. Download images from [here](https://drive.google.com/drive/folders/1BO_dyz-p65qhBRRMRA4TbZ8qW4rB99JZ) and save them in a directory. Make sure to set `INSTANCE_DIR` to the name of your directory further below. This will be our training data. - -Now, you can launch the training. Here we will use [Stable Diffusion 1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5). - -**___Note: Change the `resolution` to 768 if you are using the [stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) 768x768 model.___** - -**___Note: It is quite useful to monitor the training progress by regularly generating sample images during training. [wandb](https://docs.wandb.ai/quickstart) is a nice solution to easily see generating images during training. All you need to do is to run `pip install wandb` before training and pass `--report_to="wandb"` to automatically log images.___** - - -```bash -export MODEL_NAME="runwayml/stable-diffusion-v1-5" -export INSTANCE_DIR="path-to-instance-images" -export OUTPUT_DIR="path-to-save-model" -``` - -For this example we want to directly store the trained LoRA embeddings on the Hub, so -we need to be logged in and add the `--push_to_hub` flag. - -```bash -huggingface-cli login -``` - -Now we can start training! 
- -```bash -accelerate launch train_dreambooth_lora.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --output_dir=$OUTPUT_DIR \ - --instance_prompt="a photo of sks dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --gradient_accumulation_steps=1 \ - --checkpointing_steps=100 \ - --learning_rate=1e-4 \ - --report_to="wandb" \ - --lr_scheduler="constant" \ - --lr_warmup_steps=0 \ - --max_train_steps=500 \ - --validation_prompt="A photo of sks dog in a bucket" \ - --validation_epochs=50 \ - --seed="0" \ - --push_to_hub -``` - -**___Note: When using LoRA we can use a much higher learning rate compared to vanilla dreambooth. Here we -use *1e-4* instead of the usual *2e-6*.___** - -The final LoRA embedding weights have been uploaded to [patrickvonplaten/lora_dreambooth_dog_example](https://huggingface.co/patrickvonplaten/lora_dreambooth_dog_example). **___Note: [The final weights](https://huggingface.co/patrickvonplaten/lora/blob/main/pytorch_attn_procs.bin) are only 3 MB in size which is orders of magnitudes smaller than the original model.** - -The training results are summarized [here](https://api.wandb.ai/report/patrickvonplaten/xm6cd5q5). -You can use the `Step` slider to see how the model learned the features of our subject while the model trained. - -### Inference - -After training, LoRA weights can be loaded very easily into the original pipeline. First, you need to -load the original pipeline: - -```python -from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler -import torch - -pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16) -pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config) -pipe.to("cuda") -``` - -Next, we can load the adapter layers into the UNet with the [`load_attn_procs` function](https://huggingface.co/docs/diffusers/api/loaders#diffusers.loaders.UNet2DConditionLoadersMixin.load_attn_procs). - -```python -pipe.unet.load_attn_procs("patrickvonplaten/lora_dreambooth_dog_example") -``` - -Finally, we can run the model in inference. - -```python -image = pipe("A picture of a sks dog in a bucket", num_inference_steps=25).images[0] -``` - -## Training with Flax/JAX - -For faster training on TPUs and GPUs you can leverage the flax training example. Follow the instructions above to get the model and dataset before running the script. 
- -____Note: The flax example don't yet support features like gradient checkpoint, gradient accumulation etc, so to use flax for faster training we will need >30GB cards.___ - - -Before running the scripts, make sure to install the library's training dependencies: - -```bash -pip install -U -r requirements_flax.txt -``` - - -### Training without prior preservation loss - -```bash -export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" -export INSTANCE_DIR="path-to-instance-images" -export OUTPUT_DIR="path-to-save-model" - -python train_dreambooth_flax.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --output_dir=$OUTPUT_DIR \ - --instance_prompt="a photo of sks dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --learning_rate=5e-6 \ - --max_train_steps=400 -``` - - -### Training with prior preservation loss - -```bash -export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -python train_dreambooth_flax.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --learning_rate=5e-6 \ - --num_class_images=200 \ - --max_train_steps=800 -``` - - -### Fine-tune text encoder with the UNet. - -```bash -export MODEL_NAME="duongna/stable-diffusion-v1-4-flax" -export INSTANCE_DIR="path-to-instance-images" -export CLASS_DIR="path-to-class-images" -export OUTPUT_DIR="path-to-save-model" - -python train_dreambooth_flax.py \ - --pretrained_model_name_or_path=$MODEL_NAME \ - --train_text_encoder \ - --instance_data_dir=$INSTANCE_DIR \ - --class_data_dir=$CLASS_DIR \ - --output_dir=$OUTPUT_DIR \ - --with_prior_preservation --prior_loss_weight=1.0 \ - --instance_prompt="a photo of sks dog" \ - --class_prompt="a photo of dog" \ - --resolution=512 \ - --train_batch_size=1 \ - --learning_rate=2e-6 \ - --num_class_images=200 \ - --max_train_steps=800 -``` - -### Training with xformers: -You can enable memory efficient attention by [installing xFormers](https://github.com/facebookresearch/xformers#installing-xformers) and padding the `--enable_xformers_memory_efficient_attention` argument to the script. This is not available with the Flax/JAX implementation. - -You can also use Dreambooth to train the specialized in-painting model. See [the script in the research folder for details](https://github.com/huggingface/diffusers/tree/main/examples/research_projects/dreambooth_inpaint). - -### Set grads to none - -To save even more memory, pass the `--set_grads_to_none` argument to the script. This will set grads to None instead of zero. However, be aware that it changes certain behaviors, so if you start experiencing any problems, remove this argument. - -More info: https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html - -### Experimental results -You can refer to [this blog post](https://huggingface.co/blog/dreambooth) that discusses some of DreamBooth experiments in detail. Specifically, it recommends a set of DreamBooth-specific tips and tricks that we have found to work well for a variety of subjects. 
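As a closing note on the `--set_grads_to_none` flag described above: in plain PyTorch it corresponds to clearing gradients with `set_to_none=True`, which releases the gradient tensors instead of filling them with zeros. The snippet below is a minimal, standalone sketch of that call, separate from the training script's own argument wiring.

```python
import torch

model = torch.nn.Linear(8, 8)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

loss = model(torch.randn(4, 8)).sum()
loss.backward()
optimizer.step()

# Equivalent of passing --set_grads_to_none: gradients are freed rather than zeroed,
# which saves memory but leaves p.grad as None until the next backward pass.
optimizer.zero_grad(set_to_none=True)
```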
diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/ddim/__init__.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/ddim/__init__.py deleted file mode 100644 index 85e8118e75e7e4352f8efb12552ba9fff4bf491c..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/ddim/__init__.py +++ /dev/null @@ -1 +0,0 @@ -from .pipeline_ddim import DDIMPipeline diff --git a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py b/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py deleted file mode 100644 index 822bd49ce31ca8d6bb53bc41b4f4fa6411e6b319..0000000000000000000000000000000000000000 --- a/spaces/declare-lab/tango/diffusers/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_latent_upscale.py +++ /dev/null @@ -1,518 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -from typing import Callable, List, Optional, Union - -import numpy as np -import PIL -import torch -import torch.nn.functional as F -from transformers import CLIPTextModel, CLIPTokenizer - -from ...models import AutoencoderKL, UNet2DConditionModel -from ...schedulers import EulerDiscreteScheduler -from ...utils import is_accelerate_available, logging, randn_tensor -from ..pipeline_utils import DiffusionPipeline, ImagePipelineOutput - - -logger = logging.get_logger(__name__) # pylint: disable=invalid-name - - -# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.preprocess -def preprocess(image): - if isinstance(image, torch.Tensor): - return image - elif isinstance(image, PIL.Image.Image): - image = [image] - - if isinstance(image[0], PIL.Image.Image): - w, h = image[0].size - w, h = (x - x % 64 for x in (w, h)) # resize to integer multiple of 64 - - image = [np.array(i.resize((w, h)))[None, :] for i in image] - image = np.concatenate(image, axis=0) - image = np.array(image).astype(np.float32) / 255.0 - image = image.transpose(0, 3, 1, 2) - image = 2.0 * image - 1.0 - image = torch.from_numpy(image) - elif isinstance(image[0], torch.Tensor): - image = torch.cat(image, dim=0) - return image - - -class StableDiffusionLatentUpscalePipeline(DiffusionPipeline): - r""" - Pipeline to upscale the resolution of Stable Diffusion output images by a factor of 2. - - This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the - library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) - - Args: - vae ([`AutoencoderKL`]): - Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations. - text_encoder ([`CLIPTextModel`]): - Frozen text-encoder. 
Stable Diffusion uses the text portion of - [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically - the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant. - tokenizer (`CLIPTokenizer`): - Tokenizer of class - [CLIPTokenizer](https://huggingface.co/docs/transformers/main/en/model_doc/clip#transformers.CLIPTokenizer). - unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents. - scheduler ([`SchedulerMixin`]): - A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of - [`EulerDiscreteScheduler`]. - """ - - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: EulerDiscreteScheduler, - ): - super().__init__() - - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - ) - - def enable_sequential_cpu_offload(self, gpu_id=0): - r""" - Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, unet, - text_encoder, vae and safety checker have their state dicts saved to CPU and then are moved to a - `torch.device('meta') and loaded to GPU only when their specific submodule has its `forward` method called. - """ - if is_accelerate_available(): - from accelerate import cpu_offload - else: - raise ImportError("Please install accelerate via `pip install accelerate`") - - device = torch.device(f"cuda:{gpu_id}") - - for cpu_offloaded_model in [self.unet, self.text_encoder, self.vae]: - if cpu_offloaded_model is not None: - cpu_offload(cpu_offloaded_model, device) - - @property - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline._execution_device - def _execution_device(self): - r""" - Returns the device on which the pipeline's models will be executed. After calling - `pipeline.enable_sequential_cpu_offload()` the execution device can only be inferred from Accelerate's module - hooks. - """ - if not hasattr(self.unet, "_hf_hook"): - return self.device - for module in self.unet.modules(): - if ( - hasattr(module, "_hf_hook") - and hasattr(module._hf_hook, "execution_device") - and module._hf_hook.execution_device is not None - ): - return torch.device(module._hf_hook.execution_device) - return self.device - - def _encode_prompt(self, prompt, device, do_classifier_free_guidance, negative_prompt): - r""" - Encodes the prompt into text encoder hidden states. - - Args: - prompt (`str` or `list(int)`): - prompt to be encoded - device: (`torch.device`): - torch device - do_classifier_free_guidance (`bool`): - whether to use classifier free guidance or not - negative_prompt (`str` or `List[str]`): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). 
- """ - batch_size = len(prompt) if isinstance(prompt, list) else 1 - - text_inputs = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_length=True, - return_tensors="pt", - ) - text_input_ids = text_inputs.input_ids - - untruncated_ids = self.tokenizer(prompt, padding="longest", return_tensors="pt").input_ids - - if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(text_input_ids, untruncated_ids): - removed_text = self.tokenizer.batch_decode(untruncated_ids[:, self.tokenizer.model_max_length - 1 : -1]) - logger.warning( - "The following part of your input was truncated because CLIP can only handle sequences up to" - f" {self.tokenizer.model_max_length} tokens: {removed_text}" - ) - - text_encoder_out = self.text_encoder( - text_input_ids.to(device), - output_hidden_states=True, - ) - text_embeddings = text_encoder_out.hidden_states[-1] - text_pooler_out = text_encoder_out.pooler_output - - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - uncond_tokens: List[str] - if negative_prompt is None: - uncond_tokens = [""] * batch_size - elif type(prompt) is not type(negative_prompt): - raise TypeError( - f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !=" - f" {type(prompt)}." - ) - elif isinstance(negative_prompt, str): - uncond_tokens = [negative_prompt] - elif batch_size != len(negative_prompt): - raise ValueError( - f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:" - f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches" - " the batch size of `prompt`." - ) - else: - uncond_tokens = negative_prompt - - max_length = text_input_ids.shape[-1] - uncond_input = self.tokenizer( - uncond_tokens, - padding="max_length", - max_length=max_length, - truncation=True, - return_length=True, - return_tensors="pt", - ) - - uncond_encoder_out = self.text_encoder( - uncond_input.input_ids.to(device), - output_hidden_states=True, - ) - - uncond_embeddings = uncond_encoder_out.hidden_states[-1] - uncond_pooler_out = uncond_encoder_out.pooler_output - - # For classifier free guidance, we need to do two forward passes. 
- # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - text_pooler_out = torch.cat([uncond_pooler_out, text_pooler_out]) - - return text_embeddings, text_pooler_out - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.decode_latents - def decode_latents(self, latents): - latents = 1 / self.vae.config.scaling_factor * latents - image = self.vae.decode(latents).sample - image = (image / 2 + 0.5).clamp(0, 1) - # we always cast to float32 as this does not cause significant overhead and is compatible with bfloat16 - image = image.cpu().permute(0, 2, 3, 1).float().numpy() - return image - - def check_inputs(self, prompt, image, callback_steps): - if not isinstance(prompt, str) and not isinstance(prompt, list): - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if ( - not isinstance(image, torch.Tensor) - and not isinstance(image, PIL.Image.Image) - and not isinstance(image, list) - ): - raise ValueError( - f"`image` has to be of type `torch.Tensor`, `PIL.Image.Image` or `list` but is {type(image)}" - ) - - # verify batch size of prompt and image are same if image is a list or tensor - if isinstance(image, list) or isinstance(image, torch.Tensor): - if isinstance(prompt, str): - batch_size = 1 - else: - batch_size = len(prompt) - if isinstance(image, list): - image_batch_size = len(image) - else: - image_batch_size = image.shape[0] if image.ndim == 4 else 1 - if batch_size != image_batch_size: - raise ValueError( - f"`prompt` has batch size {batch_size} and `image` has batch size {image_batch_size}." - " Please make sure that passed `prompt` matches the batch size of `image`." - ) - - if (callback_steps is None) or ( - callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0) - ): - raise ValueError( - f"`callback_steps` has to be a positive integer but is {callback_steps} of type" - f" {type(callback_steps)}." - ) - - # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.StableDiffusionUpscalePipeline.prepare_latents - def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None): - shape = (batch_size, num_channels_latents, height, width) - if latents is None: - latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype) - else: - if latents.shape != shape: - raise ValueError(f"Unexpected latents shape, got {latents.shape}, expected {shape}") - latents = latents.to(device) - - # scale the initial noise by the standard deviation required by the scheduler - latents = latents * self.scheduler.init_noise_sigma - return latents - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - image: Union[torch.FloatTensor, PIL.Image.Image, List[PIL.Image.Image]], - num_inference_steps: int = 75, - guidance_scale: float = 9.0, - negative_prompt: Optional[Union[str, List[str]]] = None, - generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None, - latents: Optional[torch.FloatTensor] = None, - output_type: Optional[str] = "pil", - return_dict: bool = True, - callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None, - callback_steps: int = 1, - ): - r""" - Function invoked when calling the pipeline for generation. 
- - Args: - prompt (`str` or `List[str]`): - The prompt or prompts to guide the image upscaling. - image (`PIL.Image.Image` or List[`PIL.Image.Image`] or `torch.FloatTensor`): - `Image`, or tensor representing an image batch which will be upscaled. If it's a tensor, it can be - either a latent output from a stable diffusion model, or an image tensor in the range `[-1, 1]`. It - will be considered a `latent` if `image.shape[1]` is `4`; otherwise, it will be considered to be an - image representation and encoded using this pipeline's `vae` encoder. - num_inference_steps (`int`, *optional*, defaults to 50): - The number of denoising steps. More denoising steps usually lead to a higher quality image at the - expense of slower inference. - guidance_scale (`float`, *optional*, defaults to 7.5): - Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598). - `guidance_scale` is defined as `w` of equation 2. of [Imagen - Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale > - 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`, - usually at the expense of lower image quality. - negative_prompt (`str` or `List[str]`, *optional*): - The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored - if `guidance_scale` is less than `1`). - eta (`float`, *optional*, defaults to 0.0): - Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to - [`schedulers.DDIMScheduler`], will be ignored for others. - generator (`torch.Generator`, *optional*): - One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html) - to make generation deterministic. - latents (`torch.FloatTensor`, *optional*): - Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image - generation. Can be used to tweak the same generation with different prompts. If not provided, a latents - tensor will ge generated by sampling using the supplied random `generator`. - output_type (`str`, *optional*, defaults to `"pil"`): - The output format of the generate image. Choose between - [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a - plain tuple. - callback (`Callable`, *optional*): - A function that will be called every `callback_steps` steps during inference. The function will be - called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`. - callback_steps (`int`, *optional*, defaults to 1): - The frequency at which the `callback` function will be called. If not specified, the callback will be - called at every step. - - Examples: - ```py - >>> from diffusers import StableDiffusionLatentUpscalePipeline, StableDiffusionPipeline - >>> import torch - - - >>> pipeline = StableDiffusionPipeline.from_pretrained( - ... "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16 - ... 
) - >>> pipeline.to("cuda") - - >>> model_id = "stabilityai/sd-x2-latent-upscaler" - >>> upscaler = StableDiffusionLatentUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16) - >>> upscaler.to("cuda") - - >>> prompt = "a photo of an astronaut high resolution, unreal engine, ultra realistic" - >>> generator = torch.manual_seed(33) - - >>> low_res_latents = pipeline(prompt, generator=generator, output_type="latent").images - - >>> with torch.no_grad(): - ... image = pipeline.decode_latents(low_res_latents) - >>> image = pipeline.numpy_to_pil(image)[0] - - >>> image.save("../images/a1.png") - - >>> upscaled_image = upscaler( - ... prompt=prompt, - ... image=low_res_latents, - ... num_inference_steps=20, - ... guidance_scale=0, - ... generator=generator, - ... ).images[0] - - >>> upscaled_image.save("../images/a2.png") - ``` - - Returns: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`: - [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] if `return_dict` is True, otherwise a `tuple. - When returning a tuple, the first element is a list with the generated images, and the second element is a - list of `bool`s denoting whether the corresponding generated image likely represents "not-safe-for-work" - (nsfw) content, according to the `safety_checker`. - """ - - # 1. Check inputs - self.check_inputs(prompt, image, callback_steps) - - # 2. Define call parameters - batch_size = 1 if isinstance(prompt, str) else len(prompt) - device = self._execution_device - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - - if guidance_scale == 0: - prompt = [""] * batch_size - - # 3. Encode input prompt - text_embeddings, text_pooler_out = self._encode_prompt( - prompt, device, do_classifier_free_guidance, negative_prompt - ) - - # 4. Preprocess image - image = preprocess(image) - image = image.to(dtype=text_embeddings.dtype, device=device) - if image.shape[1] == 3: - # encode image if not in latent-space yet - image = self.vae.encode(image).latent_dist.sample() * self.vae.config.scaling_factor - - # 5. set timesteps - self.scheduler.set_timesteps(num_inference_steps, device=device) - timesteps = self.scheduler.timesteps - - batch_multiplier = 2 if do_classifier_free_guidance else 1 - image = image[None, :] if image.ndim == 3 else image - image = torch.cat([image] * batch_multiplier) - - # 5. Add noise to image (set to be 0): - # (see below notes from the author): - # "the This step theoretically can make the model work better on out-of-distribution inputs, but mostly just seems to make it match the input less, so it's turned off by default." - noise_level = torch.tensor([0.0], dtype=torch.float32, device=device) - noise_level = torch.cat([noise_level] * image.shape[0]) - inv_noise_level = (noise_level**2 + 1) ** (-0.5) - - image_cond = F.interpolate(image, scale_factor=2, mode="nearest") * inv_noise_level[:, None, None, None] - image_cond = image_cond.to(text_embeddings.dtype) - - noise_level_embed = torch.cat( - [ - torch.ones(text_pooler_out.shape[0], 64, dtype=text_pooler_out.dtype, device=device), - torch.zeros(text_pooler_out.shape[0], 64, dtype=text_pooler_out.dtype, device=device), - ], - dim=1, - ) - - timestep_condition = torch.cat([noise_level_embed, text_pooler_out], dim=1) - - # 6. 
Prepare latent variables - height, width = image.shape[2:] - num_channels_latents = self.vae.config.latent_channels - latents = self.prepare_latents( - batch_size, - num_channels_latents, - height * 2, # 2x upscale - width * 2, - text_embeddings.dtype, - device, - generator, - latents, - ) - - # 7. Check that sizes of image and latents match - num_channels_image = image.shape[1] - if num_channels_latents + num_channels_image != self.unet.config.in_channels: - raise ValueError( - f"Incorrect configuration settings! The config of `pipeline.unet`: {self.unet.config} expects" - f" {self.unet.config.in_channels} but received `num_channels_latents`: {num_channels_latents} +" - f" `num_channels_image`: {num_channels_image} " - f" = {num_channels_latents+num_channels_image}. Please verify the config of" - " `pipeline.unet` or your `image` input." - ) - - # 9. Denoising loop - num_warmup_steps = 0 - - with self.progress_bar(total=num_inference_steps) as progress_bar: - for i, t in enumerate(timesteps): - sigma = self.scheduler.sigmas[i] - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - scaled_model_input = self.scheduler.scale_model_input(latent_model_input, t) - - scaled_model_input = torch.cat([scaled_model_input, image_cond], dim=1) - # preconditioning parameter based on Karras et al. (2022) (table 1) - timestep = torch.log(sigma) * 0.25 - - noise_pred = self.unet( - scaled_model_input, - timestep, - encoder_hidden_states=text_embeddings, - timestep_cond=timestep_condition, - ).sample - - # in original repo, the output contains a variance channel that's not used - noise_pred = noise_pred[:, :-1] - - # apply preconditioning, based on table 1 in Karras et al. (2022) - inv_sigma = 1 / (sigma**2 + 1) - noise_pred = inv_sigma * latent_model_input + self.scheduler.scale_model_input(sigma, t) * noise_pred - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents).prev_sample - - # call the callback, if provided - if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0): - progress_bar.update() - if callback is not None and i % callback_steps == 0: - callback(i, t, latents) - - # 10. Post-processing - image = self.decode_latents(latents) - - # 11. 
Convert to PIL - if output_type == "pil": - image = self.numpy_to_pil(image) - - if not return_dict: - return (image,) - - return ImagePipelineOutput(images=image) diff --git a/spaces/deepseek-ai/deepseek-coder-33b-instruct/style.css b/spaces/deepseek-ai/deepseek-coder-33b-instruct/style.css deleted file mode 100644 index 60878febc13db001635a52688abfe34d95e6c309..0000000000000000000000000000000000000000 --- a/spaces/deepseek-ai/deepseek-coder-33b-instruct/style.css +++ /dev/null @@ -1,16 +0,0 @@ -h1 { - text-align: center; -} - -#duplicate-button { - margin: auto; - color: white; - background: #1565c0; - border-radius: 100vh; -} - -.contain { - max-width: 900px; - margin: auto; - padding-top: 1.5rem; -} diff --git a/spaces/deepwisdom/MetaGPT/metagpt/tools/metagpt_oas3_api_svc.py b/spaces/deepwisdom/MetaGPT/metagpt/tools/metagpt_oas3_api_svc.py deleted file mode 100644 index 5c23f6566cce23a42f1b7c9ef02c4720dd7b1a4d..0000000000000000000000000000000000000000 --- a/spaces/deepwisdom/MetaGPT/metagpt/tools/metagpt_oas3_api_svc.py +++ /dev/null @@ -1,43 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@Time : 2023/8/17 -@Author : mashenquan -@File : metagpt_oas3_api_svc.py -@Desc : MetaGPT OpenAPI Specification 3.0 REST API service -""" -import asyncio -from pathlib import Path -import sys - -import connexion - -sys.path.append(str(Path(__file__).resolve().parent.parent.parent)) # fix-bug: No module named 'metagpt' - - -def oas_http_svc(): - """Start the OAS 3.0 OpenAPI HTTP service""" - app = connexion.AioHttpApp(__name__, specification_dir='../../.well-known/') - app.add_api("metagpt_oas3_api.yaml") - app.add_api("openapi.yaml") - app.run(port=8080) - - -async def async_main(): - """Start the OAS 3.0 OpenAPI HTTP service in the background.""" - loop = asyncio.get_event_loop() - loop.run_in_executor(None, oas_http_svc) - - # TODO: replace following codes: - while True: - await asyncio.sleep(1) - print("sleep") - - -def main(): - oas_http_svc() - - -if __name__ == "__main__": - # asyncio.run(async_main()) - main() diff --git a/spaces/dfurman/chat-all-in/cached-all-mpnet-base-v2/train_script.py b/spaces/dfurman/chat-all-in/cached-all-mpnet-base-v2/train_script.py deleted file mode 100644 index 5452ae48154ce9166d43f5a70a6e410add5ac1c3..0000000000000000000000000000000000000000 --- a/spaces/dfurman/chat-all-in/cached-all-mpnet-base-v2/train_script.py +++ /dev/null @@ -1,397 +0,0 @@ -""" -Train script for a single file - -Need to set the TPU address first: -export XRT_TPU_CONFIG="localservice;0;localhost:51011" -""" - -import torch.multiprocessing as mp -import threading -import time -import random -import sys -import argparse -import gzip -import json -import logging -import tqdm -import torch -from torch import nn -from torch.utils.data import DataLoader -import torch -import torch_xla -import torch_xla.core -import torch_xla.core.functions -import torch_xla.core.xla_model as xm -import torch_xla.distributed.xla_multiprocessing as xmp -import torch_xla.distributed.parallel_loader as pl -import os -from shutil import copyfile - - -from transformers import ( - AdamW, - AutoModel, - AutoTokenizer, - get_linear_schedule_with_warmup, - set_seed, -) - - -class AutoModelForSentenceEmbedding(nn.Module): - def __init__(self, model_name, tokenizer, normalize=True): - super(AutoModelForSentenceEmbedding, self).__init__() - - self.model = AutoModel.from_pretrained(model_name) - self.normalize = normalize - self.tokenizer = tokenizer - - def forward(self, **kwargs): - model_output = 
self.model(**kwargs) - embeddings = self.mean_pooling(model_output, kwargs["attention_mask"]) - if self.normalize: - embeddings = torch.nn.functional.normalize(embeddings, p=2, dim=1) - - return embeddings - - def mean_pooling(self, model_output, attention_mask): - token_embeddings = model_output[ - 0 - ] # First element of model_output contains all token embeddings - input_mask_expanded = ( - attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() - ) - return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp( - input_mask_expanded.sum(1), min=1e-9 - ) - - def save_pretrained(self, output_path): - if xm.is_master_ordinal(): - self.tokenizer.save_pretrained(output_path) - self.model.config.save_pretrained(output_path) - - xm.save(self.model.state_dict(), os.path.join(output_path, "pytorch_model.bin")) - - -def train_function(index, args, queue): - tokenizer = AutoTokenizer.from_pretrained(args.model) - model = AutoModelForSentenceEmbedding(args.model, tokenizer) - - ### Train Loop - device = xm.xla_device() - model = model.to(device) - - # Instantiate optimizer - optimizer = AdamW(params=model.parameters(), lr=2e-5, correct_bias=True) - - lr_scheduler = get_linear_schedule_with_warmup( - optimizer=optimizer, - num_warmup_steps=500, - num_training_steps=args.steps, - ) - - # Now we train the model - cross_entropy_loss = nn.CrossEntropyLoss() - max_grad_norm = 1 - - model.train() - - for global_step in tqdm.trange(args.steps, disable=not xm.is_master_ordinal()): - #### Get the batch data - batch = queue.get() - # print(index, "batch {}x{}".format(len(batch), ",".join([str(len(b)) for b in batch]))) - - if len(batch[0]) == 2: # (anchor, positive) - text1 = tokenizer( - [b[0] for b in batch], - return_tensors="pt", - max_length=args.max_length, - truncation=True, - padding="max_length", - ) - text2 = tokenizer( - [b[1] for b in batch], - return_tensors="pt", - max_length=args.max_length, - truncation=True, - padding="max_length", - ) - - ### Compute embeddings - embeddings_a = model(**text1.to(device)) - embeddings_b = model(**text2.to(device)) - - ### Gather all embedings - embeddings_a = torch_xla.core.functions.all_gather(embeddings_a) - embeddings_b = torch_xla.core.functions.all_gather(embeddings_b) - - ### Compute similarity scores 512 x 512 - scores = torch.mm(embeddings_a, embeddings_b.transpose(0, 1)) * args.scale - - ### Compute cross-entropy loss - labels = torch.tensor( - range(len(scores)), dtype=torch.long, device=embeddings_a.device - ) # Example a[i] should match with b[i] - - ## Symmetric loss as in CLIP - loss = ( - cross_entropy_loss(scores, labels) - + cross_entropy_loss(scores.transpose(0, 1), labels) - ) / 2 - - else: # (anchor, positive, negative) - text1 = tokenizer( - [b[0] for b in batch], - return_tensors="pt", - max_length=args.max_length, - truncation=True, - padding="max_length", - ) - text2 = tokenizer( - [b[1] for b in batch], - return_tensors="pt", - max_length=args.max_length, - truncation=True, - padding="max_length", - ) - text3 = tokenizer( - [b[2] for b in batch], - return_tensors="pt", - max_length=args.max_length, - truncation=True, - padding="max_length", - ) - - embeddings_a = model(**text1.to(device)) - embeddings_b1 = model(**text2.to(device)) - embeddings_b2 = model(**text3.to(device)) - - embeddings_a = torch_xla.core.functions.all_gather(embeddings_a) - embeddings_b1 = torch_xla.core.functions.all_gather(embeddings_b1) - embeddings_b2 = torch_xla.core.functions.all_gather(embeddings_b2) - - embeddings_b = 
torch.cat([embeddings_b1, embeddings_b2]) - - ### Compute similarity scores 512 x 1024 - scores = torch.mm(embeddings_a, embeddings_b.transpose(0, 1)) * args.scale - - ### Compute cross-entropy loss - labels = torch.tensor( - range(len(scores)), dtype=torch.long, device=embeddings_a.device - ) # Example a[i] should match with b[i] - - ## One-way loss - loss = cross_entropy_loss(scores, labels) - - # Backward pass - optimizer.zero_grad() - loss.backward() - torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm) - - xm.optimizer_step(optimizer, barrier=True) - lr_scheduler.step() - - # Save model - if (global_step + 1) % args.save_steps == 0: - output_path = os.path.join(args.output, str(global_step + 1)) - xm.master_print("save model: " + output_path) - model.save_pretrained(output_path) - - output_path = os.path.join(args.output, "final") - xm.master_print("save model final: " + output_path) - model.save_pretrained(output_path) - - -def produce_data(args, queue, filepaths, dataset_indices): - global_batch_size = args.batch_size * args.nprocs # Global batch size - size_per_dataset = int( - global_batch_size / args.datasets_per_batch - ) # How many datasets per batch - num_same_dataset = int(size_per_dataset / args.batch_size) - print("producer", "global_batch_size", global_batch_size) - print("producer", "size_per_dataset", size_per_dataset) - print("producer", "num_same_dataset", num_same_dataset) - - datasets = [] - for filepath in filepaths: - if "reddit_" in filepath: # Special dataset class for Reddit files - data_obj = RedditDataset(filepath) - else: - data_obj = Dataset(filepath) - datasets.append(iter(data_obj)) - - # Store if dataset is in a 2 col or 3 col format - num_cols = {idx: len(next(dataset)) for idx, dataset in enumerate(datasets)} - - while True: - texts_in_batch = set() - batch_format = None # 2 vs 3 col format for this batch - - # Add data from several sub datasets - for _ in range(args.datasets_per_batch): - valid_dataset = False # Check that datasets have the same 2/3 col format - while not valid_dataset: - data_idx = random.choice(dataset_indices) - if batch_format is None: - batch_format = num_cols[data_idx] - valid_dataset = True - else: # Check that this dataset has the same format - valid_dataset = batch_format == num_cols[data_idx] - - # Get data from this dataset - dataset = datasets[data_idx] - for _ in range(num_same_dataset): - for _ in range(args.nprocs): - batch_device = [] # A batch for one device - while len(batch_device) < args.batch_size: - sample = next(dataset) - in_batch = False - for text in sample: - if text in texts_in_batch: - in_batch = True - break - - if not in_batch: - for text in sample: - texts_in_batch.add(text) - batch_device.append(sample) - - queue.put(batch_device) - - -class RedditDataset: - """ - A class that handles the reddit data files - """ - - def __init__(self, filepath): - self.filepath = filepath - - def __iter__(self): - while True: - with gzip.open(self.filepath, "rt") as fIn: - for line in fIn: - data = json.loads(line) - - if "response" in data and "context" in data: - yield [data["response"], data["context"]] - - -class Dataset: - """ - A class that handles one dataset - """ - - def __init__(self, filepath): - self.filepath = filepath - - def __iter__(self): - max_dataset_size = 10 * 1000 * 1000 # Cache small datasets in memory - dataset = [] - data_format = None - - while dataset is None or len(dataset) == 0: - with gzip.open(self.filepath, "rt") as fIn: - for line in fIn: - data = json.loads(line) - if 
isinstance(data, dict): - data = data["texts"] - - if data_format is None: - data_format = len(data) - - # Ensure that all entries are of the same 2/3 col format - assert len(data) == data_format - - if dataset is not None: - dataset.append(data) - if len(dataset) >= max_dataset_size: - dataset = None - - yield data - - # Data loaded. Now stream to the queue - # Shuffle for each epoch - while True: - random.shuffle(dataset) - for data in dataset: - yield data - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--model", default="nreimers/MiniLM-L6-H384-uncased") - parser.add_argument("--steps", type=int, default=2000) - parser.add_argument("--save_steps", type=int, default=10000) - parser.add_argument("--batch_size", type=int, default=64) - parser.add_argument("--max_length", type=int, default=128) - parser.add_argument("--nprocs", type=int, default=8) - parser.add_argument( - "--datasets_per_batch", type=int, default=2, help="Number of datasets per batch" - ) - parser.add_argument( - "--scale", - type=float, - default=20, - help="Use 20 for cossim, and 1 when you work with unnormalized embeddings with dot product", - ) - parser.add_argument( - "--data_folder", default="/data", help="Folder with your dataset files" - ) - parser.add_argument("data_config", help="A data_config.json file") - parser.add_argument("output") - args = parser.parse_args() - - # Ensure global batch size is divisble by data_sample_size - assert (args.batch_size * args.nprocs) % args.datasets_per_batch == 0 - - logging.info("Output: " + args.output) - if os.path.exists(args.output): - print("Output folder already exists.") - input("Continue?") - - # Write train script to output path - os.makedirs(args.output, exist_ok=True) - - data_config_path = os.path.join(args.output, "data_config.json") - copyfile(args.data_config, data_config_path) - - train_script_path = os.path.join(args.output, "train_script.py") - copyfile(__file__, train_script_path) - with open(train_script_path, "a") as fOut: - fOut.write("\n\n# Script was called via:\n#python " + " ".join(sys.argv)) - - # Load data config - with open(args.data_config) as fIn: - data_config = json.load(fIn) - - queue = mp.Queue(maxsize=100 * args.nprocs) - - filepaths = [] - dataset_indices = [] - for idx, data in enumerate(data_config): - filepaths.append( - os.path.join(os.path.expanduser(args.data_folder), data["name"]) - ) - dataset_indices.extend([idx] * data["weight"]) - - # Start producer - p = mp.Process(target=produce_data, args=(args, queue, filepaths, dataset_indices)) - p.start() - - # Run training - print("Start processes:", args.nprocs) - xmp.spawn( - train_function, args=(args, queue), nprocs=args.nprocs, start_method="fork" - ) - print("Training done") - print( - "It might be that not all processes exit automatically. In that case you must manually kill this process." 
- ) - print("With 'pkill python' you can kill all remaining python processes") - p.kill() - exit() - - -# Script was called via: -# python train_many_data_files_v2.py --steps 1000000 --batch_size 64 --model microsoft/mpnet-base train_data_configs/all_datasets_v4.json output/all_datasets_v4_mpnet-base diff --git a/spaces/diacanFperku/AutoGPT/Chuppa Rustam 3 Full Movie Hd 1080p In Hindi.md b/spaces/diacanFperku/AutoGPT/Chuppa Rustam 3 Full Movie Hd 1080p In Hindi.md deleted file mode 100644 index 84eb1d860712783b7f9c9b57b875685fb5353c0f..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Chuppa Rustam 3 Full Movie Hd 1080p In Hindi.md +++ /dev/null @@ -1,11 +0,0 @@ -<h2>Chuppa Rustam 3 Full Movie Hd 1080p In Hindi</h2><br /><p><b><b>DOWNLOAD</b> ✓ <a href="https://gohhs.com/2uFUkJ">https://gohhs.com/2uFUkJ</a></b></p><br /><br /> - -Chhupa Rustam (2001_Film) | Full HD movie | Sanjay Kapoor | Manisha Koirala | Mamta Kulkarni |. (2:7:59 min). Chhupa Rustam (छुपा रुस्तम) Super hit. Chhupa Rustam - watch online for free. -Chhupa Rustam - watch online for free. -Watch movies online for free in good quality. -Chhupa Rustam - watch online. -Chhupa Rustam -. -Chhupa Rustam - watch online for free. 8a78ff9644<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/diacanFperku/AutoGPT/Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator.md b/spaces/diacanFperku/AutoGPT/Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator.md deleted file mode 100644 index a16e44e7c1d9515169e331136438fad11d30113b..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator.md +++ /dev/null @@ -1,134 +0,0 @@ - -<h1>Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator: A Powerful Tool for Auto Repair Professionals</h1> -<p>If you are looking for a reliable and comprehensive source of repair information for cars and trucks, you should consider Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator. 
This software is designed to help you diagnose, repair and maintain vehicles with ease and accuracy.</p> -<p>In this article, we will review the features, benefits and installation of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator and show you how it can improve your auto service business.</p> -<h2>Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator</h2><br /><p><b><b>DOWNLOAD</b> 🔗 <a href="https://gohhs.com/2uFTaj">https://gohhs.com/2uFTaj</a></b></p><br /><br /> -<h2>What is Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</h2> -<p>Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is a software program that provides you with access to a vast database of repair information for vehicles from the American and imported markets.</p> -<p>The software contains detailed descriptions of the technology, service and maintenance procedures, diagnostic codes and troubleshooting tips, wiring diagrams, parts and labor estimates, and more.</p> -<p>Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is one of the best programs for auto repair and is an indispensable tool for auto service professionals.</p> -<h2>What are the features of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</h2> -<p>Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator has many features that make it a powerful and user-friendly tool for auto repair professionals.</p> -<p>Some of the features include:</p> -<p></p> -<ul> -<li>A simple and intuitive interface that allows you to easily navigate through the database and find the information you need.</li> -<li>A comprehensive coverage of vehicles from various manufacturers, models and years, including cars, light trucks, vans and SUVs.</li> -<li>A diagnosis and repair section that provides you with step-by-step instructions, illustrations, specifications and tips for fixing any problem with your vehicle.</li> -<li>A parts and labor section that gives you original part numbers, illustrations, prices and labor times for any repair job.</li> -<li>A wiring diagram section that shows you the electrical components and circuits of your vehicle in color-coded diagrams.</li> -<li>An estimator section that helps you calculate the cost of any repair job based on your location, labor rates and parts prices.</li> -</ul> -<h2>What are the benefits of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</h2> -<p>Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator can help you improve your auto service business in many ways.</p> -<p>Some of the benefits include:</p> -<ul> -<li>Increasing your productivity and efficiency by providing you with accurate and up-to-date information on any vehicle.</li> -<li>Reducing your costs and errors by giving you precise parts and labor estimates for any repair job.</li> -<li>Enhancing your customer satisfaction and loyalty by delivering high-quality service and repairs in a timely manner.</li> -<li>Gaining a competitive edge over other auto service providers by using a trusted and reputable source of repair information.</li> -</ul> -<h2>How to install Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</h2> -<p>To install Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator on your computer, you will need to follow these steps:</p> -<ol> -<li>Download the software from a reliable source or use a DVD disc.</li> -<li>Extract the files from the compressed folder or insert the DVD disc into your drive.</li> -<li>Run the setup.exe file and follow the instructions on the screen.</li> -<li>Enter the activation code when prompted.</li> -<li>Enjoy using Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator on your computer.</li> 
-</ol> -<h2>Conclusion</h2> -<p>Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is a software program that provides you with a comprehensive source of repair information for vehicles from the American and imported markets.</p> -<p>The software has many features and benefits that can help you improve your auto service business by increasing your productivity, efficiency, accuracy, quality and customer satisfaction.</p> -<p>If you are looking for a reliable and comprehensive source of repair information for cars and trucks, you should consider Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator as your choice.</p> -<h2>How to use Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</h2> -<p>Using Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is easy and convenient. You can access the software from your computer or from a mobile device via the internet.</p> -<p>To use the software, you need to follow these steps:</p> -<ol> -<li>Select the vehicle make, model and year from the drop-down menus or enter the VIN number.</li> -<li>Choose the system or component you want to work on from the menu or use the search function.</li> -<li>View the information you need from the diagnosis and repair, parts and labor, wiring diagram or estimator sections.</li> -<li>Print or save the information as needed.</li> -</ol> -<p>You can also customize the software settings to suit your preferences and needs, such as language, units, currency, labor rates and more.</p> -<h2>What are the system requirements for Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</h2> -<p>To run Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator on your computer, you need to have the following minimum system requirements:</p> -<ul> -<li>Microsoft® Windows XP, Windows 2000*, Windows NT 4.0*, Windows ME, Windows 98*</li> -<li>233 MHz Intel® Business class Computer</li> -<li>64 Megabytes (MB) Random Access Memory (RAM)</li> -<li>15" Super VGA color monitor 800x600 resolution</li> -</ul> -<p>You also need to have an internet connection to access the online version of the software or to update your offline version.</p> -<h2>Where can I get Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</h2> -<p>You can get Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator from various sources online or offline.</p> -<p>Some of the sources include:</p> -<ul> -<li>The official website of Mitchell 1, where you can purchase a subscription or a DVD disc of the software.</li> -<li>The online platforms of RuTracker.org or MHH AUTO, where you can download the software for free or for a fee.</li> -<li>The audio platforms of SoundCloud, where you can listen to excerpts of the software or purchase a full version.</li> -</ul> -<p>However, you should be careful when choosing a source for Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator, as some sources may not be reliable, secure or legal.</p> -<p>You should always check the reputation, reviews and ratings of the source before downloading or purchasing the software.</p> -<h2>What are the alternatives to Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</h2> -<p>Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is not the only repair software available on the market, but it has some alternatives that you may want to consider.</p> -<p>Some of the alternatives include:</p> -<ul> -<li>Alldata Repair - a repair software that provides OEM information for vehicles from 1982 to present.</li> -<li>Autodata - a repair software that provides technical information for vehicles from Europe, Asia and the US.</li> -<li>Haynes Pro - a repair software that provides 
workshop manuals, wiring diagrams and technical data for vehicles from various manufacturers.</li> -</ul> -<p>Each of these alternatives has its own advantages and disadvantages, and you should compare them with Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator before making a decision.</p> -<h2>How to update Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</h2> -<p>To keep your Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator up to date with the latest information and data, you need to update it regularly.</p> -<p>To update your Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator, you need to follow these steps:</p> -<ol> -<li>Connect your computer or mobile device to the internet.</li> -<li>Launch your Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator software.</li> -<li>Click on the update button or menu option.</li> -<li>Follow the instructions on the screen to download and install the latest updates.</li> -<li>Restart your Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator software.</li> -</ol> -<p>You can also check the official website of Mitchell 1 for any news or announcements about new updates or versions of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator.</p> -<h2>How to uninstall Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</h2> -<p>If you want to uninstall Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator from your computer or mobile device, you need to follow these steps:</p> -<ol> -<li>Close your Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator software.</li> -<li>Go to your control panel or settings menu.</li> -<li>Select the add or remove programs or applications option.</li> -<li>Find and select Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator from the list of programs or applications.</li> -<li>Click on the uninstall or remove button.</li> -<li>Follow the instructions on the screen to complete the uninstallation process.</li> -</ol> -<p>You can also delete any files or folders related to Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator from your computer or mobile device.</p> -<h2>What are the FAQs about Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</h2> -<p>Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is a software program that provides you with a comprehensive source of repair information for vehicles from the American and imported markets.</p> -<p>Here are some of the frequently asked questions (FAQs) about Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator:</p> -<ul> -<li>Q: How much does Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator cost?</li> -<li>A: The cost of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator depends on the type and duration of your subscription or purchase. 
You can check the official website of Mitchell 1 for the latest pricing and offers.</li> -<li>Q: How can I get a free trial of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</li> -<li>A: You can get a free trial of Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator by registering on the official website of Mitchell 1 and requesting a demo.</li> -<li>Q: How can I get technical support for Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</li> -<li>A: You can get technical support for Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator by contacting the customer service team of Mitchell 1 via phone, email or chat.</li> -<li>Q: How can I get feedback or suggestions for Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</li> -<li>A: You can get feedback or suggestions for Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator by joining the online community of Mitchell 1 and sharing your opinions and experiences with other users.</li> -</ul> -<h2>What are the tips and tricks for using Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator?</h2> -<p>Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is a software program that provides you with a comprehensive source of repair information for vehicles from the American and imported markets.</p> -<p>Here are some of the tips and tricks for using Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator:</p> -<ul> -<li>TIP: You can use the search function to find the information you need quickly and easily.</li> -<li>TIP: You can use the bookmarks and favorites functions to save and access the information you use frequently.</li> -<li>TIP: You can use the print and save functions to create and store copies of the information you need.</li> -<li>TIP: You can use the zoom and pan functions to adjust and view the graphics and illustrations better.</li> -<li>TIP: You can use the help function to access the user guide and tutorials for using Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator.</li> -</ul> -<h2>Conclusion</h2> -<p>Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator is a software program that provides you with a comprehensive source of repair information for vehicles from the American and imported markets.</p> -<p>The software has many features and benefits that can help you improve your auto service business by increasing your productivity, efficiency, accuracy, quality and customer satisfaction.</p> -<p>The software is easy to install, use and update, and has various sources of support, feedback and suggestions.</p> -<p>The software also has some alternatives and competitors that you may want to compare and evaluate before making a decision.</p> -<p>If you are looking for a reliable and comprehensive source of repair information for cars and trucks, you should consider Mitchell OnDemand 5.8.1.9 (1q 2011) Estimator as your choice.</p> 3cee63e6c2<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/PATCHED Adobe Premiere Pro CC 2018 V13.1.1.15 Patch.md b/spaces/diacanFperku/AutoGPT/PATCHED Adobe Premiere Pro CC 2018 V13.1.1.15 Patch.md deleted file mode 100644 index 8afd2e220b9203aadad4d11952f2a08a598f69e0..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/PATCHED Adobe Premiere Pro CC 2018 V13.1.1.15 Patch.md +++ /dev/null @@ -1,5 +0,0 @@ -<br /> -<p>walpZoffoopyiptyday [url=[[[[ ReFWocheNuththegodat [url=<br /> walpZoffoopyiptyday [url= My Romance] [url= Adobe Premiere Pro CC 2018 v13.1.1.15 Patch [url=»] [url= www.Gaebywew.org/puk-ladhi-harit-599.html»] [url= semarang ultima ultras insanse 2012 320p [url= ReFWocheNuththegodat [url= 
[url= Taiseertaids [url= melsAtterve [url= Jaipur Auto Expo 2017 [url=<br /> walpZoffoopyiptyday [url= Losing My Religion (wame song) Wame [url= Tesulu Spotsong [url=<br /> walpZoffoopyiptyday [url= wenga kanna kokoro [url= 2nd film Watch Super heroes torrent [url= melsAtterve [url=<br /> walpZoffoopyiptyday [url= pdffree 1.jpg [url= briletypeAbumunult [url= ReFWocheNuththegodat [url= [url= adobe-reader for blackberry android [url= briletypeAbumunlt [url= harris pc 29 mp3 2013 godat[/url] briletypeAbumunult [url= briletypeAbumunult [url= thatmatrixmat finder site dluminant godathe [url= ad, improved_photo (17) iMGSRC.RU [url= Themes Taiseertaids [url= Taiseertaids <br /> PACKED Adobe Premiere Pro CC 2018 v13.1.1.15 Patch <p>wallpZoffoopyi. [url= lagu Rangda, Angka, Link 2. [url= coc 180 (Torrent) TPX 02 [url= V10HD ESPAOL, Link 2 [url= sesspaphpag [url= Taiseertaids [url= tomas milian squadra antimafia dvx torrent ita [url] ita[/url]<br /> [url= AttachmentsTakeLonger-to-arrive [url= 2 Fresh Crack With Torrent Free Download for [Win [url= Taiseertaids [url= En vivoDallas Mavericks vs Chicago Bulls Dallas Mavericks vs Chicago Bulls en lnea Link 2 [url= sesspaphpag [url= en lnea[/url]Description Of Nature 6th Ed.pdf[/url]EquantyroarkPata [url= money[/url] If YouForwardedMailTakeLongerToArrive [url= Taiseertaids [url= Flissinneple [url=<br /> does-forwarded-mail-take-longer-to-arrive [url= 2 Fresh Crack With Torrent Free Download for [Win [url= Taiseertaids [url= Freedom[/url]PPC[/url]2 Fresh Crack With Torrent Free Torrent For [Win [url= Top [/url]S. Tacos Los & Shar [url]Guarda Union La Calera v [Win. Dan [url] in lnea[/url]<br /> walpZoffoopyi.pallati.</p> -<h2>PATCHED Adobe Premiere Pro CC 2018 v13.1.1.15 Patch</h2><br /><p><b><b>Download</b> ✪✪✪ <a href="https://gohhs.com/2uFTKN">https://gohhs.com/2uFTKN</a></b></p><br /><br /> 899543212b<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/diacanFperku/AutoGPT/Race Gurram Telugu Movie REPACK Download 720p Hd.md b/spaces/diacanFperku/AutoGPT/Race Gurram Telugu Movie REPACK Download 720p Hd.md deleted file mode 100644 index 479c42efddcd4c24f72da1dfe754b4936f2f8ccc..0000000000000000000000000000000000000000 --- a/spaces/diacanFperku/AutoGPT/Race Gurram Telugu Movie REPACK Download 720p Hd.md +++ /dev/null @@ -1,8 +0,0 @@ -<h2>Race Gurram Telugu Movie Download 720p Hd</h2><br /><p><b><b>Download Zip</b> ✪ <a href="https://gohhs.com/2uFV55">https://gohhs.com/2uFV55</a></b></p><br /><br /> -<br /> -Race Gurram Gala Gala Video Song Promo 720p hd. Gala Gala Promo Video From Race Gurram Gala Gala . Gala Gala Promo Video Song Promo Video . Gala Gala Video Song Promo Video Promo Video Song Promoo Video Song Promoo Video Song Promoo Video Song -Provided to YouTube by Universal Music Group Race Gurram Gala Gala Video Song Promo 2017 Gurram Race Gurram Gala Gala Promo Video Song Promo 2017 ... -Gurram Race Gurram Gala Gala Promo Video Song Promo 2017 Gurram Race Race Gurram Gala Gala ... 
8a78ff9644<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/digitalxingtong/Un-Bert-Vits2/text/japanese.py b/spaces/digitalxingtong/Un-Bert-Vits2/text/japanese.py deleted file mode 100644 index ddedafa0c5b7986068dc6c91637a86febc3923a9..0000000000000000000000000000000000000000 --- a/spaces/digitalxingtong/Un-Bert-Vits2/text/japanese.py +++ /dev/null @@ -1,104 +0,0 @@ -# modified from https://github.com/CjangCjengh/vits/blob/main/text/japanese.py -import re -import sys - -import pyopenjtalk - -from text import symbols - -# Regular expression matching Japanese without punctuation marks: -_japanese_characters = re.compile( - r'[A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# Regular expression matching non-Japanese characters or punctuation marks: -_japanese_marks = re.compile( - r'[^A-Za-z\d\u3005\u3040-\u30ff\u4e00-\u9fff\uff11-\uff19\uff21-\uff3a\uff41-\uff5a\uff66-\uff9d]') - -# List of (symbol, Japanese) pairs for marks: -_symbols_to_japanese = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('%', 'パーセント') -]] - - -# List of (consonant, sokuon) pairs: -_real_sokuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'Q([↑↓]*[kg])', r'k#\1'), - (r'Q([↑↓]*[tdjʧ])', r't#\1'), - (r'Q([↑↓]*[sʃ])', r's\1'), - (r'Q([↑↓]*[pb])', r'p#\1') -]] - -# List of (consonant, hatsuon) pairs: -_real_hatsuon = [(re.compile('%s' % x[0]), x[1]) for x in [ - (r'N([↑↓]*[pbm])', r'm\1'), - (r'N([↑↓]*[ʧʥj])', r'n^\1'), - (r'N([↑↓]*[tdn])', r'n\1'), - (r'N([↑↓]*[kg])', r'ŋ\1') -]] - - - -def post_replace_ph(ph): - rep_map = { - ':': ',', - ';': ',', - ',': ',', - '。': '.', - '!': '!', - '?': '?', - '\n': '.', - "·": ",", - '、': ",", - '...': '…', - 'v': "V" - } - if ph in rep_map.keys(): - ph = rep_map[ph] - if ph in symbols: - return ph - if ph not in symbols: - ph = 'UNK' - return ph - -def symbols_to_japanese(text): - for regex, replacement in _symbols_to_japanese: - text = re.sub(regex, replacement, text) - return text - - -def preprocess_jap(text): - '''Reference https://r9y9.github.io/ttslearn/latest/notebooks/ch10_Recipe-Tacotron.html''' - text = symbols_to_japanese(text) - sentences = re.split(_japanese_marks, text) - marks = re.findall(_japanese_marks, text) - text = [] - for i, sentence in enumerate(sentences): - if re.match(_japanese_characters, sentence): - p = pyopenjtalk.g2p(sentence) - text += p.split(" ") - - if i < len(marks): - text += [marks[i].replace(' ', '')] - return text - -def text_normalize(text): - # todo: jap text normalize - return text - -def g2p(norm_text): - phones = preprocess_jap(norm_text) - phones = [post_replace_ph(i) for i in phones] - # todo: implement tones and word2ph - tones = [0 for i in phones] - word2ph = [1 for i in phones] - return phones, tones, word2ph - - -if __name__ == '__main__': - for line in open("../../../Downloads/transcript_utf8.txt").readlines(): - text = line.split(":")[1] - phones, tones, word2ph = g2p(text) - for p in phones: - if p == "z": - print(text, phones) - sys.exit(0) diff --git a/spaces/dineshreddy/WALT/mmcv_custom/__init__.py b/spaces/dineshreddy/WALT/mmcv_custom/__init__.py deleted file mode 100644 index 7e0e39b03e2a149c33c372472b2b814a872ec55c..0000000000000000000000000000000000000000 --- a/spaces/dineshreddy/WALT/mmcv_custom/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -# -*- coding: utf-8 -*- - -from .checkpoint import load_checkpoint - -__all__ = ['load_checkpoint'] diff --git a/spaces/dolceschokolade/chatbot-mini/next-i18next.config.js 
b/spaces/dolceschokolade/chatbot-mini/next-i18next.config.js deleted file mode 100644 index a478a6390ff9716b607da65fc20199228917cdaa..0000000000000000000000000000000000000000 --- a/spaces/dolceschokolade/chatbot-mini/next-i18next.config.js +++ /dev/null @@ -1,33 +0,0 @@ -module.exports = { - i18n: { - defaultLocale: 'en', - locales: [ - "bn", - "de", - "en", - "es", - "fr", - "he", - "id", - "it", - "ja", - "ko", - "pl", - "pt", - "ru", - "ro", - "sv", - "te", - "vi", - "zh", - "ar", - "tr", - "ca", - "fi", - ], - }, - localePath: - typeof window === 'undefined' - ? require('path').resolve('./public/locales') - : '/public/locales', -}; diff --git a/spaces/dongsiqie/Image-to-Line-Drawings/app.py b/spaces/dongsiqie/Image-to-Line-Drawings/app.py deleted file mode 100644 index 5d1c9e8d9ae50a0d180e025ce2f8d5542dbbcd82..0000000000000000000000000000000000000000 --- a/spaces/dongsiqie/Image-to-Line-Drawings/app.py +++ /dev/null @@ -1,127 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -import gradio as gr -from PIL import Image -import torchvision.transforms as transforms - -norm_layer = nn.InstanceNorm2d - -class ResidualBlock(nn.Module): - def __init__(self, in_features): - super(ResidualBlock, self).__init__() - - conv_block = [ nn.ReflectionPad2d(1), - nn.Conv2d(in_features, in_features, 3), - norm_layer(in_features), - nn.ReLU(inplace=True), - nn.ReflectionPad2d(1), - nn.Conv2d(in_features, in_features, 3), - norm_layer(in_features) - ] - - self.conv_block = nn.Sequential(*conv_block) - - def forward(self, x): - return x + self.conv_block(x) - - -class Generator(nn.Module): - def __init__(self, input_nc, output_nc, n_residual_blocks=9, sigmoid=True): - super(Generator, self).__init__() - - # Initial convolution block - model0 = [ nn.ReflectionPad2d(3), - nn.Conv2d(input_nc, 64, 7), - norm_layer(64), - nn.ReLU(inplace=True) ] - self.model0 = nn.Sequential(*model0) - - # Downsampling - model1 = [] - in_features = 64 - out_features = in_features*2 - for _ in range(2): - model1 += [ nn.Conv2d(in_features, out_features, 3, stride=2, padding=1), - norm_layer(out_features), - nn.ReLU(inplace=True) ] - in_features = out_features - out_features = in_features*2 - self.model1 = nn.Sequential(*model1) - - model2 = [] - # Residual blocks - for _ in range(n_residual_blocks): - model2 += [ResidualBlock(in_features)] - self.model2 = nn.Sequential(*model2) - - # Upsampling - model3 = [] - out_features = in_features//2 - for _ in range(2): - model3 += [ nn.ConvTranspose2d(in_features, out_features, 3, stride=2, padding=1, output_padding=1), - norm_layer(out_features), - nn.ReLU(inplace=True) ] - in_features = out_features - out_features = in_features//2 - self.model3 = nn.Sequential(*model3) - - # Output layer - model4 = [ nn.ReflectionPad2d(3), - nn.Conv2d(64, output_nc, 7)] - if sigmoid: - model4 += [nn.Sigmoid()] - - self.model4 = nn.Sequential(*model4) - - def forward(self, x, cond=None): - out = self.model0(x) - out = self.model1(out) - out = self.model2(out) - out = self.model3(out) - out = self.model4(out) - - return out - -model1 = Generator(3, 1, 3) -model1.load_state_dict(torch.load('model.pth', map_location=torch.device('cpu'))) -model1.eval() - -model2 = Generator(3, 1, 3) -model2.load_state_dict(torch.load('model2.pth', map_location=torch.device('cpu'))) -model2.eval() - -def predict(input_img, ver): - input_img = Image.open(input_img) - transform = transforms.Compose([transforms.Resize(256, Image.BICUBIC), transforms.ToTensor()]) - input_img = transform(input_img) - input_img = 
torch.unsqueeze(input_img, 0) - - drawing = 0 - with torch.no_grad(): - if ver == 'Simple Lines': - drawing = model2(input_img)[0].detach() - else: - drawing = model1(input_img)[0].detach() - - drawing = transforms.ToPILImage()(drawing) - return drawing - -title="Image to Line Drawings - Complex and Simple Portraits and Landscapes" -examples=[ -['01.jpeg', 'Simple Lines'], ['02.jpeg', 'Simple Lines'], ['03.jpeg', 'Simple Lines'], -['07.jpeg', 'Complex Lines'], ['08.jpeg', 'Complex Lines'], ['09.jpeg', 'Complex Lines'], -['10.jpeg', 'Simple Lines'], ['11.jpeg', 'Simple Lines'], ['12.jpeg', 'Simple Lines'], -['01.jpeg', 'Complex Lines'], ['02.jpeg', 'Complex Lines'], ['03.jpeg', 'Complex Lines'], -['04.jpeg', 'Simple Lines'], ['05.jpeg', 'Simple Lines'], ['06.jpeg', 'Simple Lines'], -['07.jpeg', 'Simple Lines'], ['08.jpeg', 'Simple Lines'], ['09.jpeg', 'Simple Lines'], -['04.jpeg', 'Complex Lines'], ['05.jpeg', 'Complex Lines'], ['06.jpeg', 'Complex Lines'], -['10.jpeg', 'Complex Lines'], ['11.jpeg', 'Complex Lines'], ['12.jpeg', 'Complex Lines'], -['Upload Wild Horses 2.jpeg', 'Complex Lines'] -] - -iface = gr.Interface(predict, [gr.inputs.Image(type='filepath'), - gr.inputs.Radio(['Complex Lines','Simple Lines'], type="value", default='Simple Lines', label='version')], - gr.outputs.Image(type="pil"), title=title,examples=examples) - -iface.launch() \ No newline at end of file diff --git a/spaces/dongsiqie/pandora/Dockerfile b/spaces/dongsiqie/pandora/Dockerfile deleted file mode 100644 index 184b990027a341908949391c781b73b2ac333d21..0000000000000000000000000000000000000000 --- a/spaces/dongsiqie/pandora/Dockerfile +++ /dev/null @@ -1,8 +0,0 @@ -FROM python:3.11 -RUN apt update -RUN apt install git -RUN git clone https://github.com/yangjianchuan/pandora-cloud-serverless.git -WORKDIR "pandora-cloud-serverless" -RUN pip install -r requirements.txt -EXPOSE 8018 -CMD ["python", "main.py"] \ No newline at end of file diff --git a/spaces/dongyi/MMFS/models/modules/vit/extractor.py b/spaces/dongyi/MMFS/models/modules/vit/extractor.py deleted file mode 100644 index 060cd90b2ae6eaf2a9272fa496790e6fce32d2a3..0000000000000000000000000000000000000000 --- a/spaces/dongyi/MMFS/models/modules/vit/extractor.py +++ /dev/null @@ -1,165 +0,0 @@ -# code from https://github.com/omerbt/Splice/blob/master/models/extractor.py - -import torch - - -def attn_cosine_sim(x, eps=1e-08): - x = x[0] # TEMP: getting rid of redundant dimension, TBF - norm1 = x.norm(dim=2, keepdim=True) - factor = torch.clamp(norm1 @ norm1.permute(0, 2, 1), min=eps) - sim_matrix = (x @ x.permute(0, 2, 1)) / factor - return sim_matrix - - -class VitExtractor: - BLOCK_KEY = 'block' - ATTN_KEY = 'attn' - PATCH_IMD_KEY = 'patch_imd' - QKV_KEY = 'qkv' - KEY_LIST = [BLOCK_KEY, ATTN_KEY, PATCH_IMD_KEY, QKV_KEY] - - def __init__(self, model_name, device): - self.model = torch.hub.load('facebookresearch/dino:main', model_name).to(device) - self.model.eval() - self.model_name = model_name - self.hook_handlers = [] - self.layers_dict = {} - self.outputs_dict = {} - for key in VitExtractor.KEY_LIST: - self.layers_dict[key] = [] - self.outputs_dict[key] = [] - self._init_hooks_data() - - def _init_hooks_data(self): - self.layers_dict[VitExtractor.BLOCK_KEY] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] - self.layers_dict[VitExtractor.ATTN_KEY] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] - self.layers_dict[VitExtractor.QKV_KEY] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] - self.layers_dict[VitExtractor.PATCH_IMD_KEY] = [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11] - for 
key in VitExtractor.KEY_LIST: - # self.layers_dict[key] = kwargs[key] if key in kwargs.keys() else [] - self.outputs_dict[key] = [] - - def _register_hooks(self, **kwargs): - for block_idx, block in enumerate(self.model.blocks): - if block_idx in self.layers_dict[VitExtractor.BLOCK_KEY]: - self.hook_handlers.append(block.register_forward_hook(self._get_block_hook())) - if block_idx in self.layers_dict[VitExtractor.ATTN_KEY]: - self.hook_handlers.append(block.attn.attn_drop.register_forward_hook(self._get_attn_hook())) - if block_idx in self.layers_dict[VitExtractor.QKV_KEY]: - self.hook_handlers.append(block.attn.qkv.register_forward_hook(self._get_qkv_hook())) - if block_idx in self.layers_dict[VitExtractor.PATCH_IMD_KEY]: - self.hook_handlers.append(block.attn.register_forward_hook(self._get_patch_imd_hook())) - - def _clear_hooks(self): - for handler in self.hook_handlers: - handler.remove() - self.hook_handlers = [] - - def _get_block_hook(self): - def _get_block_output(model, input, output): - self.outputs_dict[VitExtractor.BLOCK_KEY].append(output) - - return _get_block_output - - def _get_attn_hook(self): - def _get_attn_output(model, inp, output): - self.outputs_dict[VitExtractor.ATTN_KEY].append(output) - - return _get_attn_output - - def _get_qkv_hook(self): - def _get_qkv_output(model, inp, output): - self.outputs_dict[VitExtractor.QKV_KEY].append(output) - - return _get_qkv_output - - # TODO: CHECK ATTN OUTPUT TUPLE - def _get_patch_imd_hook(self): - def _get_attn_output(model, inp, output): - self.outputs_dict[VitExtractor.PATCH_IMD_KEY].append(output[0]) - - return _get_attn_output - - def get_feature_from_input(self, input_img): # List([B, N, D]) - self._register_hooks() - self.model(input_img) - feature = self.outputs_dict[VitExtractor.BLOCK_KEY] - self._clear_hooks() - self._init_hooks_data() - return feature - - def get_qkv_feature_from_input(self, input_img): - self._register_hooks() - self.model(input_img) - feature = self.outputs_dict[VitExtractor.QKV_KEY] - self._clear_hooks() - self._init_hooks_data() - return feature - - def get_attn_feature_from_input(self, input_img): - self._register_hooks() - self.model(input_img) - feature = self.outputs_dict[VitExtractor.ATTN_KEY] - self._clear_hooks() - self._init_hooks_data() - return feature - - def get_patch_size(self): - return 8 if "8" in self.model_name else 16 - - def get_width_patch_num(self, input_img_shape): - b, c, h, w = input_img_shape - patch_size = self.get_patch_size() - return w // patch_size - - def get_height_patch_num(self, input_img_shape): - b, c, h, w = input_img_shape - patch_size = self.get_patch_size() - return h // patch_size - - def get_patch_num(self, input_img_shape): - patch_num = 1 + (self.get_height_patch_num(input_img_shape) * self.get_width_patch_num(input_img_shape)) - return patch_num - - def get_head_num(self): - if "dino" in self.model_name: - return 6 if "s" in self.model_name else 12 - return 6 if "small" in self.model_name else 12 - - def get_embedding_dim(self): - if "dino" in self.model_name: - return 384 if "s" in self.model_name else 768 - return 384 if "small" in self.model_name else 768 - - def get_queries_from_qkv(self, qkv, input_img_shape): - patch_num = self.get_patch_num(input_img_shape) - head_num = self.get_head_num() - embedding_dim = self.get_embedding_dim() - q = qkv.reshape(patch_num, 3, head_num, embedding_dim // head_num).permute(1, 2, 0, 3)[0] - return q - - def get_keys_from_qkv(self, qkv, input_img_shape): - patch_num = self.get_patch_num(input_img_shape) - 
head_num = self.get_head_num() - embedding_dim = self.get_embedding_dim() - k = qkv.reshape(patch_num, 3, head_num, embedding_dim // head_num).permute(1, 2, 0, 3)[1] - return k - - def get_values_from_qkv(self, qkv, input_img_shape): - patch_num = self.get_patch_num(input_img_shape) - head_num = self.get_head_num() - embedding_dim = self.get_embedding_dim() - v = qkv.reshape(patch_num, 3, head_num, embedding_dim // head_num).permute(1, 2, 0, 3)[2] - return v - - def get_keys_from_input(self, input_img, layer_num): - qkv_features = self.get_qkv_feature_from_input(input_img)[layer_num] - keys = self.get_keys_from_qkv(qkv_features, input_img.shape) - return keys - - def get_keys_self_sim_from_input(self, input_img, layer_num): - keys = self.get_keys_from_input(input_img, layer_num=layer_num) - h, t, d = keys.shape - concatenated_keys = keys.transpose(0, 1).reshape(t, h * d) - ssim_map = attn_cosine_sim(concatenated_keys[None, None, ...]) - return ssim_map diff --git a/spaces/ds520/bingo/src/lib/hooks/use-at-bottom.tsx b/spaces/ds520/bingo/src/lib/hooks/use-at-bottom.tsx deleted file mode 100644 index d37c8cf4162adcb0064e08ecec24eb731416b045..0000000000000000000000000000000000000000 --- a/spaces/ds520/bingo/src/lib/hooks/use-at-bottom.tsx +++ /dev/null @@ -1,23 +0,0 @@ -import * as React from 'react' - -export function useAtBottom(offset = 0) { - const [isAtBottom, setIsAtBottom] = React.useState(false) - - React.useEffect(() => { - const handleScroll = () => { - setIsAtBottom( - window.innerHeight + window.scrollY >= - document.body.offsetHeight - offset - ) - } - - window.addEventListener('scroll', handleScroll, { passive: true }) - handleScroll() - - return () => { - window.removeEventListener('scroll', handleScroll) - } - }, [offset]) - - return isAtBottom -} diff --git a/spaces/duycse1603/math2tex/HybridViT/module/component/feature_extractor/clova_impl/resnet.py b/spaces/duycse1603/math2tex/HybridViT/module/component/feature_extractor/clova_impl/resnet.py deleted file mode 100644 index e32faa3bef418369ada99770c3bbc938fb1c0d8a..0000000000000000000000000000000000000000 --- a/spaces/duycse1603/math2tex/HybridViT/module/component/feature_extractor/clova_impl/resnet.py +++ /dev/null @@ -1,262 +0,0 @@ -from typing import Dict -from collections import OrderedDict -import torch -import torch.nn as nn -import torch.nn.functional as F -from ..addon_module.visual_attention import GlobalContext -from .....helper import clean_state_dict - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlock, self).__init__() - self.conv1 = self._conv3x3(inplanes, planes) - self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = self._conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def _conv3x3(self, in_planes, out_planes, stride=1): - "3x3 convolution with padding" - return nn.Conv2d(in_planes, out_planes, kernel_size=3, stride=stride, - padding=1, bias=False) - - def zero_init_last_bn(self): - nn.init.zeros_(self.bn2.weight) - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - -class ResNet(nn.Module): - def __init__(self, input_channel, output_channel, block, layers, with_gcb=True, debug=False, 
zero_init_last_bn=False): - super(ResNet, self).__init__() - self.with_gcb = with_gcb - - self.output_channel_block = [int(output_channel / 4), int(output_channel / 2), output_channel, output_channel] - self.inplanes = int(output_channel / 8) - - self.conv0_1 = nn.Conv2d(input_channel, int(output_channel / 16), - kernel_size=3, stride=1, padding=1, bias=False) - self.bn0_1 = nn.BatchNorm2d(int(output_channel / 16)) - - self.conv0_2 = nn.Conv2d(int(output_channel / 16), self.inplanes, - kernel_size=3, stride=1, padding=1, bias=False) - self.bn0_2 = nn.BatchNorm2d(self.inplanes) - self.relu = nn.ReLU(inplace=True) - - self.maxpool1 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0) - self.layer1 = self._make_layer(block, self.output_channel_block[0], layers[0]) - self.conv1 = nn.Conv2d(self.output_channel_block[0], self.output_channel_block[ - 0], kernel_size=3, stride=1, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(self.output_channel_block[0]) - - self.maxpool2 = nn.MaxPool2d(kernel_size=2, stride=2, padding=0) - self.layer2 = self._make_layer(block, self.output_channel_block[1], layers[1], stride=1) - self.conv2 = nn.Conv2d(self.output_channel_block[1], self.output_channel_block[ - 1], kernel_size=3, stride=1, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(self.output_channel_block[1]) - - self.maxpool3 = nn.MaxPool2d(kernel_size=2, stride=(2, 1), padding=(0, 1)) - self.layer3 = self._make_layer(block, self.output_channel_block[2], layers[2], stride=1) - self.conv3 = nn.Conv2d(self.output_channel_block[2], self.output_channel_block[ - 2], kernel_size=3, stride=1, padding=1, bias=False) - self.bn3 = nn.BatchNorm2d(self.output_channel_block[2]) - - self.layer4 = self._make_layer(block, self.output_channel_block[3], layers[3], stride=1) - - self.conv4_1 = nn.Conv2d(self.output_channel_block[3], self.output_channel_block[ - 3], kernel_size=2, stride=(2, 1), padding=(0, 1), bias=False) - self.bn4_1 = nn.BatchNorm2d(self.output_channel_block[3]) - - self.conv4_2 = nn.Conv2d(self.output_channel_block[3], self.output_channel_block[ - 3], kernel_size=2, stride=1, padding=0, bias=False) - self.bn4_2 = nn.BatchNorm2d(self.output_channel_block[3]) - - self.init_weights(zero_init_last_bn=zero_init_last_bn) - self.debug = debug - - def zero_init_last_bn(self): - nn.init.zeros_(self.bn4_2.weight) - - def init_weights(self, zero_init_last_bn=True): - initialized = ['global_cxt', 'bottleneck_add', 'bottleneck_mul'] - for n, m in self.named_modules(): - if any([d in n for d in initialized]): - continue - elif isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_(m.weight, mode='fan_out', nonlinearity='relu') - elif isinstance(m, nn.BatchNorm2d): - nn.init.ones_(m.weight) - nn.init.zeros_(m.bias) - if zero_init_last_bn: - for m in self.modules(): - if hasattr(m, 'zero_init_last_bn'): - m.zero_init_last_bn() - - def _make_layer(self, block, planes, blocks, with_gcb=False, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d(self.inplanes, planes * block.expansion, - kernel_size=1, stride=stride, bias=False), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - - for i in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - if self.with_gcb: - layers.append(GlobalContext(planes)) - - return nn.Sequential(*layers) - - def forward(self, x): - if self.debug: - print('input shape', 
x.shape) - - x = self.conv0_1(x) - x = self.bn0_1(x) - x = self.relu(x) - - if self.debug: - print('conv1 shape', x.shape) - - x = self.conv0_2(x) - x = self.bn0_2(x) - x = self.relu(x) - - if self.debug: - print('conv2 shape', x.shape) - - x = self.maxpool1(x) - - if self.debug: - print('pool1 shape', x.shape) - - x = self.layer1(x) - - if self.debug: - print('block1 shape', x.shape) - - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - - if self.debug: - print('conv3 shape', x.shape) - - x = self.maxpool2(x) - - if self.debug: - print('pool2 shape', x.shape) - - x = self.layer2(x) - - if self.debug: - print('block2 shape', x.shape) - - x = self.conv2(x) - x = self.bn2(x) - x = self.relu(x) - - if self.debug: - print('conv4 shape', x.shape) - - x = self.maxpool3(x) - - if self.debug: - print('pool3 shape', x.shape) - - x = self.layer3(x) - - if self.debug: - print('block3 shape', x.shape) - - x = self.conv3(x) - x = self.bn3(x) - x = self.relu(x) - - if self.debug: - print('conv5 shape', x.shape) - - x = self.layer4(x) - - if self.debug: - print('block4 shape', x.shape) - - x = self.conv4_1(x) - x = self.bn4_1(x) - x = self.relu(x) - - if self.debug: - print('conv6 shape', x.shape) - - x = self.conv4_2(x) - x = self.bn4_2(x) - x = self.relu(x) - - if self.debug: - print('conv7 shape', x.shape) - - return x - -class ResNet_FeatureExtractor(nn.Module): - """ FeatureExtractor of FAN (http://openaccess.thecvf.com/content_ICCV_2017/papers/Cheng_Focusing_Attention_Towards_ICCV_2017_paper.pdf) """ - - def __init__(self, input_channel=3, output_channel=512, gcb=False, pretrained=False, weight_dir=None, debug=False): - super(ResNet_FeatureExtractor, self).__init__() - self.ConvNet = ResNet(input_channel, output_channel, BasicBlock, [1, 2, 5, 3], gcb, debug) - self.in_chans = input_channel - if pretrained: - assert weight_dir is not None - self.load_pretrained(weight_dir) - - def forward(self, input): - output = self.ConvNet(input) - return output - - def load_pretrained(self, weight_dir): - state_dict: OrderedDict = torch.load(weight_dir) - cleaned_state_dict = clean_state_dict(state_dict) - new_state_dict = OrderedDict() - name: str - param: torch.FloatTensor - for name, param in cleaned_state_dict.items(): - if name.startswith('FeatureExtraction'): - output_name = name.replace('FeatureExtraction.', '') - if output_name == 'ConvNet.conv0_1.weight': - print('Old', param.shape) - new_param = param.repeat(1, self.in_chans, 1, 1) - print('New', new_param.shape) - else: new_param = param - new_state_dict[output_name] = new_param - print("=> Loading pretrained weight for ResNet backbone") - self.load_state_dict(new_state_dict) - -if __name__ == '__main__': - model = ResNet_FeatureExtractor(input_channel=1, debug=True) - a = torch.rand(1, 1, 128, 480) - output = model(a) - print(output.shape) \ No newline at end of file diff --git a/spaces/elvis-d/tweet-sentiment-analysis.GRADIO/README.md b/spaces/elvis-d/tweet-sentiment-analysis.GRADIO/README.md deleted file mode 100644 index 2d30a72544dc2bfa960e1bde721c52aae5cf21a9..0000000000000000000000000000000000000000 --- a/spaces/elvis-d/tweet-sentiment-analysis.GRADIO/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Tweet Sentiment Analysis.GRADIO -emoji: 📊 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.37.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/emc348/faces-through-time/criteria/__init__.py 
b/spaces/emc348/faces-through-time/criteria/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/f2api/gpt-academic/request_llm/bridge_chatgpt.py b/spaces/f2api/gpt-academic/request_llm/bridge_chatgpt.py deleted file mode 100644 index eef8fbf0b43f30b915f770f4bc54120c84ebd092..0000000000000000000000000000000000000000 --- a/spaces/f2api/gpt-academic/request_llm/bridge_chatgpt.py +++ /dev/null @@ -1,285 +0,0 @@ -# 借鉴了 https://github.com/GaiZhenbiao/ChuanhuChatGPT 项目 - -""" - 该文件中主要包含三个函数 - - 不具备多线程能力的函数: - 1. predict: 正常对话时使用,具备完备的交互功能,不可多线程 - - 具备多线程调用能力的函数 - 2. predict_no_ui:高级实验性功能模块调用,不会实时显示在界面上,参数简单,可以多线程并行,方便实现复杂的功能逻辑 - 3. predict_no_ui_long_connection:在实验过程中发现调用predict_no_ui处理长文档时,和openai的连接容易断掉,这个函数用stream的方式解决这个问题,同样支持多线程 -""" - -import json -import time -import gradio as gr -import logging -import traceback -import requests -import importlib - -# config_private.py放自己的秘密如API和代理网址 -# 读取时首先看是否存在私密的config_private配置文件(不受git管控),如果有,则覆盖原config文件 -from toolbox import get_conf, update_ui, is_any_api_key, select_api_key, what_keys, clip_history, trimmed_format_exc -proxies, API_KEY, TIMEOUT_SECONDS, MAX_RETRY = \ - get_conf('proxies', 'API_KEY', 'TIMEOUT_SECONDS', 'MAX_RETRY') - -timeout_bot_msg = '[Local Message] Request timeout. Network error. Please check proxy settings in config.py.' + \ - '网络错误,检查代理服务器是否可用,以及代理设置的格式是否正确,格式须是[协议]://[地址]:[端口],缺一不可。' - -def get_full_error(chunk, stream_response): - """ - 获取完整的从Openai返回的报错 - """ - while True: - try: - chunk += next(stream_response) - except: - break - return chunk - - -def predict_no_ui_long_connection(inputs, llm_kwargs, history=[], sys_prompt="", observe_window=None, console_slience=False): - """ - 发送至chatGPT,等待回复,一次性完成,不显示中间过程。但内部用stream的方法避免中途网线被掐。 - inputs: - 是本次问询的输入 - sys_prompt: - 系统静默prompt - llm_kwargs: - chatGPT的内部调优参数 - history: - 是之前的对话列表 - observe_window = None: - 用于负责跨越线程传递已经输出的部分,大部分时候仅仅为了fancy的视觉效果,留空即可。observe_window[0]:观测窗。observe_window[1]:看门狗 - """ - watch_dog_patience = 5 # 看门狗的耐心, 设置5秒即可 - headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt=sys_prompt, stream=True) - retry = 0 - while True: - try: - # make a POST request to the API endpoint, stream=False - from .bridge_all import model_info - endpoint = model_info[llm_kwargs['llm_model']]['endpoint'] - response = requests.post(endpoint, headers=headers, proxies=proxies, - json=payload, stream=True, timeout=TIMEOUT_SECONDS); break - except requests.exceptions.ReadTimeout as e: - retry += 1 - traceback.print_exc() - if retry > MAX_RETRY: raise TimeoutError - if MAX_RETRY!=0: print(f'请求超时,正在重试 ({retry}/{MAX_RETRY}) ……') - - stream_response = response.iter_lines() - result = '' - while True: - try: chunk = next(stream_response).decode() - except StopIteration: - break - except requests.exceptions.ConnectionError: - chunk = next(stream_response).decode() # 失败了,重试一次?再失败就没办法了。 - if len(chunk)==0: continue - if not chunk.startswith('data:'): - error_msg = get_full_error(chunk.encode('utf8'), stream_response).decode() - if "reduce the length" in error_msg: - raise ConnectionAbortedError("OpenAI拒绝了请求:" + error_msg) - else: - raise RuntimeError("OpenAI拒绝了请求:" + error_msg) - if ('data: [DONE]' in chunk): break # api2d 正常完成 - json_data = json.loads(chunk.lstrip('data:'))['choices'][0] - delta = json_data["delta"] - if len(delta) == 0: break - if "role" in delta: continue - if "content" in delta: - result += delta["content"] - if not console_slience: 
print(delta["content"], end='') - if observe_window is not None: - # 观测窗,把已经获取的数据显示出去 - if len(observe_window) >= 1: observe_window[0] += delta["content"] - # 看门狗,如果超过期限没有喂狗,则终止 - if len(observe_window) >= 2: - if (time.time()-observe_window[1]) > watch_dog_patience: - raise RuntimeError("用户取消了程序。") - else: raise RuntimeError("意外Json结构:"+delta) - if json_data['finish_reason'] == 'length': - raise ConnectionAbortedError("正常结束,但显示Token不足,导致输出不完整,请削减单次输入的文本量。") - return result - - -def predict(inputs, llm_kwargs, plugin_kwargs, chatbot, history=[], system_prompt='', stream = True, additional_fn=None): - """ - 发送至chatGPT,流式获取输出。 - 用于基础的对话功能。 - inputs 是本次问询的输入 - top_p, temperature是chatGPT的内部调优参数 - history 是之前的对话列表(注意无论是inputs还是history,内容太长了都会触发token数量溢出的错误) - chatbot 为WebUI中显示的对话列表,修改它,然后yeild出去,可以直接修改对话界面内容 - additional_fn代表点击的哪个按钮,按钮见functional.py - """ - if is_any_api_key(inputs): - chatbot._cookies['api_key'] = inputs - chatbot.append(("输入已识别为openai的api_key", what_keys(inputs))) - yield from update_ui(chatbot=chatbot, history=history, msg="api_key已导入") # 刷新界面 - return - elif not is_any_api_key(chatbot._cookies['api_key']): - chatbot.append((inputs, "缺少api_key。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 长效解决方案:在config.py中配置。")) - yield from update_ui(chatbot=chatbot, history=history, msg="缺少api_key") # 刷新界面 - return - - if additional_fn is not None: - import core_functional - importlib.reload(core_functional) # 热更新prompt - core_functional = core_functional.get_core_functions() - if "PreProcess" in core_functional[additional_fn]: inputs = core_functional[additional_fn]["PreProcess"](inputs) # 获取预处理函数(如果有的话) - inputs = core_functional[additional_fn]["Prefix"] + inputs + core_functional[additional_fn]["Suffix"] - - raw_input = inputs - logging.info(f'[raw_input] {raw_input}') - chatbot.append((inputs, "")) - yield from update_ui(chatbot=chatbot, history=history, msg="等待响应") # 刷新界面 - - try: - headers, payload = generate_payload(inputs, llm_kwargs, history, system_prompt, stream) - except RuntimeError as e: - chatbot[-1] = (inputs, f"您提供的api-key不满足要求,不包含任何可用于{llm_kwargs['llm_model']}的api-key。您可能选择了错误的模型或请求源。") - yield from update_ui(chatbot=chatbot, history=history, msg="api-key不满足要求") # 刷新界面 - return - - history.append(inputs); history.append("") - - retry = 0 - while True: - try: - # make a POST request to the API endpoint, stream=True - from .bridge_all import model_info - endpoint = model_info[llm_kwargs['llm_model']]['endpoint'] - response = requests.post(endpoint, headers=headers, proxies=proxies, - json=payload, stream=True, timeout=TIMEOUT_SECONDS);break - except: - retry += 1 - chatbot[-1] = ((chatbot[-1][0], timeout_bot_msg)) - retry_msg = f",正在重试 ({retry}/{MAX_RETRY}) ……" if MAX_RETRY > 0 else "" - yield from update_ui(chatbot=chatbot, history=history, msg="请求超时"+retry_msg) # 刷新界面 - if retry > MAX_RETRY: raise TimeoutError - - gpt_replying_buffer = "" - - is_head_of_the_stream = True - if stream: - stream_response = response.iter_lines() - while True: - try: - chunk = next(stream_response) - except StopIteration: - # 非OpenAI官方接口的出现这样的报错,OpenAI和API2D不会走这里 - from toolbox import regular_txt_to_markdown; tb_str = '```\n' + trimmed_format_exc() + '```' - chatbot[-1] = (chatbot[-1][0], f"[Local Message] 远程返回错误: \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk.decode())}") - yield from update_ui(chatbot=chatbot, history=history, msg="远程返回错误:" + chunk.decode()) # 刷新界面 - return - - # print(chunk.decode()[6:]) - if is_head_of_the_stream and (r'"object":"error"' not in chunk.decode()): - # 
数据流的第一帧不携带content - is_head_of_the_stream = False; continue - - if chunk: - try: - chunk_decoded = chunk.decode() - # 前者API2D的 - if ('data: [DONE]' in chunk_decoded) or (len(json.loads(chunk_decoded[6:])['choices'][0]["delta"]) == 0): - # 判定为数据流的结束,gpt_replying_buffer也写完了 - logging.info(f'[response] {gpt_replying_buffer}') - break - # 处理数据流的主体 - chunkjson = json.loads(chunk_decoded[6:]) - status_text = f"finish_reason: {chunkjson['choices'][0]['finish_reason']}" - # 如果这里抛出异常,一般是文本过长,详情见get_full_error的输出 - gpt_replying_buffer = gpt_replying_buffer + json.loads(chunk_decoded[6:])['choices'][0]["delta"]["content"] - history[-1] = gpt_replying_buffer - chatbot[-1] = (history[-2], history[-1]) - yield from update_ui(chatbot=chatbot, history=history, msg=status_text) # 刷新界面 - - except Exception as e: - traceback.print_exc() - yield from update_ui(chatbot=chatbot, history=history, msg="Json解析不合常规") # 刷新界面 - chunk = get_full_error(chunk, stream_response) - chunk_decoded = chunk.decode() - error_msg = chunk_decoded - if "reduce the length" in error_msg: - if len(history) >= 2: history[-1] = ""; history[-2] = "" # 清除当前溢出的输入:history[-2] 是本次输入, history[-1] 是本次输出 - history = clip_history(inputs=inputs, history=history, tokenizer=model_info[llm_kwargs['llm_model']]['tokenizer'], - max_token_limit=(model_info[llm_kwargs['llm_model']]['max_token'])) # history至少释放二分之一 - chatbot[-1] = (chatbot[-1][0], "[Local Message] Reduce the length. 本次输入过长, 或历史数据过长. 历史缓存数据已部分释放, 您可以请再次尝试. (若再次失败则更可能是因为输入过长.)") - # history = [] # 清除历史 - elif "does not exist" in error_msg: - chatbot[-1] = (chatbot[-1][0], f"[Local Message] Model {llm_kwargs['llm_model']} does not exist. 模型不存在, 或者您没有获得体验资格.") - elif "Incorrect API key" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Incorrect API key. OpenAI以提供了不正确的API_KEY为由, 拒绝服务.") - elif "exceeded your current quota" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] You exceeded your current quota. OpenAI以账户额度不足为由, 拒绝服务.") - elif "bad forward key" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Bad forward key. API2D账户额度不足.") - elif "Not enough point" in error_msg: - chatbot[-1] = (chatbot[-1][0], "[Local Message] Not enough point. API2D账户点数不足.") - else: - from toolbox import regular_txt_to_markdown - tb_str = '```\n' + trimmed_format_exc() + '```' - chatbot[-1] = (chatbot[-1][0], f"[Local Message] 异常 \n\n{tb_str} \n\n{regular_txt_to_markdown(chunk_decoded)}") - yield from update_ui(chatbot=chatbot, history=history, msg="Json异常" + error_msg) # 刷新界面 - return - -def generate_payload(inputs, llm_kwargs, history, system_prompt, stream): - """ - 整合所有信息,选择LLM模型,生成http请求,为发送请求做准备 - """ - if not is_any_api_key(llm_kwargs['api_key']): - raise AssertionError("你提供了错误的API_KEY。\n\n1. 临时解决方案:直接在输入区键入api_key,然后回车提交。\n\n2. 
长效解决方案:在config.py中配置。") - - api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model']) - - headers = { - "Content-Type": "application/json", - "Authorization": f"Bearer {api_key}" - } - - conversation_cnt = len(history) // 2 - - messages = [{"role": "system", "content": system_prompt}] - if conversation_cnt: - for index in range(0, 2*conversation_cnt, 2): - what_i_have_asked = {} - what_i_have_asked["role"] = "user" - what_i_have_asked["content"] = history[index] - what_gpt_answer = {} - what_gpt_answer["role"] = "assistant" - what_gpt_answer["content"] = history[index+1] - if what_i_have_asked["content"] != "": - if what_gpt_answer["content"] == "": continue - if what_gpt_answer["content"] == timeout_bot_msg: continue - messages.append(what_i_have_asked) - messages.append(what_gpt_answer) - else: - messages[-1]['content'] = what_gpt_answer['content'] - - what_i_ask_now = {} - what_i_ask_now["role"] = "user" - what_i_ask_now["content"] = inputs - messages.append(what_i_ask_now) - - payload = { - "model": llm_kwargs['llm_model'].strip('api2d-'), - "messages": messages, - "temperature": llm_kwargs['temperature'], # 1.0, - "top_p": llm_kwargs['top_p'], # 1.0, - "n": 1, - "stream": stream, - "presence_penalty": 0, - "frequency_penalty": 0, - } - try: - print(f" {llm_kwargs['llm_model']} : {conversation_cnt} : {inputs[:100]} ..........") - except: - print('输入中可能存在乱码。') - return headers,payload - - diff --git a/spaces/falterWliame/Face_Mask_Detection/PreSonus Studio One Pro 4.1.3 [Extra Quality] Crack Activation Key [Latest].md b/spaces/falterWliame/Face_Mask_Detection/PreSonus Studio One Pro 4.1.3 [Extra Quality] Crack Activation Key [Latest].md deleted file mode 100644 index 5288f77489c4054e04a4048639a57e9f405ea79a..0000000000000000000000000000000000000000 --- a/spaces/falterWliame/Face_Mask_Detection/PreSonus Studio One Pro 4.1.3 [Extra Quality] Crack Activation Key [Latest].md +++ /dev/null @@ -1,8 +0,0 @@ - -<p>you can find the full version of<strong>presonus studio one download</strong>under just one of the most popular software websites all over the world. that is, cnet.com. moreover, it is safe and free of charge to download and install. studio room one totally free is among the latest versions of the softwares, and is a total of the upgraded studio room. the whole setup provides a hefty <strong style=font-size: 1rem;>presonus studio one download</strong>which will perform on your system. </strong></p> -<p><strong>presonus studio one download</strong>is a thoughtfully crafted studio version with all-new new features that create the perfect foundation for your commercial music or professional mixing. studio one 5 mac crack is a compelling and expert audio editing software. it effortlessly integrates the tried and true studio version with the latest advanced. it is an excellent combination along with a powerful studio.</p> -<h2>PreSonus Studio One Pro 4.1.3 Crack Activation Key [Latest]</h2><br /><p><b><b>Download</b> ⚙ <a href="https://urlca.com/2uDd1i">https://urlca.com/2uDd1i</a></b></p><br /><br /> -<p>best<strong>presonus studio one download</strong>provides the complete and unrestricted mixing project page with a re-designed view controls. it will permit you to modify the settings of your hardware and the most recent studio room. the most recent <strong> presonus studio one download </strong>maintains and links to make new functions through shows and courses. 
moreover, the studio is the simplest to use among the list of the vital audio editing software.</p> -<p>all these in addition to the studio will operate on a mac computer. there is a powerful studio one. the key is the most popular <strong style=font-size: 1rem;>presonus studio one download</strong>software which permits you to perform on the basis of your mac computer. a powerful software which gives you a complete access to the <strong style=font-size: 1rem;>presonus studio one download</strong> studio as well as full stability. the software is offered on the market for completely free.</p> 899543212b<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/DTX Multi 12 (Champeta) - A Versatile and Compact Batera DTX for Acoustic Drummers - APK Download.md b/spaces/fatiXbelha/sd/DTX Multi 12 (Champeta) - A Versatile and Compact Batera DTX for Acoustic Drummers - APK Download.md deleted file mode 100644 index f2055bb816de96f93eb51f3d6747307a8dfd33d6..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/DTX Multi 12 (Champeta) - A Versatile and Compact Batera DTX for Acoustic Drummers - APK Download.md +++ /dev/null @@ -1,179 +0,0 @@ - -<h1>What is batería dtx multi 12 apk and why you need it</h1> - <p>If you are a drummer or a music lover who enjoys playing urban sounds (champeta), you may have heard of batería dtx multi 12 apk. This is a music & audio app developed by AppsKD that allows you to turn your Android device into a virtual drum pad. With this app, you can create and play amazing beats using different drum kits and sounds. You can also record and share your tracks with others, or connect the app with your Yamaha DTX-MULTI 12 electronic drum pad for more control and options.</p> - <p>In this article, we will show you how to download and install batería dtx multi 12 apk on your Android device, how to use it to create and play urban sounds (champeta), how to connect it with your Yamaha DTX-MULTI 12, and what are the pros and cons of this app. We will also give you some alternatives to batería dtx multi 12 apk that you may want to try. By the end of this article, you will have a better understanding of what batería dtx multi 12 apk is and why you need it.</p> -<h2>batería dtx multi 12 apk</h2><br /><p><b><b>DOWNLOAD</b> ✦✦✦ <a href="https://urllie.com/2uNFmZ">https://urllie.com/2uNFmZ</a></b></p><br /><br /> - <h2>How to download and install batería dtx multi 12 apk on your Android device</h2> - <p>There are two ways to get batería dtx multi 12 apk on your Android device. One is to download it from the Google Play Store, where it has been available since November 2021. The other is to download it from other sources, such as APKPure or APKMirror. 
Here are the steps for both methods:</p> - <h3>Method 1: Download from Google Play Store</h3> - <ol> -<li>Open the Google Play Store app on your Android device.</li> -<li>Search for "batería dtx multi 12" or "DTX Multi 12 (Champeta)" in the search bar.</li> -<li>Select the app from the search results and tap on "Install".</li> -<li>Wait for the app to download and install on your device.</li> -<li>Once installed, tap on "Open" to launch the app.</li> -</ol> - <h3>Method 2: Download from other sources</h3> - <ol> -<li>Go to a website that offers APK files for Android apps, such as APKPure or APKMirror.</li> -<li>Search for "batería dtx multi 12" or "DTX Multi 12 (Champeta)" in the search bar.</li> -<li>Select the app from the search results and tap on "Download APK".</li> <li>Wait for the APK file to download on your device.</li> -<li>Before installing the APK file, you may need to enable the "Unknown sources" option on your device settings. This will allow you to install apps from sources other than the Google Play Store.</li> -<li>Locate the APK file on your device using a file manager app or your browser's downloads folder.</li> -<li>Tap on the APK file and follow the instructions to install the app on your device.</li> -<li>Once installed, tap on "Open" to launch the app.</li> -</ol> - <p>Note: Downloading and installing APK files from other sources may pose some risks, such as malware, viruses, or data theft. Therefore, we recommend that you only download APK files from trusted and reputable websites, and scan them with an antivirus app before installing them. We are not responsible for any damages or losses caused by using APK files from other sources.</p> - <h2>How to use batería dtx multi 12 apk to create and play urban sounds (champeta)</h2> - <p>Now that you have downloaded and installed batería dtx multi 12 apk on your Android device, you are ready to use it to create and play urban sounds (champeta). Champeta is a musical genre that originated in the Caribbean coast of Colombia, influenced by African, Caribbean, and Latin American rhythms. It is characterized by its upbeat tempo, catchy melodies, and social commentary lyrics. Champeta is often played with electronic drum pads, such as the Yamaha DTX-MULTI 12, which can produce a variety of sounds and effects.</p> - <p>Batería dtx multi 12 apk is designed to emulate the Yamaha DTX-MULTI 12 electronic drum pad, but with more features and options. You can use the app to play champeta sounds using different drum kits and sounds, record and share your beats with others, or connect the app with your Yamaha DTX-MULTI 12 for more control and options. Here are some tutorials on how to use batería dtx multi 12 apk to create and play urban sounds (champeta):</p> - <h3>How to choose from the different drum kits and sounds</h3> - <p>Batería dtx multi 12 apk offers you 12 drum pads that you can customize with different drum kits and sounds. You can choose from over 100 sounds, including acoustic drums, electronic drums, percussion, effects, vocals, and more. You can also adjust the volume, pitch, pan, reverb, and delay of each sound. Here are the steps to choose from the different drum kits and sounds:</p> - <ol> -<li>Launch the app and tap on the "Menu" icon at the top left corner of the screen.</li> -<li>Tap on "Drum Kit" to see the list of available drum kits. You can scroll left or right to see more options.</li> -<li>Tap on the drum kit that you want to use. 
The app will load the drum kit and assign it to the 12 drum pads.</li> -<li>If you want to change the sound of a specific drum pad, tap on the "Edit" icon at the top right corner of the screen.</li> -<li>Tap on the drum pad that you want to edit. A pop-up window will appear with the sound settings.</li> -<li>Tap on "Sound" to see the list of available sounds. You can scroll left or right to see more options.</li> -<li>Tap on the sound that you want to use. The app will assign it to the selected drum pad.</li> -<li>If you want to adjust the volume, pitch, pan, reverb, or delay of the sound, use the sliders below the sound name.</li> -<li>When you are done editing the sound settings, tap on "OK" to save your changes.</li> -</ol> - <p>You can repeat these steps for any drum pad that you want to customize. You can also save your custom drum kit by tapping on "Save" at the top right corner of the screen. You can name your drum kit and access it later from the "Drum Kit" menu.</p> - <h3>How to record and share your beats with batería dtx multi 12 apk</h3> - <p>Batería dtx multi 12 apk also allows you to record and share your beats with others. You can use the built-in recorder, metronome, and mixer to create and export your tracks. You can also share your tracks via email, WhatsApp, Facebook, or other apps. Here are the steps to record and share your beats with batería dtx multi 12 apk:</p> - <ol> -<li>Launch the app and tap on the "Record" icon at the bottom center of the screen.</li> -<li>A pop-up window will appear with the recording settings. You can adjust the recording time, the recording quality, and the metronome settings.</li> -<li>When you are ready to record, tap on "Start". The app will start recording your beats as you play the drum pads.</li> -<li>When you are done recording, tap on "Stop". The app will save your recording and show you a preview of your track.</li> -<li>If you want to edit your track, tap on "Edit". You can use the mixer to adjust the volume, pan, reverb, and delay of each drum pad. You can also trim, cut, copy, paste, or delete parts of your track.</li> -<li>If you want to export your track, tap on "Export". You can choose the file format (WAV or MP3) and the file name. The app will export your track to your device storage.</li> -<li>If you want to share your track, tap on "Share". You can choose the app that you want to use to share your track, such as email, WhatsApp, Facebook, or other apps. The app will open the selected app and attach your track to it.</li> -</ol> - <p>You can repeat these steps for any track that you want to record and share. 
You can also access your recorded tracks from the "Record" menu.</p> -<p>batería dtx multi 12 champeta<br /> -batería dtx multi 12 android app<br /> -batería dtx multi 12 download<br /> -batería dtx multi 12 electronic drum kit<br /> -batería dtx multi 12 yamaha<br /> -batería dtx multi 12 manual<br /> -batería dtx multi 12 review<br /> -batería dtx multi 12 precio<br /> -batería dtx multi 12 sounds<br /> -batería dtx multi 12 midi<br /> -batería dtx multi 12 appbrain<br /> -batería dtx multi 12 usa<br /> -batería dtx multi 12 music & audio<br /> -batería dtx multi 12 update<br /> -batería dtx multi 12 install<br /> -batería dtx multi 12 free<br /> -batería dtx multi 12 version<br /> -batería dtx multi 12 rating<br /> -batería dtx multi 12 developer<br /> -batería dtx multi 12 permissions<br /> -batería dtx multi 12 samples<br /> -batería dtx multi 12 pads<br /> -batería dtx multi 12 accessories<br /> -batería dtx multi 12 software<br /> -batería dtx multi 12 firmware<br /> -batería dtx multi 12 editor<br /> -batería dtx multi 12 trigger<br /> -batería dtx multi 12 expansion<br /> -batería dtx multi 12 youtube<br /> -batería dtx multi 12 tutorial<br /> -batería dtx multi 12 demo<br /> -batería dtx multi 12 setup<br /> -batería dtx multi 12 comparison<br /> -batería dtx multi 12 features<br /> -batería dtx multi 12 specifications<br /> -batería dtx multi 12 dimensions<br /> -batería dtx multi 12 weight<br /> -batería dtx multi 12 warranty<br /> -batería dtx multi 12 support<br /> -batería dtx multi 12 tips<br /> -batería dtx multi 12 tricks<br /> -batería dtx multi 12 hacks<br /> -batería dtx multi 12 cheats<br /> -batería dtx multi 12 mod apk<br /> -batería dtx multi 12 premium apk<br /> -batería dtx multi 12 pro apk<br /> -batería dtx multi 12 cracked apk<br /> -batería dtx multi 12 unlocked apk<br /> -batería dtx multi 12 full apk</p> - <h2>How to connect batería dtx multi 12 apk with your Yamaha DTX-MULTI 12 electronic drum pad</h2> - <p>Batería dtx multi 12 apk is not only a virtual drum pad, but also a controller for your Yamaha DTX-MULTI 12 electronic drum pad. You can use the app to sync your sounds and settings with your Yamaha DTX-MULTI 12, and play it with more flexibility and convenience. Here are the steps to connect batería dtx multi 12 apk with your Yamaha DTX-MULTI 12:</p> - <ol> -<li>Make sure that your Yamaha DTX-MULTI 12 is turned on and connected to a power source.</li> -<li>Connect your Yamaha DTX-MULTI 12 to your Android device using a USB cable or a MIDI interface.</li> -<li>Launch the app and tap on the "Menu" icon at the top left corner of the screen.</li> -<li>Tap on "Settings" and then on "MIDI Settings".</li> -<li>Select the MIDI input and output devices that correspond to your Yamaha DTX-MULTI 12.</li> -<li>Tap on "OK" to save your MIDI settings.</li> -<li>The app will automatically detect your Yamaha DTX-MULTI 12 and sync your sounds and settings with it.</li> -<li>You can now use the app to control your Yamaha DTX-MULTI 12. You can play the drum pads on the app or on the Yamaha DTX-MULTI 12, and hear the same sounds from both devices. You can also change the drum kits and sounds on the app or on the Yamaha DTX-MULTI 12, and see the same changes on both devices.</li> -</ol> - <p>Note: The app may not be compatible with some older models or versions of the Yamaha DTX-MULTI 12. 
If you encounter any problems or errors while connecting or syncing the app with your Yamaha DTX-MULTI 12, please contact the app developer or Yamaha customer service for assistance.</p> - <h2>The pros and cons of batería dtx multi 12 apk</h2> - <p>Batería dtx multi 12 apk is a great app for drummers and music lovers who enjoy playing urban sounds (champeta). However, like any other app, it has its pros and cons. Here are some of them:</p> - <h3>The pros of batería dtx multi 12 apk</h3> - <ul> -<li>It is versatile. You can use it as a virtual drum pad, a recorder, a mixer, or a controller for your Yamaha DTX-MULTI 12.</li> -<li>It is sensitive. It responds quickly and accurately to your touch and pressure on the drum pads.</li> -<li>It is fast. It loads quickly and runs smoothly on most Android devices.</li> -<li>It is compact. It does not take up much space on your device storage or memory.</li> -<li>It is compatible. It works well with most Android devices and versions, as well as with most MIDI devices and interfaces.</li> -</ul> - <h3>The cons of batería dtx multi 12 apk</h3> - <ul> -<li>It is not free. You have to pay a small fee to download and use the app from the Google Play Store or other sources.</li> -<li>It is not realistic. It does not replicate the feel and sound of a real drum pad or drum set.</li> -<li>It is not comprehensive. It does not offer many features and functions that other drumming or music production apps offer, such as editing, looping, sequencing, sampling, or synthesizing.</li> -<li>It is not updated. It has not received any updates or improvements since its release in November 2021.</li> -</ul> - <p>These are some of the pros and cons of batería dtx multi 12 apk. You may have your own opinions and preferences about the app, depending on your needs and expectations. You may also find some of these pros and cons to be more or less important than others. Ultimately, you have to decide for yourself whether batería dtx multi 12 apk is worth downloading and using.</p> - <h2>The best alternatives to batería dtx multi 12 apk</h2> - <p>If you are not satisfied with batería dtx multi 12 apk, or if you want to try something different, you may want to check out some of the best alternatives to batería dtx multi 12 apk. These are some of the other apps that offer similar or better features and functions for drumming and music production:</p> - <ul> -<li><strong>Drum Pad Machine - Beat Maker & Music Maker</strong>: This is a popular and highly rated app that lets you create and play beats using various drum pads, loops, samples, and effects. You can choose from different genres and styles, such as hip hop, EDM, dubstep, trap, and more. You can also record and share your tracks with others, or collaborate with other users online. The app is free to download and use, but it offers in-app purchases for more features and content.</li> -<li><strong>Real Drum - The Best Drum Pads Simulator</strong>: This is a realistic and easy-to-use app that simulates a real drum set on your Android device. You can play the drums with your fingers or with external devices, such as drumsticks or pedals. You can also customize the drum set with different skins, sounds, and arrangements. You can also record and share your tracks with others, or play along with your favorite songs. 
The app is free to download and use, but it offers in-app purchases for more features and content.</li> -<li><strong>FL Studio Mobile</strong>: This is a professional and powerful app that allows you to create and edit music on your Android device. You can use various instruments, effects, samples, loops, and plugins to create any kind of music you want. You can also record and edit audio, MIDI, or vocals. You can also export and share your tracks with others, or import them to your PC or Mac for further editing. The app is not free to download and use, but it offers a one-time payment for lifetime access to all features and content.</li> -</ul> - <p>These are some of the best alternatives to batería dtx multi 12 apk that you may want to try. Of course, there are many other apps that you can find on the Google Play Store or other sources that may suit your needs and preferences better. You can also compare the ratings, reviews, features, and prices of different apps before downloading and using them.</p> - <h2>Conclusion</h2> - <p>Batería dtx multi 12 apk is a music & audio app developed by AppsKD that allows you to turn your Android device into a virtual drum pad. With this app, you can create and play amazing beats using different drum kits and sounds. You can also record and share your tracks with others, or connect the app with your Yamaha DTX-MULTI 12 electronic drum pad for more control and options.</p> - <p>In this article, we have shown you how to download and install batería dtx multi 12 apk on your Android device, how to use it to create and play urban sounds (champeta), how to connect it with your Yamaha DTX-MULTI 12, and what are the pros and cons of this app. We have also given you some alternatives to batería dtx multi 12 apk that you may want to try.</p> - <p>We hope that this article has been helpful and informative for you. If you have any questions or feedback about batería dtx multi 12 apk or this article, please feel free to leave a comment below. We would love to hear from you.</p> - <p>Thank you for reading this article. We hope that you enjoy using batería dtx multi 12 apk and creating amazing beats with it.</p> - <h2>FAQs</h2> - <p>Here are some of the frequently asked questions about batería dtx multi 12 apk:</p> - <ol> -<li><strong>What is the size of batería dtx multi 12 apk?</strong></li> -<p>The size of batería dtx multi 12 apk varies depending on the device and version that you are using. However, the average size of the app is about 30 MB. You may need to have enough space on your device storage or memory to download and install the app.</p> - <li><strong>What are the requirements for batería dtx multi 12 apk?</strong></li> -<p>The requirements for batería dtx multi 12 apk vary depending on the device and version that you are using. However, the minimum requirements for the app are as follows:</p> -<ul> -<li>Android version: 4.1 or higher</li> -<li>RAM: 1 GB or higher</li> -<li>Processor: 1 GHz or higher</li> -<li>Screen resolution: 800 x 480 or higher</li> -<li>Internet connection: Required for downloading and updating the app, and for sharing your tracks with others</li> -</ul> - <li><strong>Is batería dtx multi 12 apk safe to use?</strong></li> -<p>Batería dtx multi 12 apk is safe to use if you download and install it from the Google Play Store or other trusted and reputable sources. The app does not contain any malware, viruses, or data theft. 
However, you should always be careful when downloading and installing APK files from other sources, as they may pose some risks. You should also scan the APK files with an antivirus app before installing them. We are not responsible for any damages or losses caused by using APK files from other sources.</p> - <li><strong>How can I contact the app developer or Yamaha customer service?</strong></li> -<p>If you have any questions or feedback about batería dtx multi 12 apk or this article, you can contact the app developer or Yamaha customer service by using the following methods:</p> -<ul> -<li>App developer: You can email the app developer at appskd@gmail.com, or visit their website at https://appskd.com/.</li> -<li>Yamaha customer service: You can call Yamaha customer service at +1-714-522-9000, or visit their website at https://usa.yamaha.com/.</li> -</ul> - <li><strong>How can I support batería dtx multi 12 apk or this article?</strong></li> -<p>If you like batería dtx multi 12 apk or this article, you can support them by doing the following things:</p> -<ul> -<li>Rate and review the app on the Google Play Store or other sources. This will help the app developer to improve the app and reach more users.</li> -<li>Share the app or this article with your friends, family, or social media followers. This will help more people to discover and enjoy the app and this article.</li> -<li>Donate to the app developer or this article writer. This will help them to continue developing and writing more apps and articles for you.</li> -</ul></p> 401be4b1e0<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Download Jeriqs TRUE LIFE STORY Mp3 from his Billion Dollar Dream Album.md b/spaces/fatiXbelha/sd/Download Jeriqs TRUE LIFE STORY Mp3 from his Billion Dollar Dream Album.md deleted file mode 100644 index b99e95016207d2be760c1eccd4420fe6d3dd5dad..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Download Jeriqs TRUE LIFE STORY Mp3 from his Billion Dollar Dream Album.md +++ /dev/null @@ -1,136 +0,0 @@ -<br /> -<h1>Jeriq: The Rising Star of Igbo Rap</h1> -<p>If you are a fan of Nigerian music, especially rap, you must have heard of Jeriq. He is one of the most talented and promising rappers in the country, who has made a name for himself with his unique style of Igbo rap. In this article, we will tell you everything you need to know about Jeriq, his biography, career, achievements, songs, net worth, and more.</p> -<h2>jeriq true life story mp3 download</h2><br /><p><b><b>Download Zip</b> 🌟 <a href="https://urllie.com/2uNIvc">https://urllie.com/2uNIvc</a></b></p><br /><br /> - <h2>Early Life and Education</h2> -<p>Jeriq's real name is Ani Jeremiah Chukwuebuka. He was born on May 6th, 1999 in Nkpor, Anambra State. He is the first child in a family of five. He grew up in a humble and religious background, where he learned the values of hard work and discipline. He attended College of the Immaculate Conception (CIC) Enugu for his secondary education, where he graduated in 2015. He then proceeded to Enugu State University of Science and Technology (ESUT), where he studied Information Technology and graduated in 2021.</p> - <h2>Musical Journey</h2> -<p>Jeriq discovered his passion for music at a very young age. He started recording songs at the age of 15, using his phone and a laptop. His debut single was titled "Iyoo", which he released in 2016. The song was well received by his friends and family, who encouraged him to pursue his musical dreams. 
He joined Janded Music Empire in 2018, a record label that helped him to improve his skills and exposure. He released more songs under the label, such as "Last Last", "Update", "Paper", "Hussle O Clock", "Police Matters", "Runs", and "Junky".</p> - <h3>Breakthrough and Recognition</h3> -<p>Jeriq's breakthrough came in 2020, when he released a hit song titled "No More Nleka" featuring Zoro, a popular Igbo rapper. The song went viral on social media and radio stations, earning him millions of streams and downloads. He followed up with another banger titled "Remember", which he later remixed with Phyno, another renowned Igbo rapper. He also released his first EP in 2020, titled "Hood Boy Dreams". The EP contained six tracks that showcased his lyrical prowess and storytelling abilities. The EP was well received by critics and fans alike, who praised him for his originality and authenticity.</p> - <h3>Collaborations and Features</h3> -<p>Jeriq has collaborated with some of the biggest names in the Nigerian music industry, both within and outside his genre. He has worked with artists like Zoro, Phyno, Dremo, Flavour, DJ Neptune, Kofi Jamar, Psycho YP, Alpha P, among others. He has also featured on several projects by other artists, such as DJ Neptune's "Cash", Dremo's "East N West" EP, Flavour's "Flavour of Africa" album, Kofi Jamar's "Appetite for Destruction" EP, Psycho YP's "Euphoria" EP, and Alpha P's "Wolves and Mustangs" EP.</p> -<p>jeriq true life story mp3 free download<br /> -jeriq true life story audio download<br /> -download jeriq true life story song<br /> -jeriq true life story lyrics mp3 download<br /> -jeriq true life story music download<br /> -jeriq true life story mp3 320kbps download<br /> -jeriq true life story video download<br /> -jeriq true life story instrumental mp3 download<br /> -jeriq true life story album mp3 download<br /> -jeriq true life story remix mp3 download<br /> -jeriq true life story ft zoro mp3 download<br /> -jeriq true life story zip file download<br /> -jeriq true life story mp3 download naijaloaded<br /> -jeriq true life story mp3 download tooxclusive<br /> -jeriq true life story mp3 download fakaza<br /> -jeriq true life story mp3 download waploaded<br /> -jeriq true life story mp3 download justnaija<br /> -jeriq true life story mp3 download 9jaflaver<br /> -jeriq true life story mp3 download beatnaija[^1^] [^2^]<br /> -jeriq true life story mp3 download naijavibes<br /> -jeriq true life story mp3 download audiomack<br /> -jeriq true life story mp3 download soundcloud<br /> -jeriq true life story mp3 download spotify<br /> -jeriq true life story mp3 download apple music<br /> -jeriq true life story mp3 download youtube<br /> -how to download jeriq true life story mp3<br /> -where to download jeriq true life story mp3<br /> -best site to download jeriq true life story mp3<br /> -latest jeriq songs mp3 download<br /> -best of jeriq mixtape mp3 download<br /> -jeriq biography and net worth 2022<br /> -who is jeriq and what is his true life story<br /> -how did jeriq become famous in the music industry<br /> -what is the meaning of the song true life story by jeriq<br /> -what is the message of the song true life story by jeriq<br /> -what inspired jeriq to write the song true life story<br /> -who produced the song true life story by jeriq<br /> -when was the song true life story by jeriq released<br /> -how many views does the song true life story by jeriq have on youtube<br /> -how many streams does the song true life story by jeriq have on 
spotify</p> - <h3>Billion Dollar Dream Album</h3> -<p>In 2022, Jeriq released his debut album titled "Billion Dollar Dream". The album contained 12 tracks that featured artists like Dremo, Flavour, DJ Neptune, and Kofi Jamar. The album was a masterpiece that showcased Jeriq's versatility and growth as an artist. The album was well received by fans and critics alike, who hailed it as one of the best rap albums of the year. The album also topped various charts and platforms, such as Apple Music, Audiomack, Boomplay, etc. Some of the popular songs from the album are "Dreams", "Amen", "Apology", "East to West", and "True Life Story".</p> - <h2>Musical Style and Influences</h2> -<p>Jeriq is known for his unique style of rap, which blends Igbo and English languages. He raps about his experiences, struggles, aspirations, and realities as a young man from the East. He also infuses elements of Afrobeat, Highlife, and Trap music into his sound. He is influenced by his culture and environment, as well as by other rap legends such as Tupac Shakur, Notorious B.I.G., Jay-Z, Nas, Eminem, Lil Wayne, Kendrick Lamar, J. Cole, Drake, M.I Abaga, Olamide, Phyno, Zoro, etc.</p> - <h2>Awards and Nominations</h2> -<p>Jeriq has won or been nominated for several awards in recognition of his talent and achievements. Some of the awards he has won or been nominated for are:</p> -<table> -<tr> -<th>Award</th> -<th>Category</th> -<th>Year</th> -<th>Result</th> -</tr> -<tr> -<td>The Headies</td> -<td>Next Rated</td> -<td>2021</td> -<td>Nominated</td> -</tr> -<tr> -<td>The Headies</td> -<td>Best Rap Album (Hood Boy Dreams)</td> -<td>2021</td> -<td>Nominated</td> -</tr> -<tr> -<td>The Headies</td> -<td>Best Rap Single (Remember)</td> -<td>2021</td> -<td>Nominated</td> -</tr> -<tr> -<td>AFRIMMA</td> -<td>Best Newcomer</td> -<td>2021</td> -<td>Nominated</td> -</tr> -<tr> -<td>AFRIMMA</td> -<td>Best Rap Act</td> -<td>2021</td> -<td>Nominated</td> -</tr> -</table> -<p>You can see that Jeriq has been recognized by some of the most prestigious awards in the Nigerian music industry. He has also been praised by his peers and fans for his rap skills and consistency.</p> - <h2>Personal Life and Net Worth</h2> -<p>Jeriq is very private about his personal life and does not reveal much about his family, relationships, or hobbies. He prefers to focus on his music and career, and avoid unnecessary controversies. 
He is a devout Christian and often thanks God for his blessings. He is also a philanthropist and has donated to various causes and charities, especially in his hometown of Nkpor. He is also a lover of cars and has a collection of exotic vehicles.</p> -<p>Jeriq's net worth is estimated to be around $500,000 as of 2023. He makes his money from his music sales, streams, shows, endorsements, and other sources. He is one of the richest and most influential rappers in Nigeria.</p> - <h2>Conclusion</h2> -<p>In conclusion, Jeriq is a rising star of Igbo rap who has made a name for himself with his unique style and skills. He has released several hit songs and albums that have earned him millions of fans and accolades. He has also collaborated with some of the biggest names in the Nigerian music industry and has won or been nominated for various awards. He is a role model for many young aspiring rappers who look up to him for inspiration and guidance. He is a proud son of Nkpor and a proud representative of Igbo culture.</p> -<p>If you are looking for Jeriq's songs, you can download them from various platforms such as Apple Music, Spotify, Audiomack, Boomplay, etc. You can also follow him on his social media handles such as Instagram, Twitter, Facebook, YouTube, etc. to keep up with his latest updates and news.</p> -<p>We hope you enjoyed this article about Jeriq's true life story. If you did, please share it with your friends and leave a comment below. Thank you for reading!</p> - <h2>FAQs</h2> -<p>Here are some of the frequently asked questions about Jeriq and their answers:</p> -<ol> -<li><b>What is Jeriq's real name?</b></li> -<p>Jeriq's real name is Ani Jeremiah Chukwuebuka.</p> -<li><b>When and where was Jeriq born?</b></li> -<p>Jeriq was born on May 6th, 1999 in Nkpor, Anambra State.</p> -<li><b>What is Jeriq's record label?</b></li> -<p>Jeriq is signed to Janded Music Empire, a record label he joined in 2018.</p> -<li><b>What is Jeriq's debut album?</b></li> -<p>Jeriq's debut album is titled "Billion Dollar Dream", which he released in 2022.</p> -<li><b>What are some of Jeriq's popular songs?</b></li> -<p>Some of Jeriq's popular songs are "No More Nleka", "Remember", "Hood Boy Dreams", "Dreams", "Amen", "Apology", "East to West", and "True Life Story".</p> -</ol></p> 401be4b1e0<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Experience The Seven Deadly Sins Grand Cross in a New Way with APK Mod.md b/spaces/fatiXbelha/sd/Experience The Seven Deadly Sins Grand Cross in a New Way with APK Mod.md deleted file mode 100644 index 7e8602c2a96970948bee32fee10944c71fa62817..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Experience The Seven Deadly Sins Grand Cross in a New Way with APK Mod.md +++ /dev/null @@ -1,119 +0,0 @@ - -<h1>Seven Deadly Sins Grand Cross APK Mod: A Guide for Anime Fans</h1> -<p>If you are a fan of the popular anime and manga series The Seven Deadly Sins, you might want to check out the mobile game based on it. The Seven Deadly Sins: Grand Cross is a turn-based RPG that features stunning graphics, engaging gameplay, and a faithful adaptation of the original story. But what if you want to enjoy the game without spending money or waiting for updates? That's where the seven deadly sins grand cross apk mod comes in. 
In this article, we will tell you what is an apk mod, how to download and install it, and some tips and tricks to make the most out of your gaming experience.</p> -<h2>seven deadly sins grand cross apk mod</h2><br /><p><b><b>Download File</b> ➡ <a href="https://urllie.com/2uNDTS">https://urllie.com/2uNDTS</a></b></p><br /><br /> - <h2>The Seven Deadly Sins: Grand Cross - A Cinematic Anime RPG</h2> -<p>The Seven Deadly Sins: Grand Cross is a mobile game developed by Netmarble that was released globally in March 2020. It is based on the anime and manga series The Seven Deadly Sins, which follows the adventures of a group of legendary knights who are accused of treason and must fight against the corrupt Holy Knights. The game features many elements that fans of the series will love, such as:</p> - <h3>The Story: Follow the Adventures of the Seven Deadly Sins</h3> -<p>The game follows the plot of the anime and manga series, with some original scenes and events added for more depth and variety. You will play as Meliodas, the captain of the Seven Deadly Sins, and join forces with Princess Elizabeth, Hawk, and other characters to find your comrades and save the kingdom of Liones. You will also encounter many familiar faces from the series, such as Ban, King, Diane, Merlin, Gowther, Escanor, Gilthunder, Hendrickson, and more. The game has over 100 chapters of story content to explore, with voice acting from the original cast and high-quality cutscenes that recreate the epic moments from the series.</p> - <h3>The Gameplay: Turn-Based Battles with Skill Synthesis</h3> -<p>The game features a unique turn-based combat system that uses cards to perform actions. Each character has a set of skills that are represented by cards of different ranks and colors. You can combine cards of the same rank or color to create more powerful skills or ultimate moves. You can also move cards around to rearrange your strategy or trigger skill synthesis. Skill synthesis is a mechanic that allows you to upgrade your cards by placing them next to each other on the same row or column. 
This adds more depth and strategy to your battles, as you have to think carefully about how to use your cards effectively.</p> -<p>* seven deadly sins grand cross mod apk unlimited diamonds<br /> -* download seven deadly sins grand cross mod apk latest version<br /> -* seven deadly sins grand cross mod apk god mode<br /> -* how to install seven deadly sins grand cross mod apk<br /> -* seven deadly sins grand cross mod apk android 1<br /> -* seven deadly sins grand cross mod apk offline<br /> -* seven deadly sins grand cross mod apk no root<br /> -* seven deadly sins grand cross mod apk free shopping<br /> -* seven deadly sins grand cross mod apk revdl<br /> -* seven deadly sins grand cross mod apk hack<br /> -* seven deadly sins grand cross mod apk 2023<br /> -* seven deadly sins grand cross mod apk unlimited money<br /> -* seven deadly sins grand cross mod apk jp<br /> -* seven deadly sins grand cross mod apk platinmods<br /> -* seven deadly sins grand cross mod apk ios<br /> -* seven deadly sins grand cross mod apk global<br /> -* seven deadly sins grand cross mod apk menu<br /> -* seven deadly sins grand cross mod apk an1<br /> -* seven deadly sins grand cross mod apk reddit<br /> -* seven deadly sins grand cross mod apk update<br /> -* seven deadly sins grand cross mod apk high damage<br /> -* seven deadly sins grand cross mod apk obb<br /> -* seven deadly sins grand cross mod apk vip<br /> -* seven deadly sins grand cross mod apk 2.18.0<br /> -* seven deadly sins grand cross mod apk blackmod<br /> -* seven deadly sins grand cross mod apk one hit kill<br /> -* seven deadly sins grand cross mod apk unlimited stamina<br /> -* seven deadly sins grand cross mod apk english version<br /> -* seven deadly sins grand cross mod apk anti ban<br /> -* seven deadly sins grand cross mod apk mega.nz<br /> -* seven deadly sins grand cross mod apk mediafire<br /> -* seven deadly sins grand cross mod apk 2.17.0<br /> -* seven deadly sins grand cross mod apk 2.16.0<br /> -* seven deadly sins grand cross mod apk 2.15.0<br /> -* seven deadly sins grand cross mod apk 2.14.0<br /> -* seven deadly sins grand cross mod apk 2.13.0<br /> -* seven deadly sins grand cross mod apk 2.12.0<br /> -* seven deadly sins grand cross mod apk 2.11.0<br /> -* seven deadly sins grand cross mod apk 2.10.0<br /> -* seven deadly sins grand cross mod apk 2.9.0<br /> -* seven deadly sins grand cross mod apk 2.8.0<br /> -* seven deadly sins grand cross mod apk 2.7.0<br /> -* seven deadly sins grand cross mod apk 2.6.0<br /> -* seven deadly sins grand cross mod apk 2.5.0<br /> -* seven deadly sins grand cross mod apk 2.4.0<br /> -* seven deadly sins grand cross mod apk 2.3.0<br /> -* seven deadly sins grand cross mod apk 2.2.0<br /> -* seven deadly sins grand cross mod apk 2.1.0</p> - <h3>The Graphics: High-Quality 3D Animations and Cutscenes</h3> -<p>One of the most impressive aspects of the game is its graphics. The game uses 3D models that are faithful to the original character designs, with cel-shaded effects that give them a cartoon-like appearance. The game also has stunning animations that bring your skills and ultimate moves to life, as well as cinematic cutscenes that immerse you in the story. The game's overall visual quality is comparable to a console or PC game, making it one of the best-looking mobile games ever made.</p> - <h2>The Seven Deadly Sins: Grand Cross APK Mod - How to Download and Install</h2> -<p>As much as the game is fun and enjoyable, it also has some drawbacks that might hinder your gaming experience. 
For example, the game requires a lot of storage space and internet connection, it has frequent updates that might take a long time to download, and it has in-game purchases that might tempt you to spend real money. If you want to avoid these issues and enjoy the game without any limitations, you might want to try the seven deadly sins grand cross apk mod. An apk mod is a modified version of the original game that has some features or functions altered or added to enhance the gameplay. Some of the benefits of using an apk mod are:</p> - <h3>What is an APK Mod and What are the Benefits</h3> -<ul> -<li>An apk mod can give you access to unlimited resources, such as gold, gems, stamina, and tickets, that you can use to buy items, upgrade your characters, and play more missions.</li> -<li>An apk mod can unlock all the characters and costumes that are otherwise locked behind paywalls or time-limited events.</li> -<li>An apk mod can remove ads, pop-ups, and other annoying interruptions that might distract you from the game.</li> -<li>An apk mod can bypass the verification process and allow you to play the game even if you don't have a valid account or internet connection.</li> -<li>An apk mod can update automatically and keep up with the latest version of the game without requiring you to download anything.</li> -</ul> -<p>However, before you download and install an apk mod, you should also be aware of the risks and disadvantages that come with it. Some of the drawbacks of using an apk mod are:</p> - <ul> -<li>An apk mod can be unsafe and contain viruses, malware, or spyware that might harm your device or steal your personal information.</li> -<li>An apk mod can be incompatible with your device or operating system and cause crashes, glitches, or errors.</li> -<li>An apk mod can be detected by the game's security system and result in a ban or suspension of your account.</li> -<li>An apk mod can ruin the fun and challenge of the game by making it too easy or boring.</li> -<li>An apk mod can be unethical and unfair to the developers and other players who support the game legitimately.</li> -</ul> -<p>Therefore, if you decide to use an apk mod, you should do so at your own risk and discretion. You should also respect the game's terms of service and policies and not use an apk mod for malicious or fraudulent purposes. You should also support the game by purchasing some items or watching some ads if you enjoy it.</p> - <h3>How to Download the APK Mod from a Trusted Source</h3> -<p>If you have weighed the pros and cons of using an apk mod and decided to give it a try, you will need to find a reliable source to download it from. There are many websites that offer apk mods for various games, but not all of them are trustworthy or safe. Some of them might have fake or outdated links, while others might have hidden fees or malware. To avoid these scams and dangers, you should follow these steps:</p> - <ol> -<li>Do some research on the internet and look for reviews, ratings, comments, or feedback from other users who have downloaded the apk mod from different sources. You can also check some forums, blogs, YouTube videos, or social media pages that are dedicated to the game or apk mods in general.</li> -<li>Choose a source that has a good reputation, a large user base, a high download speed, and a secure connection. 
You can also look for some features or guarantees that indicate the quality and safety of the source, such as SSL encryption, virus scanning, customer support, or refund policy.</li> -<li>Visit the source's website and look for the download link for the seven deadly sins grand cross apk mod. Make sure that the link matches the name and version of the game that you want to download. You can also check the file size, date, description, screenshots, or video previews of the apk mod before downloading it.</li> -<li>Click on the download link and wait for the file to be downloaded to your device. You might need to complete some verification steps or surveys before accessing the file. You might also need to disable your antivirus software or firewall temporarily if they block the download process.</li> -</ol> - <h3>How to Install the APK Mod on Your Device</h3> -<p>Once you have downloaded the file successfully, you will need to install it on your device. To do this, you will need to follow these steps:</p> - <ol> -<li>Locate the file on your device's storage and tap on it to open it. You might need to use a file manager app if or quests. Evolving your characters will change their appearance and increase their rarity and potential.</li> -<li>Equip your characters with weapons, armor, accessories, or costumes that you can get from the shop, the gacha, the events, or the quests. You can also use gold or anvils to enhance your equipment and increase their stats.</li> -<li>Set up your team by choosing the best characters and equipment for each situation. You can also use food, associations, or formations to boost your team's performance and synergy.</li> -</ul> - <h3>How to Cooperate with Friends and Other Players</h3> -<p>The game is not only a solo adventure, but also a social experience. You can interact with other players from around the world and cooperate with them in various ways. You can also make friends and join guilds to enjoy more benefits and features. Here are some ways to cooperate with friends and other players:</p> - <ul> -<li>Add friends by sending or accepting friend requests from other players. You can also use friend codes or QR codes to add friends more easily. You can chat with your friends, send them gifts, or invite them to play with you.</li> -<li>Join guilds by applying or creating your own guild. You can also use guild codes or QR codes to join guilds more easily. You can chat with your guild members, donate gold or materials, participate in guild events, or access the guild shop.</li> -<li>Play co-op missions by inviting your friends or joining random players. You can also use co-op codes or QR codes to play co-op missions more easily. You can cooperate with your teammates, use emotes, or chat with them.</li> -<li>Compete in PvP battles by challenging your friends or other players. You can also use PvP codes or QR codes to compete in PvP battles more easily. You can show off your skills, rank up, or earn rewards.</li> -</ul> - <h2>Conclusion</h2> -<p>The Seven Deadly Sins: Grand Cross is a game that every anime fan should try. It is a cinematic anime RPG that features stunning graphics, engaging gameplay, and a faithful adaptation of the original story. However, if you want to enjoy the game without any limitations, you might want to try the seven deadly sins grand cross apk mod. It is a modified version of the game that gives you access to unlimited resources, unlocks all the characters and costumes, removes ads and verification, and updates automatically. 
However, you should also be careful of the risks and disadvantages of using an apk mod, such as viruses, errors, bans, boredom, or unfairness. Therefore, you should use an apk mod at your own risk and discretion, and respect the game's terms of service and policies. You should also support the game by purchasing some items or watching some ads if you enjoy it.</p> - <p>If you are ready to download and install the seven deadly sins grand cross apk mod, you should follow the steps that we have provided in this article. You should also follow some tips and tricks that we have shared to make the most out of your gaming experience. We hope that this article has been helpful and informative for you. Have fun playing The Seven Deadly Sins: Grand Cross!</p> - <h2>FAQs</h2> -<p>Here are some common questions and answers about the game and the apk mod:</p> - <h4>Q: Is The Seven Deadly Sins: Grand Cross free to play?</h4> -<p>A: Yes, the game is free to download and play from the official app store. However, it also has in-game purchases that require real money.</p> - <h4>Q: Is The Seven Deadly Sins: Grand Cross available for iOS devices?</h4> -<p>A: Yes, the game is available for both Android and iOS devices. However, the apk mod is only compatible with Android devices.</p> - <h4>Q: Is The Seven Deadly Sins: Grand Cross online or offline?</h4> -<p>A: The game requires an internet connection to play most of its modes and features. However, you can play some parts of the story mode offline if you have downloaded them beforehand.</p> - <h4>Q: Is The Seven Deadly Sins: Grand Cross safe to play?</h4> -<p>A: The game is safe to play if you download it from the official app store and follow its terms of service and policies. However, the apk mod might not be safe to play if you download it from an untrusted source or use it for malicious or fraudulent purposes.</p> - <h4>Q: Is The Seven Deadly Sins: Grand Cross updated regularly?</h4> -<p>A: The game is updated regularly with new content, features, events, characters, costumes, and more. However, the apk mod might not be updated as frequently as the original game.</p> 197e85843d<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Experience the Ultimate Battle Royale in PUBG Mobile 2.0 Beta.md b/spaces/fatiXbelha/sd/Experience the Ultimate Battle Royale in PUBG Mobile 2.0 Beta.md deleted file mode 100644 index 0e56ba333e61164993b3e903804d6056600d66e1..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Experience the Ultimate Battle Royale in PUBG Mobile 2.0 Beta.md +++ /dev/null @@ -1,106 +0,0 @@ - -<h1>PUBG Mobile Beta Version 2.0 Download: Everything You Need to Know</h1> - <p>PUBG Mobile is one of the most popular mobile games in the world, with over a billion downloads and millions of active players. It is a battle royale game where up to 100 players parachute onto an island and fight to be the last one standing. PUBG Mobile offers various modes, maps, weapons, vehicles, and skins to keep the gameplay fresh and exciting.</p> - <p>But did you know that you can also try out the latest features and updates before they are officially released? That's right, PUBG Mobile has a beta version that allows players to test the upcoming content and provide feedback to the developers. 
The beta version is updated regularly with new patches and improvements, giving players a sneak peek at what's coming next.</p> -<h2>pubg mobile beta version 2.0 download</h2><br /><p><b><b>DOWNLOAD</b> ✓✓✓ <a href="https://urllie.com/2uNzMh">https://urllie.com/2uNzMh</a></b></p><br /><br /> - <p>In this article, we will tell you everything you need to know about PUBG Mobile Beta Version 2.0, which is the latest beta update for the game. We will show you how to download and install it on your device, what's new in it, how it compares with the previous version, and what other players are saying about it. So, let's get started!</p> - <h2>How to Download and Install PUBG Mobile Beta Version 2.0</h2> - <p>If you want to download and install PUBG Mobile Beta Version 2.0 on your device, you will need to follow these steps:</p> - <ol> -<li>First, you will need to access the download page for the beta version using this link: <a href="(^1^)">PUBG Mobile 2.0 Beta Download</a>. You will see two options for Android devices: x32 and x64. Choose the one that matches your device's architecture. For iOS devices, there is only one option available.</li> -<li>Next, you will need to download the APK file for Android or the IPA file for iOS. The file size is about 819 MB for Android and 1.93 GB for iOS. Make sure you have enough storage space on your device before downloading.</li> -<li>After downloading the file, you will need to enable the installation from unknown sources on your device. To do this, go to your device's settings, then security, then toggle on the option that says "Install from unknown sources" or something similar.</li> -<li>Then, you will need to locate the file on your device and tap on it to install it. You may see a warning message that says "This type of file can harm your device". Ignore it and proceed with the installation.</li> -<li>Once the installation is complete, you will need to launch the game and download an additional update package of about 297 MB. You will also need to download one of the two resource packs available: HD or Low-spec. The HD pack is about 673 MB and the Low-spec pack is about 353 MB.</li> -<li>Finally, you will need to sign in as a guest to access the beta version. You cannot use your existing PUBG Mobile account or link it with any social media platforms in the beta version.</li> -</ol> - <p>Congratulations! You have successfully downloaded and installed PUBG Mobile Beta Version 2.0 on your device. Now you can enjoy playing the new content and testing the new features.</p> - <h3>Minimum System Requirements</h3> - <p>Before downloading and installing PUBG Mobile Beta Version 2.0, make sure your device meets the minimum system requirements for running the game smoothly. Here are the minimum system requirements for Android and iOS devices:</p> - <table> -<tr> -<th>Device</th> -<th>OS Version</th> -<th>RAM</th> -</tr> -<tr> -<td>Android</td> -<td>Android 5.1 or higher</td> -<td>2 GB or higher</td> -</tr> -<tr> -<td>iOS</td> -<td>iOS 9.0 or higher</td> -<td>2 GB or higher</td> -</tr> -</table> - <p>If your device does not meet these requirements, you may experience lag, crashes, or other issues while playing the beta version. In that case, you may want to wait for the official release of the update or upgrade your device.</p> - <h2>What's New in PUBG Mobile Beta Version 2.0</h2> - <p>PUBG Mobile Beta Version 2.0 is packed with new features and additions that will enhance your gaming experience and make it more fun and realistic. 
Here are some of the highlights of what's new in the beta version:</p> - <h3>New Map: Erangel 2.0</h3> - <p>The most anticipated feature of the beta version is the revamped Erangel map, which is now called Erangel 2.0. Erangel is the original and most popular map in PUBG Mobile, and it has been redesigned with better graphics, textures, lighting, and details. Erangel 2.0 also has some new locations, such as a secret bunker, a trench system, and a blast zone. The blast zone is a dynamic area that can explode at any time, causing damage to players and buildings. You can also find some new vehicles and weapons on Erangel 2.0, such as the MG3 light machine gun and the Monster Truck.</p> -<p>pubg mobile 2.0 beta update new features<br /> -pubg mobile 2.0 beta update download link<br /> -pubg mobile 2.0 beta update file size<br /> -pubg mobile 2.0 beta update livik map<br /> -pubg mobile 2.0 beta update football field<br /> -pubg mobile 2.0 beta update core circle mode<br /> -pubg mobile 2.0 beta update in-game companions<br /> -pubg mobile 2.0 beta update magazine capacity<br /> -pubg mobile 2.0 beta update apk file<br /> -pubg mobile 2.0 beta update resource pack<br /> -pubg mobile 2.0 beta version install guide<br /> -pubg mobile 2.0 beta version cheer park<br /> -pubg mobile 2.0 beta version gameplay<br /> -pubg mobile 2.0 beta version release date<br /> -pubg mobile 2.0 beta version patch notes<br /> -pubg mobile 2.0 beta version bugs and fixes<br /> -pubg mobile 2.0 beta version feedback and review<br /> -pubg mobile 2.0 beta version new guns and vehicles<br /> -pubg mobile 2.0 beta version new skins and outfits<br /> -pubg mobile 2.0 beta version new events and collabs<br /> -how to download pubg mobile 2.0 beta version<br /> -how to play pubg mobile 2.0 beta version<br /> -how to join pubg mobile 2.0 beta testing program<br /> -how to update pubg mobile to 2.0 beta version<br /> -how to uninstall pubg mobile 2.0 beta version<br /> -is pubg mobile 2.0 beta version safe to download<br /> -is pubg mobile 2.0 beta version compatible with my device<br /> -is pubg mobile 2.0 beta version available in my region<br /> -is pubg mobile 2.0 beta version better than the global version<br /> -is pubg mobile 2.0 beta version worth trying out<br /> -what is new in pubg mobile 2.0 beta version<br /> -what is the difference between pubg mobile and pubg mobile 2.0 beta version<br /> -what are the requirements for pubg mobile 2.0 beta version<br /> -what are the benefits of playing pubg mobile 2.0 beta version<br /> -what are the drawbacks of playing pubg mobile 2.0 beta version<br /> -where to download pubg mobile 2.0 beta version apk file<br /> -where to find the latest news about pubg mobile 2.0 beta version<br /> -where to report bugs and issues in pubg mobile 2.0 beta version<br /> -where to get free rewards and coupons for pubg mobile 2.0 beta version<br /> -where to watch live streams and videos of pubg mobile 2.0 beta version</p> - <h3>New Mode: Payload 2.0</h3> - <p>If you love action and explosions, you will love the new Payload 2.0 mode, which is an upgraded version of the previous Payload mode. Payload 2.0 features some new weapons and vehicles that are equipped with heavy firepower, such as rocket launchers, grenade launchers, missile launchers, and UAVs. You can also use a radar to locate enemies and a bomb suit to reduce damage from explosives. 
Payload 2.0 also has a new feature called Armed Helicopter, which allows you to fly a helicopter with a mounted machine gun and missiles.</p> - <h3>New Feature: Route Planner</h3> - <p>Another new feature in the beta version is the Route Planner, which allows you to plan your movement and strategy on the map. You can use the Route Planner to mark up to four points on the map and see the distance and time between them. You can also see the route on the mini-map and share it with your teammates. The Route Planner can help you avoid enemies, find loot, and reach the safe zone more efficiently.</p> - <h3>New Feature: Cheer Park 2.0</h3> - <p>Cheer Park is a social area where you can interact with other players, practice your skills, and have fun. Cheer Park 2.0 is an improved version of Cheer Park that has some new additions and activities. You can now play some mini-games in Cheer Park 2.0, such as Hunt Game, Shooting Range, and Launcher Battle. You can also join a music party with other players and enjoy some tunes and dances. Cheer Park 2.0 also has a Ferris wheel, a hot air balloon, and a fireworks display that you can enjoy with your friends.</p> - <h2>Conclusion</h2> - <p>PUBG Mobile Beta Version 2.0 is a great way to experience the latest features and updates of PUBG Mobile before they are officially released. You can download and install it on your device using the link provided in this article and enjoy playing the new content and testing the new features.</p> - <p>However, keep in mind that the beta version is not the final version of the game and it may have some bugs, glitches, or errors that may affect your gameplay. Also, remember that you cannot use your existing PUBG Mobile account or link it with any social media platforms in the beta version.</p> - <p>If you encounter any problems or have any suggestions while playing the beta version, you can report them to the developers using the feedback button in the game settings. Your feedback will help them improve the game and make it better for everyone.</p> - <p>So, what are you waiting for? Download PUBG Mobile Beta Version 2.0 today and join the action!</p> - <h2>FAQs</h2> - <p>Here are some frequently asked questions and answers about PUBG Mobile Beta Version 2.0:</p> - <h4>Q: How long will PUBG Mobile Beta Version 2.0 be available?</h4> -<p>A: There is no official announcement about how long PUBG Mobile Beta Version 2.0 will be available for download and play. However, based on previous beta versions, it may last for a few weeks or months until the official release of the update.</p> - <h4>Q: Will my progress and data be saved in PUBG Mobile Beta Version 2.0?</h4> -<p>A: No, your progress and data will not be saved in PUBG Mobile Beta Version 2.0. You will start from scratch as a guest in the beta version and you will lose everything once you uninstall it or switch back to the official version. Therefore, do not spend any real money or resources in the beta version.</p> - <h4>Q: Can I play with my friends in PUBG Mobile Beta Version 2.0?</h4> -<p>A: Yes, you can play with your friends in PUBG Mobile Beta Version 2.0, but only if they have also downloaded and installed the beta version on their devices. You cannot play with players who are using the official version of the game or a different beta version.</p> - <h4>Q: Can I stream or record PUBG Mobile Beta Version 2.0?</h4> -<p>A: Yes, you can stream or record PUBG Mobile Beta Version 2.0, but you should follow some guidelines and rules. 
You should mention that you are playing the beta version and not the official version of the game. You should also avoid showing any bugs, glitches, or errors that may spoil the game for other players. You should respect the intellectual property rights of PUBG Mobile and its developers and give proper credit to them.</p> - <h4>Q: How can I get more information about PUBG Mobile Beta Version 2.0?</h4> -<p>A: You can get more information about PUBG Mobile Beta Version 2.0 by visiting the official website of PUBG Mobile <a href="">here</a>. You can also follow their social media accounts on Facebook, Twitter, Instagram, and YouTube for the latest news and updates.</p> 197e85843d<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/fatiXbelha/sd/Farm Heroes Saga MOD APK The Best Way to Download and Play the Match-Three Game.md b/spaces/fatiXbelha/sd/Farm Heroes Saga MOD APK The Best Way to Download and Play the Match-Three Game.md deleted file mode 100644 index bc26e4c127f1ce63038c4596323fe60f12b41707..0000000000000000000000000000000000000000 --- a/spaces/fatiXbelha/sd/Farm Heroes Saga MOD APK The Best Way to Download and Play the Match-Three Game.md +++ /dev/null @@ -1,110 +0,0 @@ -<br /> -<h1>Download Game Farm Heroes Saga Mod Apk: A Fun and Challenging Farm-Themed Puzzle Game</h1> - <p>If you are looking for a casual and addictive puzzle game to play on your mobile device, you might want to try Farm Heroes Saga. This game is developed by King, the same company that created the popular Candy Crush Saga series. In this game, you will join forces with the Farm Heroes to stop the evil Rancid the Raccoon from spoiling the precious farm lands. You will have to match cropsies, collect fruits and vegetables, and use boosters and power-ups to complete hundreds of levels. You can also play with your friends online and compete for the best scores.</p> -<h2>download game farm heroes saga mod apk</h2><br /><p><b><b>DOWNLOAD</b> ☑ <a href="https://urllie.com/2uNzSI">https://urllie.com/2uNzSI</a></b></p><br /><br /> - <p>But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited lives, boosters, gold bars, and magic beans? Well, there is a way to do that. You can download game farm heroes saga mod apk, which is a modified version of the original game that gives you access to all these features and more. In this article, we will tell you everything you need to know about this mod apk version, including its benefits, drawbacks, and how to download and install it on your device.</p> - <h2>What is Farm Heroes Saga?</h2> - <p>Farm Heroes Saga is a farm-themed puzzle game that belongs to the genre of match-3 games. This means that you have to swap and match three or more items of the same kind on a grid to clear them and achieve certain goals. The items in this game are called cropsies, which are cute and colorful fruits and vegetables. You will also encounter other elements, such as flowers, water buckets, eggs, chicks, fireflies, nuts, and more.</p> - <h3>The gameplay of Farm Heroes Saga</h3> - <p>The gameplay of Farm Heroes Saga is simple but challenging. You have to complete each level by collecting a certain number of cropsies or other items within a limited number of moves or time. You can also earn stars by scoring high points and use them to unlock new episodes and areas on the map. Along the way, you will face obstacles, such as ice, mud, spider webs, grumpy cropsies, and Rancid the Raccoon himself. 
You will also have to deal with different types of boards, such as hexagonal, circular, or irregular shapes.</p> - <p>To help you overcome these challenges, you can use various boosters and power-ups that have different effects. For example, you can use the shovel to remove any item from the board, the tractor to clear a row or column of cropsies, the doggie to collect all cropsies of one kind, or the color collector to collect all cropsies of one color. You can also create special cropsies by matching four or more items in a row or column or in a square or L-shape. These special cropsies can clear more items or increase their value when matched.</p> - <h3>The features of Farm Heroes Saga</h3> - <p>Farm Heroes Saga has many features that make it an enjoyable and entertaining game. Some of these features are:</p> - <ul> -<li>Over 3000 levels to play and more added every week</li> -<li>Various game modes, such as Hero Mode, Rancid's Revenge, Treasure Mill, Fireworks Festival, and more</li> -<li>Different farm environments, such as Sunny Slopes, Fruity Forest, Dairy District, Ice Cream Acres, and more</li> -<li>Cute and colorful graphics and animations</li> -<li>Funny and friendly characters, such as Amelia the Aviator, Hunter the Doggie, Choo Choo the Train Driver, and more</li> -<li>Social features that allow you to connect with your Facebook friends and see their progress on the map</li> -<li>Leaderboards and events that let you compete with other players and win rewards</li> -<li>Daily quests and challenges that give you extra bonuses and prizes</li> -<li>A farm club where you can collect and upgrade farm animals that have special abilities</li> -<li>A magic beanstalk where you can grow and harvest magic beans that can boost your gameplay</li> -<li>A wheel of fortune where you can spin and win free items every day</li> -</ul> - <h2>Why download game farm heroes saga mod apk?</h2> - <p>As you can see, Farm Heroes Saga is a fun and challenging game that can keep you entertained for hours. However, it also has some limitations and drawbacks that might affect your gaming experience. For example, you might run out of lives, boosters, gold bars, or magic beans, which are essential for playing the game. You might also encounter some levels that are too hard or frustrating to complete. You might also want to unlock all the episodes and areas without having to wait or pay.</p> - <p>That's why some players choose to download game farm heroes saga mod apk, which is a modified version of the original game that gives you some advantages and benefits. 
Some of these benefits are:</p> -<p>download farm heroes saga mod apk unlimited lives<br /> -farm heroes saga mod apk latest version download<br /> -how to download farm heroes saga mod apk for android<br /> -farm heroes saga hack mod apk free download<br /> -download game farm heroes saga mod apk offline<br /> -farm heroes saga mod apk unlimited gold bars download<br /> -download farm heroes saga mod apk revdl<br /> -farm heroes saga mod apk android 1 download<br /> -download game farm heroes saga mod apk 2023<br /> -farm heroes saga mod apk unlimited everything download<br /> -download farm heroes saga mod apk no root<br /> -farm heroes saga mod apk unlimited moves download<br /> -download game farm heroes saga mod apk rexdl<br /> -farm heroes saga mod apk 6.15.3 download<br /> -download farm heroes saga mod apk pure<br /> -farm heroes saga mod apk unlimited beans download<br /> -download game farm heroes saga mod apk happymod<br /> -farm heroes saga mod apk 5.54.5 download<br /> -download farm heroes saga mod apk for pc<br /> -farm heroes saga mod apk unlimited boosters download<br /> -download game farm heroes saga mod apk android oyun club<br /> -farm heroes saga mod apk 4.11.3 download<br /> -download farm heroes saga mod apk old version<br /> -farm heroes saga mod apk unlimited stars download<br /> -download game farm heroes saga mod apk uptodown<br /> -farm heroes saga mod apk 5.2.10 download<br /> -download farm heroes saga mod apk new version<br /> -farm heroes saga mod apk unlimited magic beans download<br /> -download game farm heroes saga mod apk online<br /> -farm heroes saga mod apk 5.4.8 download</p> - <h3>The benefits of the mod apk version</h3> - <ul> -<li>You will have unlimited lives, boosters, gold bars, and magic beans, which means you can play the game as long as you want without any interruptions or limitations</li> -<li>You will have all the episodes and areas unlocked from the start, which means you can explore the whole map and enjoy all the game modes and features</li> -<li>You will have all the farm animals upgraded to the maximum level, which means you can use their special abilities to help you complete the levels</li> -<li>You will have all the cropsies increased in value by 100%, which means you can score higher points and earn more stars</li> -<li>You will have all the ads removed from the game, which means you can play the game without any distractions or annoyances</li> -</ul> - <h3>The drawbacks of the mod apk version</h3> - <p>However, downloading game farm heroes saga mod apk also has some drawbacks and risks that you should be aware of. Some of these drawbacks are:</p> - <ul> -<li>You might lose your progress or data if you uninstall the original game or update the mod apk version</li> -<li>You might encounter some bugs or errors that might affect the gameplay or performance of the game</li> -<li>You might get banned or suspended from the game if the developers detect that you are using a mod apk version</li> -<li>You might expose your device to malware or viruses that might harm your system or steal your information</li> -<li>You might miss out on some updates or features that are only available in the original game</li> -</ul> - <h2>How to download game farm heroes saga mod apk?</h2> - <p>If you decide to download game farm heroes saga mod apk, you should follow these steps carefully to ensure a safe and successful installation. 
You should also make sure that your device meets the minimum requirements for running the game, such as Android 4.4 or higher, 2 GB of RAM, and 100 MB of free storage space.</p> - <h3>The steps to download and install the mod apk file</h3> - <ol> -<li>Go to a reliable and trusted website that offers the mod apk file for Farm Heroes Saga. You can search for it on Google or use one of these links: </li> -<li>Download the mod apk file to your device. You might need to enable the option of "Unknown Sources" in your settings to allow the installation of apps from sources other than Google Play Store.</li> -<li>Locate the downloaded file in your file manager and tap on it to start the installation process. Follow the instructions on the screen and wait for it to finish.</li> -<li>Launch the game and enjoy playing with unlimited resources and features.</li> -</ol> - <h3>The tips to play the game safely and smoothly</h3> - <ul> -<li>Do not uninstall or update the original game if you want to keep your progress or data.</li> -<li>Do not connect your Facebook account to the mod apk version if you want to avoid getting banned or suspended.</li> -<li>Do not use too many boosters or power-ups in one level if you want to avoid getting detected or reported by other players.</li> -<li>Do not download or install any other mod apk files from unknown or untrusted sources if you want to protect your device from malware or viruses.</li> -<li>Do not miss out on any updates or features that are only available in the original game if you want to enjoy the latest content and improvements.</li> -</ul> - <h2>Conclusion</h2> - <p>Farm Heroes Saga is a fun and challenging farm-themed puzzle game that can keep you entertained for hours. However, it also has some limitations and drawbacks that might affect your gaming experience. That's why some players choose to download game farm heroes saga mod apk, which is a modified version of the original game that gives you some advantages and benefits. However, downloading game farm heroes saga mod apk also has some drawbacks and risks that you should be aware of. In this article, we have told you everything you need to know about this mod apk version, including its benefits, drawbacks, and how to download and install it on your device. We hope you found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.</p> - <h2>FAQs</h2> - <p>Here are some frequently asked questions about game farm heroes saga mod apk:</p> - <ol> -<li><b>What is the latest version of game farm heroes saga mod apk?</b></li> -<p>The latest version of game farm heroes saga mod apk is 5.63.6, which was released on June 15, 2023. This version has some bug fixes and improvements.</p> -<li><b>Is game farm heroes saga mod apk safe to use?</b></li> -<p>Game farm heroes saga mod apk is safe to use as long as you download it from a reliable and trusted website. However, you should always be careful and cautious when downloading or installing any mod apk files from unknown or untrusted sources.</p> -<li><b>Can I play game farm heroes saga mod apk offline?</b></li> -<p>Yes, you can play game farm heroes saga mod apk offline without any internet connection. 
However, you will not be able to access some features or functions that require online connectivity, such as social features, leaderboards, events, or updates.</p> -<li><b>Can I play game farm heroes saga mod apk with my friends?</b></li> -<p>Yes, you can play game farm heroes saga mod apk with your friends online and compete for the best scores. However, you should not connect your Facebook account to the mod apk version if you want to avoid getting banned or suspended.</p> -<li><b>Can I update game farm heroes saga mod apk?</b></li> -<p>Yes, you can update game farm heroes saga mod apk whenever there is a new version available. However, you should always backup your progress or data before updating the mod apk version to avoid losing them.</p> -</ol></p> 401be4b1e0<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/fb700/chat3/crazy_functions/test_project/cpp/libJPG/jpge.h b/spaces/fb700/chat3/crazy_functions/test_project/cpp/libJPG/jpge.h deleted file mode 100644 index a46c805ab80aab491f7f9508b3a008b149866bee..0000000000000000000000000000000000000000 --- a/spaces/fb700/chat3/crazy_functions/test_project/cpp/libJPG/jpge.h +++ /dev/null @@ -1,172 +0,0 @@ - -// jpge.h - C++ class for JPEG compression. -// Public domain, Rich Geldreich <richgel99@gmail.com> -// Alex Evans: Added RGBA support, linear memory allocator. -#ifndef JPEG_ENCODER_H -#define JPEG_ENCODER_H - -#include <stdint.h> - -namespace jpge -{ - typedef unsigned char uint8; - typedef signed short int16; - typedef signed int int32; - typedef unsigned short uint16; - typedef unsigned int uint32; - typedef unsigned int uint; - - // JPEG chroma subsampling factors. Y_ONLY (grayscale images) and H2V2 (color images) are the most common. - enum subsampling_t { Y_ONLY = 0, H1V1 = 1, H2V1 = 2, H2V2 = 3 }; - - // JPEG compression parameters structure. - struct params - { - inline params() : m_quality(85), m_subsampling(H2V2), m_no_chroma_discrim_flag(false), m_two_pass_flag(false) { } - - inline bool check_valid() const - { - if ((m_quality < 1) || (m_quality > 100)) return false; - if ((uint)m_subsampling > (uint)H2V2) return false; - return true; - } - - // Quality: 1-100, higher is better. Typical values are around 50-95. - int m_quality; - - // m_subsampling: - // 0 = Y (grayscale) only - // 1 = YCbCr, no subsampling (H1V1, YCbCr 1x1x1, 3 blocks per MCU) - // 2 = YCbCr, H2V1 subsampling (YCbCr 2x1x1, 4 blocks per MCU) - // 3 = YCbCr, H2V2 subsampling (YCbCr 4x1x1, 6 blocks per MCU-- very common) - subsampling_t m_subsampling; - - // Disables CbCr discrimination - only intended for testing. - // If true, the Y quantization table is also used for the CbCr channels. - bool m_no_chroma_discrim_flag; - - bool m_two_pass_flag; - }; - - // Writes JPEG image to a file. - // num_channels must be 1 (Y) or 3 (RGB), image pitch must be width*num_channels. - bool compress_image_to_jpeg_file(const char *pFilename, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Writes JPEG image to memory buffer. - // On entry, buf_size is the size of the output buffer pointed at by pBuf, which should be at least ~1024 bytes. - // If return value is true, buf_size will be set to the size of the compressed data. 
- bool compress_image_to_jpeg_file_in_memory(void *pBuf, int64_t &buf_size, int64_t width, int64_t height, int64_t num_channels, const uint8 *pImage_data, const params &comp_params = params()); - - // Output stream abstract class - used by the jpeg_encoder class to write to the output stream. - // put_buf() is generally called with len==JPGE_OUT_BUF_SIZE bytes, but for headers it'll be called with smaller amounts. - class output_stream - { - public: - virtual ~output_stream() { }; - virtual bool put_buf(const void* Pbuf, int64_t len) = 0; - template<class T> inline bool put_obj(const T& obj) { return put_buf(&obj, sizeof(T)); } - }; - - // Lower level jpeg_encoder class - useful if more control is needed than the above helper functions. - class jpeg_encoder - { - public: - jpeg_encoder(); - ~jpeg_encoder(); - - // Initializes the compressor. - // pStream: The stream object to use for writing compressed data. - // params - Compression parameters structure, defined above. - // width, height - Image dimensions. - // channels - May be 1, or 3. 1 indicates grayscale, 3 indicates RGB source data. - // Returns false on out of memory or if a stream write fails. - bool init(output_stream *pStream, int64_t width, int64_t height, int64_t src_channels, const params &comp_params = params()); - - const params &get_params() const { return m_params; } - - // Deinitializes the compressor, freeing any allocated memory. May be called at any time. - void deinit(); - - uint get_total_passes() const { return m_params.m_two_pass_flag ? 2 : 1; } - inline uint get_cur_pass() { return m_pass_num; } - - // Call this method with each source scanline. - // width * src_channels bytes per scanline is expected (RGB or Y format). - // You must call with NULL after all scanlines are processed to finish compression. - // Returns false on out of memory or if a stream write fails. 
- bool process_scanline(const void* pScanline); - - private: - jpeg_encoder(const jpeg_encoder &); - jpeg_encoder &operator =(const jpeg_encoder &); - - typedef int32 sample_array_t; - - output_stream *m_pStream; - params m_params; - uint8 m_num_components; - uint8 m_comp_h_samp[3], m_comp_v_samp[3]; - int m_image_x, m_image_y, m_image_bpp, m_image_bpl; - int m_image_x_mcu, m_image_y_mcu; - int m_image_bpl_xlt, m_image_bpl_mcu; - int m_mcus_per_row; - int m_mcu_x, m_mcu_y; - uint8 *m_mcu_lines[16]; - uint8 m_mcu_y_ofs; - sample_array_t m_sample_array[64]; - int16 m_coefficient_array[64]; - int32 m_quantization_tables[2][64]; - uint m_huff_codes[4][256]; - uint8 m_huff_code_sizes[4][256]; - uint8 m_huff_bits[4][17]; - uint8 m_huff_val[4][256]; - uint32 m_huff_count[4][256]; - int m_last_dc_val[3]; - enum { JPGE_OUT_BUF_SIZE = 2048 }; - uint8 m_out_buf[JPGE_OUT_BUF_SIZE]; - uint8 *m_pOut_buf; - uint m_out_buf_left; - uint32 m_bit_buffer; - uint m_bits_in; - uint8 m_pass_num; - bool m_all_stream_writes_succeeded; - - void optimize_huffman_table(int table_num, int table_len); - void emit_byte(uint8 i); - void emit_word(uint i); - void emit_marker(int marker); - void emit_jfif_app0(); - void emit_dqt(); - void emit_sof(); - void emit_dht(uint8 *bits, uint8 *val, int index, bool ac_flag); - void emit_dhts(); - void emit_sos(); - void emit_markers(); - void compute_huffman_table(uint *codes, uint8 *code_sizes, uint8 *bits, uint8 *val); - void compute_quant_table(int32 *dst, int16 *src); - void adjust_quant_table(int32 *dst, int32 *src); - void first_pass_init(); - bool second_pass_init(); - bool jpg_open(int p_x_res, int p_y_res, int src_channels); - void load_block_8_8_grey(int x); - void load_block_8_8(int x, int y, int c); - void load_block_16_8(int x, int c); - void load_block_16_8_8(int x, int c); - void load_quantized_coefficients(int component_num); - void flush_output_buffer(); - void put_bits(uint bits, uint len); - void code_coefficients_pass_one(int component_num); - void code_coefficients_pass_two(int component_num); - void code_block(int component_num); - void process_mcu_row(); - bool terminate_pass_one(); - bool terminate_pass_two(); - bool process_end_of_image(); - void load_mcu(const void* src); - void clear(); - void init(); - }; - -} // namespace jpge - -#endif // JPEG_ENCODER \ No newline at end of file diff --git a/spaces/fb700/chatglm-fitness-RLHF/README.md b/spaces/fb700/chatglm-fitness-RLHF/README.md deleted file mode 100644 index 517eeb6f3369dd574f100a7c80c04a8fcb5bac4e..0000000000000000000000000000000000000000 --- a/spaces/fb700/chatglm-fitness-RLHF/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: BofanAI-chatglm-fitness-RLHF -emoji: 🐰 -colorFrom: purple -colorTo: green -sdk: gradio -sdk_version: 3.23.0 -app_file: app.py -pinned: false -license: mit ---- - - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/utils/optimize.py b/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/utils/optimize.py deleted file mode 100644 index b2e5518d7bd687ab2ef0106c1e3a40fd40f1531c..0000000000000000000000000000000000000000 --- a/spaces/feng2022/Time-TravelRephotography/Time_TravelRephotography/utils/optimize.py +++ /dev/null @@ -1,230 +0,0 @@ -import math -from argparse import ( - ArgumentParser, - Namespace, -) -from typing import ( - Dict, - Iterable, - Optional, - Tuple, -) - -import numpy as np -from tqdm import 
tqdm -import torch -from torch import nn -import torch.nn.functional as F -from torch.utils.tensorboard import SummaryWriter -from torchvision.utils import make_grid -from torchvision.transforms import Resize - -#from optim import get_optimizer_class, OPTIMIZER_MAP -from losses.regularize_noise import NoiseRegularizer -from optim import RAdam -from utils.misc import ( - iterable_to_str, - optional_string, -) - - -class OptimizerArguments: - @staticmethod - def add_arguments(parser: ArgumentParser): - parser.add_argument('--coarse_min', type=int, default=32) - parser.add_argument('--wplus_step', type=int, nargs="+", default=[250, 750], help="#step for optimizing w_plus") - #parser.add_argument('--lr_rampup', type=float, default=0.05) - #parser.add_argument('--lr_rampdown', type=float, default=0.25) - parser.add_argument('--lr', type=float, default=0.1) - parser.add_argument('--noise_strength', type=float, default=.0) - parser.add_argument('--noise_ramp', type=float, default=0.75) - #parser.add_argument('--optimize_noise', action="store_true") - parser.add_argument('--camera_lr', type=float, default=0.01) - - parser.add_argument("--log_dir", default="log/projector", help="tensorboard log directory") - parser.add_argument("--log_freq", type=int, default=10, help="log frequency") - parser.add_argument("--log_visual_freq", type=int, default=50, help="log frequency") - - @staticmethod - def to_string(args: Namespace) -> str: - return ( - f"lr{args.lr}_{args.camera_lr}-c{args.coarse_min}" - + f"-wp({iterable_to_str(args.wplus_step)})" - + optional_string(args.noise_strength, f"-n{args.noise_strength}") - ) - - -class LatentNoiser(nn.Module): - def __init__( - self, generator: torch.nn, - noise_ramp: float = 0.75, noise_strength: float = 0.05, - n_mean_latent: int = 10000 - ): - super().__init__() - - self.noise_ramp = noise_ramp - self.noise_strength = noise_strength - - with torch.no_grad(): - # TODO: get 512 from generator - noise_sample = torch.randn(n_mean_latent, 512, device=generator.device) - latent_out = generator.style(noise_sample) - - latent_mean = latent_out.mean(0) - self.latent_std = ((latent_out - latent_mean).pow(2).sum() / n_mean_latent) ** 0.5 - - def forward(self, latent: torch.Tensor, t: float) -> torch.Tensor: - strength = self.latent_std * self.noise_strength * max(0, 1 - t / self.noise_ramp) ** 2 - noise = torch.randn_like(latent) * strength - return latent + noise - - -class Optimizer: - @classmethod - def optimize( - cls, - generator: torch.nn, - criterion: torch.nn, - degrade: torch.nn, - target: torch.Tensor, # only used in writer since it's mostly baked in criterion - latent_init: torch.Tensor, - noise_init: torch.Tensor, - args: Namespace, - writer: Optional[SummaryWriter] = None, - ) -> Tuple[torch.Tensor, torch.Tensor]: - # do not optimize generator - generator = generator.eval() - target = target.detach() - # prepare parameters - noises = [] - for n in noise_init: - noise = n.detach().clone() - noise.requires_grad = True - noises.append(noise) - - - def create_parameters(latent_coarse): - parameters = [ - {'params': [latent_coarse], 'lr': args.lr}, - {'params': noises, 'lr': args.lr}, - {'params': degrade.parameters(), 'lr': args.camera_lr}, - ] - return parameters - - - device = target.device - - # start optimize - total_steps = np.sum(args.wplus_step) - max_coarse_size = (2 ** (len(args.wplus_step) - 1)) * args.coarse_min - noiser = LatentNoiser(generator, noise_ramp=args.noise_ramp, noise_strength=args.noise_strength).to(device) - latent = 
latent_init.detach().clone() - for coarse_level, steps in enumerate(args.wplus_step): - if criterion.weights["contextual"] > 0: - with torch.no_grad(): - # synthesize new sibling image using the current optimization results - # FIXME: update rgbs sibling - sibling, _, _ = generator([latent], input_is_latent=True, randomize_noise=True) - criterion.update_sibling(sibling) - - coarse_size = (2 ** coarse_level) * args.coarse_min - latent_coarse, latent_fine = cls.split_latent( - latent, generator.get_latent_size(coarse_size)) - parameters = create_parameters(latent_coarse) - optimizer = RAdam(parameters) - - print(f"Optimizing {coarse_size}x{coarse_size}") - pbar = tqdm(range(steps)) - for si in pbar: - latent = torch.cat((latent_coarse, latent_fine), dim=1) - niters = si + np.sum(args.wplus_step[:coarse_level]) - latent_noisy = noiser(latent, niters / total_steps) - img_gen, _, rgbs = generator([latent_noisy], input_is_latent=True, noise=noises) - # TODO: use coarse_size instead of args.coarse_size for rgb_level - loss, losses = criterion(img_gen, degrade=degrade, noises=noises, rgbs=rgbs) - - optimizer.zero_grad() - loss.backward() - optimizer.step() - - NoiseRegularizer.normalize(noises) - - # log - pbar.set_description("; ".join([f"{k}: {v.item(): .3e}" for k, v in losses.items()])) - - if writer is not None and niters % args.log_freq == 0: - cls.log_losses(writer, niters, loss, losses, criterion.weights) - cls.log_parameters(writer, niters, degrade.named_parameters()) - if writer is not None and niters % args.log_visual_freq == 0: - cls.log_visuals(writer, niters, img_gen, target, degraded=degrade(img_gen), rgbs=rgbs) - - latent = torch.cat((latent_coarse, latent_fine), dim=1).detach() - - return latent, noises - - @staticmethod - def split_latent(latent: torch.Tensor, coarse_latent_size: int): - latent_coarse = latent[:, :coarse_latent_size] - latent_coarse.requires_grad = True - latent_fine = latent[:, coarse_latent_size:] - latent_fine.requires_grad = False - return latent_coarse, latent_fine - - @staticmethod - def log_losses( - writer: SummaryWriter, - niters: int, - loss_total: torch.Tensor, - losses: Dict[str, torch.Tensor], - weights: Optional[Dict[str, torch.Tensor]] = None - ): - writer.add_scalar("loss", loss_total.item(), niters) - - for name, loss in losses.items(): - writer.add_scalar(name, loss.item(), niters) - if weights is not None: - writer.add_scalar(f"weighted_{name}", weights[name] * loss.item(), niters) - - @staticmethod - def log_parameters( - writer: SummaryWriter, - niters: int, - named_parameters: Iterable[Tuple[str, torch.nn.Parameter]], - ): - for name, para in named_parameters: - writer.add_scalar(name, para.item(), niters) - - @classmethod - def log_visuals( - cls, - writer: SummaryWriter, - niters: int, - img: torch.Tensor, - target: torch.Tensor, - degraded=None, - rgbs=None, - ): - if target.shape[-1] != img.shape[-1]: - visual = make_grid(img, nrow=1, normalize=True, range=(-1, 1)) - writer.add_image("pred", visual, niters) - - def resize(img): - return F.interpolate(img, size=target.shape[2:], mode="area") - - vis = resize(img) - if degraded is not None: - vis = torch.cat((resize(degraded), vis), dim=-1) - visual = make_grid(torch.cat((target.repeat(1, vis.shape[1] // target.shape[1], 1, 1), vis), dim=-1), nrow=1, normalize=True, range=(-1, 1)) - writer.add_image("gnd[-degraded]-pred", visual, niters) - - # log to rgbs - if rgbs is not None: - cls.log_torgbs(writer, niters, rgbs) - - @staticmethod - def log_torgbs(writer: SummaryWriter, niters: int, 
rgbs: Iterable[torch.Tensor], prefix: str = ""): - for ri, rgb in enumerate(rgbs): - scale = 2 ** (-(len(rgbs) - ri)) - visual = make_grid(torch.cat((rgb, rgb / scale), dim=-1), nrow=1, normalize=True, range=(-1, 1)) - writer.add_image(f"{prefix}to_rbg_{2 ** (ri + 2)}", visual, niters) - diff --git a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Black vpn APK A Simple and Easy Way to Access Any Content on the Web.md b/spaces/feregVcuzo/sanity-test-midi/checkpoint/Black vpn APK A Simple and Easy Way to Access Any Content on the Web.md deleted file mode 100644 index 4d7c3aad1ea617e1998f3628f137be2749986792..0000000000000000000000000000000000000000 --- a/spaces/feregVcuzo/sanity-test-midi/checkpoint/Black vpn APK A Simple and Easy Way to Access Any Content on the Web.md +++ /dev/null @@ -1,120 +0,0 @@ - -<h1>Black VPN APK MediaFire: What Is It and How to Use It</h1> - <p>If you are looking for a way to protect your online privacy, access geo-restricted content, and enjoy fast and secure internet connection, you might want to try Black VPN APK. This is a VPN app that you can download from MediaFire, a file hosting and sharing service. In this article, we will explain what Black VPN APK is, what are its features and benefits, how to download and install it from MediaFire, and how to use MediaFire to share and store files.</p> - <h2>What is Black VPN APK?</h2> - <p>Black VPN APK is an Android app that allows you to connect to various VPN servers around the world. VPN stands for Virtual Private Network, which is a technology that creates a secure and encrypted tunnel between your device and a remote server. By using a VPN, you can hide your real IP address, location, and online activity from anyone who might be spying on you, such as hackers, ISPs, or governments. 
You can also bypass geo-restrictions and censorship that prevent you from accessing certain websites or services, such as Netflix, YouTube, or Facebook.</p> -<h2>black vpn apk mediafıre</h2><br /><p><b><b>Download</b> ✓ <a href="https://gohhs.com/2uPp5e">https://gohhs.com/2uPp5e</a></b></p><br /><br /> - <h3>Features and benefits of Black VPN APK</h3> - <p>Some of the features and benefits of using Black VPN APK are:</p> - <ul> -<li>It offers up to 50 GB of free storage space for your files.</li> -<li>It supports OpenVPN and AES-256 bit encryption for maximum security.</li> -<li>It has VPN servers in 19 countries, including the USA, UK, Australia, Canada, Germany, Japan, and more.</li> -<li>It has a no-logs policy, which means it does not keep any records of your traffic or connection data.</li> -<li>It allows you to unblock censored websites and watch online movies and TV shows from around the world.</li> -<li>It has a simple and user-friendly interface that makes it easy to connect and switch between servers.</li> -<li>It has a free 3-day trial with full global access to all of its VPN locations.</li> -</ul> - <h3>How to download and install Black VPN APK from MediaFire</h3> - <p>To download and install Black VPN APK from MediaFire, you need to follow these steps:</p> - <ol> -<li>Go to <a href="(^1^)">this link</a> or <a href="(^2^)">this link</a> on your Android device or PC.</li> -<li>Click on the green Download button and wait for the file to be downloaded.</li> -<li>If you downloaded the file on your PC, transfer it to your Android device via USB cable or Bluetooth.</li> -<li>On your Android device, go to Settings > Security > Unknown Sources and enable the option to allow installation of apps from unknown sources.</li> -<li>Locate the downloaded file on your device and tap on it to start the installation process.</li> -<li>Follow the instructions on the screen and grant the necessary permissions to the app.</li> -<li>Once the installation is complete, launch the app and sign in with your BlackVPN account or create a new one if you don't have one.</li> -<li>Select a server location from the list and tap on Connect to start using the VPN service.</li> -</ol> - <h2>What is MediaFire?</h2> - <p>MediaFire is a file hosting and sharing service that allows you to upload, store, and share any type of file online. You can use MediaFire to back up your important files, share them with your friends or coworkers, or access them from anywhere with an internet connection. 
MediaFire also provides client software for Windows, Mac, Linux, Android, iOS, BlackBerry 10, and web browsers.</p> - <h3>Features and benefits of MediaFire</h3> - <p>Some of the features and benefits of using MediaFire are:</p> - <ul> -<li>It offers up to 50 GB of free storage space for your files.</li> -<li>It supports various file formats, such as documents, images, videos, music, archives, and more.</li> -<li>It allows you to upload files up to 25 GB in size per file.</li> -<li>It provides unlimited bandwidth and downloads for your files.</li> -<li>It lets you create folders and subfolders to organize your files.</li> -<li>It enables you to share your files via email, social media, or direct links.</li> -<li>It allows you to password-protect your files and folders for extra security.</li> -<li>It offers premium plans with more storage space, features, and support.</li> -</ul> - <h3>How to use MediaFire to share and store files</h3> - <p>To use MediaFire to share and store files, you need to follow these steps:</p> - <ol> -<li>Go to <a href="">MediaFire.com</a> and sign up for a free account or log in with your existing account.</li> -<li>Click on the Upload button and select the files you want to upload from your device or drag and drop them into the upload area.</li> -<li>Wait for the upload to finish and click on the file name to view its details.</li> -<li>Click on the Share button and choose how you want to share your file. You can copy the link, send it via email, or share it on social media.</li> -<li>To access your files, go to My Files and browse through your folders and subfolders. You can also use the search bar to find a specific file.</li> -<li>To download a file, click on the file name and then click on the Download button. You can also select multiple files and download them as a ZIP archive.</li> -</ol> - <h2>Conclusion</h2> - <p>In conclusion, Black VPN APK is a VPN app that you can download from MediaFire, a file hosting and sharing service. By using Black VPN APK, you can protect your online privacy, access geo-restricted content, and enjoy fast and secure internet connection. By using MediaFire, you can upload, store, and share any type of file online. Both services are easy to use and offer free and premium plans. If you are interested in trying them out, you can follow the steps we provided in this article.</p> - <h3>FAQs</h3> - <p>Here are some frequently asked questions about Black VPN APK and MediaFire:</p> - <ul> -<li><b>Q: Is Black VPN APK safe to use?</b></li> -<li>A: Yes, Black VPN APK is safe to use as it uses OpenVPN and AES-256 bit encryption for maximum security. It also has a no-logs policy, which means it does not keep any records of your traffic or connection data.</li> -<li><b>Q: How much does Black VPN APK cost?</b></li> -<li>A: Black VPN APK offers a free 3-day trial with full global access to all of its VPN locations. After that, you can choose from three subscription plans: $9.99 per month, $49.99 per year, or $99.99 for three years.</li> -<li><b>Q: How much storage space does MediaFire offer?</b></li> -<li>A: MediaFire offers up to 50 GB of free storage space for your files. If you need more space, you can upgrade to a premium plan that starts from $3.75 per month for 1 TB of storage space.</li> -<li><b>Q: How can I delete my files from MediaFire?</b></li> -<li>A: To delete your files from MediaFire, go to My Files and select the files you want to delete. 
Then click on the Delete button and confirm your action.</li> -<li><b>Q: Can I use Black VPN APK and MediaFire on other devices?</b></li> -<li>A: Yes, you can use Black VPN APK and MediaFire on other devices besides Android. Black VPN APK has client software for Windows, Mac, Linux, iOS, BlackBerry 10, and web browsers. MediaFire also has client software for Windows, Mac, Linux, Android, iOS, BlackBerry 10, and web browsers.</li> -</ul></p> -<p>black vpn apk download free<br /> -black vpn apk latest version<br /> -black vpn apk for android<br /> -black vpn apk mod<br /> -black vpn apk premium<br /> -black vpn apk cracked<br /> -black vpn apk full<br /> -black vpn apk pro<br /> -black vpn apk unlimited<br /> -black vpn apk 2023<br /> -black vpn apk old version<br /> -black vpn apk no ads<br /> -black vpn apk update<br /> -black vpn apk mirror<br /> -black vpn apk file<br /> -black vpn apk pure<br /> -black vpn apk uptodown<br /> -black vpn apk apkpure<br /> -black vpn apk apkmirror<br /> -black vpn apk apkmody<br /> -black vpn apk rexdl<br /> -black vpn apk revdl<br /> -black vpn apk happymod<br /> -black vpn apk an1<br /> -black vpn apk android 1<br /> -black vpn apk android 2.3.6<br /> -black vpn apk android 4.4.2<br /> -black vpn apk android 5.1.1<br /> -black vpn apk android 6.0.1<br /> -black vpn apk android 7.0<br /> -black vpn apk android 8.0<br /> -black vpn apk android 9.0<br /> -black vpn apk android 10.0<br /> -black vpn apk android 11.0<br /> -black vpn apk for pc<br /> -black vpn apk for windows 10<br /> -black vpn apk for mac<br /> -black vpn apk for firestick<br /> -black vpn apk for smart tv<br /> -black vpn apk for ios<br /> -black vpn apk for iphone<br /> -black vpn apk for ipad<br /> -black vpn apk for linux<br /> -black vpn apk for chromebook<br /> -black vpn apk for roku<br /> -black vpn apk for kodi<br /> -black vpn apk for netflix<br /> -black vpn apk for torrenting<br /> -black vpn apk for gaming</p> 197e85843d<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/audio_dataset.py b/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/audio_dataset.py deleted file mode 100644 index cf21422ea0059cb2d6553f93e608b8f9fa0d3a50..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Image-to-MusicGen/audiocraft/data/audio_dataset.py +++ /dev/null @@ -1,525 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. 
- -import argparse -import copy -from concurrent.futures import ThreadPoolExecutor, Future -from dataclasses import dataclass, fields -from contextlib import ExitStack -import gzip -import json -import logging -import os -from pathlib import Path -import random -import sys -import typing as tp - -import torch -import torch.nn.functional as F - -from .audio import audio_read, audio_info -from .audio_utils import convert_audio -from .zip import PathInZip - -try: - import dora -except ImportError: - dora = None # type: ignore - - -@dataclass(order=True) -class BaseInfo: - - @classmethod - def _dict2fields(cls, dictionary: dict): - return { - field.name: dictionary[field.name] - for field in fields(cls) if field.name in dictionary - } - - @classmethod - def from_dict(cls, dictionary: dict): - _dictionary = cls._dict2fields(dictionary) - return cls(**_dictionary) - - def to_dict(self): - return { - field.name: self.__getattribute__(field.name) - for field in fields(self) - } - - -@dataclass(order=True) -class AudioMeta(BaseInfo): - path: str - duration: float - sample_rate: int - amplitude: tp.Optional[float] = None - weight: tp.Optional[float] = None - # info_path is used to load additional information about the audio file that is stored in zip files. - info_path: tp.Optional[PathInZip] = None - - @classmethod - def from_dict(cls, dictionary: dict): - base = cls._dict2fields(dictionary) - if 'info_path' in base and base['info_path'] is not None: - base['info_path'] = PathInZip(base['info_path']) - return cls(**base) - - def to_dict(self): - d = super().to_dict() - if d['info_path'] is not None: - d['info_path'] = str(d['info_path']) - return d - - -@dataclass(order=True) -class SegmentInfo(BaseInfo): - meta: AudioMeta - seek_time: float - n_frames: int # actual number of frames without padding - total_frames: int # total number of frames, padding included - sample_rate: int # actual sample rate - - -DEFAULT_EXTS = ['.wav', '.mp3', '.flac', '.ogg', '.m4a'] - -logger = logging.getLogger(__name__) - - -def _get_audio_meta(file_path: str, minimal: bool = True) -> AudioMeta: - """AudioMeta from a path to an audio file. - - Args: - file_path (str): Resolved path of valid audio file. - minimal (bool): Whether to only load the minimal set of metadata (takes longer if not). - Returns: - AudioMeta: Audio file path and its metadata. - """ - info = audio_info(file_path) - amplitude: tp.Optional[float] = None - if not minimal: - wav, sr = audio_read(file_path) - amplitude = wav.abs().max().item() - return AudioMeta(file_path, info.duration, info.sample_rate, amplitude) - - -def _resolve_audio_meta(m: AudioMeta, fast: bool = True) -> AudioMeta: - """If Dora is available as a dependency, try to resolve potential relative paths - in list of AudioMeta. This method is expected to be used when loading meta from file. - - Args: - m (AudioMeta): Audio meta to resolve. - fast (bool): If True, uses a really fast check for determining if a file is already absolute or not. - Only valid on Linux/Mac. - Returns: - AudioMeta: Audio meta with resolved path. 
- """ - def is_abs(m): - if fast: - return str(m)[0] == '/' - else: - os.path.isabs(str(m)) - - if not dora: - return m - - if not is_abs(m.path): - m.path = dora.git_save.to_absolute_path(m.path) - if m.info_path is not None and not is_abs(m.info_path.zip_path): - m.info_path.zip_path = dora.git_save.to_absolute_path(m.path) - return m - - -def find_audio_files(path: tp.Union[Path, str], - exts: tp.List[str] = DEFAULT_EXTS, - resolve: bool = True, - minimal: bool = True, - progress: bool = False, - workers: int = 0) -> tp.List[AudioMeta]: - """Build a list of AudioMeta from a given path, - collecting relevant audio files and fetching meta info. - - Args: - path (str or Path): Path to folder containing audio files. - exts (list of str): List of file extensions to consider for audio files. - minimal (bool): Whether to only load the minimal set of metadata (takes longer if not). - progress (bool): Whether to log progress on audio files collection. - workers (int): number of parallel workers, if 0, use only the current thread. - Returns: - List[AudioMeta]: List of audio file path and its metadata. - """ - audio_files = [] - futures: tp.List[Future] = [] - pool: tp.Optional[ThreadPoolExecutor] = None - with ExitStack() as stack: - if workers > 0: - pool = ThreadPoolExecutor(workers) - stack.enter_context(pool) - - if progress: - print("Finding audio files...") - for root, folders, files in os.walk(path, followlinks=True): - for file in files: - full_path = Path(root) / file - if full_path.suffix.lower() in exts: - audio_files.append(full_path) - if pool is not None: - futures.append(pool.submit(_get_audio_meta, str(audio_files[-1]), minimal)) - if progress: - print(format(len(audio_files), " 8d"), end='\r', file=sys.stderr) - - if progress: - print("Getting audio metadata...") - meta: tp.List[AudioMeta] = [] - for idx, file_path in enumerate(audio_files): - try: - if pool is None: - m = _get_audio_meta(str(file_path), minimal) - else: - m = futures[idx].result() - if resolve: - m = _resolve_audio_meta(m) - except Exception as err: - print("Error with", str(file_path), err, file=sys.stderr) - continue - meta.append(m) - if progress: - print(format((1 + idx) / len(audio_files), " 3.1%"), end='\r', file=sys.stderr) - meta.sort() - return meta - - -def load_audio_meta(path: tp.Union[str, Path], - resolve: bool = True, fast: bool = True) -> tp.List[AudioMeta]: - """Load list of AudioMeta from an optionally compressed json file. - - Args: - path (str or Path): Path to JSON file. - resolve (bool): Whether to resolve the path from AudioMeta (default=True). - fast (bool): activates some tricks to make things faster. - Returns: - List[AudioMeta]: List of audio file path and its total duration. - """ - open_fn = gzip.open if str(path).lower().endswith('.gz') else open - with open_fn(path, 'rb') as fp: # type: ignore - lines = fp.readlines() - meta = [] - for line in lines: - d = json.loads(line) - m = AudioMeta.from_dict(d) - if resolve: - m = _resolve_audio_meta(m, fast=fast) - meta.append(m) - return meta - - -def save_audio_meta(path: tp.Union[str, Path], meta: tp.List[AudioMeta]): - """Save the audio metadata to the file pointer as json. - - Args: - path (str or Path): Path to JSON file. - metadata (list of BaseAudioMeta): List of audio meta to save. 
- """ - Path(path).parent.mkdir(exist_ok=True, parents=True) - open_fn = gzip.open if str(path).lower().endswith('.gz') else open - with open_fn(path, 'wb') as fp: # type: ignore - for m in meta: - json_str = json.dumps(m.to_dict()) + '\n' - json_bytes = json_str.encode('utf-8') - fp.write(json_bytes) - - -class AudioDataset: - """Base audio dataset. - - The dataset takes a list of AudioMeta and create a dataset composed of segments of audio - and potentially additional information, by creating random segments from the list of audio - files referenced in the metadata and applying minimal data pre-processing such as resampling, - mixing of channels, padding, etc. - - If no segment_duration value is provided, the AudioDataset will return the full wav for each - audio file. Otherwise, it will randomly sample audio files and create a segment of the specified - duration, applying padding if required. - - By default, only the torch Tensor corresponding to the waveform is returned. Setting return_info=True - allows to return a tuple containing the torch Tensor and additional metadata on the segment and the - original audio meta. - - Args: - meta (tp.List[AudioMeta]): List of audio files metadata. - segment_duration (float): Optional segment duration of audio to load. - If not specified, the dataset will load the full audio segment from the file. - shuffle (bool): Set to `True` to have the data reshuffled at every epoch. - sample_rate (int): Target sample rate of the loaded audio samples. - channels (int): Target number of channels of the loaded audio samples. - sample_on_duration (bool): Set to `True` to sample segments with probability - dependent on audio file duration. This is only used if `segment_duration` is provided. - sample_on_weight (bool): Set to `True` to sample segments using the `weight` entry of - `AudioMeta`. If `sample_on_duration` is also True, the actual weight will be the product - of the file duration and file weight. This is only used if `segment_duration` is provided. - min_segment_ratio (float): Minimum segment ratio to use when the audio file - is shorter than the desired segment. - max_read_retry (int): Maximum number of retries to sample an audio segment from the dataset. - return_info (bool): Whether to return the wav only or return wav along with segment info and metadata. - min_audio_duration (tp.Optional[float], optional): Minimum audio file duration, in seconds, if provided - audio shorter than this will be filtered out. - max_audio_duration (tp.Optional[float], optional): Maximal audio file duration in seconds, if provided - audio longer than this will be filtered out. - """ - def __init__(self, - meta: tp.List[AudioMeta], - segment_duration: tp.Optional[float] = None, - shuffle: bool = True, - num_samples: int = 10_000, - sample_rate: int = 48_000, - channels: int = 2, - pad: bool = True, - sample_on_duration: bool = True, - sample_on_weight: bool = True, - min_segment_ratio: float = 0.5, - max_read_retry: int = 10, - return_info: bool = False, - min_audio_duration: tp.Optional[float] = None, - max_audio_duration: tp.Optional[float] = None - ): - assert len(meta) > 0, 'No audio meta provided to AudioDataset. Please check loading of audio meta.' 
- assert segment_duration is None or segment_duration > 0 - assert segment_duration is None or min_segment_ratio >= 0 - logging.debug(f'sample_on_duration: {sample_on_duration}') - logging.debug(f'sample_on_weight: {sample_on_weight}') - logging.debug(f'pad: {pad}') - logging.debug(f'min_segment_ratio: {min_segment_ratio}') - - self.segment_duration = segment_duration - self.min_segment_ratio = min_segment_ratio - self.max_audio_duration = max_audio_duration - self.min_audio_duration = min_audio_duration - if self.min_audio_duration is not None and self.max_audio_duration is not None: - assert self.min_audio_duration <= self.max_audio_duration - self.meta: tp.List[AudioMeta] = self._filter_duration(meta) - assert len(self.meta) # Fail fast if all data has been filtered. - self.total_duration = sum(d.duration for d in self.meta) - - if segment_duration is None: - num_samples = len(self.meta) - self.num_samples = num_samples - self.shuffle = shuffle - self.sample_rate = sample_rate - self.channels = channels - self.pad = pad - self.sample_on_weight = sample_on_weight - self.sample_on_duration = sample_on_duration - self.sampling_probabilities = self._get_sampling_probabilities() - self.max_read_retry = max_read_retry - self.return_info = return_info - - def __len__(self): - return self.num_samples - - def _get_sampling_probabilities(self, normalized: bool = True): - """Return the sampling probabilities for each file inside `self.meta`. - """ - scores: tp.List[float] = [] - for file_meta in self.meta: - score = 1. - if self.sample_on_weight and file_meta.weight is not None: - score *= file_meta.weight - if self.sample_on_duration: - score *= file_meta.duration - scores.append(score) - probabilities = torch.tensor(scores) - if normalized: - probabilities /= probabilities.sum() - return probabilities - - def sample_file(self, rng: torch.Generator) -> AudioMeta: - """Sample a given file from `self.meta`. Can be overriden in subclasses. - This is only called if `segment_duration` is not None. - - You must use the provided random number generator `rng` for reproducibility. 
- """ - if not self.sample_on_weight and not self.sample_on_duration: - file_index = int(torch.randint(len(self.sampling_probabilities), (1,), generator=rng).item()) - else: - file_index = int(torch.multinomial(self.sampling_probabilities, 1, generator=rng).item()) - - return self.meta[file_index] - - def __getitem__(self, index: int) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, SegmentInfo]]: - if self.segment_duration is None: - file_meta = self.meta[index] - out, sr = audio_read(file_meta.path) - out = convert_audio(out, sr, self.sample_rate, self.channels) - n_frames = out.shape[-1] - segment_info = SegmentInfo(file_meta, seek_time=0., n_frames=n_frames, total_frames=n_frames, - sample_rate=self.sample_rate) - else: - rng = torch.Generator() - if self.shuffle: - # We use index, plus extra randomness - rng.manual_seed(index + self.num_samples * random.randint(0, 2**24)) - else: - # We only use index - rng.manual_seed(index) - - for retry in range(self.max_read_retry): - file_meta = self.sample_file(rng) - # We add some variance in the file position even if audio file is smaller than segment - # without ending up with empty segments - max_seek = max(0, file_meta.duration - self.segment_duration * self.min_segment_ratio) - seek_time = torch.rand(1, generator=rng).item() * max_seek - try: - out, sr = audio_read(file_meta.path, seek_time, self.segment_duration, pad=False) - out = convert_audio(out, sr, self.sample_rate, self.channels) - n_frames = out.shape[-1] - target_frames = int(self.segment_duration * self.sample_rate) - if self.pad: - out = F.pad(out, (0, target_frames - n_frames)) - segment_info = SegmentInfo(file_meta, seek_time, n_frames=n_frames, total_frames=target_frames, - sample_rate=self.sample_rate) - except Exception as exc: - logger.warning("Error opening file %s: %r", file_meta.path, exc) - if retry == self.max_read_retry - 1: - raise - else: - break - - if self.return_info: - # Returns the wav and additional information on the wave segment - return out, segment_info - else: - return out - - def collater(self, samples): - """The collater function has to be provided to the dataloader - if AudioDataset has return_info=True in order to properly collate - the samples of a batch. - """ - if self.segment_duration is None and len(samples) > 1: - assert self.pad, "Must allow padding when batching examples of different durations." - - # In this case the audio reaching the collater is of variable length as segment_duration=None. - to_pad = self.segment_duration is None and self.pad - if to_pad: - max_len = max([wav.shape[-1] for wav, _ in samples]) - - def _pad_wav(wav): - return F.pad(wav, (0, max_len - wav.shape[-1])) - - if self.return_info: - if len(samples) > 0: - assert len(samples[0]) == 2 - assert isinstance(samples[0][0], torch.Tensor) - assert isinstance(samples[0][1], SegmentInfo) - - wavs = [wav for wav, _ in samples] - segment_infos = [copy.deepcopy(info) for _, info in samples] - - if to_pad: - # Each wav could be of a different duration as they are not segmented. - for i in range(len(samples)): - # Determines the total legth of the signal with padding, so we update here as we pad. 
- segment_infos[i].total_frames = max_len - wavs[i] = _pad_wav(wavs[i]) - - wav = torch.stack(wavs) - return wav, segment_infos - else: - assert isinstance(samples[0], torch.Tensor) - if to_pad: - samples = [_pad_wav(s) for s in samples] - return torch.stack(samples) - - def _filter_duration(self, meta: tp.List[AudioMeta]) -> tp.List[AudioMeta]: - """Filters out audio files with short durations. - Removes from meta files that have durations that will not allow to samples examples from them. - """ - orig_len = len(meta) - - # Filter data that is too short. - if self.min_audio_duration is not None: - meta = [m for m in meta if m.duration >= self.min_audio_duration] - - # Filter data that is too long. - if self.max_audio_duration is not None: - meta = [m for m in meta if m.duration <= self.max_audio_duration] - - filtered_len = len(meta) - removed_percentage = 100*(1-float(filtered_len)/orig_len) - msg = 'Removed %.2f percent of the data because it was too short or too long.' % removed_percentage - if removed_percentage < 10: - logging.debug(msg) - else: - logging.warning(msg) - return meta - - @classmethod - def from_meta(cls, root: tp.Union[str, Path], **kwargs): - """Instantiate AudioDataset from a path to a directory containing a manifest as a jsonl file. - - Args: - root (str or Path): Path to root folder containing audio files. - kwargs: Additional keyword arguments for the AudioDataset. - """ - root = Path(root) - if root.is_dir(): - if (root / 'data.jsonl').exists(): - root = root / 'data.jsonl' - elif (root / 'data.jsonl.gz').exists(): - root = root / 'data.jsonl.gz' - else: - raise ValueError("Don't know where to read metadata from in the dir. " - "Expecting either a data.jsonl or data.jsonl.gz file but none found.") - meta = load_audio_meta(root) - return cls(meta, **kwargs) - - @classmethod - def from_path(cls, root: tp.Union[str, Path], minimal_meta: bool = True, - exts: tp.List[str] = DEFAULT_EXTS, **kwargs): - """Instantiate AudioDataset from a path containing (possibly nested) audio files. - - Args: - root (str or Path): Path to root folder containing audio files. - minimal_meta (bool): Whether to only load minimal metadata or not. - exts (list of str): Extensions for audio files. - kwargs: Additional keyword arguments for the AudioDataset. - """ - root = Path(root) - if root.is_file(): - meta = load_audio_meta(root, resolve=True) - else: - meta = find_audio_files(root, exts, minimal=minimal_meta, resolve=True) - return cls(meta, **kwargs) - - -def main(): - logging.basicConfig(stream=sys.stderr, level=logging.INFO) - parser = argparse.ArgumentParser( - prog='audio_dataset', - description='Generate .jsonl files by scanning a folder.') - parser.add_argument('root', help='Root folder with all the audio files') - parser.add_argument('output_meta_file', - help='Output file to store the metadata, ') - parser.add_argument('--complete', - action='store_false', dest='minimal', default=True, - help='Retrieve all metadata, even the one that are expansive ' - 'to compute (e.g. 
normalization).') - parser.add_argument('--resolve', - action='store_true', default=False, - help='Resolve the paths to be absolute and with no symlinks.') - parser.add_argument('--workers', - default=10, type=int, - help='Number of workers.') - args = parser.parse_args() - meta = find_audio_files(args.root, DEFAULT_EXTS, progress=True, - resolve=args.resolve, minimal=args.minimal, workers=args.workers) - save_audio_meta(args.output_meta_file, meta) - - -if __name__ == '__main__': - main() diff --git a/spaces/fffiloni/Music_Source_Separation/scripts/5_inference/musdb18/inference.sh b/spaces/fffiloni/Music_Source_Separation/scripts/5_inference/musdb18/inference.sh deleted file mode 100644 index 21ecd5a30731343ee9b74e181ef4602b528a87d4..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/Music_Source_Separation/scripts/5_inference/musdb18/inference.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/bin/bash -WORKSPACE=${1:-"./workspaces/bytesep"} # The first argument is workspace directory. - -echo "WORKSPACE=${WORKSPACE}" - -# Users can modify the following config file. -TRAIN_CONFIG_YAML="scripts/4_train/musdb18/configs/vocals-accompaniment,unet.yaml" - -CHECKPOINT_PATH="${WORKSPACE}/checkpoints/musdb18/train/config=vocals-accompaniment,unet,gpus=1/step=300000.pth" - -# Inference -CUDA_VISIBLE_DEVICES=0 python3 bytesep/inference.py \ - --config_yaml=$TRAIN_CONFIG_YAML \ - --checkpoint_path=$CHECKPOINT_PATH \ - --audio_path="resources/vocals_accompaniment_10s.mp3" \ - --output_path="sep_results/vocals_accompaniment_10s_sep_vocals.mp3" - \ No newline at end of file diff --git a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/lib/validation.js b/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/lib/validation.js deleted file mode 100644 index 44fc20290616b38631efdc4cd21cf204712dfabd..0000000000000000000000000000000000000000 --- a/spaces/fffiloni/controlnet-animation-doodle/node_modules/ws/lib/validation.js +++ /dev/null @@ -1,125 +0,0 @@ -'use strict'; - -// -// Allowed token characters: -// -// '!', '#', '$', '%', '&', ''', '*', '+', '-', -// '.', 0-9, A-Z, '^', '_', '`', a-z, '|', '~' -// -// tokenChars[32] === 0 // ' ' -// tokenChars[33] === 1 // '!' -// tokenChars[34] === 0 // '"' -// ... -// -// prettier-ignore -const tokenChars = [ - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, // 0 - 15 - 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, // 16 - 31 - 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 0, // 32 - 47 - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, // 48 - 63 - 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // 64 - 79 - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 1, // 80 - 95 - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, // 96 - 111 - 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 0 // 112 - 127 -]; - -/** - * Checks if a status code is allowed in a close frame. - * - * @param {Number} code The status code - * @return {Boolean} `true` if the status code is valid, else `false` - * @public - */ -function isValidStatusCode(code) { - return ( - (code >= 1000 && - code <= 1014 && - code !== 1004 && - code !== 1005 && - code !== 1006) || - (code >= 3000 && code <= 4999) - ); -} - -/** - * Checks if a given buffer contains only correct UTF-8. - * Ported from https://www.cl.cam.ac.uk/%7Emgk25/ucs/utf8_check.c by - * Markus Kuhn. 
- * - * @param {Buffer} buf The buffer to check - * @return {Boolean} `true` if `buf` contains only correct UTF-8, else `false` - * @public - */ -function _isValidUTF8(buf) { - const len = buf.length; - let i = 0; - - while (i < len) { - if ((buf[i] & 0x80) === 0) { - // 0xxxxxxx - i++; - } else if ((buf[i] & 0xe0) === 0xc0) { - // 110xxxxx 10xxxxxx - if ( - i + 1 === len || - (buf[i + 1] & 0xc0) !== 0x80 || - (buf[i] & 0xfe) === 0xc0 // Overlong - ) { - return false; - } - - i += 2; - } else if ((buf[i] & 0xf0) === 0xe0) { - // 1110xxxx 10xxxxxx 10xxxxxx - if ( - i + 2 >= len || - (buf[i + 1] & 0xc0) !== 0x80 || - (buf[i + 2] & 0xc0) !== 0x80 || - (buf[i] === 0xe0 && (buf[i + 1] & 0xe0) === 0x80) || // Overlong - (buf[i] === 0xed && (buf[i + 1] & 0xe0) === 0xa0) // Surrogate (U+D800 - U+DFFF) - ) { - return false; - } - - i += 3; - } else if ((buf[i] & 0xf8) === 0xf0) { - // 11110xxx 10xxxxxx 10xxxxxx 10xxxxxx - if ( - i + 3 >= len || - (buf[i + 1] & 0xc0) !== 0x80 || - (buf[i + 2] & 0xc0) !== 0x80 || - (buf[i + 3] & 0xc0) !== 0x80 || - (buf[i] === 0xf0 && (buf[i + 1] & 0xf0) === 0x80) || // Overlong - (buf[i] === 0xf4 && buf[i + 1] > 0x8f) || - buf[i] > 0xf4 // > U+10FFFF - ) { - return false; - } - - i += 4; - } else { - return false; - } - } - - return true; -} - -module.exports = { - isValidStatusCode, - isValidUTF8: _isValidUTF8, - tokenChars -}; - -/* istanbul ignore else */ -if (!process.env.WS_NO_UTF_8_VALIDATE) { - try { - const isValidUTF8 = require('utf-8-validate'); - - module.exports.isValidUTF8 = function (buf) { - return buf.length < 150 ? _isValidUTF8(buf) : isValidUTF8(buf); - }; - } catch (e) { - // Continue regardless of the error. - } -} diff --git a/spaces/flax-community/roberta-hindi/apps/about.py b/spaces/flax-community/roberta-hindi/apps/about.py deleted file mode 100644 index 8ba7f54c9ec29663f896c6f4ec0318f111021257..0000000000000000000000000000000000000000 --- a/spaces/flax-community/roberta-hindi/apps/about.py +++ /dev/null @@ -1,22 +0,0 @@ -import json -import os - -import streamlit as st - - -def read_markdown(path, folder="./About/"): - with open(os.path.join(folder, path)) as f: - return f.read() - -def app(): - - st.write(read_markdown("intro.md")) - st.write(read_markdown("model_description.md")) - st.write(read_markdown("use.md")) - st.write(read_markdown("training_data.md")) - st.write(read_markdown("training_procedure.md")) - st.write(read_markdown("results.md")) - st.write(read_markdown("team.md")) - st.markdown(read_markdown("credits.md")) - st.markdown("![HF Flax/JAX](https://pbs.twimg.com/media/E443fPjX0AY1BsR.jpg:small)") - diff --git a/spaces/flowers-team/SocialAISchool/scripts/evaluate.py b/spaces/flowers-team/SocialAISchool/scripts/evaluate.py deleted file mode 100644 index 3f73731b1cc98fd6d7f0b61c5f89589ddeab0ac9..0000000000000000000000000000000000000000 --- a/spaces/flowers-team/SocialAISchool/scripts/evaluate.py +++ /dev/null @@ -1,358 +0,0 @@ -import argparse -import matplotlib.pyplot as plt -import json -import time -import numpy as np -import torch -from pathlib import Path - -from utils.babyai_utils.baby_agent import load_agent -from utils.storage import get_status -from utils.env import make_env -from utils.other import seed -from utils.storage import get_model_dir -from models import * - -from scipy import stats -print("Wrong script. 
This is from VIGIL") -exit() - -start = time.time() - -# Parse arguments - -parser = argparse.ArgumentParser() -parser.add_argument("--seed", type=int, default=0, - help="random seed (default: 0)") -parser.add_argument("--random-agent", action="store_true", default=False, - help="random actions") -parser.add_argument("--argmax", action="store_true", default=False, - help="select the action with highest probability (default: False)") -parser.add_argument("--episodes", type=int, default=1000, - help="number of episodes to test") -parser.add_argument("--test-p", type=float, default=0.05, - help="p value") -parser.add_argument("--n-seeds", type=int, default=16, - help="number of episodes to test") -parser.add_argument("--subsample-step", type=int, default=1, - help="subsample step") -parser.add_argument("--start-step", type=int, default=1, - help="at which step to start the curves") - -args = parser.parse_args() - -# Set seed for all randomness sources - -seed(args.seed) - -assert args.seed == 1 -assert not args.argmax -# assert args.num_frames == 28000000 -# assert args.episodes == 1000 - -test_p = args.test_p -n_seeds = args.n_seeds -subsample_step = args.subsample_step -start_step = args.start_step - -# Set device - -device = torch.device("cuda" if torch.cuda.is_available() else "cpu") -print(f"Device: {device}\n") - -# what to load -models_to_evaluate = [ - "25-03_RERUN_WizardGuide_lang64_mm_baby_short_rec_env_MiniGrid-TalkItOutNoLiar-8x8-v0_multi-modal-babyai11-agent_arch_original_endpool_res_custom-ppo-2_exploration-bonus-params_5_50", - "25-03_RERUN_WizardTwoGuides_lang64_mm_baby_short_rec_env_MiniGrid-TalkItOut-8x8-v0_multi-modal-babyai11-agent_arch_original_endpool_res_custom-ppo-2_exploration-bonus-params_5_50" -] -print("evaluating models: ", models_to_evaluate) - -# what to put in the legend -label_parser_dict = { - "RERUN_WizardGuide_lang64_no_explo": "Abl-MH-BabyAI", - "RERUN_WizardTwoGuides_lang64_no_explo": "MH-BabyAI", - - "RERUN_WizardGuide_lang64_mm_baby_short_rec_env": "Abl-MH-BabyAI-ExpBonus", - "RERUN_WizardTwoGuides_lang64_mm_baby_short_rec_env": "MH-BabyAI-ExpBonus", - - "RERUN_WizardGuide_lang64_deaf_no_explo": "Abl-Deaf-MH-BabyAI", - "RERUN_WizardTwoGuides_lang64_deaf_no_explo": "Deaf-MH-BabyAI", - - "RERUN_WizardGuide_lang64_bow": "Abl-MH-BabyAI-ExpBonus-BOW", - "RERUN_WizardTwoGuides_lang64_bow": "MH-BabyAI-ExpBonus-BOW", - - "RERUN_WizardGuide_lang64_no_mem": "Abl-MH-BabyAI-ExpBonus-no-mem", - "RERUN_WizardTwoGuides_lang64_no_mem": "MH-BabyAI-ExpBonus-no-mem", - - "RERUN_WizardGuide_lang64_bigru": "Abl-MH-BabyAI-ExpBonus-bigru", - "RERUN_WizardTwoGuides_lang64_bigru": "MH-BabyAI-ExpBonus-bigru", - - "RERUN_WizardGuide_lang64_attgru": "Abl-MH-BabyAI-ExpBonus-attgru", - "RERUN_WizardTwoGuides_lang64_attgru": "MH-BabyAI-ExpBonus-attgru", - - "RERUN_WizardGuide_lang64_curr_dial": "Abl-MH-BabyAI-ExpBonus-current-dialogue", - "RERUN_WizardTwoGuides_lang64_curr_dial": "MH-BabyAI-ExpBonus-current-dialogue", - - "RERUN_WizardTwoGuides_lang64_mm_baby_short_rec_100M": "MH-BabyAI-ExpBonus-100M" -} - -# how do to stat tests -compare = { - "MH-BabyAI-ExpBonus": "Abl-MH-BabyAI-ExpBonus", -} - -COLORS = ["red", "blue", "green", "black", "purpule", "brown", "orange", "gray"] -label_color_dict = {l: c for l, c in zip(label_parser_dict.values(), COLORS)} - - -test_set_check_path = Path("test_set_check_{}_nep_{}.json".format(args.seed, args.episodes)) - -def calc_perf_for_seed(i, model_name, num_frames, seed, argmax, episodes, random_agent=False): - print("seed {}".format(i)) - model 
= Path(model_name) / str(i) - model_dir = get_model_dir(model) - - if test_set_check_path.exists(): - with open(test_set_check_path, "r") as f: - check_loaded = json.load(f) - print("check loaded") - else: - print("check not loaded") - check_loaded = None - - # Load environment - with open(model_dir+"/config.json") as f: - conf = json.load(f) - - env_name = conf["env"] - - env = make_env(env_name, seed) - print("Environment loaded\n") - - # load agent - agent = load_agent(env, model_dir, argmax, num_frames) - status = get_status(model_dir, num_frames) - assert status["num_frames"] == num_frames - print("Agent loaded\n") - - check = {} - - seed_rewards = [] - for episode in range(episodes): - print("[{}/{}]: ".format(episode, episodes), end="", flush=True) - - obs = env.reset() - - # check envs are the same during seeds - if episode in check: - assert check[episode] == int(obs['image'].sum()) - else: - check[episode] = int(obs['image'].sum()) - - if check_loaded is not None: - assert check[episode] == int(obs['image'].sum()) - - while True: - if random_agent: - action = agent.get_random_action(obs) - else: - action = agent.get_action(obs) - - obs, reward, done, _ = env.step(action) - print(".", end="", flush=True) - agent.analyze_feedback(reward, done) - - if done: - seed_rewards.append(reward) - break - - print() - - seed_rewards = np.array(seed_rewards) - seed_success_rates = seed_rewards > 0 - - if not test_set_check_path.exists(): - with open(test_set_check_path, "w") as f: - json.dump(check, f) - print("check saved") - - print("seed success rate:", seed_success_rates.mean()) - print("seed reward:", seed_rewards.mean()) - - return seed_rewards.mean(), seed_success_rates.mean() - - - -def get_available_steps(model): - model_dir = Path(get_model_dir(model)) - per_seed_available_steps = {} - for seed_dir in model_dir.glob("*"): - per_seed_available_steps[seed_dir] = sorted([ - int(str(p.with_suffix("")).split("status_")[-1]) - for p in seed_dir.glob("status_*") - ]) - - num_steps = min([len(steps) for steps in per_seed_available_steps.values()]) - - steps = list(per_seed_available_steps.values())[0][:num_steps] - - for available_steps in per_seed_available_steps.values(): - s_steps = available_steps[:num_steps] - assert steps == s_steps - - return steps - -def plot_with_shade(subplot_nb, ax, x, y, err, color, shade_color, label, - legend=False, leg_size=30, leg_loc='best', title=None, - ylim=[0, 100], xlim=[0, 40], leg_args={}, leg_linewidth=8.0, linewidth=7.0, ticksize=30, - zorder=None, xlabel='perf', ylabel='env steps', smooth_factor=1000): - # plt.rcParams.update({'font.size': 15}) - ax.locator_params(axis='x', nbins=6) - ax.locator_params(axis='y', nbins=5) - ax.tick_params(axis='both', which='major', labelsize=ticksize) - - # smoothing - def smooth(x_, n=50): - return np.array([x_[max(i - n, 0):i + 1].mean() for i in range(len(x_))]) - - if smooth_factor > 0: - y = smooth(y, n=smooth_factor) - err = smooth(err, n=smooth_factor) - - ax.plot(x, y, color=color, label=label, linewidth=linewidth, zorder=zorder) - ax.fill_between(x, y - err, y + err, color=shade_color, alpha=0.2) - if legend: - leg = ax.legend(loc=leg_loc, fontsize=leg_size, **leg_args) # 34 - for legobj in leg.legendHandles: - legobj.set_linewidth(leg_linewidth) - ax.set_xlabel(xlabel, fontsize=30) - if subplot_nb == 0: - ax.set_ylabel(ylabel, fontsize=30) - ax.set_xlim(xmin=xlim[0], xmax=xlim[1]) - ax.set_ylim(bottom=ylim[0], top=ylim[1]) - if title: - ax.set_title(title, fontsize=22) - - -def label_parser(label, 
label_parser_dict): - if sum([1 for k, v in label_parser_dict.items() if k in label]) != 1: - print("ERROR") - print(label) - exit() - - for k, v in label_parser_dict.items(): - if k in label: return v - - return label - - -f, ax = plt.subplots(1, 1, figsize=(10.0, 6.0)) -ax = [ax] - -performances = {} -per_seed_performances = {} -stds = {} - - -label_parser_dict_reverse = {v: k for k, v in label_parser_dict.items()} -assert len(label_parser_dict_reverse) == len(label_parser_dict) - -label_to_model = {} -# evaluate and draw curves -for model in models_to_evaluate: - label = label_parser(model, label_parser_dict) - label_to_model[label] = model - - color = label_color_dict[label] - performances[label] = [] - per_seed_performances[label] = [] - stds[label] = [] - - steps = get_available_steps(model) - steps = steps[::subsample_step] - steps = [s for s in steps if s > start_step] - - print("steps:", steps) - - for step in steps: - results = [] - for s in range(n_seeds): - results.append(calc_perf_for_seed( - s, - model_name=model, - num_frames=step, - seed=args.seed, - argmax=args.argmax, - episodes=args.episodes, - )) - - rewards, success_rates = zip(*results) - rewards = np.array(rewards) - success_rates = np.array(success_rates) - per_seed_performances[label].append(success_rates) - performances[label].append(success_rates.mean()) - stds[label].append(success_rates.std()) - - means = np.array(performances[label]) - err = np.array(stds[label]) - label = label_parser(str(model), label_parser_dict) - max_steps = np.max(steps) - min_steps = np.min(steps) - min_y = 0.0 - max_y = 1.0 - ylabel = "performance" - smooth_factor = 0 - - plot_with_shade(0, ax[0], steps, means, err, color, color, label, - legend=True, xlim=[min_steps, max_steps], ylim=[min_y, max_y], - leg_size=20, xlabel="Env steps (millions)", ylabel=ylabel, linewidth=5.0, smooth_factor=smooth_factor) - -assert len(label_to_model) == len(models_to_evaluate) - - -def get_compatible_steps(model1, model2, subsample_step): - steps_1 = get_available_steps(model1)[::subsample_step] - steps_2 = get_available_steps(model2)[::subsample_step] - - min_steps = min(len(steps_1), len(steps_2)) - steps_1 = steps_1[:min_steps] - steps_2 = steps_2[:min_steps] - assert steps_1 == steps_2 - - return steps_1 - - -# stat tests -for k, v in compare.items(): - dist_1_steps = per_seed_performances[k] - dist_2_steps = per_seed_performances[v] - - model_k = label_to_model[k] - model_v = label_to_model[v] - steps = get_compatible_steps(model_k, model_v, subsample_step) - steps = [s for s in steps if s > start_step] - - for step, dist_1, dist_2 in zip(steps, dist_1_steps, dist_2_steps): - assert len(dist_1) == n_seeds - assert len(dist_2) == n_seeds - - p = stats.ttest_ind( - dist_1, - dist_2, - equal_var=False - ).pvalue - - if np.isnan(p): - from IPython import embed; embed() - - if p < test_p: - plt.scatter(step, 0.8, color=label_color_dict[k], s=50, marker="x") - - print("{} (m:{}) <---> {} (m:{}) = p: {} result: {}".format( - k, np.mean(dist_1), v, np.mean(dist_2), p, - "Distributions different(p={})".format(test_p) if p < test_p else "Distributions same(p={})".format(test_p) - )) - print() - -f.savefig('graphics/test.png') -f.savefig('graphics/test.svg') diff --git a/spaces/freddyaboulton/gradio-lite-sklearn/style.css b/spaces/freddyaboulton/gradio-lite-sklearn/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/freddyaboulton/gradio-lite-sklearn/style.css +++ /dev/null 
@@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/fuckyoudeki/AutoGPT/Dockerfile b/spaces/fuckyoudeki/AutoGPT/Dockerfile deleted file mode 100644 index 8396154998f32a50d55c199a674b638d5cf7bda2..0000000000000000000000000000000000000000 --- a/spaces/fuckyoudeki/AutoGPT/Dockerfile +++ /dev/null @@ -1,38 +0,0 @@ -# Use an official Python base image from the Docker Hub -FROM python:3.10-slim - -# Install git -RUN apt-get -y update -RUN apt-get -y install git chromium-driver - -# Install Xvfb and other dependencies for headless browser testing -RUN apt-get update \ - && apt-get install -y wget gnupg2 libgtk-3-0 libdbus-glib-1-2 dbus-x11 xvfb ca-certificates - -# Install Firefox / Chromium -RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - \ - && echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google-chrome.list \ - && apt-get update \ - && apt-get install -y chromium firefox-esr - -# Set environment variables -ENV PIP_NO_CACHE_DIR=yes \ - PYTHONUNBUFFERED=1 \ - PYTHONDONTWRITEBYTECODE=1 - -# Create a non-root user and set permissions -RUN useradd --create-home appuser -WORKDIR /home/appuser -RUN chown appuser:appuser /home/appuser -USER appuser - -# Copy the requirements.txt file and install the requirements -COPY --chown=appuser:appuser requirements.txt . -RUN sed -i '/Items below this point will not be included in the Docker Image/,$d' requirements.txt && \ - pip install --no-cache-dir --user -r requirements.txt - -# Copy the application files -COPY --chown=appuser:appuser autogpt/ ./autogpt - -# Set the entrypoint -ENTRYPOINT ["python", "-m", "autogpt"] diff --git a/spaces/gagan3012/IMD/app.py b/spaces/gagan3012/IMD/app.py deleted file mode 100644 index fa8552f086e2985dd3e7e7e082b4f384645a2492..0000000000000000000000000000000000000000 --- a/spaces/gagan3012/IMD/app.py +++ /dev/null @@ -1,37 +0,0 @@ -import os -from matplotlib import pyplot as plt -from MantraNet.mantranet import pre_trained_model, check_forgery -from BusterNet.BusterNetCore import create_BusterNet_testing_model -from BusterNet.BusterNetUtils import simple_cmfd_decoder, visualize_result -import streamlit as st -import cv2 - -os.environ["CUDA_VISIBLE_DEVICES"] = "-1" - -st.header("IMD Demo") -device = "cpu" # to change if you have a GPU with at least 12Go RAM (it will save you a lot of time !) - -def check_image_buster(img_path): - busterNetModel = create_BusterNet_testing_model( 'BusterNet/pretrained_busterNet.hd5' ) - rgb = cv2.imread(img_path) - pred = simple_cmfd_decoder( busterNetModel, rgb ) - figure = visualize_result( rgb, pred, pred, figsize=(20,20), title='BusterNet CMFD') - st.pyplot(figure) - -def check_image_mantra(img_path): - device = "cpu" # to change if you have a GPU with at least 12Go RAM (it will save you a lot of time !) 
- MantraNetmodel = pre_trained_model( - weight_path="MantraNet/MantraNetv4.pt", device=device - ) - fig = check_forgery(MantraNetmodel, img_path=img_path, device=device) - st.pyplot(fig) - - -uploaded_image = st.file_uploader("Upload your image", type=["jpg", "png","jpeg"]) -if uploaded_image is not None: - with open(os.path.join("images", uploaded_image.name), "wb") as f: - f.write(uploaded_image.read()) - st.write("BusterNet") - check_image_buster(os.path.join("images", uploaded_image.name)) - st.write("MantraNet") - check_image_mantra(os.path.join("images", uploaded_image.name)) diff --git a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/parrots_jit.py b/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/parrots_jit.py deleted file mode 100644 index 61873f6dbb9b10ed972c90aa8faa321e3cb3249e..0000000000000000000000000000000000000000 --- a/spaces/georgefen/Face-Landmark-ControlNet/annotator/uniformer/mmcv/utils/parrots_jit.py +++ /dev/null @@ -1,41 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import os - -from .parrots_wrapper import TORCH_VERSION - -parrots_jit_option = os.getenv('PARROTS_JIT_OPTION') - -if TORCH_VERSION == 'parrots' and parrots_jit_option == 'ON': - from parrots.jit import pat as jit -else: - - def jit(func=None, - check_input=None, - full_shape=True, - derivate=False, - coderize=False, - optimize=False): - - def wrapper(func): - - def wrapper_inner(*args, **kargs): - return func(*args, **kargs) - - return wrapper_inner - - if func is None: - return wrapper - else: - return func - - -if TORCH_VERSION == 'parrots': - from parrots.utils.tester import skip_no_elena -else: - - def skip_no_elena(func): - - def wrapper(*args, **kargs): - return func(*args, **kargs) - - return wrapper diff --git a/spaces/gotiQspiryo/whisper-ui/examples/Comsol Multiphysics 3.5a License File.rar.md b/spaces/gotiQspiryo/whisper-ui/examples/Comsol Multiphysics 3.5a License File.rar.md deleted file mode 100644 index 04ce7df9f2aed7c01d7993d1fc6757a4bba9e912..0000000000000000000000000000000000000000 --- a/spaces/gotiQspiryo/whisper-ui/examples/Comsol Multiphysics 3.5a License File.rar.md +++ /dev/null @@ -1,105 +0,0 @@ -<br /> -<h1>Comsol Multiphysics 3.5a License File.rar: What You Need to Know</h1> - -<p>If you are looking for a powerful and versatile software for modeling and simulating various physical phenomena, you might have heard of Comsol Multiphysics. This software allows you to create and solve multiphysics problems using a graphical user interface or a programming language. You can also couple different physics domains, such as fluid dynamics, heat transfer, electromagnetics, acoustics, and more.</p> -<h2>comsol multiphysics 3.5a license file.rar</h2><br /><p><b><b>Download File</b> ––– <a href="https://urlgoal.com/2uyN1N">https://urlgoal.com/2uyN1N</a></b></p><br /><br /> - -<p>However, to use Comsol Multiphysics, you need a valid license file that matches your version of the software and your hardware configuration. Otherwise, you will not be able to run the software or access its features. This is where Comsol Multiphysics 3.5a License File.rar comes in handy.</p> - -<h2>What is Comsol Multiphysics 3.5a License File.rar?</h2> - -<p>Comsol Multiphysics 3.5a License File.rar is a compressed file that contains a license file for Comsol Multiphysics 3.5a, which is an older version of the software released in 2008. 
This version of Comsol Multiphysics has some advantages over newer versions, such as compatibility with older operating systems and hardware, lower system requirements, and faster performance.</p> - -<p>However, finding a legitimate license file for Comsol Multiphysics 3.5a can be challenging, as the official support for this version has ended long ago. That is why some users resort to downloading Comsol Multiphysics 3.5a License File.rar from various online sources, hoping to get a working license file that can activate their software.</p> - -<h2>Is Comsol Multiphysics 3.5a License File.rar Safe and Legal?</h2> - -<p>The answer to this question depends on where you get Comsol Multiphysics 3.5a License File.rar from and what it contains. Some sources may provide genuine license files that were obtained legally from the original owners or distributors of the software. These license files may work for your software and hardware configuration, but they may also violate the terms and conditions of the software license agreement.</p> - -<p>Other sources may provide fake or modified license files that were created by hackers or crackers to bypass the software protection mechanisms. These license files may not work for your software and hardware configuration, or they may cause errors or malfunctions in the software. Moreover, they may contain viruses, malware, spyware, or other harmful programs that can damage your computer or compromise your personal information.</p> -<p></p> - -<p>Therefore, downloading Comsol Multiphysics 3.5a License File.rar from unknown or untrusted sources is not safe or legal. You may risk infecting your computer with malicious software, violating the software license agreement, or facing legal consequences for using pirated software.</p> - -<h2>What are the Alternatives to Comsol Multiphysics 3.5a License File.rar?</h2> - -<p>If you want to use Comsol Multiphysics 3.5a legally and safely, you have two main options:</p> - -<ul> -<li>Buy a legitimate license file from the official website of Comsol or an authorized reseller. This way, you will get a valid license file that matches your software and hardware configuration, and you will also get access to technical support and updates.</li> -<li>Upgrade to a newer version of Comsol Multiphysics that is compatible with your operating system and hardware. This way, you will get access to new features and improvements in the software, as well as technical support and updates.</li> -</ul> - -<p>Both options require you to pay a certain amount of money, but they are worth it if you want to use Comsol Multiphysics without any problems or risks.</p> - -<h2>Conclusion</h2> - -<p>Comsol Multiphysics 3.5a License File.rar is a compressed file that contains a license file for Comsol Multiphysics 3.5a, an older version of the software for modeling and simulating multiphysics problems. 
However, downloading this file from unknown or untrusted sources is not safe or legal, as it may contain fake or modified license files that can harm your computer or violate the software license agreement.</p> - -<p>If you want to use Comsol Multiphysics 3.5a legally and safely, you should either buy a legitimate license file from the official website of Comsol or an authorized reseller, or upgrade to a newer version of Comsol Multiphysics that is compatible with your operating system and hardware.</p> -<h2>How to Download Comsol Multiphysics 3.5a License File.rar?</h2> - -<p>If you have decided to download Comsol Multiphysics 3.5a License File.rar from an online source, you need to be careful and follow some steps to ensure a safe and successful download. Here are some tips to help you:</p> - -<ul> -<li>Choose a reliable and reputable source that offers Comsol Multiphysics 3.5a License File.rar for download. You can check the reviews, ratings, comments, and feedback of other users who have downloaded the file from the same source.</li> -<li>Scan the file with a trusted antivirus or anti-malware program before opening or extracting it. This will help you detect and remove any potential threats that may be hidden in the file.</li> -<li>Backup your important data and files before installing or running Comsol Multiphysics 3.5a with the license file. This will help you restore your system in case something goes wrong or the software causes any problems.</li> -<li>Follow the instructions provided by the source or the license file on how to install or activate Comsol Multiphysics 3.5a with the license file. Make sure you enter the correct information and settings when prompted.</li> -</ul> - -<p>By following these steps, you can download Comsol Multiphysics 3.5a License File.rar safely and successfully.</p> - -<h2>What are the Benefits of Using Comsol Multiphysics 3.5a?</h2> - -<p>Comsol Multiphysics 3.5a is an older version of the software, but it still has many benefits that make it worth using for some users. Here are some of them:</p> - -<ul> -<li>Comsol Multiphysics 3.5a is compatible with older operating systems and hardware that may not support newer versions of the software. This means you can use Comsol Multiphysics 3.5a on your existing computer without having to upgrade or change anything.</li> -<li>Comsol Multiphysics 3.5a has lower system requirements than newer versions of the software. This means you can run Comsol Multiphysics 3.5a faster and smoother on your computer without experiencing any lag or crashes.</li> -<li>Comsol Multiphysics 3.5a has fewer bugs and errors than newer versions of the software. This means you can use Comsol Multiphysics 3.5a without encountering any glitches or issues that may affect your modeling or simulation results.</li> -<li>Comsol Multiphysics 3.5a has all the essential features and functions that you need for modeling and simulating multiphysics problems. You can still create and solve complex and realistic problems using Comsol Multiphysics 3.5a without missing out on anything important.</li> -</ul> - -<p>By using Comsol Multiphysics 3.5a, you can enjoy these benefits and more.</p> -<h2>How to Use Comsol Multiphysics 3.5a?</h2> - -<p>Once you have installed and activated Comsol Multiphysics 3.5a with the license file, you can start using it to model and simulate multiphysics problems. Here are some steps to help you:</p> - -<ul> -<li>Launch Comsol Multiphysics 3.5a from your desktop or start menu. 
You will see a graphical user interface that allows you to create and manage your projects.</li> -<li>Create a new model or open an existing one. You can choose from various predefined templates or start from scratch.</li> -<li>Select the physics domains that you want to include in your model. You can choose from a wide range of physics interfaces, such as fluid dynamics, heat transfer, electromagnetics, acoustics, and more.</li> -<li>Define the geometry of your model using the built-in tools or importing external files. You can also modify the geometry using parameters, variables, or functions.</li> -<li>Specify the material properties, boundary conditions, initial conditions, and sources for your model. You can use the predefined materials or create your own.</li> -<li>Mesh your model using the automatic or manual meshing options. You can also refine or adapt the mesh to improve the accuracy of your results.</li> -<li>Solve your model using the built-in solvers or external solvers. You can also monitor the progress and convergence of your solution.</li> -<li>Visualize and analyze your results using the postprocessing tools. You can plot various quantities, such as fields, contours, vectors, streamlines, surfaces, and more.</li> -<li>Export your results to various formats, such as images, animations, tables, reports, or files.</li> -</ul> - -<p>By following these steps, you can use Comsol Multiphysics 3.5a to model and simulate multiphysics problems.</p> - -<h2>What are the Limitations of Comsol Multiphysics 3.5a?</h2> - -<p>Comsol Multiphysics 3.5a is an older version of the software, but it still has some limitations that you need to be aware of before using it. Here are some of them:</p> - -<ul> -<li>Comsol Multiphysics 3.5a is not compatible with newer operating systems and hardware that may support newer versions of the software. This means you may not be able to use Comsol Multiphysics 3.5a on your new computer or device.</li> -<li>Comsol Multiphysics 3.5a has higher system requirements than newer versions of the software. This means you may need a more powerful computer to run Comsol Multiphysics 3.5a smoothly and efficiently.</li> -<li>Comsol Multiphysics 3.5a has more bugs and errors than newer versions of the software. This means you may encounter some glitches or issues that may affect your modeling or simulation results.</li> -<li>Comsol Multiphysics 3.5a has fewer features and functions than newer versions of the software. This means you may miss out on some new and improved capabilities that are available in newer versions of Comsol Multiphysics.</li> -</ul> - -<p>By knowing these limitations, you can decide whether Comsol Multiphysics 3.5a is suitable for your needs or not.</p> -<h2>Conclusion</h2> - -<p>Comsol Multiphysics 3.5a License File.rar is a compressed file that contains a license file for Comsol Multiphysics 3.5a, an older version of the software for modeling and simulating multiphysics problems. 
However, downloading this file from unknown or untrusted sources is not safe or legal, as it may contain fake or modified license files that can harm your computer or violate the software license agreement.</p> - -<p>If you want to use Comsol Multiphysics 3.5a legally and safely, you should either buy a legitimate license file from the official website of Comsol or an authorized reseller, or upgrade to a newer version of Comsol Multiphysics that is compatible with your operating system and hardware.</p> - -<p>Comsol Multiphysics 3.5a has some benefits and limitations that you need to consider before using it. It is compatible with older operating systems and hardware, has lower system requirements, has fewer bugs and errors, and has all the essential features and functions for modeling and simulating multiphysics problems. However, it is not compatible with newer operating systems and hardware, has higher system requirements, has more bugs and errors, and has fewer features and functions than newer versions of Comsol Multiphysics.</p> - -<p>By following the tips and steps provided in this article, you can download, install, activate, use, and enjoy Comsol Multiphysics 3.5a without any problems or risks.</p> 3cee63e6c2<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/gradio/chatbot_streaming/README.md b/spaces/gradio/chatbot_streaming/README.md deleted file mode 100644 index abdea90b4235479c7f7933e68cd84909228c8c53..0000000000000000000000000000000000000000 --- a/spaces/gradio/chatbot_streaming/README.md +++ /dev/null @@ -1,12 +0,0 @@ - ---- -title: chatbot_streaming -emoji: 🔥 -colorFrom: indigo -colorTo: indigo -sdk: gradio -sdk_version: 4.1.2 -app_file: run.py -pinned: false -hf_oauth: true ---- diff --git a/spaces/gstaff/test_space/monstermaker.css b/spaces/gstaff/test_space/monstermaker.css deleted file mode 100644 index c4406d16d2070b0da19f00881fc7a9f622bea8a2..0000000000000000000000000000000000000000 --- a/spaces/gstaff/test_space/monstermaker.css +++ /dev/null @@ -1,1493 +0,0 @@ -.alert > :last-child { - margin-bottom: 0; } - -.alert-default { - border: 4px solid #dee2e6; - border-radius: 1em; } - -.app { - display: grid; - min-height: 100vh; - min-width: 300px; - grid-template-rows: 1fr auto; - grid-template-columns: 100%; } - .app .app-container { - max-width: 1200px; - margin: 0 auto; } - -.app .app-body { - background: #666f7f; - margin-top: 6.25rem; - display: flex; - flex-direction: column; } - .app .app-body .app-container { - padding: 0.5rem 10px; - width: 100%; - flex-grow: 1; } - -.app .app-header { - position: fixed; - top: 0; - width: 100%; - z-index: 2; - box-shadow: 0 0.25rem 0.75rem 0 #505762; } - @media screen and (min-width: 801px) { - .app .app-header .app-header-navbar { - display: none; } } - .app .app-header .app-header-heading { - background: #22252a; } - .app .app-header .app-header-heading .app-container { - padding: 0.625rem 10px; - display: grid; - grid-template-columns: auto repeat(8, 2rem); - grid-column-gap: 0.25rem; - align-items: end; } - @media screen and (max-width: 800px) { - .app .app-header .app-header-heading .app-container { - grid-template-columns: auto repeat(2, 2rem); } - .app .app-header .app-header-heading .app-container .btn:not(.btn-patreon):not(.btn-menu) { - display: none; } } - @media screen and (min-width: 801px) { - .app .app-header .app-header-heading .app-container .btn-menu { - display: none; } } - .app .app-header .app-header-heading h1 { - text-transform: uppercase; - font-weight: bold; - display: flex; - 
flex-direction: row; - line-height: 1; - margin: 0; } - .app .app-header .app-header-heading h1 img { - height: 2.5rem; - width: 2.5rem; - border: 2px solid #666f7f; - padding: 2px; - box-sizing: border-box; - border-radius: 100%; - margin-right: 0.325rem; } - .app .app-header .app-header-heading h1 a { - display: flex; - flex-direction: column; } - .app .app-header .app-header-heading h1 a span:first-of-type { - font-size: 0.95rem; - color: #da3737; - letter-spacing: 1px; } - .app .app-header .app-header-heading h1 a span:last-of-type { - color: #FFFFFF; - font-size: 1.9rem; - margin-bottom: -0.35rem; } - .app .app-header .app-header-heading h1 a:hover { - text-decoration: none; } - .app .app-header .app-header-heading h1 a:hover span:last-of-type { - text-decoration: underline; - text-decoration-color: #da3737; } - .app .app-header .app-header-heading .btn { - height: 2rem; - border-radius: 100%; - box-sizing: border-box; - display: flex; - align-items: center; - justify-content: center; - font-size: 1rem; - background-color: #666f7f; - border-color: #666f7f; - line-height: 1; - padding: 0; } - .app .app-header .app-header-heading .btn .fas, - .app .app-header .app-header-heading .btn .fab { - background: none; - padding: 0; - margin: 0; } - .app .app-header .app-header-heading .btn:hover { - background: #da3737; - border-color: #da3737; } - .app .app-header .app-header-heading .btn.btn-patreon { - background-color: #e85b46; - border-color: #e85b46; } - .app .app-header .app-header-heading .btn.btn-patreon:hover { - color: #e85b46; - background-color: #FFFFFF; - border-color: #FFFFFF; } - .app .app-header .app-header-navbar { - background: #22252a; } - .app .app-header .app-header-navbar .navbar { - padding: 0; } - .app .app-header .app-header-navbar .navbar .nav-item { - padding-left: 0.625rem; - padding-right: 0.625rem; } - .app .app-header .app-header-navbar .navbar .nav-item + .nav-item { - border-top: 1px dotted #444a54; } - .app .app-header .app-header-navbar .navbar .nav-item a { - color: white; - font-weight: bold; } - .app .app-header .app-header-navbar .navbar .nav-item a:hover { - color: #da3737; } - .app .app-header .app-header-navbar .navbar .nav-item button { - padding: 0; - color: white; - font-weight: bold; - display: block; - padding: 0.5rem 0; - background: none; - border: none; - box-shadow: none; - width: 100%; - text-align: left; } - .app .app-header .app-header-navbar .navbar .nav-item button:hover { - color: #da3737; - cursor: pointer; } - .app .app-header .app-header-navbar .navbar .nav-item .fab, - .app .app-header .app-header-navbar .navbar .nav-item .fas { - width: 2.5rem; - text-align: center; - margin-right: 0.325rem; } - .app .app-header .app-header-navbar .navbar .nav-item:first-child { - border-top: 1px solid #444a54; } - .app .app-header .app-header-navbar .navbar .nav-item:last-child { - border-bottom: 1px solid #444a54; } - .app .app-header .app-header-navbar .navbar .nav-item .nav-link-detail { - color: #666f7f; } - -.app .app-header .app-header-navigation { - background: #2d3138; } - .app .app-header .app-header-navigation .app-container { - padding: 0 10px; } - .app .app-header .app-header-navigation .nav { - display: grid; - grid-template-columns: repeat(2, 1fr); } - .app .app-header .app-header-navigation .nav .nav-item { - text-align: center; - font-weight: bold; } - .app .app-header .app-header-navigation .nav .nav-item .nav-link { - color: #666f7f; - padding: 0.5rem 0.5rem 0.25rem; - border-radius: 0; - border-bottom: 0.25rem solid #22252a; } - 
.app .app-header .app-header-navigation .nav .nav-item .nav-link .badge { - background: rgba(102, 111, 127, 0.5); } - .app .app-header .app-header-navigation .nav .nav-item .nav-link:not(.active):hover { - border-bottom: 0.25rem solid rgba(218, 55, 55, 0.5); - color: rgba(255, 255, 255, 0.5); } - .app .app-header .app-header-navigation .nav .nav-item .nav-link:not(.active):hover .fas, - .app .app-header .app-header-navigation .nav .nav-item .nav-link:not(.active):hover .fab { - color: rgba(218, 55, 55, 0.5); } - @media screen and (max-width: 800px) { - .app .app-header .app-header-navigation .nav .nav-item .nav-link .nav-link-text, - .app .app-header .app-header-navigation .nav .nav-item .nav-link .badge { - display: none; } } - .app .app-header .app-header-navigation .nav .nav-item .active { - background: none; - border-radius: 0; - border-bottom: 0.25rem solid #da3737; - color: white; } - .app .app-header .app-header-navigation .nav .nav-item .active .badge { - background: #666f7f; } - .app .app-header .app-header-navigation .nav .nav-item .active .fas, - .app .app-header .app-header-navigation .nav .nav-item .active .fab { - color: #da3737; } - .app .app-header .app-header-navigation .nav .nav-item .fas, - .app .app-header .app-header-navigation .nav .nav-item .fab { - margin-right: 0.325rem; } - .app .app-header .app-header-navigation .nav .nav-item .badge { - border-radius: 1cm; - font-size: inherit; - padding: 0.1625rem 0.325rem; } - -.app .app-footer { - background: #565e6b; - text-align: center; } - .app .app-footer .app-container { - padding: 1.25rem 10px; - display: grid; - grid-template-columns: 1fr 1fr 2fr; - text-align: left; } - @media screen and (max-width: 800px) { - .app .app-footer .app-container { - grid-template-columns: 1fr; - grid-row-gap: 0.5rem; - text-align: center; } - .app .app-footer .app-container p { - text-align: center; } } - .app .app-footer a { - color: inherit; - transition: color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out; - text-decoration: none; - display: inline-block; } - .app .app-footer a:hover { - color: #da3737; } - .app .app-footer p { - text-align: right; - margin: 0; - color: #f1f2f3; - font-size: 0.9rem; } - .app .app-footer p a:not(:first-of-type) { - margin-left: 0.75rem; } - .app .app-footer p .fab { - font-size: 2rem; - color: #9ca3af; - transition: color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out; } - .app .app-footer p .fab:hover { - color: #da3737; } - .app .app-footer ul { - margin: 0; - padding: 0; - list-style: none; - font-size: 0.9rem; - color: #f1f2f3; } - .app .app-footer h5 { - color: #9ca3af; - font-size: 0.75rem; - text-transform: uppercase; - font-weight: bold; } - .app .app-footer .links > :not(:last-child) { - margin-bottom: 0.5rem; } - -html { - font-size: 16px; } - @media screen and (max-width: 400px) { - html { - font-size: 12px; } } - -body { - font-family: 'Roboto', sans-serif; - background: #666f7f; } - -a { - color: #da3737; } - a:hover { - color: #da3737; } - -.breadcrumbs { - background: #565e6b; - margin-top: 0.5rem; - color: #dee2e6; - font-size: 0.9em; } - @media screen and (max-width: 800px) { - .breadcrumbs { - display: none; } } - .breadcrumbs .app-container { - padding: 0.5rem 10px; - max-width: 1200px; - margin: 0 auto; } - .breadcrumbs .divider, - .breadcrumbs .fas { - margin: 0 0.25rem; - color: #dee2e6; } - .breadcrumbs a { - color: inherit; } - .breadcrumbs a:hover { - 
color: white; } - -.btn { - border-radius: 2rem; - padding: 0 0.75rem 0 0; } - .btn .fas, - .btn .fab { - background: #da3737; - padding: 0.325rem; - line-height: inherit; - border-radius: 2rem; - margin: 0 0.325rem 0 0; } - .btn .fas:before, - .btn .fab:before { - min-width: 1.5rem; - display: block; - text-align: center; } - .btn.icon-only .fas, - .btn.icon-only .fab { - margin-right: -0.75rem; } - .btn.btn-sm { - padding: 0 0.5rem 0 0; } - .btn.btn-sm .fas, - .btn.btn-sm .fab { - padding: 0; - margin: 0 0.25rem 0 0; } - .btn.btn-sm .fas::before, - .btn.btn-sm .fab::before { - min-width: 1.3rem; } - .btn.btn-sm.icon-only { - padding-right: 0; } - .btn.btn-sm.icon-only .fas, - .btn.btn-sm.icon-only .fab { - margin-right: 0; } - -.btn-primary { - color: #FFFFFF; - background-color: #2d3138; - border-color: #2d3138; - font-weight: bold; } - .btn-primary.icon-only:not(.icon-border) { - border-color: #da3737; } - -.btn-secondary { - color: #FFFFFF; - background-color: #666f7f; - border-color: #666f7f; - font-weight: bold; } - .btn-secondary.icon-only:not(.icon-border) { - border-color: #da3737; } - -.btn-primary, -.btn-secondary { - display: inline-flex; - align-items: center; - border-width: 2px; } - .btn-primary:hover, .btn-primary:not(:disabled):not(:disabled):active, .btn-primary.dropdown-toggle[aria-expanded="true"], - .btn-secondary:hover, - .btn-secondary:not(:disabled):not(:disabled):active, - .btn-secondary.dropdown-toggle[aria-expanded="true"] { - background-color: #da3737; - border-color: #da3737; } - .btn-primary:not(:disabled):not(:disabled):focus, .btn-primary:not(:disabled):not(:disabled):active:focus, .btn-primary.dropdown-toggle[aria-expanded="true"], - .btn-secondary:not(:disabled):not(:disabled):focus, - .btn-secondary:not(:disabled):not(:disabled):active:focus, - .btn-secondary.dropdown-toggle[aria-expanded="true"] { - box-shadow: 0 0 0 0.2rem rgba(218, 55, 55, 0.5); } - -.card { - border: 0; - background: none; } - .card .card-header { - background: #dee2e6; - border: 0; - color: #22252a; - font-size: larger; - font-weight: bold; - border-bottom: 4px solid #ccd1d6; - border-radius: 0.5rem 0.5rem 0 0; - font-size: 1.4rem; } - @media screen and (max-width: 400px) { - .card .card-header { - font-size: 1.35rem; } } - .card .card-header .card-icon { - background: #da3737; - padding: 0.325rem 0.75em 0.325rem 0.325em; - border-radius: 0 1cm 1cm 0; - line-height: inherit; - margin: -0.325rem 0 -0.325rem -1.25rem; - color: white; } - .card .card-header .card-icon::before { - min-width: 1.75rem; - text-align: center; - display: block; } - .card .card-header .divider { - color: #da3737; } - .card .card-header .subtitle { - font-weight: normal; - font-size: 0.8em; } - .card .card-body { - background: #FFFFFF; } - .card .card-footer { - background: #dee2e6; - border: 0; - border-radius: 0 0 0.5rem 0.5rem; } - -.dropdown { - display: inline-block; } - .dropdown .dropdown-item { - padding-left: 0.75rem; - padding-right: 0.75rem; } - .dropdown .dropdown-item .fas { - margin-right: 0.75rem; } - .dropdown .icon-only::after { - display: none; } - -.dropdown-menu { - border: 1px solid #2d3138; - border-radius: .5rem; - box-shadow: 0px 2px 5px 0px #505762bf; } - .dropdown-menu button:hover { - cursor: pointer; - background: rgba(218, 55, 55, 0.15); } - .dropdown-menu button:hover .fas, - .dropdown-menu button:hover .fab { - color: #da3737; } - .dropdown-menu .dropdown-item .fas { - min-width: 1.1rem; - text-align: center; } - .dropdown-menu .dropdown-item.active, - .dropdown-menu 
.dropdown-item:active { - background: #da3737; - color: #FFFFFF; } - .dropdown-menu .dropdown-item.active .fas, - .dropdown-menu .dropdown-item.active .fab, - .dropdown-menu .dropdown-item:active .fas, - .dropdown-menu .dropdown-item:active .fab { - color: inherit; } - -.error { - display: flex; - height: 100%; - justify-content: center; - align-items: center; - flex-direction: column; } - .error .box { - color: #9ca3af; - border-radius: 1rem; - text-align: center; - padding: 1.25rem; - background: #565e6b; } - .error .box > :last-child { - margin-bottom: 0; } - .error .fas { - font-size: 4rem; - margin-bottom: 1rem; } - -.laboratory > .card-body { - padding: 0; - display: grid; - grid-template-columns: 380px 1fr; } - @media screen and (max-width: 840px) { - .laboratory > .card-body { - grid-template-columns: 344px 1fr; } } - @media screen and (max-width: 800px) { - .laboratory > .card-body { - grid-template-columns: 1fr; - grid-template-rows: auto auto; } } -.laboratory #laboratory-import-file { - display: none; } -.laboratory .laboratory-blueprint { - background: #dee2e6; } - -.blueprint-form { - background: #dee2e6; } - .blueprint-form .btn-help { - color: #b1b7bd; - box-shadow: none; - margin-right: -0.25em; } - .blueprint-form .btn-help:hover { - color: #da3737; } - .blueprint-form .btn-help .fas { - background: none; } - .blueprint-form .card { - overflow: visible; } - .blueprint-form .card-body { - padding: 0; - overflow: visible; } - .blueprint-form .card-body .card-header { - border-bottom: 1px solid #ccd1d6; } - .blueprint-form form { - background: #FFFFFF; } - .blueprint-form form[data-method="quickstart"] .manual-only { - display: none; } - .blueprint-form form[data-method="manual"] .quickstart-only { - display: none; } - .blueprint-form form:not([data-rank="solo"]) .solo-only { - display: none; } - .blueprint-form .form-group { - margin: 0; - display: flex; } - .blueprint-form .form-group.hidden { - display: none; } - .blueprint-form .form-group:not(:last-child) { - border-bottom: 1px solid #dee2e6; } - .blueprint-form .form-group > label { - width: 7.5rem; - flex-shrink: 0; - margin: 0; - font-weight: bold; - padding: 0.325rem 0.75rem; - box-sizing: border-box; - display: flex; - text-align: right; - justify-content: flex-end; - border-right: 1px solid #dee2e6; - background: rgba(222, 226, 230, 0.5); - line-height: calc(2.375rem - 0.75rem); } - .blueprint-form .form-group input, - .blueprint-form .form-group select { - border: 0; - background: none; } - .blueprint-form .form-group input::placeholder { - color: rgba(102, 111, 127, 0.5); } - .blueprint-form .form-group select { - padding-left: 0.5rem; } - .blueprint-form .form-group textarea { - padding: .375rem .75rem; - border: 0; - width: 100%; - box-sizing: border-box; - resize: vertical; - color: #495057; - min-height: 2.375rem; - font-size: 0.8em; } - .blueprint-form .form-group > :last-child { - margin-bottom: 0; } - .blueprint-form .form-group.section-end { - border-bottom: 2px solid #dee2e6; } - .blueprint-form .form-group .flex-input { - width: 100%; - display: flex; } - .blueprint-form .form-group .flex-input span { - opacity: 0.5; - padding: 0 0.5em; - display: flex; - align-items: center; - color: #b2b7c9; } - .blueprint-form .form-radio-list { - width: 100%; - display: flex; - padding: .375rem .75rem; } - .blueprint-form .form-radio-list .form-check-input:checked ~ * { - background: #da3737; - color: #FFFFFF; - font-weight: bold; } - .blueprint-form .form-radio-list .form-check-input { - display: none; } - 
.blueprint-form .form-radio-list .form-check { - flex-grow: 1; - margin: 0; - flex-shrink: 0; - width: 50%; } - .blueprint-form .form-radio-list .form-check:first-child .form-check-label { - border-radius: 4px 0 0 4px; } - .blueprint-form .form-radio-list .form-check:last-child .form-check-label { - border-radius: 0 4px 4px 0; } - .blueprint-form .form-radio-list .form-check-label { - flex-grow: 1; - text-align: center; - background: #dee2e6; - text-transform: uppercase; - font-size: 0.75rem; - color: #666f7f; } - .blueprint-form .form-radio-list .form-check-label:hover { - cursor: pointer; } - .blueprint-form .repeatable-section .card-footer { - background: #eef0f2; - padding-left: 0.75rem; - padding-right: 0.75rem; } - .blueprint-form .repeatable-item { - border-bottom: 2px solid #dee2e6; - position: relative; } - .blueprint-form .repeatable-item .dropdown-options { - position: absolute; - top: 0.325rem; - right: 0.75rem; } - .blueprint-form .repeatable-item .dropdown-options .btn { - border-color: #da3737; } - .blueprint-form .repeatable-item .dropdown-toggle { - opacity: .1; } - .blueprint-form .repeatable-item .dropdown-toggle:disabled { - display: none; } - .blueprint-form .repeatable-item .dropdown-toggle:hover, .blueprint-form .repeatable-item .dropdown-toggle[aria-expanded="true"] { - opacity: 1; } - .blueprint-form .accordion .card { - border-radius: 0; } - .blueprint-form .accordion .card .card-header { - padding: 0; - border-radius: 0; - display: flex; - border-bottom: 1px solid #ccd1d6; - margin-bottom: 0; } - .blueprint-form .accordion .card .card-header button { - background: #dee2e6; - display: block; - text-align: left; - font-weight: bold; - width: 100%; - color: #2d3138; - text-decoration: none; - padding: 0; - transition: color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out; - border-radius: 0; - border: 0; - display: flex; - align-items: center; - padding: 0.1875rem 0; } - .blueprint-form .accordion .card .card-header button .title { - flex-grow: 1; - margin-left: 0.325rem; } - .blueprint-form .accordion .card .card-header button:hover, .blueprint-form .accordion .card .card-header button[aria-expanded="true"] { - background: #da3737; - color: #FFFFFF; } - .blueprint-form .accordion .card .card-header button:hover .badge, .blueprint-form .accordion .card .card-header button[aria-expanded="true"] .badge { - background: #FFFFFF; - color: #da3737; } - .blueprint-form .accordion .card .card-header button .fa-chevron-right::before { - position: relative; - left: 1px; } - .blueprint-form .accordion .card .card-header button[aria-expanded="true"] .fa-chevron-right::before { - transform: rotate(90deg); } - .blueprint-form .accordion .card .card-header .badge { - background-color: #b1b7bd; - border-radius: 1cm; - padding: 0 0.325rem; - transition: color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out; - color: white; - float: right; - margin-right: 0.75rem; - line-height: 1.5; } - .blueprint-form .accordion .card .card-header .fas { - transition: color .15s ease-in-out, background-color .15s ease-in-out, border-color .15s ease-in-out, box-shadow .15s ease-in-out; - color: white; - border-radius: 1cm; - margin: 0 0 0 0.325rem; - width: 1.5rem; - height: 1.5rem; - padding: 0; - display: flex; - justify-content: center; - align-items: center; } - .blueprint-form .accordion .card .card-header .fas::before { - transition: transform .15s ease-in-out; } - 
.blueprint-form .accordion .card .card-footer { - border-radius: 0; } - .blueprint-form .accordion .card .collapse.show { - border-bottom: 1px solid #ccd1d6; } - .blueprint-form > .card-footer { - padding-left: 0.75rem; - padding-right: 0.75rem; } - -#modal-blueprint-traits .modal-body:not([filter='any']) .trait.any, #modal-blueprint-traits .modal-body:not([filter='controller']) .trait.controller, #modal-blueprint-traits .modal-body:not([filter='defender']) .trait.defender, #modal-blueprint-traits .modal-body:not([filter='lurker']) .trait.lurker, #modal-blueprint-traits .modal-body:not([filter='striker']) .trait.striker, #modal-blueprint-traits .modal-body:not([filter='supporter']) .trait.supporter, #modal-blueprint-traits .modal-body:not([filter='sniper']) .trait.sniper, #modal-blueprint-traits .modal-body:not([filter='scout']) .trait.scout { - display: none; } -#modal-blueprint-traits .modal-body select { - margin-bottom: 1.25rem; } -#modal-blueprint-traits .modal-body ul { - list-style: none; - padding-left: 0; } - #modal-blueprint-traits .modal-body ul .title { - margin: 0; } - #modal-blueprint-traits .modal-body ul .description { - font-size: small; - margin: 0; } -#modal-blueprint-traits .modal-body .form-check:hover { - background: #eef0f2; - margin-left: -1.25rem; - margin-right: -1.25rem; - padding-left: 2.5rem; - padding-right: 1.25rem; } -#modal-blueprint-traits .modal-body .form-check-label { - cursor: pointer; - padding-top: 0.325rem; - padding-bottom: 0.325rem; - width: 100%; } -#modal-blueprint-traits .modal-body .form-check-input { - top: 0.325rem; } - #modal-blueprint-traits .modal-body .form-check-input:checked ~ label { - color: #da3737; } - -#modal-markdown .accordion-markdown-help table { - background: #dee2e659; - overflow: hidden; - border-radius: 0.5em; - border: none; } - #modal-markdown .accordion-markdown-help table td { - border: none; } -#modal-markdown .accordion-markdown-help .card:not(:first-child) { - margin-top: 1rem; } -#modal-markdown .accordion-markdown-help .collapse.show { - border-bottom: none; } -#modal-markdown .accordion-markdown-help .card-header { - border: none; - background: none; - margin-left: -0.25em; - margin-right: -1em; } - #modal-markdown .accordion-markdown-help .card-header button { - background: #da3737; - color: #FFFFFF; - font-weight: bold; - padding-left: 0.1875rem; - padding-right: 0.5em; - border-radius: 1em 0 0 1em; } - #modal-markdown .accordion-markdown-help .card-header button::before { - content: "\f054"; - transition: transform .15s ease-in-out; - font-family: "Font Awesome 5 Free"; - -webkit-font-smoothing: antialiased; - font-style: normal; - font-variant: normal; - text-rendering: auto; - margin-right: 0.5em; - transform: rotate(90deg); - background: #da3737; - color: white; - width: 1.5em; - border-radius: 1em; - text-align: center; } - #modal-markdown .accordion-markdown-help .card-header button.collapsed::before { - transform: rotate(0deg); } - #modal-markdown .accordion-markdown-help .card-header button.collapsed:not(:hover) { - background: #dee2e6; - color: inherit; } -#modal-markdown .accordion-markdown-help .card-body { - padding: 1rem 0 0; } - #modal-markdown .accordion-markdown-help .card-body > :last-child { - margin-bottom: 0; } - -.monster-preview { - background-color: #ccd1d6; - display: flex; - flex-direction: column; - justify-content: flex-start; - align-items: center; - padding: 1.25rem 10px; - background-image: linear-gradient(45deg, #ccd1d6 25%, #c6ccd1 25%, #c6ccd1 50%, #ccd1d6 50%, #ccd1d6 75%, 
#c6ccd1 75%, #c6ccd1 100%); - background-size: 10px 10px; } - .monster-preview .btn-png { - margin-top: 1.25rem; - box-shadow: 0px 5px 10px 0px #50576240; } - .monster-preview .btn-png .fas { - transition: color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out; } - .monster-preview .btn-png:not(:hover) { - background: #FFFFFF; - border-color: #FFFFFF; - color: #666f7f; } - .monster-preview .btn-png:not(:hover) .fas { - color: #FFFFFF; } - .monster-preview .btn-columns { - margin-top: 1.25rem; - box-shadow: 0px 5px 10px 0px #50576240; } - .monster-preview .btn-columns .fas { - transition: color .15s ease-in-out,background-color .15s ease-in-out,border-color .15s ease-in-out,box-shadow .15s ease-in-out; } - .monster-preview .btn-columns:not(:hover) { - background: #FFFFFF; - border-color: #FFFFFF; } - .monster-preview .btn-columns:not(:hover) .fas { - color: #da3737; - background: none; } - -.modal-backdrop.show { - opacity: 0.75; } - -.modal .modal-header { - background: #22252a; - color: white; - border-bottom: 0; - border-radius: 0; - padding: 0.75rem 1.25rem; - border-bottom: 4px solid #dee2e6; } - .modal .modal-header .card-icon { - background: #da3737; - padding: 0.325rem 0.75rem 0.325rem 0.325rem; - border-radius: 0 1cm 1cm 0; - line-height: inherit; - margin: -0.325rem 0 -0.325rem -1.25rem; } - .modal .modal-header .card-icon::before { - min-width: 1.75rem; - text-align: center; - display: block; } - .modal .modal-header .modal-title { - font-weight: bold; } - .modal .modal-header .close { - opacity: 1; - color: #2d3138; - text-shadow: none; - margin: -0.75rem -1.25rem -0.75rem auto; - padding: 0.9rem 1.25rem; } - .modal .modal-header .close span { - transition: color .15s ease-in-out, background-color .15s ease-in-out, border-color .15s ease-in-out, box-shadow .15s ease-in-out; } - .modal .modal-header .close:hover, .modal .modal-header .close:focus { - color: white; - outline: none; - opacity: 1; } - .modal .modal-header .close:hover span, .modal .modal-header .close:focus span { - background: #da3737; } - .modal .modal-header .close span { - background: #666f7f; - display: block; - border-radius: 1cm 0 0 1cm; - padding: 0 0.75rem 0 0.75rem; - margin-right: -1.25rem; } -.modal .modal-content { - border: 1px solid transparent; - background: none; - overflow: hidden; - border: 2px solid #da3737; - border-radius: 0.75rem; } -.modal .modal-body { - background: white; - padding: 1.25rem; } - .modal .modal-body > :last-child { - margin-bottom: 0; } -.modal .modal-footer { - background: #dee2e6; - border-top: 0; - padding: 0.75rem 1.25rem; } - -#modal-settings .settings { - border: 4px solid #dee2e6; - border-radius: 1rem; } -#modal-settings .setting .btn { - margin-left: 0.5rem; } -#modal-settings .setting .content { - display: flex; - align-items: center; - padding: 0.75rem 1.25rem; } -#modal-settings .setting .confirmation { - display: none; - color: #da3737; - padding: 0.75rem 1.25rem; } -#modal-settings .setting .confirmed { - display: none; - color: green; - padding: 0.75rem 1.25rem; } - #modal-settings .setting .confirmed .fa-check-circle { - font-size: 2.5em; } -#modal-settings .setting select.form-control { - width: auto; - margin-left: 1em; - cursor: pointer; } -#modal-settings .setting.confirm .content, -#modal-settings .setting.confirm .confirmed { - display: none; } -#modal-settings .setting.confirm .confirmation { - display: flex; - align-items: center; } -#modal-settings .setting.confirmed .content, 
-#modal-settings .setting.confirmed .confirmation { - display: none; } -#modal-settings .setting.confirmed .confirmed { - display: flex; - align-items: center; } -#modal-settings .setting + .setting { - border-top: 2px solid #dee2e6; } -#modal-settings .setting .description { - flex-grow: 1; } - #modal-settings .setting .description .title { - font-weight: bold; - margin: 0; - text-transform: uppercase; - font-size: 0.9rem; } - #modal-settings .setting .description > :last-child { - margin-bottom: 0; } - -#modal-gmbinder .modal-body { - white-space: pre-line; - font-family: monospace; - font-size: small; } - -.monster { - font-size: 0.875rem; - position: relative; - width: 28.6em; - box-shadow: 0px 5px 10px 0px #50576240; - display: flex; - flex-direction: column; } - .monster > :first-child { - margin-top: 0; } - .monster > :last-child { - margin-bottom: 0; } - .monster .monster-contents { - width: 100%; } - .monster .monster-contents .monster-contents-body { - padding: 1.42em; - background: white; } - .monster.quickstart:not(.rank-solo) .solo-only { - display: none; } - .monster.quickstart .monster-header .monster-quickstart { - font-size: 0.8em; - font-weight: bold; - text-align: right; - margin-left: 0.8em; - flex-shrink: 0; } - .monster .monster-image.inline + * { - margin-top: 1.42em; } - .monster .monster-image img { - width: 100%; - border-radius: 0.25em; } - .monster .monster-image.banner { - width: 100%; } - .monster .monster-image.banner img { - border-radius: 0; } - .monster .monster-core > :last-child { - margin-bottom: 0; } - .monster .monster-core + * { - margin-top: 0.57em; } - .monster .monster-header { - display: inline-flex; - width: 100%; - justify-content: space-between; - align-items: flex-end; } - .monster .monster-header h4 { - font-weight: bold; - color: #9a1515; - margin: 0; - line-height: 1; - font-size: 2em; - font-family: "Alegreya Sans SC", sans-serif; - margin-top: -0.1em; } - .monster .monster-header p { - margin: 0; } - .monster .monster-header .monster-description { - font-style: italic; - font-size: 0.8em; } - .monster .monster-header .monster-description span:first-of-type { - display: inline-block; } - .monster .monster-header .monster-description span:first-of-type::first-letter { - text-transform: uppercase; } - .monster .monster-header .monster-description span + span::before { - content: " "; } - .monster .monster-header .monster-description * + .alignment::before { - content: ", "; } - .monster .monster-ac, - .monster .monster-hp, - .monster .monster-speed, - .monster .monster-saves, - .monster .monster-skills, - .monster .monster-languages, - .monster .monster-senses, - .monster .monster-immunities, - .monster .monster-vulnerabilities, - .monster .monster-resistances, - .monster .monster-conditions, - .monster .monster-challenge { - display: flex; - flex-wrap: wrap; } - .monster h5 { - font-weight: bold; - color: #9a1515; - font-family: "Alegreya Sans SC", sans-serif; - display: inline-block; - width: 100%; - margin-bottom: 0; - font-size: 1.45em; - margin-top: 0.36em !important; } - .monster h5 + * { - margin-top: 0.52em; } - .monster hr { - margin: 0.285em 0 0; - width: 100%; - min-height: 1px; - height: 0.143em; - border: 0; - background: linear-gradient(to right, rgba(154, 21, 21, 0.75), rgba(154, 21, 21, 0)); - -webkit-column-break-inside: avoid; - page-break-inside: avoid; - break-inside: avoid; } - .monster hr:first-of-type, .monster hr:last-of-type { - height: 0.2143em; - background: linear-gradient(to right, #9a1515, rgba(154, 21, 21, 
0)); } - .monster hr + * { - margin-top: 0.285em; } - .monster hr:last-of-type + * { - margin-top: 0.57em; } - .monster .label { - font-weight: bold; } - .monster .monster-abilities { - display: inline-grid; - width: 100%; - grid-template-columns: repeat(6, 1fr); - text-align: center; - line-height: 1.4; } - .monster .monster-abilities .label { - color: #9a1515; - display: block; - text-transform: uppercase; } - .monster ul { - list-style: none; - padding: 0; - margin-bottom: 0; } - .monster ul .label { - color: #9a1515; } - .monster ul li > p { - margin: 0; } - .monster p { - margin-bottom: 0; } - .monster p + * { - margin-top: 0.57em; } - .monster .monster-trait + *, - .monster .monster-action + *, - .monster .monster-reaction + *, - .monster .monster-legendary-action + *, - .monster .monster-lair-action + * { - margin-top: 0.57em; } - .monster .monster-trait p + *, - .monster .monster-action p + *, - .monster .monster-reaction p + *, - .monster .monster-legendary-action p + *, - .monster .monster-lair-action p + * { - margin-top: 0.57em; } - .monster .monster-trait .name, - .monster .monster-action .name, - .monster .monster-reaction .name, - .monster .monster-legendary-action .name, - .monster .monster-lair-action .name { - font-weight: bold; - font-style: italic; } - .monster .monster-trait > :last-child, - .monster .monster-action > :last-child, - .monster .monster-reaction > :last-child, - .monster .monster-legendary-action > :last-child, - .monster .monster-lair-action > :last-child { - margin-bottom: 0; } - .monster .lair-initiative, - .monster .legendary-per-round, - .monster .paragon-actions { - font-size: 0.8em; - font-style: italic; } - .monster .monster-notes + * { - margin-top: 0.57em; } - .monster hr + .h5-border { - display: none; } - .monster .monster-footer { - font-style: italic; - opacity: 0.5; - font-size: 0.8em; } - .monster .line-break { - display: block; - height: 0.57em; } - .monster .h5-border { - min-height: 1px; - height: 0.0714em; - background: linear-gradient(to right, rgba(154, 21, 21, 0.75), rgba(154, 21, 21, 0)); - display: block; - margin-top: 0; - margin-bottom: 0.52em; } - .monster .h5-border.notes { - margin-top: 0.52em; - height: 0.2143em; } - @media screen and (max-width: 459px) { - .monster.columns-1 { - font-size: 0.75rem; } } - .monster.columns-2 { - width: 55em; - font-size: 0.85rem; } - @media screen and (max-width: 1200px) { - .monster.columns-2 { - font-size: 0.75rem; } } - @media screen and (max-width: 1095px) { - .monster.columns-2 { - font-size: 0.6rem; } } - @media screen and (max-width: 980px) { - .monster.columns-2 { - font-size: 0.5rem; } } - @media screen and (max-width: 880px) { - .monster.columns-2 { - font-size: 0.45rem; } } - @media screen and (max-width: 800px) { - .monster.columns-2 { - font-size: 0.7rem; } } - @media screen and (max-width: 680px) { - .monster.columns-2 { - font-size: 0.6rem; } } - @media screen and (max-width: 580px) { - .monster.columns-2 { - font-size: 0.5rem; } } - @media screen and (max-width: 490px) { - .monster.columns-2 { - font-size: 0.4rem; } } - @media screen and (max-width: 410px) { - .monster.columns-2 { - font-size: 0.35rem; } } - @media screen and (max-width: 400px) { - .monster.columns-2 { - font-size: 0.5rem; } } - @media screen and (max-width: 380px) { - .monster.columns-2 { - font-size: 0.45rem; } } - @media screen and (max-width: 350px) { - .monster.columns-2 { - font-size: 0.38rem; } } - .monster.columns-2 .monster-contents-body { - column-count: 2; - column-gap: 1.42em; } - .monster 
.h5-traits { - display: none; } - .monster .h5-traits + .h5-border { - display: none; } - .monster .h5-traits + .h5-border + .monster-trait { - margin-top: 0.57em; } - .monster.theme-5e .monster-contents-body { - background: url("https://i.imgur.com/wAhINL9.jpg"); } - .monster.theme-5e .monster-contents-header, - .monster.theme-5e .monster-contents-footer { - height: 0.4em; - min-height: 3px; - background: #bd9b4c; - border: 1px solid black; - border-left: 0; - border-right: 0; - width: 100%; - background-image: linear-gradient(45deg, #bd9b4c 25%, #b38720 25%, #b38720 50%, #bd9b4c 50%, #bd9b4c 75%, #b38720 75%, #b38720 100%); - background-size: 5px 5px; } - .monster.theme-transparent { - box-shadow: none; } - .monster.theme-transparent .monster-contents-body { - background: none; } - .monster.theme-giffyglyph { - box-shadow: none; } - .monster.theme-giffyglyph .h5-traits { - display: block; } - .monster.theme-giffyglyph ul { - padding: 0 !important; - margin-left: 1em; - margin-right: 1em; - border-radius: 0.5em; - flex-direction: column; - width: calc(100% - 2em); - border: 1px dotted rgba(88, 24, 13, 0.1); - overflow: hidden; } - .monster.theme-giffyglyph ul li { - display: flex; - width: 100%; } - .monster.theme-giffyglyph ul li:not(:last-child) { - border-bottom: 1px dotted rgba(88, 24, 13, 0.1); } - .monster.theme-giffyglyph ul li p { - width: 100%; - display: inline-flex; } - .monster.theme-giffyglyph ul li p .label { - font-family: "Alegreya Sans SC", sans-serif; - margin-right: 0.5em; - text-align: right; - display: inline-block; - width: 10.75em; - flex-shrink: 0; - font-size: 0.8em; - line-height: 1.9; - background: rgba(88, 24, 13, 0.05); - padding-right: 0.5em; } - .monster.theme-giffyglyph ul li p .label + span { - display: inline-block; - width: 100%; - padding-right: 0.5em; } - .monster.theme-giffyglyph hr { - display: none; } - .monster.theme-giffyglyph hr + * { - margin-top: 0.5em; } - .monster.theme-giffyglyph hr:first-of-type + * { - margin-top: 1em; } - .monster.theme-giffyglyph .quickstart-helpers { - margin-top: 0.5em; } - .monster.theme-giffyglyph .monster-contents-body { - padding: 0; - border-radius: 1em; - overflow: hidden; - box-shadow: 0px 5px 10px 0px #50576240; } - .monster.theme-giffyglyph .monster-contents-body .monster-header { - background: #58180d; - color: #FFFFFF; - padding: 0.75em 1em 0.5em; } - .monster.theme-giffyglyph .monster-contents-body .monster-header > * { - z-index: 1; - position: relative; } - .monster.theme-giffyglyph .monster-contents-body .monster-header h4 { - color: #FFFFFF; } - .monster.theme-giffyglyph .monster-contents-body > * { - padding-left: 1em; - padding-right: 1em; } - .monster.theme-giffyglyph .monster-abilities { - column-gap: 0.25em; - margin-top: 0.5em; - margin-bottom: 0; } - .monster.theme-giffyglyph .monster-stats + hr + *:not(.quickstart-helpers) { - margin-top: 0.5em !important; } - .monster.theme-giffyglyph .h5-border { - display: none; } - .monster.theme-giffyglyph .quickstart-helpers li { - display: flex; } - .monster.theme-giffyglyph h5 { - content: "Traits"; - background: rgba(88, 24, 13, 0.1); - display: block; - margin: 0 0.65em 0.3em; - font-weight: bold; - color: #9a1515; - font-family: "Alegreya Sans SC", sans-serif; - display: inline-block; - width: calc(100% - 1.3em); - padding: 0 0.5em !important; - border-radius: 0.3em; - line-height: 1.5; } - .monster.theme-giffyglyph h5 > * { - font-size: 1.45em; } - .monster.theme-giffyglyph .monster-ability { - background: rgba(88, 24, 13, 0.1); - padding: 0.4em 0; - 
border-radius: 0.5em; } - .monster.theme-giffyglyph .monster-footer { - padding-bottom: 1.4em; } - .monster.theme-giffyglyph .monster-image.banner { - background: #FFFFFF; - border-radius: 0.5em 0.5em 0 0; - overflow: hidden; - border-bottom: 0; - z-index: 1; } - .monster.theme-giffyglyph .monster-image.banner + .monster-contents .monster-contents-body { - border-top: 0; - border-top-left-radius: 0; - border-top-right-radius: 0; } - .monster.theme-giffyglyph.columns-2 .monster-contents-body { - padding-top: 1em; - padding-bottom: 1em; - column-gap: 0; } - .monster.theme-giffyglyph.columns-2 .monster-header { - border-radius: 0.5em; - margin-left: 1em; - width: calc(100% - 2em); } - .monster.theme-giffyglyph.columns-2 .monster-footer { - padding-bottom: 0; } - .monster.theme-giffyglyph.columns-2 hr:first-of-type + * { - margin-top: 0.5em; } - -.panel { - box-shadow: 0px 5px 10px 0px #505762; } - -.vault > .card-header { - padding-right: 0.75rem; } -.vault > .card-body { - padding: 0; } -.vault > .card-footer { - padding-left: 0.75rem; - padding-right: 0.75rem; } -.vault #vault-import-file, -.vault #vault-import-srd { - display: none; } -.vault .dataTables_wrapper { - display: flex; - flex-direction: column; } - .vault .dataTables_wrapper .top { - display: flex; - justify-content: space-between; - padding: 0.325rem 0.75rem; - background: #eef0f2; - border-radius: 0; - border-bottom: 0.25rem solid #da3737; } - .vault .dataTables_wrapper .top .dataTables_length { - min-width: 7rem; - margin-right: 0.75rem; } - .vault .dataTables_wrapper .top label { - margin: 0; - position: relative; - width: 100%; } - .vault .dataTables_wrapper .top label input { - padding-right: 1.6rem; } - .vault .dataTables_wrapper .top label input, - .vault .dataTables_wrapper .top label select { - border-radius: 1cm; - font-size: inherit; - line-height: inherit; - padding-top: 0; - padding-bottom: 0; - height: auto; - width: 100%; - box-sizing: border-box; - margin: 0; } - .vault .dataTables_wrapper .top .dataTables_filter { - flex-grow: 1; - margin-right: 0; - text-align: right; } - .vault .dataTables_wrapper .top .dataTables_filter label { - max-width: 16rem; } - .vault .dataTables_wrapper .top .dataTables_filter label::after { - font-family: "Font Awesome 5 Free"; - content: "\f002"; - -webkit-font-smoothing: antialiased; - display: inline-block; - font-style: normal; - font-variant: normal; - text-rendering: auto; - position: absolute; - right: 0.75rem; - font-weight: 900; - color: #666f7f; - line-height: inherit; } - .vault .dataTables_wrapper .content { - overflow: hidden; } - .vault .dataTables_wrapper table { - margin: 0 !important; - width: 100% !important; - table-layout: fixed; } - .vault .dataTables_wrapper table tbody tr:hover { - background: #eef0f2; - cursor: pointer; } - .vault .dataTables_wrapper table td, - .vault .dataTables_wrapper table th { - box-sizing: border-box; } - .vault .dataTables_wrapper table td { - padding: 0; } - .vault .dataTables_wrapper table th { - background: #eef0f2; - padding-top: 0.325rem; - padding-bottom: 0.325rem; } - .vault .dataTables_wrapper table th:not(:first-child) { - border-left: 1px solid #dee2e6; } - .vault .dataTables_wrapper table th::before, .vault .dataTables_wrapper table th::after { - bottom: 0.4em; - right: 0.5em; - font-family: "Font Awesome 5 Free"; - color: #da3737; } - @media screen and (max-width: 400px) { - .vault .dataTables_wrapper table th::before, .vault .dataTables_wrapper table th::after { - bottom: 0.25em; } } - .vault .dataTables_wrapper table 
th::before { - content: "\f884"; } - .vault .dataTables_wrapper table th::after { - content: "\f160"; } - .vault .dataTables_wrapper table .sorting::after, - .vault .dataTables_wrapper table .sorting_asc::after, - .vault .dataTables_wrapper table .sorting_desc::before { - opacity: 0; } - .vault .dataTables_wrapper table .sorting::before { - opacity: 0.1; - color: inherit; } - .vault .dataTables_wrapper table .col-id { - width: 4rem; - text-align: center; } - .vault .dataTables_wrapper table .col-ac { - width: 4rem; - text-align: center; } - .vault .dataTables_wrapper table .col-hp { - width: 5rem; - text-align: center; } - .vault .dataTables_wrapper table .col-level { - width: 4rem; - text-align: center; } - .vault .dataTables_wrapper table .col-role { - width: 6rem; } - .vault .dataTables_wrapper table .col-rank { - width: 5.75rem; } - .vault .dataTables_wrapper table .col-cr { - width: 4rem; - text-align: center; } - .vault .dataTables_wrapper table .col-description, - .vault .dataTables_wrapper table .col-role-rank, - .vault .dataTables_wrapper table .col-ac-hp { - overflow: hidden; - width: 0; } - .vault .dataTables_wrapper table .dataTables_empty { - padding: 0.5rem 0.75rem; } - @media screen and (max-width: 800px) { - .vault .dataTables_wrapper table .col-ac, - .vault .dataTables_wrapper table .col-hp, - .vault .dataTables_wrapper table .col-role, - .vault .dataTables_wrapper table .col-rank { - overflow: hidden !important; - width: 0; - padding: 0 !important; - border-right: none !important; } } - .vault .dataTables_wrapper table .table-card { - display: flex; - flex-direction: row; } - .vault .dataTables_wrapper table .table-card .col-id { - text-align: center; - flex-shrink: 0; - font-weight: bold; } - .vault .dataTables_wrapper table .table-card .col-name { - white-space: normal; - flex-grow: 1; } - .vault .dataTables_wrapper table .table-card .col-name .name { - margin: 0; - font-weight: bold; } - .vault .dataTables_wrapper table .table-card .col-name .role { - font-size: 0.7em; - font-style: italic; - opacity: .75; - margin: 0.25em 0 0; } - .vault .dataTables_wrapper table .table-card .col-name .role .fas { - opacity: .5; } - .vault .dataTables_wrapper table .table-card .col-name .role:not(.rank-solo) .players { - display: none; } - .vault .dataTables_wrapper table .table-card .col-name .details, - .vault .dataTables_wrapper table .table-card .col-name .stats { - font-size: 0.7em; - font-style: italic; - opacity: .75; - margin: 0; } - .vault .dataTables_wrapper table .table-card .col-hp::after { - font-family: "Font Awesome 5 Free"; - content: " \f004"; - color: #da3737; } - .vault .dataTables_wrapper table .table-card .col-id, - .vault .dataTables_wrapper table .table-card .col-name, - .vault .dataTables_wrapper table .table-card .col-ac, - .vault .dataTables_wrapper table .table-card .col-hp, - .vault .dataTables_wrapper table .table-card .col-level, - .vault .dataTables_wrapper table .table-card .col-role, - .vault .dataTables_wrapper table .table-card .col-rank, - .vault .dataTables_wrapper table .table-card .col-cr { - padding: 0.5rem 0.75rem; - overflow: hidden; - text-overflow: ellipsis; } - .vault .dataTables_wrapper table .table-card .col-id:not(:last-child), - .vault .dataTables_wrapper table .table-card .col-name:not(:last-child), - .vault .dataTables_wrapper table .table-card .col-ac:not(:last-child), - .vault .dataTables_wrapper table .table-card .col-hp:not(:last-child), - .vault .dataTables_wrapper table .table-card .col-level:not(:last-child), - .vault 
.dataTables_wrapper table .table-card .col-role:not(:last-child), - .vault .dataTables_wrapper table .table-card .col-rank:not(:last-child), - .vault .dataTables_wrapper table .table-card .col-cr:not(:last-child) { - border-right: 1px dotted #eef0f2; } - .vault .dataTables_wrapper table .table-card .col-description, - .vault .dataTables_wrapper table .table-card .col-role-rank, - .vault .dataTables_wrapper table .table-card .col-ac-hp { - overflow: hidden; - width: 0; } - .vault .dataTables_wrapper table .table-card .col-ac, - .vault .dataTables_wrapper table .table-card .col-hp, - .vault .dataTables_wrapper table .table-card .col-level, - .vault .dataTables_wrapper table .table-card .col-role, - .vault .dataTables_wrapper table .table-card .col-rank, - .vault .dataTables_wrapper table .table-card .col-cr { - flex-shrink: 0; } - .vault .dataTables_wrapper table .table-card .col-rank:not(.rank-solo) .players { - display: none; } - .vault .dataTables_wrapper table .table-card.method-manual .role { - display: none; } - @media screen and (min-width: 801px) { - .vault .dataTables_wrapper table .table-card .role { - display: none; } } - .vault .dataTables_wrapper .bottom { - display: flex; - justify-content: space-between; - align-items: center; - padding: 0.325rem 0.75rem; - background: #eef0f2; - border-radius: 0; - border-top: 0.25rem solid #da3737; } - .vault .dataTables_wrapper .bottom .dataTables_info { - padding: 0; - text-overflow: ellipsis; - overflow: hidden; - margin-right: 0.75rem; - white-space: normal; } - .vault .dataTables_wrapper .bottom .pagination { - margin: 0; } - @media screen and (max-width: 800px) { - .vault .dataTables_wrapper .bottom { - flex-direction: column; } - .vault .dataTables_wrapper .bottom .dataTables_info { - margin-bottom: 0.25rem; - margin-right: 0; } } - .vault .dataTables_wrapper .page-link { - color: #da3737; - background-color: white; - border: 1px solid white; - border: none; - border-radius: 1cm; - min-width: 2.2rem; - line-height: 1.2; - text-align: center; - font-weight: bold; } - .vault .dataTables_wrapper .page-item.active .page-link { - background-color: #da3737; - border-color: #da3737; - color: #FFFFFF; } - .vault .dataTables_wrapper .page-item.disabled .page-link { - color: rgba(0, 0, 0, 0.25); - background: #dee2e6; - border: 1px solid #dee2e6; } - .vault .dataTables_wrapper .page-item:not(:first-child) .page-link { - margin-left: 2px; } - .vault .dataTables_wrapper .page-item:first-child .page-link, .vault .dataTables_wrapper .page-item:last-child .page-link { - border-radius: 100%; } - -.vault.edit > .card-body { - padding: 0; - display: grid; - grid-template-columns: 1fr 380px; } - @media screen and (max-width: 840px) { - .vault.edit > .card-body { - grid-template-columns: 1fr 344px; } } - @media screen and (max-width: 800px) { - .vault.edit > .card-body { - grid-template-columns: 1fr; - grid-template-rows: auto auto; } } -.vault.edit > .card-footer { - display: flex; - justify-content: space-between; } -.vault.edit #monster-import-file { - display: none; } -.vault.edit .monster-blueprint { - background: #dee2e6; } - -/*# sourceMappingURL=monstermaker.1.0.3.3.css.map */ diff --git a/spaces/guetLzy/Real-ESRGAN-Demo/tests/test_utils.py b/spaces/guetLzy/Real-ESRGAN-Demo/tests/test_utils.py deleted file mode 100644 index 7919b74905495b4b6f4aa957a1f0b5d7a174c782..0000000000000000000000000000000000000000 --- a/spaces/guetLzy/Real-ESRGAN-Demo/tests/test_utils.py +++ /dev/null @@ -1,87 +0,0 @@ -import numpy as np -from basicsr.archs.rrdbnet_arch 
import RRDBNet - -from realesrgan.utils import RealESRGANer - - -def test_realesrganer(): - # initialize with default model - restorer = RealESRGANer( - scale=4, - model_path='experiments/pretrained_models/RealESRGAN_x4plus.pth', - model=None, - tile=10, - tile_pad=10, - pre_pad=2, - half=False) - assert isinstance(restorer.model, RRDBNet) - assert restorer.half is False - # initialize with user-defined model - model = RRDBNet(num_in_ch=3, num_out_ch=3, num_feat=64, num_block=6, num_grow_ch=32, scale=4) - restorer = RealESRGANer( - scale=4, - model_path='experiments/pretrained_models/RealESRGAN_x4plus_anime_6B.pth', - model=model, - tile=10, - tile_pad=10, - pre_pad=2, - half=True) - # test attribute - assert isinstance(restorer.model, RRDBNet) - assert restorer.half is True - - # ------------------ test pre_process ---------------- # - img = np.random.random((12, 12, 3)).astype(np.float32) - restorer.pre_process(img) - assert restorer.img.shape == (1, 3, 14, 14) - # with modcrop - restorer.scale = 1 - restorer.pre_process(img) - assert restorer.img.shape == (1, 3, 16, 16) - - # ------------------ test process ---------------- # - restorer.process() - assert restorer.output.shape == (1, 3, 64, 64) - - # ------------------ test post_process ---------------- # - restorer.mod_scale = 4 - output = restorer.post_process() - assert output.shape == (1, 3, 60, 60) - - # ------------------ test tile_process ---------------- # - restorer.scale = 4 - img = np.random.random((12, 12, 3)).astype(np.float32) - restorer.pre_process(img) - restorer.tile_process() - assert restorer.output.shape == (1, 3, 64, 64) - - # ------------------ test enhance ---------------- # - img = np.random.random((12, 12, 3)).astype(np.float32) - result = restorer.enhance(img, outscale=2) - assert result[0].shape == (24, 24, 3) - assert result[1] == 'RGB' - - # ------------------ test enhance with 16-bit image---------------- # - img = np.random.random((4, 4, 3)).astype(np.uint16) + 512 - result = restorer.enhance(img, outscale=2) - assert result[0].shape == (8, 8, 3) - assert result[1] == 'RGB' - - # ------------------ test enhance with gray image---------------- # - img = np.random.random((4, 4)).astype(np.float32) - result = restorer.enhance(img, outscale=2) - assert result[0].shape == (8, 8) - assert result[1] == 'L' - - # ------------------ test enhance with RGBA---------------- # - img = np.random.random((4, 4, 4)).astype(np.float32) - result = restorer.enhance(img, outscale=2) - assert result[0].shape == (8, 8, 4) - assert result[1] == 'RGBA' - - # ------------------ test enhance with RGBA, alpha_upsampler---------------- # - restorer.tile_size = 0 - img = np.random.random((4, 4, 4)).astype(np.float32) - result = restorer.enhance(img, outscale=2, alpha_upsampler=None) - assert result[0].shape == (8, 8, 4) - assert result[1] == 'RGBA' diff --git a/spaces/gwang-kim/DATID-3D/eg3d/viz/performance_widget.py b/spaces/gwang-kim/DATID-3D/eg3d/viz/performance_widget.py deleted file mode 100644 index deb208a741bf14dd57c70012fa23486902d31427..0000000000000000000000000000000000000000 --- a/spaces/gwang-kim/DATID-3D/eg3d/viz/performance_widget.py +++ /dev/null @@ -1,75 +0,0 @@ -# SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# SPDX-License-Identifier: LicenseRef-NvidiaProprietary -# -# NVIDIA CORPORATION, its affiliates and licensors retain all intellectual -# property and proprietary rights in and to this material, related -# documentation and any modifications thereto. 
Any use, reproduction, -# disclosure or distribution of this material and related documentation -# without an express license agreement from NVIDIA CORPORATION or -# its affiliates is strictly prohibited. - -import array -import numpy as np -import imgui -from gui_utils import imgui_utils - -#---------------------------------------------------------------------------- - -class PerformanceWidget: - def __init__(self, viz): - self.viz = viz - self.gui_times = [float('nan')] * 60 - self.render_times = [float('nan')] * 30 - self.fps_limit = 60 - self.use_vsync = False - self.is_async = False - self.force_fp32 = False - - @imgui_utils.scoped_by_object_id - def __call__(self, show=True): - viz = self.viz - self.gui_times = self.gui_times[1:] + [viz.frame_delta] - if 'render_time' in viz.result: - self.render_times = self.render_times[1:] + [viz.result.render_time] - del viz.result.render_time - - if show: - imgui.text('GUI') - imgui.same_line(viz.label_w) - with imgui_utils.item_width(viz.font_size * 8): - imgui.plot_lines('##gui_times', array.array('f', self.gui_times), scale_min=0) - imgui.same_line(viz.label_w + viz.font_size * 9) - t = [x for x in self.gui_times if x > 0] - t = np.mean(t) if len(t) > 0 else 0 - imgui.text(f'{t*1e3:.1f} ms' if t > 0 else 'N/A') - imgui.same_line(viz.label_w + viz.font_size * 14) - imgui.text(f'{1/t:.1f} FPS' if t > 0 else 'N/A') - imgui.same_line(viz.label_w + viz.font_size * 18 + viz.spacing * 3) - with imgui_utils.item_width(viz.font_size * 6): - _changed, self.fps_limit = imgui.input_int('FPS limit', self.fps_limit, flags=imgui.INPUT_TEXT_ENTER_RETURNS_TRUE) - self.fps_limit = min(max(self.fps_limit, 5), 1000) - imgui.same_line(imgui.get_content_region_max()[0] - 1 - viz.button_w * 2 - viz.spacing) - _clicked, self.use_vsync = imgui.checkbox('Vertical sync', self.use_vsync) - - if show: - imgui.text('Render') - imgui.same_line(viz.label_w) - with imgui_utils.item_width(viz.font_size * 8): - imgui.plot_lines('##render_times', array.array('f', self.render_times), scale_min=0) - imgui.same_line(viz.label_w + viz.font_size * 9) - t = [x for x in self.render_times if x > 0] - t = np.mean(t) if len(t) > 0 else 0 - imgui.text(f'{t*1e3:.1f} ms' if t > 0 else 'N/A') - imgui.same_line(viz.label_w + viz.font_size * 14) - imgui.text(f'{1/t:.1f} FPS' if t > 0 else 'N/A') - imgui.same_line(viz.label_w + viz.font_size * 18 + viz.spacing * 3) - _clicked, self.is_async = imgui.checkbox('Separate process', self.is_async) - imgui.same_line(imgui.get_content_region_max()[0] - 1 - viz.button_w * 2 - viz.spacing) - _clicked, self.force_fp32 = imgui.checkbox('Force FP32', self.force_fp32) - - viz.set_fps_limit(self.fps_limit) - viz.set_vsync(self.use_vsync) - viz.set_async(self.is_async) - viz.args.force_fp32 = self.force_fp32 - -#---------------------------------------------------------------------------- diff --git a/spaces/gylleus/icongen/dnnlib/util.py b/spaces/gylleus/icongen/dnnlib/util.py deleted file mode 100644 index 76725336d01e75e1c68daa88be47f4fde0bbc63b..0000000000000000000000000000000000000000 --- a/spaces/gylleus/icongen/dnnlib/util.py +++ /dev/null @@ -1,477 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. 
Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""Miscellaneous utility classes and functions.""" - -import ctypes -import fnmatch -import importlib -import inspect -import numpy as np -import os -import shutil -import sys -import types -import io -import pickle -import re -import requests -import html -import hashlib -import glob -import tempfile -import urllib -import urllib.request -import uuid - -from distutils.util import strtobool -from typing import Any, List, Tuple, Union - - -# Util classes -# ------------------------------------------------------------------------------------------ - - -class EasyDict(dict): - """Convenience class that behaves like a dict but allows access with the attribute syntax.""" - - def __getattr__(self, name: str) -> Any: - try: - return self[name] - except KeyError: - raise AttributeError(name) - - def __setattr__(self, name: str, value: Any) -> None: - self[name] = value - - def __delattr__(self, name: str) -> None: - del self[name] - - -class Logger(object): - """Redirect stderr to stdout, optionally print stdout to a file, and optionally force flushing on both stdout and the file.""" - - def __init__(self, file_name: str = None, file_mode: str = "w", should_flush: bool = True): - self.file = None - - if file_name is not None: - self.file = open(file_name, file_mode) - - self.should_flush = should_flush - self.stdout = sys.stdout - self.stderr = sys.stderr - - sys.stdout = self - sys.stderr = self - - def __enter__(self) -> "Logger": - return self - - def __exit__(self, exc_type: Any, exc_value: Any, traceback: Any) -> None: - self.close() - - def write(self, text: Union[str, bytes]) -> None: - """Write text to stdout (and a file) and optionally flush.""" - if isinstance(text, bytes): - text = text.decode() - if len(text) == 0: # workaround for a bug in VSCode debugger: sys.stdout.write(''); sys.stdout.flush() => crash - return - - if self.file is not None: - self.file.write(text) - - self.stdout.write(text) - - if self.should_flush: - self.flush() - - def flush(self) -> None: - """Flush written text to both stdout and a file, if open.""" - if self.file is not None: - self.file.flush() - - self.stdout.flush() - - def close(self) -> None: - """Flush, close possible files, and remove stdout/stderr mirroring.""" - self.flush() - - # if using multiple loggers, prevent closing in wrong order - if sys.stdout is self: - sys.stdout = self.stdout - if sys.stderr is self: - sys.stderr = self.stderr - - if self.file is not None: - self.file.close() - self.file = None - - -# Cache directories -# ------------------------------------------------------------------------------------------ - -_dnnlib_cache_dir = None - -def set_cache_dir(path: str) -> None: - global _dnnlib_cache_dir - _dnnlib_cache_dir = path - -def make_cache_dir_path(*paths: str) -> str: - if _dnnlib_cache_dir is not None: - return os.path.join(_dnnlib_cache_dir, *paths) - if 'DNNLIB_CACHE_DIR' in os.environ: - return os.path.join(os.environ['DNNLIB_CACHE_DIR'], *paths) - if 'HOME' in os.environ: - return os.path.join(os.environ['HOME'], '.cache', 'dnnlib', *paths) - if 'USERPROFILE' in os.environ: - return os.path.join(os.environ['USERPROFILE'], '.cache', 'dnnlib', *paths) - return os.path.join(tempfile.gettempdir(), '.cache', 'dnnlib', *paths) - -# Small util functions -# 
------------------------------------------------------------------------------------------ - - -def format_time(seconds: Union[int, float]) -> str: - """Convert the seconds to human readable string with days, hours, minutes and seconds.""" - s = int(np.rint(seconds)) - - if s < 60: - return "{0}s".format(s) - elif s < 60 * 60: - return "{0}m {1:02}s".format(s // 60, s % 60) - elif s < 24 * 60 * 60: - return "{0}h {1:02}m {2:02}s".format(s // (60 * 60), (s // 60) % 60, s % 60) - else: - return "{0}d {1:02}h {2:02}m".format(s // (24 * 60 * 60), (s // (60 * 60)) % 24, (s // 60) % 60) - - -def ask_yes_no(question: str) -> bool: - """Ask the user the question until the user inputs a valid answer.""" - while True: - try: - print("{0} [y/n]".format(question)) - return strtobool(input().lower()) - except ValueError: - pass - - -def tuple_product(t: Tuple) -> Any: - """Calculate the product of the tuple elements.""" - result = 1 - - for v in t: - result *= v - - return result - - -_str_to_ctype = { - "uint8": ctypes.c_ubyte, - "uint16": ctypes.c_uint16, - "uint32": ctypes.c_uint32, - "uint64": ctypes.c_uint64, - "int8": ctypes.c_byte, - "int16": ctypes.c_int16, - "int32": ctypes.c_int32, - "int64": ctypes.c_int64, - "float32": ctypes.c_float, - "float64": ctypes.c_double -} - - -def get_dtype_and_ctype(type_obj: Any) -> Tuple[np.dtype, Any]: - """Given a type name string (or an object having a __name__ attribute), return matching Numpy and ctypes types that have the same size in bytes.""" - type_str = None - - if isinstance(type_obj, str): - type_str = type_obj - elif hasattr(type_obj, "__name__"): - type_str = type_obj.__name__ - elif hasattr(type_obj, "name"): - type_str = type_obj.name - else: - raise RuntimeError("Cannot infer type name from input") - - assert type_str in _str_to_ctype.keys() - - my_dtype = np.dtype(type_str) - my_ctype = _str_to_ctype[type_str] - - assert my_dtype.itemsize == ctypes.sizeof(my_ctype) - - return my_dtype, my_ctype - - -def is_pickleable(obj: Any) -> bool: - try: - with io.BytesIO() as stream: - pickle.dump(obj, stream) - return True - except: - return False - - -# Functionality to import modules/objects by name, and call functions by name -# ------------------------------------------------------------------------------------------ - -def get_module_from_obj_name(obj_name: str) -> Tuple[types.ModuleType, str]: - """Searches for the underlying module behind the name to some python object. - Returns the module and the object name (original name with module part removed).""" - - # allow convenience shorthands, substitute them by full names - obj_name = re.sub("^np.", "numpy.", obj_name) - obj_name = re.sub("^tf.", "tensorflow.", obj_name) - - # list alternatives for (module_name, local_obj_name) - parts = obj_name.split(".") - name_pairs = [(".".join(parts[:i]), ".".join(parts[i:])) for i in range(len(parts), 0, -1)] - - # try each alternative in turn - for module_name, local_obj_name in name_pairs: - try: - module = importlib.import_module(module_name) # may raise ImportError - get_obj_from_module(module, local_obj_name) # may raise AttributeError - return module, local_obj_name - except: - pass - - # maybe some of the modules themselves contain errors? - for module_name, _local_obj_name in name_pairs: - try: - importlib.import_module(module_name) # may raise ImportError - except ImportError: - if not str(sys.exc_info()[1]).startswith("No module named '" + module_name + "'"): - raise - - # maybe the requested attribute is missing? 
- for module_name, local_obj_name in name_pairs: - try: - module = importlib.import_module(module_name) # may raise ImportError - get_obj_from_module(module, local_obj_name) # may raise AttributeError - except ImportError: - pass - - # we are out of luck, but we have no idea why - raise ImportError(obj_name) - - -def get_obj_from_module(module: types.ModuleType, obj_name: str) -> Any: - """Traverses the object name and returns the last (rightmost) python object.""" - if obj_name == '': - return module - obj = module - for part in obj_name.split("."): - obj = getattr(obj, part) - return obj - - -def get_obj_by_name(name: str) -> Any: - """Finds the python object with the given name.""" - module, obj_name = get_module_from_obj_name(name) - return get_obj_from_module(module, obj_name) - - -def call_func_by_name(*args, func_name: str = None, **kwargs) -> Any: - """Finds the python object with the given name and calls it as a function.""" - assert func_name is not None - func_obj = get_obj_by_name(func_name) - assert callable(func_obj) - return func_obj(*args, **kwargs) - - -def construct_class_by_name(*args, class_name: str = None, **kwargs) -> Any: - """Finds the python class with the given name and constructs it with the given arguments.""" - return call_func_by_name(*args, func_name=class_name, **kwargs) - - -def get_module_dir_by_obj_name(obj_name: str) -> str: - """Get the directory path of the module containing the given object name.""" - module, _ = get_module_from_obj_name(obj_name) - return os.path.dirname(inspect.getfile(module)) - - -def is_top_level_function(obj: Any) -> bool: - """Determine whether the given object is a top-level function, i.e., defined at module scope using 'def'.""" - return callable(obj) and obj.__name__ in sys.modules[obj.__module__].__dict__ - - -def get_top_level_function_name(obj: Any) -> str: - """Return the fully-qualified name of a top-level function.""" - assert is_top_level_function(obj) - module = obj.__module__ - if module == '__main__': - module = os.path.splitext(os.path.basename(sys.modules[module].__file__))[0] - return module + "." + obj.__name__ - - -# File system helpers -# ------------------------------------------------------------------------------------------ - -def list_dir_recursively_with_ignore(dir_path: str, ignores: List[str] = None, add_base_to_relative: bool = False) -> List[Tuple[str, str]]: - """List all files recursively in a given directory while ignoring given file and directory names. - Returns list of tuples containing both absolute and relative paths.""" - assert os.path.isdir(dir_path) - base_name = os.path.basename(os.path.normpath(dir_path)) - - if ignores is None: - ignores = [] - - result = [] - - for root, dirs, files in os.walk(dir_path, topdown=True): - for ignore_ in ignores: - dirs_to_remove = [d for d in dirs if fnmatch.fnmatch(d, ignore_)] - - # dirs need to be edited in-place - for d in dirs_to_remove: - dirs.remove(d) - - files = [f for f in files if not fnmatch.fnmatch(f, ignore_)] - - absolute_paths = [os.path.join(root, f) for f in files] - relative_paths = [os.path.relpath(p, dir_path) for p in absolute_paths] - - if add_base_to_relative: - relative_paths = [os.path.join(base_name, p) for p in relative_paths] - - assert len(absolute_paths) == len(relative_paths) - result += zip(absolute_paths, relative_paths) - - return result - - -def copy_files_and_create_dirs(files: List[Tuple[str, str]]) -> None: - """Takes in a list of tuples of (src, dst) paths and copies files. 
- Will create all necessary directories.""" - for file in files: - target_dir_name = os.path.dirname(file[1]) - - # will create all intermediate-level directories - if not os.path.exists(target_dir_name): - os.makedirs(target_dir_name) - - shutil.copyfile(file[0], file[1]) - - -# URL helpers -# ------------------------------------------------------------------------------------------ - -def is_url(obj: Any, allow_file_urls: bool = False) -> bool: - """Determine whether the given object is a valid URL string.""" - if not isinstance(obj, str) or not "://" in obj: - return False - if allow_file_urls and obj.startswith('file://'): - return True - try: - res = requests.compat.urlparse(obj) - if not res.scheme or not res.netloc or not "." in res.netloc: - return False - res = requests.compat.urlparse(requests.compat.urljoin(obj, "/")) - if not res.scheme or not res.netloc or not "." in res.netloc: - return False - except: - return False - return True - - -def open_url(url: str, cache_dir: str = None, num_attempts: int = 10, verbose: bool = True, return_filename: bool = False, cache: bool = True) -> Any: - """Download the given URL and return a binary-mode file object to access the data.""" - assert num_attempts >= 1 - assert not (return_filename and (not cache)) - - # Doesn't look like an URL scheme so interpret it as a local filename. - if not re.match('^[a-z]+://', url): - return url if return_filename else open(url, "rb") - - # Handle file URLs. This code handles unusual file:// patterns that - # arise on Windows: - # - # file:///c:/foo.txt - # - # which would translate to a local '/c:/foo.txt' filename that's - # invalid. Drop the forward slash for such pathnames. - # - # If you touch this code path, you should test it on both Linux and - # Windows. - # - # Some internet resources suggest using urllib.request.url2pathname() but - # but that converts forward slashes to backslashes and this causes - # its own set of problems. - if url.startswith('file://'): - filename = urllib.parse.urlparse(url).path - if re.match(r'^/[a-zA-Z]:', filename): - filename = filename[1:] - return filename if return_filename else open(filename, "rb") - - assert is_url(url) - - # Lookup from cache. - if cache_dir is None: - cache_dir = make_cache_dir_path('downloads') - - url_md5 = hashlib.md5(url.encode("utf-8")).hexdigest() - if cache: - cache_files = glob.glob(os.path.join(cache_dir, url_md5 + "_*")) - if len(cache_files) == 1: - filename = cache_files[0] - return filename if return_filename else open(filename, "rb") - - # Download. - url_name = None - url_data = None - with requests.Session() as session: - if verbose: - print("Downloading %s ..." 
% url, end="", flush=True) - for attempts_left in reversed(range(num_attempts)): - try: - with session.get(url) as res: - res.raise_for_status() - if len(res.content) == 0: - raise IOError("No data received") - - if len(res.content) < 8192: - content_str = res.content.decode("utf-8") - if "download_warning" in res.headers.get("Set-Cookie", ""): - links = [html.unescape(link) for link in content_str.split('"') if "export=download" in link] - if len(links) == 1: - url = requests.compat.urljoin(url, links[0]) - raise IOError("Google Drive virus checker nag") - if "Google Drive - Quota exceeded" in content_str: - raise IOError("Google Drive download quota exceeded -- please try again later") - - match = re.search(r'filename="([^"]*)"', res.headers.get("Content-Disposition", "")) - url_name = match[1] if match else url - url_data = res.content - if verbose: - print(" done") - break - except KeyboardInterrupt: - raise - except: - if not attempts_left: - if verbose: - print(" failed") - raise - if verbose: - print(".", end="", flush=True) - - # Save to cache. - if cache: - safe_name = re.sub(r"[^0-9a-zA-Z-._]", "_", url_name) - cache_file = os.path.join(cache_dir, url_md5 + "_" + safe_name) - temp_file = os.path.join(cache_dir, "tmp_" + uuid.uuid4().hex + "_" + url_md5 + "_" + safe_name) - os.makedirs(cache_dir, exist_ok=True) - with open(temp_file, "wb") as f: - f.write(url_data) - os.replace(temp_file, cache_file) # atomic - if return_filename: - return cache_file - - # Return data as file object. - assert not return_filename - return io.BytesIO(url_data) diff --git a/spaces/gyugnsu/DragGan-Inversion/torch_utils/ops/bias_act.cpp b/spaces/gyugnsu/DragGan-Inversion/torch_utils/ops/bias_act.cpp deleted file mode 100644 index 3adaeee2ae44e96655d354c2bdfb81de8ebfe6c6..0000000000000000000000000000000000000000 --- a/spaces/gyugnsu/DragGan-Inversion/torch_utils/ops/bias_act.cpp +++ /dev/null @@ -1,99 +0,0 @@ -// Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -// -// NVIDIA CORPORATION and its licensors retain all intellectual property -// and proprietary rights in and to this software, related documentation -// and any modifications thereto. Any use, reproduction, disclosure or -// distribution of this software and related documentation without an express -// license agreement from NVIDIA CORPORATION is strictly prohibited. - -#include <torch/extension.h> -#include <ATen/cuda/CUDAContext.h> -#include <c10/cuda/CUDAGuard.h> -#include "bias_act.h" - -//------------------------------------------------------------------------ - -static bool has_same_layout(torch::Tensor x, torch::Tensor y) -{ - if (x.dim() != y.dim()) - return false; - for (int64_t i = 0; i < x.dim(); i++) - { - if (x.size(i) != y.size(i)) - return false; - if (x.size(i) >= 2 && x.stride(i) != y.stride(i)) - return false; - } - return true; -} - -//------------------------------------------------------------------------ - -static torch::Tensor bias_act(torch::Tensor x, torch::Tensor b, torch::Tensor xref, torch::Tensor yref, torch::Tensor dy, int grad, int dim, int act, float alpha, float gain, float clamp) -{ - // Validate arguments. 
- TORCH_CHECK(x.is_cuda(), "x must reside on CUDA device"); - TORCH_CHECK(b.numel() == 0 || (b.dtype() == x.dtype() && b.device() == x.device()), "b must have the same dtype and device as x"); - TORCH_CHECK(xref.numel() == 0 || (xref.sizes() == x.sizes() && xref.dtype() == x.dtype() && xref.device() == x.device()), "xref must have the same shape, dtype, and device as x"); - TORCH_CHECK(yref.numel() == 0 || (yref.sizes() == x.sizes() && yref.dtype() == x.dtype() && yref.device() == x.device()), "yref must have the same shape, dtype, and device as x"); - TORCH_CHECK(dy.numel() == 0 || (dy.sizes() == x.sizes() && dy.dtype() == x.dtype() && dy.device() == x.device()), "dy must have the same dtype and device as x"); - TORCH_CHECK(x.numel() <= INT_MAX, "x is too large"); - TORCH_CHECK(b.dim() == 1, "b must have rank 1"); - TORCH_CHECK(b.numel() == 0 || (dim >= 0 && dim < x.dim()), "dim is out of bounds"); - TORCH_CHECK(b.numel() == 0 || b.numel() == x.size(dim), "b has wrong number of elements"); - TORCH_CHECK(grad >= 0, "grad must be non-negative"); - - // Validate layout. - TORCH_CHECK(x.is_non_overlapping_and_dense(), "x must be non-overlapping and dense"); - TORCH_CHECK(b.is_contiguous(), "b must be contiguous"); - TORCH_CHECK(xref.numel() == 0 || has_same_layout(xref, x), "xref must have the same layout as x"); - TORCH_CHECK(yref.numel() == 0 || has_same_layout(yref, x), "yref must have the same layout as x"); - TORCH_CHECK(dy.numel() == 0 || has_same_layout(dy, x), "dy must have the same layout as x"); - - // Create output tensor. - const at::cuda::OptionalCUDAGuard device_guard(device_of(x)); - torch::Tensor y = torch::empty_like(x); - TORCH_CHECK(has_same_layout(y, x), "y must have the same layout as x"); - - // Initialize CUDA kernel parameters. - bias_act_kernel_params p; - p.x = x.data_ptr(); - p.b = (b.numel()) ? b.data_ptr() : NULL; - p.xref = (xref.numel()) ? xref.data_ptr() : NULL; - p.yref = (yref.numel()) ? yref.data_ptr() : NULL; - p.dy = (dy.numel()) ? dy.data_ptr() : NULL; - p.y = y.data_ptr(); - p.grad = grad; - p.act = act; - p.alpha = alpha; - p.gain = gain; - p.clamp = clamp; - p.sizeX = (int)x.numel(); - p.sizeB = (int)b.numel(); - p.stepB = (b.numel()) ? (int)x.stride(dim) : 1; - - // Choose CUDA kernel. - void* kernel; - AT_DISPATCH_FLOATING_TYPES_AND_HALF(x.scalar_type(), "upfirdn2d_cuda", [&] - { - kernel = choose_bias_act_kernel<scalar_t>(p); - }); - TORCH_CHECK(kernel, "no CUDA kernel found for the specified activation func"); - - // Launch CUDA kernel. 
- p.loopX = 4; - int blockSize = 4 * 32; - int gridSize = (p.sizeX - 1) / (p.loopX * blockSize) + 1; - void* args[] = {&p}; - AT_CUDA_CHECK(cudaLaunchKernel(kernel, gridSize, blockSize, args, 0, at::cuda::getCurrentCUDAStream())); - return y; -} - -//------------------------------------------------------------------------ - -PYBIND11_MODULE(TORCH_EXTENSION_NAME, m) -{ - m.def("bias_act", &bias_act); -} - -//------------------------------------------------------------------------ diff --git a/spaces/hacksberg/plant/app.py b/spaces/hacksberg/plant/app.py deleted file mode 100644 index 172803d9a6508089bdb591f39a5ef8230558c315..0000000000000000000000000000000000000000 --- a/spaces/hacksberg/plant/app.py +++ /dev/null @@ -1,79 +0,0 @@ -import gradio as gr -import tensorflow as tf -import numpy as np -import cv2 -import os - -# Define the model path and image size -MODEL_PATH = "model.tflite" -IMAGE_SIZE = (299, 299) - -# Load the TFLite model -interpreter = tf.lite.Interpreter(model_path=MODEL_PATH) -interpreter.allocate_tensors() - -# Create a list of the class names in the correct order -classes = ["Apple___Apple_scab", "Apple___Black_rot", "Apple___Cedar_apple_rust", "Apple___healthy", "Blueberry___healthy", "Cherry_(including_sour)___Powdery_mildew", "Cherry_(including_sour)___healthy", "Corn_(maize)___Cercospora_leaf_spot Gray_leaf_spot", "Corn_(maize)___Common_rust_", "Corn_(maize)___Northern_Leaf_Blight", "Corn_(maize)___healthy", "Grape___Black_rot", "Grape___Esca_(Black_Measles)", "Grape___Leaf_blight_(Isariopsis_Leaf_Spot)", "Grape___healthy", "Orange___Haunglongbing_(Citrus_greening)", "Peach___Bacterial_spot", "Peach___healthy", - "Pepper,_bell___Bacterial_spot", "Pepper,_bell___healthy", "Potato___Early_blight", "Potato___Late_blight", "Potato___healthy", "Raspberry___healthy", "Soybean___healthy", "Squash___Powdery_mildew", "Strawberry___Leaf_scorch", "Strawberry___healthy", "Tomato___Bacterial_spot", "Tomato___Early_blight", "Tomato___Late_blight", "Tomato___Leaf_Mold", "Tomato___Septoria_leaf_spot", "Tomato___Spider_mites Two-spotted_spider_mite", "Tomato___Target_Spot", "Tomato___Tomato_Yellow_Leaf_Curl_Virus", "Tomato___Tomato_mosaic_virus", "Tomato___healthy"] - -# Define the function to preprocess the image - - -def preprocess_image(image): - img = cv2.resize(image, IMAGE_SIZE) - img = img.astype("float32") / 255.0 - img = np.expand_dims(img, axis=0) - return img - - -# Define the function to make predictions on an image - -def predict(image_path_or_pil_image): - # Load the TFLite model - interpreter = tf.lite.Interpreter(model_path=MODEL_PATH) - interpreter.allocate_tensors() - - # Get input and output details - input_details = interpreter.get_input_details() - output_details = interpreter.get_output_details() - - # Preprocess image - if isinstance(image_path_or_pil_image, str): - img = cv2.imread(image_path_or_pil_image) - img = preprocess_image(img, input_details[0]['shape'][1:3]) - else: - img = np.array(image_path_or_pil_image) - img = preprocess_image(img) - - # Set input tensor - interpreter.set_tensor(input_details[0]['index'], img) - - # Run inference - interpreter.invoke() - - # Get output tensor - output_data = interpreter.get_tensor(output_details[0]['index']) - - # Convert the output tensor to predictions - if isinstance(output_data, list): - results = {} - for label, confidence in zip(classes, output_data): - results[label] = confidence - else: - predictions = output_data.squeeze() - top_k = np.argsort(predictions)[::-1][:5] - results = {} - for idx in top_k: - 
results[classes[idx]] = float(predictions[idx]) - - return results - - -# Define the Gradio interface -inputs = gr.inputs.Image() -outputs = gr.outputs.Label(num_top_classes=5) -interface = gr.Interface(fn=predict, inputs=inputs, - outputs=outputs, capture_session=True) - -# Run the interface -interface.launch() diff --git a/spaces/hank1996/yolopv2/README.md b/spaces/hank1996/yolopv2/README.md deleted file mode 100644 index bf9e5ebbe31518ef6ef0e8a0d0a6228cd8e077b0..0000000000000000000000000000000000000000 --- a/spaces/hank1996/yolopv2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Yolopv2 -emoji: 🏢 -colorFrom: yellow -colorTo: pink -sdk: gradio -sdk_version: 3.1.7 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hanzportgas/rvc-models-v2/README.md b/spaces/hanzportgas/rvc-models-v2/README.md deleted file mode 100644 index f077cd85340c26ebfcb0857816d0f1f511408242..0000000000000000000000000000000000000000 --- a/spaces/hanzportgas/rvc-models-v2/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Rvc Models -emoji: 🎤 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: ardha27/rvc-models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hanzportgas/rvc-models/infer_pack/modules.py b/spaces/hanzportgas/rvc-models/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/hanzportgas/rvc-models/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." 
- - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - 
n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = 
torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] 
- - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/haotieu/Vietnamese-News-Summarizer/README.md b/spaces/haotieu/Vietnamese-News-Summarizer/README.md deleted file mode 100644 index 42865d947de6666f9a7227a6dfc69c8b12154eba..0000000000000000000000000000000000000000 --- a/spaces/haotieu/Vietnamese-News-Summarizer/README.md +++ /dev/null @@ -1,37 +0,0 @@ ---- -title: Vietnamese News Summarizer -emoji: 🤗 -colorFrom: green -colorTo: pink -sdk: gradio -app_file: app.py -pinned: false ---- - -# Configuration - -`title`: _string_ -Display title for the Space - -`emoji`: _string_ -Space emoji (emoji-only character allowed) - -`colorFrom`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`colorTo`: _string_ -Color for Thumbnail gradient (red, yellow, green, blue, indigo, purple, pink, gray) - -`sdk`: _string_ -Can be either `gradio`, `streamlit`, or `static` - -`sdk_version` : _string_ -Only applicable for `streamlit` SDK. -See [doc](https://hf.co/docs/hub/spaces) for more info on supported versions. - -`app_file`: _string_ -Path to your main application file (which contains either `gradio` or `streamlit` Python code, or `static` html code). -Path is relative to the root of the repository. - -`pinned`: _boolean_ -Whether the Space stays on top of your list. diff --git a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/structures/__init__.py b/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/structures/__init__.py deleted file mode 100644 index 618f526753b5813b86645023271b67b421ea4cb5..0000000000000000000000000000000000000000 --- a/spaces/hasibzunair/fifa-tryon-demo/Self-Correction-Human-Parsing-for-ACGPN/mhp_extension/detectron2/detectron2/structures/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
All Rights Reserved -from .boxes import Boxes, BoxMode, pairwise_iou -from .image_list import ImageList - -from .instances import Instances -from .keypoints import Keypoints, heatmaps_to_keypoints -from .masks import BitMasks, PolygonMasks, rasterize_polygons_within_box, polygons_to_bitmask -from .rotated_boxes import RotatedBoxes -from .rotated_boxes import pairwise_iou as pairwise_iou_rotated - -__all__ = [k for k in globals().keys() if not k.startswith("_")] diff --git a/spaces/hekbobo/bingo/src/lib/bots/bing/index.ts b/spaces/hekbobo/bingo/src/lib/bots/bing/index.ts deleted file mode 100644 index c75c69f94af8c3db92d4c90d465c219a2af72a4d..0000000000000000000000000000000000000000 --- a/spaces/hekbobo/bingo/src/lib/bots/bing/index.ts +++ /dev/null @@ -1,432 +0,0 @@ -import { fetch, WebSocket, debug } from '@/lib/isomorphic' -import WebSocketAsPromised from 'websocket-as-promised' -import { - SendMessageParams, - BingConversationStyle, - ConversationResponse, - ChatResponseMessage, - ConversationInfo, - InvocationEventType, - ChatError, - ErrorCode, - ChatUpdateCompleteResponse, - ImageInfo, - KBlobResponse -} from './types' - -import { convertMessageToMarkdown, websocketUtils, streamAsyncIterable } from './utils' -import { WatchDog, createChunkDecoder } from '@/lib/utils' - -type Params = SendMessageParams<{ bingConversationStyle: BingConversationStyle }> - -const OPTIONS_SETS = [ - 'nlu_direct_response_filter', - 'deepleo', - 'disable_emoji_spoken_text', - 'responsible_ai_policy_235', - 'enablemm', - 'iycapbing', - 'iyxapbing', - 'objopinion', - 'rweasgv2', - 'dagslnv1', - 'dv3sugg', - 'autosave', - 'iyoloxap', - 'iyoloneutral', - 'clgalileo', - 'gencontentv3', -] - -export class BingWebBot { - protected conversationContext?: ConversationInfo - protected cookie: string - protected ua: string - protected endpoint = '' - private lastText = '' - private asyncTasks: Array<Promise<any>> = [] - - constructor(opts: { - cookie: string - ua: string - bingConversationStyle?: BingConversationStyle - conversationContext?: ConversationInfo - }) { - const { cookie, ua, conversationContext } = opts - this.cookie = cookie?.includes(';') ? 
cookie : `_EDGE_V=1; _U=${cookie}` - this.ua = ua - this.conversationContext = conversationContext - } - - static buildChatRequest(conversation: ConversationInfo) { - const optionsSets = OPTIONS_SETS - if (conversation.conversationStyle === BingConversationStyle.Precise) { - optionsSets.push('h3precise') - } else if (conversation.conversationStyle === BingConversationStyle.Creative) { - optionsSets.push('h3imaginative') - } - return { - arguments: [ - { - source: 'cib', - optionsSets, - allowedMessageTypes: [ - 'ActionRequest', - 'Chat', - 'Context', - 'InternalSearchQuery', - 'InternalSearchResult', - 'Disengaged', - 'InternalLoaderMessage', - 'Progress', - 'RenderCardRequest', - 'SemanticSerp', - 'GenerateContentQuery', - 'SearchQuery', - ], - sliceIds: [ - 'winmuid1tf', - 'anssupfor_c', - 'imgchatgptv2', - 'tts2cf', - 'contansperf', - 'mlchatpc8500w', - 'mlchatpc2', - 'ctrlworkpay', - 'winshortmsgtf', - 'cibctrl', - 'sydtransctrl', - 'sydconfigoptc', - '0705trt4', - '517opinion', - '628ajcopus0', - '330uaugs0', - '529rwea', - '0626snptrcs0', - '424dagslnv1', - ], - isStartOfSession: conversation.invocationId === 0, - message: { - author: 'user', - inputMethod: 'Keyboard', - text: conversation.prompt, - imageUrl: conversation.imageUrl, - messageType: 'Chat', - }, - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - participant: { id: conversation.clientId }, - }, - ], - invocationId: conversation.invocationId.toString(), - target: 'chat', - type: InvocationEventType.StreamInvocation, - } - } - - async createConversation(): Promise<ConversationResponse> { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - - let resp: ConversationResponse | undefined - try { - const response = await fetch(this.endpoint + '/api/create', { method: 'POST', headers, redirect: 'error', mode: 'cors', credentials: 'include' }) - if (response.status === 404) { - throw new ChatError('Not Found', ErrorCode.NOTFOUND_ERROR) - } - resp = await response.json() as ConversationResponse - } catch (err) { - console.error('create conversation error', err) - } - - if (!resp?.result) { - throw new ChatError('你的 VPS 或代理可能被封禁,如有疑问,请前往 https://github.com/weaigc/bingo 咨询', ErrorCode.BING_IP_FORBIDDEN) - } - - const { value, message } = resp.result || {} - if (value !== 'Success') { - const errorMsg = `${value}: ${message}` - if (value === 'UnauthorizedRequest') { - if (/fetch failed/i.test(message || '')) { - throw new ChatError(errorMsg, ErrorCode.BING_IP_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.BING_UNAUTHORIZED) - } - if (value === 'TryLater') { - throw new ChatError(errorMsg, ErrorCode.BING_TRY_LATER) - } - if (value === 'Forbidden') { - throw new ChatError(errorMsg, ErrorCode.BING_FORBIDDEN) - } - throw new ChatError(errorMsg, ErrorCode.UNKOWN_ERROR) - } - return resp - } - - private async createContext(conversationStyle: BingConversationStyle) { - if (!this.conversationContext) { - const conversation = await this.createConversation() - this.conversationContext = { - conversationId: conversation.conversationId, - conversationSignature: conversation.conversationSignature, - clientId: conversation.clientId, - invocationId: 0, - conversationStyle, - prompt: '', - } - } - return this.conversationContext - } - - async sendMessage(params: Params) { - try { - await 
this.createContext(params.options.bingConversationStyle) - Object.assign(this.conversationContext!, { prompt: params.prompt, imageUrl: params.imageUrl }) - return this.sydneyProxy(params) - } catch (error) { - params.onEvent({ - type: 'ERROR', - error: error instanceof ChatError ? error : new ChatError('Catch Error', ErrorCode.UNKOWN_ERROR), - }) - } - } - - private async sydneyProxy(params: Params) { - const abortController = new AbortController() - const response = await fetch(this.endpoint + '/api/sydney', { - method: 'POST', - headers: { - 'Content-Type': 'application/json', - }, - signal: abortController.signal, - body: JSON.stringify(this.conversationContext!) - }) - if (response.status !== 200) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Unknown error', - ErrorCode.UNKOWN_ERROR, - ), - }) - } - params.signal?.addEventListener('abort', () => { - abortController.abort() - }) - - const textDecoder = createChunkDecoder() - for await (const chunk of streamAsyncIterable(response.body!)) { - this.parseEvents(params, websocketUtils.unpackMessage(textDecoder(chunk))) - } - } - - async sendWs() { - const wsConfig: ConstructorParameters<typeof WebSocketAsPromised>[1] = { - packMessage: websocketUtils.packMessage, - unpackMessage: websocketUtils.unpackMessage, - createWebSocket: (url) => new WebSocket(url, { - headers: { - 'accept-language': 'zh-CN,zh;q=0.9', - 'cache-control': 'no-cache', - 'User-Agent': this.ua, - pragma: 'no-cache', - cookie: this.cookie, - } - }) - } - const wsp = new WebSocketAsPromised('wss://sydney.bing.com/sydney/ChatHub', wsConfig) - - wsp.open().then(() => { - wsp.sendPacked({ protocol: 'json', version: 1 }) - wsp.sendPacked({ type: 6 }) - wsp.sendPacked(BingWebBot.buildChatRequest(this.conversationContext!)) - }) - - return wsp - } - - private async useWs(params: Params) { - const wsp = await this.sendWs() - const watchDog = new WatchDog() - wsp.onUnpackedMessage.addListener((events) => { - watchDog.watch(() => { - wsp.sendPacked({ type: 6 }) - }) - this.parseEvents(params, events) - }) - - wsp.onClose.addListener(() => { - watchDog.reset() - params.onEvent({ type: 'DONE' }) - wsp.removeAllListeners() - }) - - params.signal?.addEventListener('abort', () => { - wsp.removeAllListeners() - wsp.close() - }) - } - - private async createImage(prompt: string, id: string) { - try { - const headers = { - 'Accept-Encoding': 'gzip, deflate, br, zsdch', - 'User-Agent': this.ua, - 'x-ms-useragent': 'azsdk-js-api-client-factory/1.0.0-beta.1 core-rest-pipeline/1.10.0 OS/Win32', - cookie: this.cookie, - } - const query = new URLSearchParams({ - prompt, - id - }) - const response = await fetch(this.endpoint + '/api/image?' 
+ query.toString(), - { - method: 'POST', - headers, - mode: 'cors', - credentials: 'include' - }) - .then(res => res.text()) - if (response) { - this.lastText += '\n' + response - } - } catch (err) { - console.error('Create Image Error', err) - } - } - - private buildKnowledgeApiPayload(imageUrl: string, conversationStyle: BingConversationStyle) { - const imageInfo: ImageInfo = {} - let imageBase64: string | undefined = undefined - const knowledgeRequest = { - imageInfo, - knowledgeRequest: { - invokedSkills: [ - 'ImageById' - ], - subscriptionId: 'Bing.Chat.Multimodal', - invokedSkillsRequestData: { - enableFaceBlur: true - }, - convoData: { - convoid: this.conversationContext?.conversationId, - convotone: conversationStyle, - } - }, - } - - if (imageUrl.startsWith('data:image/')) { - imageBase64 = imageUrl.replace('data:image/', ''); - const partIndex = imageBase64.indexOf(',') - if (partIndex) { - imageBase64 = imageBase64.substring(partIndex + 1) - } - } else { - imageInfo.url = imageUrl - } - return { knowledgeRequest, imageBase64 } - } - - async uploadImage(imageUrl: string, conversationStyle: BingConversationStyle = BingConversationStyle.Creative): Promise<KBlobResponse | undefined> { - if (!imageUrl) { - return - } - await this.createContext(conversationStyle) - const payload = this.buildKnowledgeApiPayload(imageUrl, conversationStyle) - - const response = await fetch(this.endpoint + '/api/kblob', - { - headers: { - 'Content-Type': 'application/json', - }, - method: 'POST', - mode: 'cors', - credentials: 'include', - body: JSON.stringify(payload), - }) - .then(res => res.json()) - .catch(e => { - console.log('Error', e) - }) - return response - } - - private async generateContent(message: ChatResponseMessage) { - if (message.contentType === 'IMAGE') { - this.asyncTasks.push(this.createImage(message.text, message.messageId)) - } - } - - private async parseEvents(params: Params, events: any) { - const conversation = this.conversationContext! - - events?.forEach(async (event: ChatUpdateCompleteResponse) => { - debug('bing event', event) - if (event.type === 3) { - await Promise.all(this.asyncTasks) - this.asyncTasks = [] - params.onEvent({ type: 'UPDATE_ANSWER', data: { text: this.lastText } }) - params.onEvent({ type: 'DONE' }) - conversation.invocationId = parseInt(event.invocationId, 10) + 1 - } else if (event.type === 1) { - const messages = event.arguments[0].messages - if (messages) { - const text = convertMessageToMarkdown(messages[0]) - this.lastText = text - params.onEvent({ type: 'UPDATE_ANSWER', data: { text, spokenText: messages[0].text, throttling: event.arguments[0].throttling } }) - } - } else if (event.type === 2) { - const messages = event.item.messages as ChatResponseMessage[] | undefined - if (!messages) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - event.item.result.error || 'Unknown error', - event.item.result.value === 'Throttled' ? ErrorCode.THROTTLE_LIMIT - : event.item.result.value === 'CaptchaChallenge' ? (this.conversationContext?.conversationId?.includes('BingProdUnAuthenticatedUsers') ? 
ErrorCode.BING_UNAUTHORIZED : ErrorCode.BING_CAPTCHA) - : ErrorCode.UNKOWN_ERROR - ), - }) - return - } - const limited = messages.some((message) => - message.contentOrigin === 'TurnLimiter' - || message.messageType === 'Disengaged' - ) - if (limited) { - params.onEvent({ - type: 'ERROR', - error: new ChatError( - 'Sorry, you have reached chat limit in this conversation.', - ErrorCode.CONVERSATION_LIMIT, - ), - }) - return - } - - const lastMessage = event.item.messages.at(-1) as ChatResponseMessage - const specialMessage = event.item.messages.find(message => message.author === 'bot' && message.contentType === 'IMAGE') - if (specialMessage) { - this.generateContent(specialMessage) - } - - if (lastMessage) { - const text = convertMessageToMarkdown(lastMessage) - this.lastText = text - params.onEvent({ - type: 'UPDATE_ANSWER', - data: { text, throttling: event.item.throttling, suggestedResponses: lastMessage.suggestedResponses, sourceAttributions: lastMessage.sourceAttributions }, - }) - } - } - }) - } - - resetConversation() { - this.conversationContext = undefined - } -} diff --git a/spaces/hf4all/bingo-api/README.md b/spaces/hf4all/bingo-api/README.md deleted file mode 100644 index 4976bc4bff5c4e7a8818836e9a6e4c27f7aee9cc..0000000000000000000000000000000000000000 --- a/spaces/hf4all/bingo-api/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: Bingo API -emoji: 💻🐳 -colorFrom: red -colorTo: blue -sdk: docker -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/create_train_gif.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/create_train_gif.py deleted file mode 100644 index 5660fbc552fcfa51c2c5a250e17dbac2ec812c8c..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/create_plots_new/create_train_gif.py +++ /dev/null @@ -1,158 +0,0 @@ -import imageio -from skimage import io -import skimage - -import os -from PIL import Image, ImageDraw, ImageFont, ImageOps -import copy - -from datetime import date -import numpy as np -from argparse import ArgumentParser -from skimage.transform import resize -# import matplotlib.pyplot as plt -import cv2 - - -def color_map(m): - return m[0] * np.array([1, 1, 1]) + (255 - m[0]) * np.array([0, 0, 1]) - - -def createOverlay(image, front, zone, boundary): - """ - creates an image with the front label overlaying the glacier image - - :param image: Image of the glacier - :param front: Image of the label of the front - :return: an rgb image with the black and white image and red front line - """ - - # value for NA area=0, stone=64, glacier=127, ocean with ice melange=254 - - image_rgb = np.array(image * 0.5, dtype=np.uint8) - - try: - image_rgb[zone == 0] += np.array(np.array([0, 0, 0]) / 2, dtype=np.uint8) - image_rgb[zone == 64] += np.array(np.array([52, 46, 55]) / 2, dtype=np.uint8) - image_rgb[zone == 127] += np.array(np.array([254, 254, 254]) / 2, dtype=np.uint8) - image_rgb[zone == 254] += np.array(np.array([60, 145, 230]) / 2, dtype=np.uint8) - - finally: - #try: - # image_rgb[boundary > 0] = np.array(np.array([241, 143, 1]), dtype=np.uint8) - #finally: - image_rgb[front == 255] = np.array(np.array([255, 0, 0]), dtype=np.uint8) - - return image_rgb - - -def create_target(sar_image_path): - sample_name = sar_image_path.split('/')[-1] - sar_image = cv2.imread(sar_image_path) - front_image_path = 
'/home/ho11laqe/PycharmProjects/data_raw/fronts_dilated_5/train/' + sample_name[ - :-len('.png')] + '_front.png' - zone_image_path = '/home/ho11laqe/PycharmProjects/data_raw/zones/train/' + sample_name[ - :-len('.png')] + '_zones.png' - - boundary_image_path = '/home/ho11laqe/PycharmProjects/data_raw/boundaries_dilated_5/train/' + sample_name[ - :-len( - '.png')] + '_boundary.png' - front = cv2.imread(front_image_path, cv2.IMREAD_GRAYSCALE) - zone = cv2.imread(zone_image_path, cv2.IMREAD_GRAYSCALE) - boundary = cv2.imread(boundary_image_path, cv2.IMREAD_GRAYSCALE) - overlay = createOverlay(sar_image, front, zone, boundary) - cv2.imwrite('output/target.png', cv2.cvtColor(overlay, cv2.COLOR_RGB2BGR)) - - -if __name__ == '__main__': - parser = ArgumentParser(add_help=False) - parser.add_argument('--image_dir', help="Directory with predictions as png") - args = parser.parse_args() - - image_dir = args.image_dir - - front_gif = [] - fronts = [] - zone_gif = [] - zones = [] - boundary_gif = [] - boundaries = [] - - sar_image_path = '/home/ho11laqe/PycharmProjects/data_raw/sar_images/train/DBE_2008-03-30_TSX_7_3_049.png' - sar_image = cv2.imread(sar_image_path) - shape = sar_image.shape - new_shape = (int(shape[1] / 4), int(shape[0] / 4)) - sar_image = cv2.resize(sar_image, new_shape) - - create_target(sar_image_path) - - list_images = os.listdir(image_dir) - list_images.sort(key=lambda y: int(y.split('_')[6])) - - for i, image_file in enumerate(list_images[:300]): - epoch = image_file.split('_')[6] - if image_file.endswith('_front.png'): - print(image_file) - front = cv2.imread(image_dir + '/' + image_file, cv2.IMREAD_GRAYSCALE) - front = cv2.resize(front, new_shape, interpolation=cv2.INTER_NEAREST) - # image = Image.fromarray(front) - # image_draw = ImageDraw.Draw(image) - # image_draw.text((1,1), 'Epoch: '+str(epoch)) - # front_gif.append(image) - fronts.append(front) - elif image_file.endswith('_zone.png'): - print(image_file) - zone = cv2.imread(image_dir + '/' + image_file, cv2.IMREAD_GRAYSCALE) - zone = cv2.resize(zone, new_shape, interpolation=cv2.INTER_NEAREST) - # image = Image.fromarray(zone) - # image_draw = ImageDraw.Draw(image) - # image_draw.text((1, 1), 'Epoch: ' + str(epoch)) - # zone_gif.append(image) - zones.append(zone) - elif image_file.endswith('_boundary.png'): - print(image_file) - boundary = cv2.imread(image_dir + '/' + image_file, cv2.IMREAD_GRAYSCALE) - boundary = cv2.resize(boundary, new_shape, interpolation=cv2.INTER_NEAREST) - # image = Image.fromarray(boundary) - # image_draw = ImageDraw.Draw(image) - # image_draw.text((1, 1), 'Epoch: ' + str(epoch)) - # boundary_gif.append(image) - boundaries.append(boundary) - - font = ImageFont.truetype("/usr/share/fonts/truetype/freefont/FreeMonoBold.ttf", 40) - font_legend = ImageFont.truetype("/usr/share/fonts/truetype/freefont/FreeMonoBold.ttf", 20) - overlay_gif = [] - for epoch, (front, zone, boundary) in enumerate(zip(fronts, zones, boundaries)): - overlay = createOverlay(sar_image, front, zone, boundary) - image = Image.fromarray(overlay) - image_draw = ImageDraw.Draw(image) - - image_draw.rectangle((0, 40, 195, 210), fill='gray') - - image_draw.rectangle((10, 60, 30, 80), fill=(60, 145, 230, 120)) - image_draw.text((35, 60), 'Ocean', font=font_legend) - - image_draw.rectangle((10, 90, 30, 110), fill=(255, 255, 255)) - image_draw.text((35, 90), 'Glacier', font=font_legend) - - image_draw.rectangle((10, 120, 30, 140), fill=(255, 0, 0)) - image_draw.text((35, 120), 'Glacier Front', font=font_legend) - - 
image_draw.rectangle((10, 150, 30, 170), fill=(92, 76, 85)) - image_draw.text((35, 150), 'Rock', font=font_legend) - - image_draw.rectangle((10, 180, 30, 200), fill=(0, 0, 0)) - image_draw.text((35, 180), 'Shadow', font=font_legend) - - image_draw.rectangle((0, 0, 330, 45), fill='gray') - image_draw.text((8, 1), 'Epoch:%03i' % epoch + '/' + str(len(fronts)), font=font, ) - if epoch < 10: - for i in range(10 - epoch): - print(i) - overlay_gif.append(image) - else: - overlay_gif.append(image) - - frame_one = overlay_gif[0] - frame_one.save("output/overlay.gif", format="GIF", append_images=overlay_gif, - save_all=True, duration=200, loop=0) diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task037_038_Chaos_Challenge.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task037_038_Chaos_Challenge.py deleted file mode 100644 index 9fa3dcc85dfa2671cf9e21bac133fe8ef9b76e60..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/dataset_conversion/Task037_038_Chaos_Challenge.py +++ /dev/null @@ -1,460 +0,0 @@ -# Copyright 2020 Division of Medical Image Computing, German Cancer Research Center (DKFZ), Heidelberg, Germany -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
- - -from PIL import Image -import shutil -from collections import OrderedDict - -import dicom2nifti -import numpy as np -from batchgenerators.utilities.data_splitting import get_split_deterministic -from batchgenerators.utilities.file_and_folder_operations import * -from PIL import Image -import SimpleITK as sitk -from nnunet.paths import preprocessing_output_dir, nnUNet_raw_data -from nnunet.utilities.sitk_stuff import copy_geometry -from nnunet.inference.ensemble_predictions import merge - - -def load_png_stack(folder): - pngs = subfiles(folder, suffix="png") - pngs.sort() - loaded = [] - for p in pngs: - loaded.append(np.array(Image.open(p))) - loaded = np.stack(loaded, 0)[::-1] - return loaded - - -def convert_CT_seg(loaded_png): - return loaded_png.astype(np.uint16) - - -def convert_MR_seg(loaded_png): - result = np.zeros(loaded_png.shape) - result[(loaded_png > 55) & (loaded_png <= 70)] = 1 # liver - result[(loaded_png > 110) & (loaded_png <= 135)] = 2 # right kidney - result[(loaded_png > 175) & (loaded_png <= 200)] = 3 # left kidney - result[(loaded_png > 240) & (loaded_png <= 255)] = 4 # spleen - return result - - -def convert_seg_to_intensity_task5(seg): - seg_new = np.zeros(seg.shape, dtype=np.uint8) - seg_new[seg == 1] = 63 - seg_new[seg == 2] = 126 - seg_new[seg == 3] = 189 - seg_new[seg == 4] = 252 - return seg_new - - -def convert_seg_to_intensity_task3(seg): - seg_new = np.zeros(seg.shape, dtype=np.uint8) - seg_new[seg == 1] = 63 - return seg_new - - -def write_pngs_from_nifti(nifti, output_folder, converter=convert_seg_to_intensity_task3): - npy = sitk.GetArrayFromImage(sitk.ReadImage(nifti)) - seg_new = converter(npy) - for z in range(len(npy)): - Image.fromarray(seg_new[z]).save(join(output_folder, "img%03.0d.png" % z)) - - -def convert_variant2_predicted_test_to_submission_format(folder_with_predictions, - output_folder="/home/fabian/drives/datasets/results/nnUNet/test_sets/Task038_CHAOS_Task_3_5_Variant2/ready_to_submit", - postprocessing_file="/home/fabian/drives/datasets/results/nnUNet/ensembles/Task038_CHAOS_Task_3_5_Variant2/ensemble_2d__nnUNetTrainerV2__nnUNetPlansv2.1--3d_fullres__nnUNetTrainerV2__nnUNetPlansv2.1/postprocessing.json"): - """ - output_folder is where the extracted template is - :param folder_with_predictions: - :param output_folder: - :return: - """ - postprocessing_file = "/media/fabian/Results/nnUNet/3d_fullres/Task039_CHAOS_Task_3_5_Variant2_highres/" \ - "nnUNetTrainerV2__nnUNetPlansfixed/postprocessing.json" - - # variant 2 treats in and out phase as two training examples, so we need to ensemble these two again - final_predictions_folder = join(output_folder, "final") - maybe_mkdir_p(final_predictions_folder) - t1_patient_names = [i.split("_")[-1][:-7] for i in subfiles(folder_with_predictions, prefix="T1", suffix=".nii.gz", join=False)] - folder_for_ensembing0 = join(output_folder, "ens0") - folder_for_ensembing1 = join(output_folder, "ens1") - maybe_mkdir_p(folder_for_ensembing0) - maybe_mkdir_p(folder_for_ensembing1) - # now copy all t1 out phases in ens0 and all in phases in ens1. Name them the same. 
- for t1 in t1_patient_names: - shutil.copy(join(folder_with_predictions, "T1_in_%s.npz" % t1), join(folder_for_ensembing1, "T1_%s.npz" % t1)) - shutil.copy(join(folder_with_predictions, "T1_in_%s.pkl" % t1), join(folder_for_ensembing1, "T1_%s.pkl" % t1)) - shutil.copy(join(folder_with_predictions, "T1_out_%s.npz" % t1), join(folder_for_ensembing0, "T1_%s.npz" % t1)) - shutil.copy(join(folder_with_predictions, "T1_out_%s.pkl" % t1), join(folder_for_ensembing0, "T1_%s.pkl" % t1)) - shutil.copy(join(folder_with_predictions, "plans.pkl"), join(folder_for_ensembing0, "plans.pkl")) - shutil.copy(join(folder_with_predictions, "plans.pkl"), join(folder_for_ensembing1, "plans.pkl")) - - # there is a problem with T1_35 that I need to correct manually (different crop size, will not negatively impact results) - #ens0_softmax = np.load(join(folder_for_ensembing0, "T1_35.npz"))['softmax'] - ens1_softmax = np.load(join(folder_for_ensembing1, "T1_35.npz"))['softmax'] - #ens0_props = load_pickle(join(folder_for_ensembing0, "T1_35.pkl")) - #ens1_props = load_pickle(join(folder_for_ensembing1, "T1_35.pkl")) - ens1_softmax = ens1_softmax[:, :, :-1, :] - np.savez_compressed(join(folder_for_ensembing1, "T1_35.npz"), softmax=ens1_softmax) - shutil.copy(join(folder_for_ensembing0, "T1_35.pkl"), join(folder_for_ensembing1, "T1_35.pkl")) - - # now call my ensemble function - merge((folder_for_ensembing0, folder_for_ensembing1), final_predictions_folder, 8, True, - postprocessing_file=postprocessing_file) - # copy t2 files to final_predictions_folder as well - t2_files = subfiles(folder_with_predictions, prefix="T2", suffix=".nii.gz", join=False) - for t2 in t2_files: - shutil.copy(join(folder_with_predictions, t2), join(final_predictions_folder, t2)) - - # apply postprocessing - from nnunet.postprocessing.connected_components import apply_postprocessing_to_folder, load_postprocessing - postprocessed_folder = join(output_folder, "final_postprocessed") - for_which_classes, min_valid_obj_size = load_postprocessing(postprocessing_file) - apply_postprocessing_to_folder(final_predictions_folder, postprocessed_folder, - for_which_classes, min_valid_obj_size, 8) - - # now export the niftis in the weird png format - # task 3 - output_dir = join(output_folder, "CHAOS_submission_template_new", "Task3", "MR") - for t1 in t1_patient_names: - output_folder_here = join(output_dir, t1, "T1DUAL", "Results") - nifti_file = join(postprocessed_folder, "T1_%s.nii.gz" % t1) - write_pngs_from_nifti(nifti_file, output_folder_here, converter=convert_seg_to_intensity_task3) - for t2 in t2_files: - patname = t2.split("_")[-1][:-7] - output_folder_here = join(output_dir, patname, "T2SPIR", "Results") - nifti_file = join(postprocessed_folder, "T2_%s.nii.gz" % patname) - write_pngs_from_nifti(nifti_file, output_folder_here, converter=convert_seg_to_intensity_task3) - - # task 5 - output_dir = join(output_folder, "CHAOS_submission_template_new", "Task5", "MR") - for t1 in t1_patient_names: - output_folder_here = join(output_dir, t1, "T1DUAL", "Results") - nifti_file = join(postprocessed_folder, "T1_%s.nii.gz" % t1) - write_pngs_from_nifti(nifti_file, output_folder_here, converter=convert_seg_to_intensity_task5) - for t2 in t2_files: - patname = t2.split("_")[-1][:-7] - output_folder_here = join(output_dir, patname, "T2SPIR", "Results") - nifti_file = join(postprocessed_folder, "T2_%s.nii.gz" % patname) - write_pngs_from_nifti(nifti_file, output_folder_here, converter=convert_seg_to_intensity_task5) - - - -if __name__ == "__main__": - """ - This 
script only prepares data to participate in Task 5 and Task 5. I don't like the CT task because - 1) there are - no abdominal organs in the ground truth. In the case of CT we are supposed to train only liver while on MRI we are - supposed to train all organs. This would require manual modification of nnU-net to deal with this dataset. This is - not what nnU-net is about. - 2) CT Liver or multiorgan segmentation is too easy to get external data for. Therefore the challenges comes down - to who gets the b est external data, not who has the best algorithm. Not super interesting. - - Task 3 is a subtask of Task 5 so we need to prepare the data only once. - Difficulty: We need to process both T1 and T2, but T1 has 2 'modalities' (phases). nnU-Net cannot handly varying - number of input channels. We need to be creative. - We deal with this by preparing 2 Variants: - 1) pretend we have 2 modalities for T2 as well by simply stacking a copy of the data - 2) treat all MRI sequences independently, so we now have 3*20 training data instead of 2*20. In inference we then - ensemble the results for the two t1 modalities. - - Careful: We need to split manually here to ensure we stratify by patient - """ - - root = "/media/fabian/My Book/datasets/CHAOS_challenge/Train_Sets" - root_test = "/media/fabian/My Book/datasets/CHAOS_challenge/Test_Sets" - out_base = nnUNet_raw_data - # CT - # we ignore CT because - - ############################################################## - # Variant 1 - ############################################################## - patient_ids = [] - patient_ids_test = [] - - output_folder = join(out_base, "Task037_CHAOS_Task_3_5_Variant1") - output_images = join(output_folder, "imagesTr") - output_labels = join(output_folder, "labelsTr") - output_imagesTs = join(output_folder, "imagesTs") - maybe_mkdir_p(output_images) - maybe_mkdir_p(output_labels) - maybe_mkdir_p(output_imagesTs) - - - # Process T1 train - d = join(root, "MR") - patients = subdirs(d, join=False) - for p in patients: - patient_name = "T1_" + p - gt_dir = join(d, p, "T1DUAL", "Ground") - seg = convert_MR_seg(load_png_stack(gt_dir)[::-1]) - - img_dir = join(d, p, "T1DUAL", "DICOM_anon", "InPhase") - img_outfile = join(output_images, patient_name + "_0000.nii.gz") - _ = dicom2nifti.convert_dicom.dicom_series_to_nifti(img_dir, img_outfile, reorient_nifti=False) - - img_dir = join(d, p, "T1DUAL", "DICOM_anon", "OutPhase") - img_outfile = join(output_images, patient_name + "_0001.nii.gz") - _ = dicom2nifti.convert_dicom.dicom_series_to_nifti(img_dir, img_outfile, reorient_nifti=False) - - img_sitk = sitk.ReadImage(img_outfile) - img_sitk_npy = sitk.GetArrayFromImage(img_sitk) - seg_itk = sitk.GetImageFromArray(seg.astype(np.uint8)) - seg_itk = copy_geometry(seg_itk, img_sitk) - sitk.WriteImage(seg_itk, join(output_labels, patient_name + ".nii.gz")) - patient_ids.append(patient_name) - - # Process T1 test - d = join(root_test, "MR") - patients = subdirs(d, join=False) - for p in patients: - patient_name = "T1_" + p - - img_dir = join(d, p, "T1DUAL", "DICOM_anon", "InPhase") - img_outfile = join(output_imagesTs, patient_name + "_0000.nii.gz") - _ = dicom2nifti.convert_dicom.dicom_series_to_nifti(img_dir, img_outfile, reorient_nifti=False) - - img_dir = join(d, p, "T1DUAL", "DICOM_anon", "OutPhase") - img_outfile = join(output_imagesTs, patient_name + "_0001.nii.gz") - _ = dicom2nifti.convert_dicom.dicom_series_to_nifti(img_dir, img_outfile, reorient_nifti=False) - - img_sitk = sitk.ReadImage(img_outfile) - img_sitk_npy = 
sitk.GetArrayFromImage(img_sitk) - patient_ids_test.append(patient_name) - - # Process T2 train - d = join(root, "MR") - patients = subdirs(d, join=False) - for p in patients: - patient_name = "T2_" + p - - gt_dir = join(d, p, "T2SPIR", "Ground") - seg = convert_MR_seg(load_png_stack(gt_dir)[::-1]) - - img_dir = join(d, p, "T2SPIR", "DICOM_anon") - img_outfile = join(output_images, patient_name + "_0000.nii.gz") - _ = dicom2nifti.convert_dicom.dicom_series_to_nifti(img_dir, img_outfile, reorient_nifti=False) - shutil.copy(join(output_images, patient_name + "_0000.nii.gz"), join(output_images, patient_name + "_0001.nii.gz")) - - img_sitk = sitk.ReadImage(img_outfile) - img_sitk_npy = sitk.GetArrayFromImage(img_sitk) - seg_itk = sitk.GetImageFromArray(seg.astype(np.uint8)) - seg_itk = copy_geometry(seg_itk, img_sitk) - sitk.WriteImage(seg_itk, join(output_labels, patient_name + ".nii.gz")) - patient_ids.append(patient_name) - - # Process T2 test - d = join(root_test, "MR") - patients = subdirs(d, join=False) - for p in patients: - patient_name = "T2_" + p - - gt_dir = join(d, p, "T2SPIR", "Ground") - - img_dir = join(d, p, "T2SPIR", "DICOM_anon") - img_outfile = join(output_imagesTs, patient_name + "_0000.nii.gz") - _ = dicom2nifti.convert_dicom.dicom_series_to_nifti(img_dir, img_outfile, reorient_nifti=False) - shutil.copy(join(output_imagesTs, patient_name + "_0000.nii.gz"), join(output_imagesTs, patient_name + "_0001.nii.gz")) - - img_sitk = sitk.ReadImage(img_outfile) - img_sitk_npy = sitk.GetArrayFromImage(img_sitk) - patient_ids_test.append(patient_name) - - json_dict = OrderedDict() - json_dict['name'] = "Chaos Challenge Task3/5 Variant 1" - json_dict['description'] = "nothing" - json_dict['tensorImageSize'] = "4D" - json_dict['reference'] = "https://chaos.grand-challenge.org/Data/" - json_dict['licence'] = "see https://chaos.grand-challenge.org/Data/" - json_dict['release'] = "0.0" - json_dict['modality'] = { - "0": "MRI", - "1": "MRI", - } - json_dict['labels'] = { - "0": "background", - "1": "liver", - "2": "right kidney", - "3": "left kidney", - "4": "spleen", - } - json_dict['numTraining'] = len(patient_ids) - json_dict['numTest'] = 0 - json_dict['training'] = [{'image': "./imagesTr/%s.nii.gz" % i, "label": "./labelsTr/%s.nii.gz" % i} for i in - patient_ids] - json_dict['test'] = [] - - save_json(json_dict, join(output_folder, "dataset.json")) - - ############################################################## - # Variant 2 - ############################################################## - - patient_ids = [] - patient_ids_test = [] - - output_folder = join(out_base, "Task038_CHAOS_Task_3_5_Variant2") - output_images = join(output_folder, "imagesTr") - output_imagesTs = join(output_folder, "imagesTs") - output_labels = join(output_folder, "labelsTr") - maybe_mkdir_p(output_images) - maybe_mkdir_p(output_imagesTs) - maybe_mkdir_p(output_labels) - - # Process T1 train - d = join(root, "MR") - patients = subdirs(d, join=False) - for p in patients: - patient_name_in = "T1_in_" + p - patient_name_out = "T1_out_" + p - gt_dir = join(d, p, "T1DUAL", "Ground") - seg = convert_MR_seg(load_png_stack(gt_dir)[::-1]) - - img_dir = join(d, p, "T1DUAL", "DICOM_anon", "InPhase") - img_outfile = join(output_images, patient_name_in + "_0000.nii.gz") - _ = dicom2nifti.convert_dicom.dicom_series_to_nifti(img_dir, img_outfile, reorient_nifti=False) - - img_dir = join(d, p, "T1DUAL", "DICOM_anon", "OutPhase") - img_outfile = join(output_images, patient_name_out + "_0000.nii.gz") - _ = 
dicom2nifti.convert_dicom.dicom_series_to_nifti(img_dir, img_outfile, reorient_nifti=False) - - img_sitk = sitk.ReadImage(img_outfile) - img_sitk_npy = sitk.GetArrayFromImage(img_sitk) - seg_itk = sitk.GetImageFromArray(seg.astype(np.uint8)) - seg_itk = copy_geometry(seg_itk, img_sitk) - sitk.WriteImage(seg_itk, join(output_labels, patient_name_in + ".nii.gz")) - sitk.WriteImage(seg_itk, join(output_labels, patient_name_out + ".nii.gz")) - patient_ids.append(patient_name_out) - patient_ids.append(patient_name_in) - - # Process T1 test - d = join(root_test, "MR") - patients = subdirs(d, join=False) - for p in patients: - patient_name_in = "T1_in_" + p - patient_name_out = "T1_out_" + p - gt_dir = join(d, p, "T1DUAL", "Ground") - - img_dir = join(d, p, "T1DUAL", "DICOM_anon", "InPhase") - img_outfile = join(output_imagesTs, patient_name_in + "_0000.nii.gz") - _ = dicom2nifti.convert_dicom.dicom_series_to_nifti(img_dir, img_outfile, reorient_nifti=False) - - img_dir = join(d, p, "T1DUAL", "DICOM_anon", "OutPhase") - img_outfile = join(output_imagesTs, patient_name_out + "_0000.nii.gz") - _ = dicom2nifti.convert_dicom.dicom_series_to_nifti(img_dir, img_outfile, reorient_nifti=False) - - img_sitk = sitk.ReadImage(img_outfile) - img_sitk_npy = sitk.GetArrayFromImage(img_sitk) - patient_ids_test.append(patient_name_out) - patient_ids_test.append(patient_name_in) - - # Process T2 train - d = join(root, "MR") - patients = subdirs(d, join=False) - for p in patients: - patient_name = "T2_" + p - - gt_dir = join(d, p, "T2SPIR", "Ground") - seg = convert_MR_seg(load_png_stack(gt_dir)[::-1]) - - img_dir = join(d, p, "T2SPIR", "DICOM_anon") - img_outfile = join(output_images, patient_name + "_0000.nii.gz") - _ = dicom2nifti.convert_dicom.dicom_series_to_nifti(img_dir, img_outfile, reorient_nifti=False) - - img_sitk = sitk.ReadImage(img_outfile) - img_sitk_npy = sitk.GetArrayFromImage(img_sitk) - seg_itk = sitk.GetImageFromArray(seg.astype(np.uint8)) - seg_itk = copy_geometry(seg_itk, img_sitk) - sitk.WriteImage(seg_itk, join(output_labels, patient_name + ".nii.gz")) - patient_ids.append(patient_name) - - # Process T2 test - d = join(root_test, "MR") - patients = subdirs(d, join=False) - for p in patients: - patient_name = "T2_" + p - - gt_dir = join(d, p, "T2SPIR", "Ground") - - img_dir = join(d, p, "T2SPIR", "DICOM_anon") - img_outfile = join(output_imagesTs, patient_name + "_0000.nii.gz") - _ = dicom2nifti.convert_dicom.dicom_series_to_nifti(img_dir, img_outfile, reorient_nifti=False) - - img_sitk = sitk.ReadImage(img_outfile) - img_sitk_npy = sitk.GetArrayFromImage(img_sitk) - patient_ids_test.append(patient_name) - - json_dict = OrderedDict() - json_dict['name'] = "Chaos Challenge Task3/5 Variant 2" - json_dict['description'] = "nothing" - json_dict['tensorImageSize'] = "4D" - json_dict['reference'] = "https://chaos.grand-challenge.org/Data/" - json_dict['licence'] = "see https://chaos.grand-challenge.org/Data/" - json_dict['release'] = "0.0" - json_dict['modality'] = { - "0": "MRI", - } - json_dict['labels'] = { - "0": "background", - "1": "liver", - "2": "right kidney", - "3": "left kidney", - "4": "spleen", - } - json_dict['numTraining'] = len(patient_ids) - json_dict['numTest'] = 0 - json_dict['training'] = [{'image': "./imagesTr/%s.nii.gz" % i, "label": "./labelsTr/%s.nii.gz" % i} for i in - patient_ids] - json_dict['test'] = [] - - save_json(json_dict, join(output_folder, "dataset.json")) - - ################################################# - # custom split - 
################################################# - patients = subdirs(join(root, "MR"), join=False) - task_name_variant1 = "Task037_CHAOS_Task_3_5_Variant1" - task_name_variant2 = "Task038_CHAOS_Task_3_5_Variant2" - - output_preprocessed_v1 = join(preprocessing_output_dir, task_name_variant1) - maybe_mkdir_p(output_preprocessed_v1) - - output_preprocessed_v2 = join(preprocessing_output_dir, task_name_variant2) - maybe_mkdir_p(output_preprocessed_v2) - - splits = [] - for fold in range(5): - tr, val = get_split_deterministic(patients, fold, 5, 12345) - train = ["T2_" + i for i in tr] + ["T1_" + i for i in tr] - validation = ["T2_" + i for i in val] + ["T1_" + i for i in val] - splits.append({ - 'train': train, - 'val': validation - }) - save_pickle(splits, join(output_preprocessed_v1, "splits_final.pkl")) - - splits = [] - for fold in range(5): - tr, val = get_split_deterministic(patients, fold, 5, 12345) - train = ["T2_" + i for i in tr] + ["T1_in_" + i for i in tr] + ["T1_out_" + i for i in tr] - validation = ["T2_" + i for i in val] + ["T1_in_" + i for i in val] + ["T1_out_" + i for i in val] - splits.append({ - 'train': train, - 'val': validation - }) - save_pickle(splits, join(output_preprocessed_v2, "splits_final.pkl")) - diff --git a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/evaluation/__init__.py b/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/evaluation/__init__.py deleted file mode 100644 index 72b8078b9dddddf22182fec2555d8d118ea72622..0000000000000000000000000000000000000000 --- a/spaces/ho11laqe/nnUNet_calvingfront_detection/nnunet/evaluation/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -from __future__ import absolute_import -from . import * \ No newline at end of file diff --git a/spaces/huggingface-projects/codellama-bot/README.md b/spaces/huggingface-projects/codellama-bot/README.md deleted file mode 100644 index ed933f3cb1974d3972141ffb941eb666991bfa3e..0000000000000000000000000000000000000000 --- a/spaces/huggingface-projects/codellama-bot/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Codellama Bot -emoji: 📚 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.44.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/huy-ha/semabs-relevancy/CLIP/clip/model_explainability.py b/spaces/huy-ha/semabs-relevancy/CLIP/clip/model_explainability.py deleted file mode 100644 index e55f383bce90b34a12799cc84da1bdb69ebec376..0000000000000000000000000000000000000000 --- a/spaces/huy-ha/semabs-relevancy/CLIP/clip/model_explainability.py +++ /dev/null @@ -1,602 +0,0 @@ -# modified from: https://github.com/hila-chefer/Transformer-MM-Explainability/blob/main/CLIP/clip/model.py -from collections import OrderedDict -from typing import Tuple, Union - -import numpy as np -import torch -import torch.nn.functional as F -from torch import nn -from .auxiliary import ( - multi_head_attention_forward, - MultiheadAttention, - interpolate_positional_emb, -) -import sys - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1): - super().__init__() - - # all conv layers have stride 1. 
an avgpool is performed after the second convolution when stride > 1 - self.conv1 = nn.Conv2d(inplanes, planes, 1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - - self.conv2 = nn.Conv2d(planes, planes, 3, padding=1, bias=False) - self.bn2 = nn.BatchNorm2d(planes) - - self.avgpool = nn.AvgPool2d(stride) if stride > 1 else nn.Identity() - - self.conv3 = nn.Conv2d(planes, planes * self.expansion, 1, bias=False) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - - self.relu = nn.ReLU(inplace=True) - self.downsample = None - self.stride = stride - - if stride > 1 or inplanes != planes * Bottleneck.expansion: - # downsampling layer is prepended with an avgpool, and the subsequent convolution has stride 1 - self.downsample_modules = OrderedDict( - [ - ("-1", nn.AvgPool2d(stride)), - ( - "0", - nn.Conv2d( - inplanes, planes * self.expansion, 1, stride=1, bias=False - ), - ), - ("1", nn.BatchNorm2d(planes * self.expansion)), - ] - ) - self.downsample = nn.Sequential(self.downsample_modules) - - def forward(self, x: torch.Tensor): - identity = x - - out = self.relu(self.bn1(self.conv1(x))) - out = self.relu(self.bn2(self.conv2(out))) - out = self.avgpool(out) - out = self.bn3(self.conv3(out)) - - if self.downsample is not None: - identity = self.downsample(x) - - out += identity - out = self.relu(out) - return out - - -class AttentionPool2d(nn.Module): - def __init__( - self, spacial_dim: int, embed_dim: int, num_heads: int, output_dim: int = None - ): - super().__init__() - self.positional_embedding = nn.Parameter( - torch.randn(spacial_dim**2 + 1, embed_dim) / embed_dim**0.5 - ) - self.k_proj = nn.Linear(embed_dim, embed_dim) - self.q_proj = nn.Linear(embed_dim, embed_dim) - self.v_proj = nn.Linear(embed_dim, embed_dim) - self.c_proj = nn.Linear(embed_dim, output_dim or embed_dim) - self.num_heads = num_heads - - def forward(self, x): - x = x.reshape(x.shape[0], x.shape[1], x.shape[2] * x.shape[3]).permute( - 2, 0, 1 - ) # NCHW -> (HW)NC - x = torch.cat([x.mean(dim=0, keepdim=True), x], dim=0) # (HW+1)NC - x = x + self.positional_embedding[:, None, :].to(x.dtype) # (HW+1)NC - x, _ = multi_head_attention_forward( - query=x, - key=x, - value=x, - embed_dim_to_check=x.shape[-1], - num_heads=self.num_heads, - q_proj_weight=self.q_proj.weight, - k_proj_weight=self.k_proj.weight, - v_proj_weight=self.v_proj.weight, - in_proj_weight=None, - in_proj_bias=torch.cat( - [self.q_proj.bias, self.k_proj.bias, self.v_proj.bias] - ), - bias_k=None, - bias_v=None, - add_zero_attn=False, - dropout_p=0, - out_proj_weight=self.c_proj.weight, - out_proj_bias=self.c_proj.bias, - use_separate_proj_weight=True, - training=self.training, - need_weights=False, - ) - - return x[0] - - -class ModifiedResNet(nn.Module): - """ - A ResNet class that is similar to torchvision's but contains the following changes: - - There are now 3 "stem" convolutions as opposed to 1, with an average pool instead of a max pool. 
- - Performs anti-aliasing strided convolutions, where an avgpool is prepended to convolutions with stride > 1 - - The final pooling layer is a QKV attention instead of an average pool - """ - - def __init__(self, layers, output_dim, heads, input_resolution=224, width=64): - super().__init__() - self.output_dim = output_dim - self.input_resolution = input_resolution - - # the 3-layer stem - self.conv1 = nn.Conv2d( - 3, width // 2, kernel_size=3, stride=2, padding=1, bias=False - ) - self.bn1 = nn.BatchNorm2d(width // 2) - self.conv2 = nn.Conv2d( - width // 2, width // 2, kernel_size=3, padding=1, bias=False - ) - self.bn2 = nn.BatchNorm2d(width // 2) - self.conv3 = nn.Conv2d(width // 2, width, kernel_size=3, padding=1, bias=False) - self.bn3 = nn.BatchNorm2d(width) - self.avgpool = nn.AvgPool2d(2) - self.relu = nn.ReLU(inplace=True) - - # residual layers - self._inplanes = width # this is a *mutable* variable used during construction - self.layer1 = self._make_layer(width, layers[0]) - self.layer2 = self._make_layer(width * 2, layers[1], stride=2) - self.layer3 = self._make_layer(width * 4, layers[2], stride=2) - self.layer4 = self._make_layer(width * 8, layers[3], stride=2) - - embed_dim = width * 32 # the ResNet feature dimension - self.attnpool = AttentionPool2d( - input_resolution // 32, embed_dim, heads, output_dim - ) - - def _make_layer(self, planes, blocks, stride=1): - layers = [Bottleneck(self._inplanes, planes, stride)] - - self._inplanes = planes * Bottleneck.expansion - for _ in range(1, blocks): - layers.append(Bottleneck(self._inplanes, planes)) - - return nn.Sequential(*layers) - - def forward(self, x): - def stem(x): - for conv, bn in [ - (self.conv1, self.bn1), - (self.conv2, self.bn2), - (self.conv3, self.bn3), - ]: - x = self.relu(bn(conv(x))) - x = self.avgpool(x) - return x - - x = x.type(self.conv1.weight.dtype) - x = stem(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x = self.layer4(x) - x = self.attnpool(x) - - return x - - -class LayerNorm(nn.LayerNorm): - """Subclass torch's LayerNorm to handle fp16.""" - - def forward(self, x: torch.Tensor): - orig_type = x.dtype - ret = super().forward(x.type(torch.float32)) - return ret.type(orig_type) - - -class QuickGELU(nn.Module): - def forward(self, x: torch.Tensor): - return x * torch.sigmoid(1.702 * x) - - -class ResidualAttentionBlock(nn.Module): - def __init__( - self, d_model: int, n_head: int, attn_mask: torch.Tensor = None, is_visual=True - ): - super().__init__() - - self.attn = MultiheadAttention(d_model, n_head) - self.ln_1 = LayerNorm(d_model) - self.mlp_modules = OrderedDict( - [ - ("c_fc", nn.Linear(d_model, d_model * 4)), - ("gelu", QuickGELU()), - ("c_proj", nn.Linear(d_model * 4, d_model)), - ] - ) - self.mlp = nn.Sequential(self.mlp_modules) - self.ln_2 = LayerNorm(d_model) - self.attn_mask = attn_mask - - self.attn_probs = None - self.attn_grad = None - - self.is_visual = is_visual - - def set_attn_probs(self, attn_probs): - self.attn_probs = attn_probs - - def set_attn_grad(self, attn_grad): - self.attn_grad = attn_grad - - def attention(self, x: torch.Tensor): - self.attn_mask = ( - self.attn_mask.to(dtype=x.dtype, device=x.device) - if self.attn_mask is not None - else None - ) - - if self.is_visual: - return self.attn( - x, - x, - x, - need_weights=False, - attn_mask=self.attn_mask, - attention_probs_forward_hook=self.set_attn_probs, - attention_probs_backwards_hook=self.set_attn_grad, - )[0] - - return self.attn(x, x, x, need_weights=False, attn_mask=self.attn_mask)[0] - - def 
forward(self, x: torch.Tensor): - x = x + self.attention(self.ln_1(x)) - x = x + self.mlp(self.ln_2(x)) - return x - - -class Transformer(nn.Module): - def __init__( - self, - width: int, - layers: int, - heads: int, - attn_mask: torch.Tensor = None, - is_visual=False, - ): - super().__init__() - self.width = width - self.layers = layers - self.resblocks_modules = [ - ResidualAttentionBlock(width, heads, attn_mask, is_visual=is_visual) - for _ in range(layers) - ] - self.resblocks = nn.Sequential(*self.resblocks_modules) - - def forward(self, x: torch.Tensor, tile_attn_mask: torch.Tensor = None): - prev_attn_masks = [] - if tile_attn_mask is not None: - for resblock in self.resblocks.modules(): - prev_attn_masks.append(resblock.attn_mask.clone()) - resblock.attn_mask = tile_attn_mask - x = self.resblocks(x) - if tile_attn_mask is not None: - for resblock, prev_attn_mask in zip( - self.resblocks.modules(), prev_attn_masks - ): - resblock.attn_mask = prev_attn_mask - return x - - -class VisionTransformer(nn.Module): - def __init__( - self, - input_resolution: int, - patch_size: int, - width: int, - layers: int, - heads: int, - output_dim: int, - ): - super().__init__() - self.input_resolution = input_resolution - self.output_dim = output_dim - self.conv1 = nn.Conv2d( - in_channels=3, - out_channels=width, - kernel_size=patch_size, - stride=patch_size, - bias=False, - ) - - scale = width**-0.5 - self.class_embedding = nn.Parameter(scale * torch.randn(width)) - self.positional_embedding = nn.Parameter( - scale * torch.randn((input_resolution // patch_size) ** 2 + 1, width) - ) - self.ln_pre = LayerNorm(width) - - self.transformer = Transformer(width, layers, heads, is_visual=True) - - self.ln_post = LayerNorm(width) - self.proj = nn.Parameter(scale * torch.randn(width, output_dim)) - - def forward(self, x: torch.Tensor, **kwargs): - x = self.conv1(x) # shape = [*, width, grid, grid] - # shape = [*, width, grid ** 2] - x = x.reshape(x.shape[0], x.shape[1], -1) - x = x.permute(0, 2, 1) # shape = [*, grid ** 2, width] - x = torch.cat( - [ - self.class_embedding.to(x.dtype) - + torch.zeros( - x.shape[0], 1, x.shape[-1], dtype=x.dtype, device=x.device - ), - x, - ], - dim=1, - ) # shape = [*, grid ** 2 + 1, width] - if len(x[0]) != 50: - pe = interpolate_positional_emb(self.positional_embedding, len(x[0])) - x += pe.to(x.dtype) - else: - x = x + self.positional_embedding.to(x.dtype) - x = self.ln_pre(x) - - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x, **kwargs) - x = x.permute(1, 0, 2) # LND -> NLD - - x = self.ln_post(x[:, 0, :]) - - if self.proj is not None: - x = x @ self.proj - - return x - - -class CLIP(nn.Module): - def __init__( - self, - embed_dim: int, - # vision - image_resolution: int, - vision_layers: Union[Tuple[int, int, int, int], int], - vision_width: int, - vision_patch_size: int, - # text - context_length: int, - vocab_size: int, - transformer_width: int, - transformer_heads: int, - transformer_layers: int, - ): - super().__init__() - - self.context_length = context_length - - if isinstance(vision_layers, (tuple, list)): - vision_heads = vision_width * 32 // 64 - self.visual = ModifiedResNet( - layers=vision_layers, - output_dim=embed_dim, - heads=vision_heads, - input_resolution=image_resolution, - width=vision_width, - ) - else: - vision_heads = vision_width // 64 - self.visual = VisionTransformer( - input_resolution=image_resolution, - patch_size=vision_patch_size, - width=vision_width, - layers=vision_layers, - heads=vision_heads, - output_dim=embed_dim, - ) - 
- self.transformer = Transformer( - width=transformer_width, - layers=transformer_layers, - heads=transformer_heads, - attn_mask=self.build_attention_mask(), - is_visual=False, - ) - - self.vocab_size = vocab_size - self.token_embedding = nn.Embedding(vocab_size, transformer_width) - self.positional_embedding = nn.Parameter( - torch.empty(self.context_length, transformer_width) - ) - self.ln_final = LayerNorm(transformer_width) - - self.text_projection = nn.Parameter(torch.empty(transformer_width, embed_dim)) - self.logit_scale = nn.Parameter(torch.ones([]) * np.log(1 / 0.07)) - - self.initialize_parameters() - - def initialize_parameters(self): - nn.init.normal_(self.token_embedding.weight, std=0.02) - nn.init.normal_(self.positional_embedding, std=0.01) - - if isinstance(self.visual, ModifiedResNet): - if self.visual.attnpool is not None: - std = self.visual.attnpool.c_proj.in_features**-0.5 - nn.init.normal_(self.visual.attnpool.q_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.k_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.v_proj.weight, std=std) - nn.init.normal_(self.visual.attnpool.c_proj.weight, std=std) - - for resnet_block in [ - self.visual.layer1, - self.visual.layer2, - self.visual.layer3, - self.visual.layer4, - ]: - for name, param in resnet_block.named_parameters(): - if name.endswith("bn3.weight"): - nn.init.zeros_(param) - - proj_std = (self.transformer.width**-0.5) * ( - (2 * self.transformer.layers) ** -0.5 - ) - attn_std = self.transformer.width**-0.5 - fc_std = (2 * self.transformer.width) ** -0.5 - for block in self.transformer.resblocks: - nn.init.normal_(block.attn.in_proj_weight, std=attn_std) - nn.init.normal_(block.attn.out_proj.weight, std=proj_std) - nn.init.normal_(block.mlp.c_fc.weight, std=fc_std) - nn.init.normal_(block.mlp.c_proj.weight, std=proj_std) - - if self.text_projection is not None: - nn.init.normal_(self.text_projection, std=self.transformer.width**-0.5) - - def build_attention_mask(self): - # lazily create causal attention mask, with full attention between the vision tokens - # pytorch uses additive attention mask; fill with -inf - mask = torch.empty(self.context_length, self.context_length) - mask.fill_(float("-inf")) - mask.triu_(1) # zero out the lower diagonal - return mask - - @property - def dtype(self): - return self.visual.conv1.weight.dtype - - def encode_image(self, image, **kwargs): - return self.visual(image.type(self.dtype), **kwargs) - - def encode_text(self, text): - x = self.token_embedding(text).type(self.dtype) # [batch_size, n_ctx, d_model] - - x = x + self.positional_embedding.type(self.dtype) - x = x.permute(1, 0, 2) # NLD -> LND - x = self.transformer(x) - x = x.permute(1, 0, 2) # LND -> NLD - x = self.ln_final(x).type(self.dtype) - - # x.shape = [batch_size, n_ctx, transformer.width] - # take features from the eot embedding (eot_token is the highest number in each sequence) - x = x[torch.arange(x.shape[0]), text.argmax(dim=-1)] @ self.text_projection - - return x - - def forward(self, image, text): - image_features = self.encode_image(image) - text_features = self.encode_text(text) - - # normalized features - image_features = image_features / image_features.norm(dim=-1, keepdim=True) - text_features = text_features / text_features.norm(dim=-1, keepdim=True) - - # cosine similarity as logits - logit_scale = self.logit_scale.exp() - logits_per_image = logit_scale * image_features @ text_features.t() - logits_per_text = logits_per_image.t() - - # shape = [global_batch_size, global_batch_size] - 
return logits_per_image, logits_per_text - - -def convert_weights(model: nn.Module): - """Convert applicable model parameters to fp16""" - - def _convert_weights_to_fp16(l): - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Linear)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - if isinstance(l, MultiheadAttention): - for attr in [ - *[f"{s}_proj_weight" for s in ["in", "q", "k", "v"]], - "in_proj_bias", - "bias_k", - "bias_v", - ]: - tensor = getattr(l, attr) - if tensor is not None: - tensor.data = tensor.data.half() - - for name in ["text_projection", "proj"]: - if hasattr(l, name): - attr = getattr(l, name) - if attr is not None: - attr.data = attr.data.half() - - model.apply(_convert_weights_to_fp16) - - -def build_model(state_dict: dict): - vit = "visual.proj" in state_dict - - if vit: - vision_width = state_dict["visual.conv1.weight"].shape[0] - vision_layers = len( - [ - k - for k in state_dict.keys() - if k.startswith("visual.") and k.endswith(".attn.in_proj_weight") - ] - ) - vision_patch_size = state_dict["visual.conv1.weight"].shape[-1] - grid_size = round( - (state_dict["visual.positional_embedding"].shape[0] - 1) ** 0.5 - ) - image_resolution = vision_patch_size * grid_size - else: - counts: list = [ - len( - set( - k.split(".")[2] - for k in state_dict - if k.startswith(f"visual.layer{b}") - ) - ) - for b in [1, 2, 3, 4] - ] - vision_layers = tuple(counts) - vision_width = state_dict["visual.layer1.0.conv1.weight"].shape[0] - output_width = round( - (state_dict["visual.attnpool.positional_embedding"].shape[0] - 1) ** 0.5 - ) - vision_patch_size = None - assert ( - output_width**2 + 1 - == state_dict["visual.attnpool.positional_embedding"].shape[0] - ) - image_resolution = output_width * 32 - - embed_dim = state_dict["text_projection"].shape[1] - context_length = state_dict["positional_embedding"].shape[0] - vocab_size = state_dict["token_embedding.weight"].shape[0] - transformer_width = state_dict["ln_final.weight"].shape[0] - transformer_heads = transformer_width // 64 - transformer_layers = len( - set( - k.split(".")[2] - for k in state_dict - if k.startswith(f"transformer.resblocks") - ) - ) - - model = CLIP( - embed_dim, - image_resolution, - vision_layers, - vision_width, - vision_patch_size, - context_length, - vocab_size, - transformer_width, - transformer_heads, - transformer_layers, - ) - - for key in ["input_resolution", "context_length", "vocab_size"]: - if key in state_dict: - del state_dict[key] - - convert_weights(model) - model.load_state_dict(state_dict) - return model.eval() diff --git a/spaces/hysts/BLIP-Diffusion/README.md b/spaces/hysts/BLIP-Diffusion/README.md deleted file mode 100644 index 895c613ef8165d89a8f2c37c37eea4f5ecb3f970..0000000000000000000000000000000000000000 --- a/spaces/hysts/BLIP-Diffusion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: BLIP-Diffusion -emoji: 🔥 -colorFrom: yellow -colorTo: indigo -sdk: gradio -sdk_version: 4.1.1 -app_file: app.py -pinned: false -suggested_hardware: t4-small ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/hzwluoye/gpt4/client/js/change-language.js b/spaces/hzwluoye/gpt4/client/js/change-language.js deleted file mode 100644 index ce87f6f60c7a9acca5e1902612930ef677f3fb65..0000000000000000000000000000000000000000 --- a/spaces/hzwluoye/gpt4/client/js/change-language.js +++ /dev/null @@ -1,47 +0,0 @@ -document.addEventListener('DOMContentLoaded', fetchLanguages); - -async 
function fetchLanguages() { - try { - const [languagesResponse, currentLanguageResponse] = await Promise.all([ - fetch(`${url_prefix}/get-languages`), - fetch(`${url_prefix}/get-locale`) - ]); - - const languages = await languagesResponse.json(); - const currentLanguage = await currentLanguageResponse.text(); - - const languageSelect = document.getElementById('language'); - languages.forEach(lang => { - const option = document.createElement('option'); - option.value = lang; - option.textContent = lang; - languageSelect.appendChild(option); - }); - - const savedLanguage = localStorage.getItem("language") || currentLanguage; - setLanguageOnPageLoad(savedLanguage); - } catch (error) { - console.error("Failed to fetch languages or current language"); - } -} - -function setLanguageOnPageLoad(language) { - document.getElementById("language").value = language; -} - -function changeLanguage(lang) { - fetch(`${url_prefix}/change-language`, { - method: "POST", - headers: { - "Content-Type": "application/json", - }, - body: JSON.stringify({ language: lang }), - }).then((response) => { - if (response.ok) { - localStorage.setItem("language", lang); - location.reload(); - } else { - console.error("Failed to change language"); - } - }); -} diff --git a/spaces/ieeecsuna/ieee_cs_tools/README.md b/spaces/ieeecsuna/ieee_cs_tools/README.md deleted file mode 100644 index 463a0c4e7a5b84a4df32df359be5c81906c58a16..0000000000000000000000000000000000000000 --- a/spaces/ieeecsuna/ieee_cs_tools/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Ieee Cs Tools -emoji: 💻 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.40.1 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/imcaoxuan/runwayml-stable-diffusion-v1-5/app.py b/spaces/imcaoxuan/runwayml-stable-diffusion-v1-5/app.py deleted file mode 100644 index a82df332731f067826d3e1ef79fabceffb74d07e..0000000000000000000000000000000000000000 --- a/spaces/imcaoxuan/runwayml-stable-diffusion-v1-5/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/runwayml/stable-diffusion-v1-5").launch() \ No newline at end of file diff --git a/spaces/inamXcontru/PoeticTTS/Call-of-Duty-Black-Ops-2-DLC-pack- NosTEAM.md b/spaces/inamXcontru/PoeticTTS/Call-of-Duty-Black-Ops-2-DLC-pack- NosTEAM.md deleted file mode 100644 index 2cc8d5f62dbaa2c9d5a178a4bffb1b2f9f64ac91..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Call-of-Duty-Black-Ops-2-DLC-pack- NosTEAM.md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>Call-of-Duty-Black-Ops-2-DLC-pack- NosTEAM</h2><br /><p><b><b>Download Zip</b> ☆☆☆ <a href="https://gohhs.com/2uz3kS">https://gohhs.com/2uz3kS</a></b></p><br /><br /> - -Insert a line with the content of 'dlcpacks:/neptunia/' Civilization 5: Neptunia mods! ... The pack includes: Purple Heart Black Heart Green Heart White Heart ... I'm worried that GTA Neptune A Puyo Puyo VS 2 (PPVS2) Skin Mod in ... More mods Jun 22, 2018 · Dubbed Call of Duty: Neptunia Ops 3, the mod ... 
4d29de3e1b<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/inamXcontru/PoeticTTS/Complete Tools For WEP And WPA Wireless Cracking.md b/spaces/inamXcontru/PoeticTTS/Complete Tools For WEP And WPA Wireless Cracking.md deleted file mode 100644 index c0a2d4d5993a71aef94fc73ac8f0e495f6efc2ec..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Complete Tools For WEP And WPA Wireless Cracking.md +++ /dev/null @@ -1,16 +0,0 @@ -<h2>Complete Tools For WEP and WPA Wireless Cracking</h2><br /><p><b><b>Download File</b> ››› <a href="https://gohhs.com/2uz3ah">https://gohhs.com/2uz3ah</a></b></p><br /><br /> -<br /> -The discussion about WEP and WPA in prior sections is from the perspective of a wireless device and the wireless controller that is responsible for setting the key. However, WPA and WEP are also from the view of a user of a wireless network. One user may connect a wireless network to his own laptop with the help of a wireless security controller. An attacker can connect to the wireless controller instead of the actual user. This is a common scenario in enterprise settings. A malicious user may be able to view the communications from the user’s laptop. A well-designed wireless security controller should protect the user from a malicious user. - - **Security Algorithm** **Complexity** **Number of Keystream** **Security** **Type of Attack** - - ------------------------ ---------------- ------------------------- -------------- -------------------- - - WEP Low 4 High Passive attack - - WPA High 4 Medium For - - , 4fefd39f24<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/inamXcontru/PoeticTTS/Damarukam Movie Hindi Dubbed Downloadl The Film that Broke Box Office Records in Telugu Cinema.md b/spaces/inamXcontru/PoeticTTS/Damarukam Movie Hindi Dubbed Downloadl The Film that Broke Box Office Records in Telugu Cinema.md deleted file mode 100644 index 0079f1f91743a9564491ba51ebe67ae932a8064b..0000000000000000000000000000000000000000 --- a/spaces/inamXcontru/PoeticTTS/Damarukam Movie Hindi Dubbed Downloadl The Film that Broke Box Office Records in Telugu Cinema.md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>AutoCAD Inventor LT Suite 2007 Keygen Only Xforce 3 Rar</h2><br /><p><b><b>Download Zip</b> » <a href="https://gohhs.com/2uz31G">https://gohhs.com/2uz31G</a></b></p><br /><br /> -<br /> - aaccfb2cb3<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/innnky/nyaru4.0/vdecoder/hifigan/nvSTFT.py b/spaces/innnky/nyaru4.0/vdecoder/hifigan/nvSTFT.py deleted file mode 100644 index 88597d62a505715091f9ba62d38bf0a85a31b95a..0000000000000000000000000000000000000000 --- a/spaces/innnky/nyaru4.0/vdecoder/hifigan/nvSTFT.py +++ /dev/null @@ -1,111 +0,0 @@ -import math -import os -os.environ["LRU_CACHE_CAPACITY"] = "3" -import random -import torch -import torch.utils.data -import numpy as np -import librosa -from librosa.util import normalize -from librosa.filters import mel as librosa_mel_fn -from scipy.io.wavfile import read -import soundfile as sf - -def load_wav_to_torch(full_path, target_sr=None, return_empty_on_exception=False): - sampling_rate = None - try: - data, sampling_rate = sf.read(full_path, always_2d=True)# than soundfile. 
- except Exception as ex: - print(f"'{full_path}' failed to load.\nException:") - print(ex) - if return_empty_on_exception: - return [], sampling_rate or target_sr or 32000 - else: - raise Exception(ex) - - if len(data.shape) > 1: - data = data[:, 0] - assert len(data) > 2# check duration of audio file is > 2 samples (because otherwise the slice operation was on the wrong dimension) - - if np.issubdtype(data.dtype, np.integer): # if audio data is type int - max_mag = -np.iinfo(data.dtype).min # maximum magnitude = min possible value of intXX - else: # if audio data is type fp32 - max_mag = max(np.amax(data), -np.amin(data)) - max_mag = (2**31)+1 if max_mag > (2**15) else ((2**15)+1 if max_mag > 1.01 else 1.0) # data should be either 16-bit INT, 32-bit INT or [-1 to 1] float32 - - data = torch.FloatTensor(data.astype(np.float32))/max_mag - - if (torch.isinf(data) | torch.isnan(data)).any() and return_empty_on_exception:# resample will crash with inf/NaN inputs. return_empty_on_exception will return empty arr instead of except - return [], sampling_rate or target_sr or 32000 - if target_sr is not None and sampling_rate != target_sr: - data = torch.from_numpy(librosa.core.resample(data.numpy(), orig_sr=sampling_rate, target_sr=target_sr)) - sampling_rate = target_sr - - return data, sampling_rate - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - -def dynamic_range_decompression(x, C=1): - return np.exp(x) / C - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - -class STFT(): - def __init__(self, sr=22050, n_mels=80, n_fft=1024, win_size=1024, hop_length=256, fmin=20, fmax=11025, clip_val=1e-5): - self.target_sr = sr - - self.n_mels = n_mels - self.n_fft = n_fft - self.win_size = win_size - self.hop_length = hop_length - self.fmin = fmin - self.fmax = fmax - self.clip_val = clip_val - self.mel_basis = {} - self.hann_window = {} - - def get_mel(self, y, center=False): - sampling_rate = self.target_sr - n_mels = self.n_mels - n_fft = self.n_fft - win_size = self.win_size - hop_length = self.hop_length - fmin = self.fmin - fmax = self.fmax - clip_val = self.clip_val - - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - if fmax not in self.mel_basis: - mel = librosa_mel_fn(sr=sampling_rate, n_fft=n_fft, n_mels=n_mels, fmin=fmin, fmax=fmax) - self.mel_basis[str(fmax)+'_'+str(y.device)] = torch.from_numpy(mel).float().to(y.device) - self.hann_window[str(y.device)] = torch.hann_window(self.win_size).to(y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_length)/2), int((n_fft-hop_length)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_length, win_length=win_size, window=self.hann_window[str(y.device)], - center=center, pad_mode='reflect', normalized=False, onesided=True) - # print(111,spec) - spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9)) - # print(222,spec) - spec = torch.matmul(self.mel_basis[str(fmax)+'_'+str(y.device)], spec) - # print(333,spec) - spec = dynamic_range_compression_torch(spec, clip_val=clip_val) - # print(444,spec) - return spec - - def __call__(self, audiopath): - audio, sr = load_wav_to_torch(audiopath, target_sr=self.target_sr) - spect = self.get_mel(audio.unsqueeze(0)).squeeze(0) - return spect - -stft = STFT() diff --git 
a/spaces/inplisQlawa/anything-midjourney-v4-1/Arts And Culture An Introduction To The Humanities Pdf Download.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Arts And Culture An Introduction To The Humanities Pdf Download.md deleted file mode 100644 index b2b23f1892fad658778edc0a3481cb5a806b570d..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Arts And Culture An Introduction To The Humanities Pdf Download.md +++ /dev/null @@ -1,7 +0,0 @@ -<br /> -<p>the knowledge concepts are the foundation for the liberal arts and sciences courses. students are expected to integrate information literacy, written communication, and analytic skills into their thinking. students are expected to develop a sophisticated understanding of the nature of knowledge, the processes of learning, and the methods for doing their work. the courses have a small core of concepts that are central to an understanding of the subject matter. students are expected to master the basic concepts, understand their interrelationship, and use them to develop more complex knowledge. students are expected to gain the skills to develop their own knowledge of complex topics. students are expected to be able to think critically about their own ideas and make informed decisions about their work. </p> -<p> most of the humanities texts are literary texts. there are also a number of texts that are not primarily literary; these include a few non-fiction texts, a few visual texts, and a few films, which are discussed in the digital video and film section. some of the questions also require familiarity with music and dance, which are covered in the music and dance section. both the music and dance questions are included in the digital video and film section as well. </p> -<h2>Arts And Culture An Introduction To The Humanities Pdf Download</h2><br /><p><b><b>Download Zip</b> ……… <a href="https://urlin.us/2uEwki">https://urlin.us/2uEwki</a></b></p><br /><br /> -<p> the choice of texts is not limited to these genres; there are also a few written texts that have a bearing on the humanities, including a few philosophical and religious texts. the choice of these texts is based on their relevance to the study of art and culture. because of the number of disciplines and periods that are covered, there will be some overlap among the test questions. the test designers have attempted to assure that such overlap is balanced and that each discipline has sufficient representation on the test. 
</p> 899543212b<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Descargar Mecasoft Pro 6.0 Crack Serial --.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Descargar Mecasoft Pro 6.0 Crack Serial --.md deleted file mode 100644 index 71ec19db62b3e119ab4f3971496af94fa12af4cd..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Descargar Mecasoft Pro 6.0 Crack Serial --.md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>Descargar Mecasoft Pro 6.0 Crack Serial --</h2><br /><p><b><b>Download File</b> ⚙ <a href="https://urlin.us/2uEvtJ">https://urlin.us/2uEvtJ</a></b></p><br /><br /> -<br /> - d5da3c52bf<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/Diablo2medianxlheroeditor113.md b/spaces/inplisQlawa/anything-midjourney-v4-1/Diablo2medianxlheroeditor113.md deleted file mode 100644 index 9844985c672b8747b41393600e882ae1ed9df8ce..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/Diablo2medianxlheroeditor113.md +++ /dev/null @@ -1,10 +0,0 @@ - -<p>related download diablo2hero (deutsche spieler) jun. 10, 2020 diablo2medianxlheroeditor113 for windows.exe, diablo2hero (deutsche spieler).exe. download now: diablo 2 hero lucerii mar. 17, 2020 diablo2medianxlheroeditor113 for windows. diablo2medianxlheroeditor113 ragnarok offline class3 ep 20.0 one2up olm to pst converter pro 1.3 crack patched linguatec personal translator v14.0. 17, 2020 for windows. > download diablo2hero mar. download now: diablo2hero mar. diablo2medianxlheroeditor113 we are going to try the crack out today. </p> -<h2>diablo2medianxlheroeditor113</h2><br /><p><b><b>Download</b> ►►► <a href="https://urlin.us/2uExar">https://urlin.us/2uExar</a></b></p><br /><br /> -<p>diablo2medianxlheroeditor113( file name: latte-m - 3.0.2.901 ) download diablo2hero (deutsche spieler) jun. 10, 2020 diablo2medianxlheroeditor113 for windows. diablo2medianxlheroeditor113 + download diablo2hero mar. 17, 2020 for windows.exe. diablo2medianxlheroeditor113 [premium-crack] download diablo2hero mar. diablo2medianxlheroeditor113 ( download now ) the fastest and most intuitive 3d modeling software on the market. </p> -<p>dave x 360 sonic cinema creative design suite 13.5.2 serial key. diablo2medianxlheroeditor113 hansolo fire starter 1.4 keygen. diablo2medianxlheroeditor113 linguatec personal translator v14.0. diablo2medianxlheroeditor113 free download staywire pro 8.2 keygen diablo2medianxlheroeditor113 download simcity 4 deluxe. diablo2medianxlheroeditor113 dvd to xvid rtr rar. </p> -<p>diablo2medianxlheroeditor113 download erp 2012. diablo2medianxlheroeditor113 ohmiolite 3.0. diablo2medianxlheroeditor113 txttodb ebook to pdf converter 2.5. diablo2medianxlheroeditor113 diablo2medianxlheroeditor113 pdf editor 2007. </p> -<p></p> -<p>diablo2medianxlheroeditor113 remove latest patches. diablo2medianxlheroeditor113 download diablo2medianxlheroeditor113 dvd to mkv 7. diablo2medianxlheroeditor113 download halloween music 2.0. diablo2medianxlheroeditor113 halloween music 2. diablo2medianxlheroeditor113.zip > halloween music 2. diablo2medianxlheroeditor113 abo cdm professional edition 2008 15. diablo2medianxlheroeditor113 download pdf to ppt x ppt download. 
</p> 899543212b<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/inplisQlawa/anything-midjourney-v4-1/NASCAR Heat 3 - 2018 Hot Pass Free Download [key Serial].md b/spaces/inplisQlawa/anything-midjourney-v4-1/NASCAR Heat 3 - 2018 Hot Pass Free Download [key Serial].md deleted file mode 100644 index 686533043de65c00cc87592b4b6520979916230b..0000000000000000000000000000000000000000 --- a/spaces/inplisQlawa/anything-midjourney-v4-1/NASCAR Heat 3 - 2018 Hot Pass Free Download [key Serial].md +++ /dev/null @@ -1,6 +0,0 @@ -<h2>NASCAR Heat 3 - 2018 Hot Pass Free Download [key serial]</h2><br /><p><b><b>DOWNLOAD</b> →→→ <a href="https://urlin.us/2uEwch">https://urlin.us/2uEwch</a></b></p><br /><br /> - -8/10 (102 votes) - Download Need for Speed ProStreet Free. ... Aug 3, 2019 - Need For Speed Payback Serial Key Keygen Generator Download. ... [Best Buy] Need for Speed: Hot Pursuit Remastered PS4/XB1 39.99 (reg. $54.99) ... 3 JUEGOS EN 1 NEED FOR SPEED MAS NASCAR HEAT 4 MAS WRC 5 FIA World Rally ... 1fdad05405<br /> -<br /> -<br /> -<p></p> diff --git a/spaces/inreVtussa/clothingai/Examples/Autocad Architecture 2008 64 Bit Crack Fix.md b/spaces/inreVtussa/clothingai/Examples/Autocad Architecture 2008 64 Bit Crack Fix.md deleted file mode 100644 index ff0844305f850670c1c30ff832f4a4e22f723326..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Autocad Architecture 2008 64 Bit Crack Fix.md +++ /dev/null @@ -1,79 +0,0 @@ - -<h1>How to Use Autocad Architecture 2008 64 Bit Crack</h1> -<p>If you are looking for a powerful and versatile software for designing and drafting architectural projects, you might want to try Autocad Architecture 2008 64 Bit Crack. This is a cracked version of the original Autocad Architecture 2008 software, which allows you to use it without paying for a license or activation code. In this article, we will show you how to download, install and use Autocad Architecture 2008 64 Bit Crack for free.</p> -<h2>Autocad Architecture 2008 64 Bit Crack</h2><br /><p><b><b>DOWNLOAD</b> ✶ <a href="https://tiurll.com/2uCkme">https://tiurll.com/2uCkme</a></b></p><br /><br /> -<h2>What is Autocad Architecture 2008 64 Bit Crack?</h2> -<p>Autocad Architecture 2008 64 Bit Crack is a software that lets you create and edit 2D and 3D architectural drawings and models. It is based on the Autocad platform, but with additional features and tools specifically designed for architects. 
Some of the benefits of using Autocad Architecture 2008 64 Bit Crack are:</p> -<ul> -<li>It supports 64-bit operating systems, which means faster performance and more memory capacity.</li> -<li>It has a user-friendly interface that allows you to access various commands and options easily.</li> -<li>It has a library of predefined architectural objects and components that you can drag and drop into your drawings.</li> -<li>It has a smart wall tool that automatically adjusts the wall thickness and height according to your specifications.</li> -<li>It has a dynamic block feature that lets you create and modify custom blocks with different parameters and properties.</li> -<li>It has a layer management system that helps you organize and control the visibility of your drawing elements.</li> -<li>It has a rendering engine that lets you create realistic images and animations of your designs.</li> -</ul> -<h2>How to Download Autocad Architecture 2008 64 Bit Crack?</h2> -<p>To download Autocad Architecture 2008 64 Bit Crack, you need to find a reliable source that offers the cracked version of the software. One of the websites that you can try is <a href="https://archive.org/details/AutoCAD_Architecture_2008_Autodesk_18508-051462-0010A_2007">Internet Archive</a>, which provides free access to various digital files and media. Here are the steps to download Autocad Architecture 2008 64 Bit Crack from Internet Archive:</p> -<ol> -<li>Go to <a href="https://archive.org/details/AutoCAD_Architecture_2008_Autodesk_18508-051462-0010A_2007">this link</a> and click on the "Download Options" button on the right side of the page.</li> -<li>Select the "ISO IMAGE" option and wait for the download to start.</li> -<li>Save the file to your computer and extract it using a software like WinRAR or 7-Zip.</li> -<li>You should see a folder named "AutoCAD_Architecture_2008_Autodesk_18508-051462-0010A_2007" with several files inside.</li> -</ol> -<h2>How to Install Autocad Architecture 2008 64 Bit Crack?</h2> -<p>To install Autocad Architecture 2008 64 Bit Crack, you need to follow these steps:</p> -<ol> -<li>Open the folder that you extracted from the ISO file and double-click on the "setup.exe" file.</li> -<li>Follow the instructions on the screen and choose the language, destination folder and components that you want to install.</li> -<li>When prompted for a serial number, enter any number that matches this format: XXX-XXXXXXXX (for example, 123-45678901).</li> -<li>When prompted for an activation code, open another folder named "keygen" and run the "autocad-2008-keygen.exe" file.</li> -<li>Copy the serial number that you entered during the installation and paste it into the keygen window.</li> -<li>Click on the "Generate" button and copy the activation code that appears in the keygen window.</li> -<li>Paste the activation code into the installation window and click on "Next".</li> -<li>Wait for the installation to finish and click on "Finish".</li> -</ol> -<h2>How to Use Autocad Architecture 2008 64 Bit Crack?</h2> -<p>To use Autocad Architecture 2008 64 Bit Crack, you need to launch the software from your desktop or start menu. You will see a welcome screen that gives you some options to start a new drawing, open an existing one or access some tutorials. You can also customize your workspace by choosing from different menus, toolbars and palettes. 
Here are some tips on how to use Autocad Architecture 2008 64 Bit Crack:</p> -<ul> -<li>To create a new drawing, click on the "New" button on the standard toolbar or press Ctrl+N on your keyboard. You can choose from different templates or start from scratch.</li> -<li>To open an existing drawing, click on the "Open" button on the standard toolbar or press Ctrl+O on your keyboard. You can browse through your folders or use the search function to find your file.</li> -<li>To save your drawing, click on the "Save" button on the standard toolbar or press Ctrl+S on your keyboard. You can choose a name and location for your file or overwrite an existing one.</li> -<li>To draw basic shapes and objects, use the tools on the draw toolbar or type commands in the command line. For example, to draw a line, click on the "Line" tool or type LINE in the command line. Then specify two points by clicking on the screen or entering coordinates.</li> -<li>To modify your drawing elements, use the tools on the modify toolbar or type commands in the command line. For example, to move an object, click on the "Move" tool or type MOVE in the command line. Then select the object that you want to move and specify a base point and a destination point.</li> -<li>To add architectural objects and components, use the tools on the architecture toolbar or type commands in the command line. For example, to insert a door, click on the "Door" tool or type DOOR in -the command line. Then specify a wall where you want to place -the door and adjust its properties such as size, style and orientation.</li></p> -<li>To change the appearance and properties of your drawing elements, use the tools on the properties toolbar or type commands in the command line. For example, to change the color of an object, click on the "Color" tool or type COLOR in the command line. Then select the object that you want to change and choose a color from the palette.</li> -<li>To organize and manage your drawing layers, use the tools on the layers toolbar or type commands in the command line. For example, to create a new layer, click on the "New Layer" tool or type LAYER in the command line. Then enter a name for your layer and set its attributes such as color, linetype and visibility.</li> -<li>To render and animate your drawing, use the tools on the render toolbar or type commands in the command line. For example, to render your drawing as an image, click on the "Render" tool or type RENDER in the command line. Then adjust the settings such as resolution, quality and lighting.</li> -</ul> -<h2>Conclusion</h2> -<p>Autocad Architecture 2008 64 Bit Crack is a great software for architects who want to create and edit professional and realistic drawings and models. It has many features and tools that make it easy and efficient to use. However, it is also illegal and risky to use a cracked version of the software, as it may contain viruses, malware or spyware that can harm your computer or compromise your data. Therefore, we recommend that you use a legitimate and licensed version of Autocad Architecture 2008 instead of using Autocad Architecture 2008 64 Bit Crack.</p> -<h2>How to Troubleshoot Autocad Architecture 2008 64 Bit Crack?</h2> -<p>Although Autocad Architecture 2008 64 Bit Crack may seem to work fine at first, you may encounter some problems and errors later on. 
Some of the common issues that you may face are:</p> -<p></p> -<ul> -<li>The software crashes or freezes frequently.</li> -<li>The software does not open or run properly.</li> -<li>The software displays error messages or warnings.</li> -<li>The software does not save or export your drawings correctly.</li> -<li>The software does not recognize your license or activation code.</li> -</ul> -<p>To fix these problems, you can try some of the following solutions:</p> -<ul> -<li>Update your system drivers and software to the latest versions.</li> -<li>Scan your computer for viruses, malware and spyware and remove them if found.</li> -<li>Repair or reinstall the software using the original setup file.</li> -<li>Contact the customer support of Autodesk or the website where you downloaded the software from.</li> -<li>Buy a genuine and licensed version of Autocad Architecture 2008 from Autodesk or an authorized dealer.</li> -</ul> -<h2>What are the Alternatives to Autocad Architecture 2008 64 Bit Crack?</h2> -<p>If you are looking for a legal and safe way to use a software for architectural design and drafting, you may want to consider some of the alternatives to Autocad Architecture 2008 64 Bit Crack. Some of the options that you can try are:</p> -<ul> -<li>Autodesk Revit: This is a software that allows you to create and manage building information models (BIM) for architecture, engineering and construction. It has features such as parametric modeling, collaboration tools and analysis tools.</li> -<li>SketchUp: This is a software that lets you create and edit 3D models of anything. It has features such as intuitive interface, push-pull modeling, geo-location and rendering plugins.</li> -<li>ArchiCAD: This is a software that enables you to design and document buildings using BIM technology. It has features such as smart objects, teamwork tools and virtual building explorer.</li> -</ul></p> 3cee63e6c2<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/inreVtussa/clothingai/Examples/Billing Software Free Download [PATCHED] Full Version Medical.md b/spaces/inreVtussa/clothingai/Examples/Billing Software Free Download [PATCHED] Full Version Medical.md deleted file mode 100644 index 362f3086f08c40f1967b0d29c4a7c92b98bbe5ed..0000000000000000000000000000000000000000 --- a/spaces/inreVtussa/clothingai/Examples/Billing Software Free Download [PATCHED] Full Version Medical.md +++ /dev/null @@ -1,193 +0,0 @@ -<br /> ----> ServiceClient failure for DeepLeo[/ERROR]</p> -<h3>billrMD</h3> - -<p>billrMD is a web-based billing software platform that allows you to manage your medical billing and coding processes with ease. It offers features such as electronic claims submission, scheduling, payment processing, reporting, patient invoicing, credit card processing, and more. billrMD integrates with popular EHR systems such as Epic, Cerner, Allscripts, and others. 
billrMD offers a free trial for 14 days, and then charges $49 per month per provider.</p> -<h2>billing software free download full version medical</h2><br /><p><b><b>Download Zip</b> ⚹ <a href="https://tiurll.com/2uCipN">https://tiurll.com/2uCipN</a></b></p><br /><br /> - -<h2>Benefits of Using Billing Software Free Download Full Version Medical</h2> - -<p>By using billing software free download full version medical, you can enjoy many benefits for your practice, such as:</p> - -<ul> -<li>Save time and money: Billing software can automate and streamline your medical billing and coding processes, reducing errors, rejections, denials, and delays. You can also save on paper, postage, and labor costs by using electronic claims submission and payment processing.</li> -<li>Improve accuracy and efficiency: Billing software can help you avoid common billing mistakes such as incorrect codes, missing information, duplicate claims, etc. You can also use features such as clinical decision rules engine, lab integration, e-prescribing, etc. to ensure the highest quality of care for your patients.</li> -<li>Enhance your cash flow and profitability: Billing software can help you get paid faster and more consistently by insurance companies and patients. You can also use features such as reporting, patient statements, credit card processing, etc. to monitor and optimize your revenue cycle management.</li> -<li>Increase your patient satisfaction and loyalty: Billing software can help you provide a better patient experience by offering features such as online booking, telehealth, patient portal, automated appointment reminders, etc. You can also use features such as invoicing, credit card processing, etc. to offer convenient payment options for your patients.</li> -<li>Stay compliant and secure: Billing software can help you comply with HIPAA regulations and other industry standards by offering features such as secure data storage, encryption, backup, recovery, audit trails, etc. You can also use features such as ONC 2015 Cures Update Certification to demonstrate your commitment to interoperability, security, and usability.</li> -</ul> - -<h2>Conclusion</h2> - -<p>Billing software free download full version medical is a great way to improve your medical billing -and coding processes -save time -money -and paper -and enhance your revenue cycle management. -There are many options available in 2023 -each with its own advantages -and disadvantages. -You should consider your practice size -specialty -needs -budget -existing systems -workflows -security -reliability -customer support -and training before choosing the best one for your practice. -You can also use platforms such as G2 or GetApp to compare different options based on user ratings -and reviews. -You can sign up for a free account today by visiting the website of the billing software provider that you are interested in -filling out a form -verifying your email address -logging in to your account -and exploring the features -and functionalities of the billing software. -We hope this article has helped you find the best billing software free download full version medical for your practice in 2023.</p> -<h3>GetApp</h3> - -<p>GetApp is a platform that helps you discover and compare the best software solutions for your business needs. You can browse through thousands of software products across various categories, such as medical billing, EHR, practice management, telemedicine, etc. 
You can also filter your search by features, pricing, ratings, reviews, integrations, and more. GetApp offers a list of the best free medical billing software in 2023, based on user feedback and expert analysis. You can view the details of each product, such as pros and cons, screenshots, videos, FAQs, etc. You can also request a free demo or trial from the software provider.</p> - -<h2>FAQs about Billing Software Free Download Full Version Medical</h2> - -<p>Here are some frequently asked questions about billing software free download full version medical:</p> - -<h3>What is billing software free download full version medical?</h3> - -<p>Billing software free download full version medical is a type of software that allows you to access a comprehensive and user-friendly platform that can handle all aspects of your medical billing and coding needs, from scheduling appointments and sending claims to collecting payments and generating reports. By using billing software free download full version medical, you can save time, money, and paper, improve your accuracy and efficiency, and enhance your cash flow and profitability.</p> - -<h3>Why use billing software free download full version medical?</h3> - -<p>Billing software free download full version medical can offer many benefits for your practice, such as:</p> - -<ul> -<li>Save time and money: Billing software can automate and streamline your medical billing and coding processes, reducing errors, rejections, denials, and delays. You can also save on paper, postage, and labor costs by using electronic claims submission and payment processing.</li> -<li>Improve accuracy and efficiency: Billing software can help you avoid common billing mistakes such as incorrect codes, missing information, duplicate claims, etc. You can also use features such as clinical decision rules engine, lab integration, e-prescribing, etc. to ensure the highest quality of care for your patients.</li> -<li>Enhance your cash flow and profitability: Billing software can help you get paid faster and more consistently by insurance companies and patients. You can also use features such as reporting, patient statements, credit card processing, etc. to monitor and optimize your revenue cycle management.</li> -<li>Increase your patient satisfaction and loyalty: Billing software can help you provide a better patient experience by offering features such as online booking, -telehealth, -patient portal, -automated appointment reminders, -etc. You can also use features such as invoicing, -credit card processing, -etc. to offer convenient payment options for your patients.</li> -<li>Stay compliant -and secure: Billing software can help you comply with HIPAA regulations -and other industry standards by offering features such as secure data storage -encryption -backup -recovery -audit trails -etc. You can also use features such as ONC 2015 Cures Update Certification to demonstrate your commitment to interoperability -security -and usability.</li> -</ul> - -<h3>How much does billing software free download full version medical cost?</h3> - -<p>Billing software free download full version medical can vary in price -from free to hundreds of dollars per month. 
Some factors that may affect the cost of billing software are:</p> - -<ul> -<li>The features -and functionalities of the software</li> -<li>The number of users -providers -patients -or claims</li> -<li>The type of pricing model -such as subscription -perpetual license -or pay-per-use</li> -<li>The hidden costs of the software -such as installation fees -training fees -maintenance fees -etc.</li> -</ul> - -<p>You should compare different options for billing software free download full version medical based on your budget -and the value that they offer for your money. You should also look for any discounts -promotions -or free trials that the software provider may offer.</p> -<p></p> - -<h3>How to use billing software free download full version medical?</h3> - -<p>To use billing software free download full version medical -you need to follow these steps:</p> - -<ol> -<li>Sign up for a free account by visiting the website of the billing software provider that you are interested in -such as OpenEMR -EZClaim -iSALUS Billing -and Scheduling -ExpertBox -or Pro Health Billing.</li> -<li>Fill out a form with your basic information such as name -email -phone number -practice name -etc.</li> -<li>Verify your email address by clicking on the link sent to you by the billing software provider.</li> -<li>Login to your account using your email address -and password.</li> -<li>Explore the features -and functionalities of the billing software -and customize it according to your preferences -such as adding users -providers -patients -services -codes -etc.</li> -<li>Start using the billing software to manage your medical billing -and coding processes -such as scheduling appointments -sending claims -collecting payments -and generating reports.</li> -</ol> - -<p>If you have any questions or issues while using the billing software -you can contact their customer support team via phone -email -chat -or ticket system.</p> - -<h2>Conclusion</h2> - -<p>Billing software free download full version medical is a great way to improve your medical billing -and coding processes -save time -money -and paper -and enhance your revenue cycle management. -There are many options available in 2023 -each with its own advantages -and disadvantages. -You should consider your practice size -specialty -needs -budget -existing systems -workflows -security -reliability -customer support -and training before choosing the best one for your practice. -You can also use platforms such as G2 or GetApp to compare different options based on user ratings -and reviews. -You can sign up for a free account today by visiting the website of the billing software provider that you are interested in -filling out a form -verifying your email address -logging in to your account -and exploring the features -and functionalities of the billing software. 
-We hope this article has helped you find the best billing software free download full version medical for your practice in 2023.</p> 3cee63e6c2<br /> -<br /> -<br /> \ No newline at end of file diff --git a/spaces/jackli888/stable-diffusion-webui/modules/ngrok.py b/spaces/jackli888/stable-diffusion-webui/modules/ngrok.py deleted file mode 100644 index 3df2c06bf1f10d49b7e9397758bc4f3661a51ba7..0000000000000000000000000000000000000000 --- a/spaces/jackli888/stable-diffusion-webui/modules/ngrok.py +++ /dev/null @@ -1,26 +0,0 @@ -from pyngrok import ngrok, conf, exception - -def connect(token, port, region): - account = None - if token is None: - token = 'None' - else: - if ':' in token: - # token = authtoken:username:password - account = token.split(':')[1] + ':' + token.split(':')[-1] - token = token.split(':')[0] - - config = conf.PyngrokConfig( - auth_token=token, region=region - ) - try: - if account is None: - public_url = ngrok.connect(port, pyngrok_config=config, bind_tls=True).public_url - else: - public_url = ngrok.connect(port, pyngrok_config=config, bind_tls=True, auth=account).public_url - except exception.PyngrokNgrokError: - print(f'Invalid ngrok authtoken, ngrok connection aborted.\n' - f'Your token: {token}, get the right one on https://dashboard.ngrok.com/get-started/your-authtoken') - else: - print(f'ngrok connected to localhost:{port}! URL: {public_url}\n' - 'You can use this link after the launch is complete.') diff --git a/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/model.py b/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/model.py deleted file mode 100644 index e050d3204d8f1becdf0f8b3133470708e5420cea..0000000000000000000000000000000000000000 --- a/spaces/joaogabriellima/Real-Time-Voice-Cloning/encoder/model.py +++ /dev/null @@ -1,135 +0,0 @@ -from encoder.params_model import * -from encoder.params_data import * -from scipy.interpolate import interp1d -from sklearn.metrics import roc_curve -from torch.nn.utils import clip_grad_norm_ -from scipy.optimize import brentq -from torch import nn -import numpy as np -import torch - - -class SpeakerEncoder(nn.Module): - def __init__(self, device, loss_device): - super().__init__() - self.loss_device = loss_device - - # Network defition - self.lstm = nn.LSTM(input_size=mel_n_channels, - hidden_size=model_hidden_size, - num_layers=model_num_layers, - batch_first=True).to(device) - self.linear = nn.Linear(in_features=model_hidden_size, - out_features=model_embedding_size).to(device) - self.relu = torch.nn.ReLU().to(device) - - # Cosine similarity scaling (with fixed initial parameter values) - self.similarity_weight = nn.Parameter(torch.tensor([10.])).to(loss_device) - self.similarity_bias = nn.Parameter(torch.tensor([-5.])).to(loss_device) - - # Loss - self.loss_fn = nn.CrossEntropyLoss().to(loss_device) - - def do_gradient_ops(self): - # Gradient scale - self.similarity_weight.grad *= 0.01 - self.similarity_bias.grad *= 0.01 - - # Gradient clipping - clip_grad_norm_(self.parameters(), 3, norm_type=2) - - def forward(self, utterances, hidden_init=None): - """ - Computes the embeddings of a batch of utterance spectrograms. - - :param utterances: batch of mel-scale filterbanks of same duration as a tensor of shape - (batch_size, n_frames, n_channels) - :param hidden_init: initial hidden state of the LSTM as a tensor of shape (num_layers, - batch_size, hidden_size). Will default to a tensor of zeros if None. 
- :return: the embeddings as a tensor of shape (batch_size, embedding_size) - """ - # Pass the input through the LSTM layers and retrieve all outputs, the final hidden state - # and the final cell state. - out, (hidden, cell) = self.lstm(utterances, hidden_init) - - # We take only the hidden state of the last layer - embeds_raw = self.relu(self.linear(hidden[-1])) - - # L2-normalize it - embeds = embeds_raw / (torch.norm(embeds_raw, dim=1, keepdim=True) + 1e-5) - - return embeds - - def similarity_matrix(self, embeds): - """ - Computes the similarity matrix according the section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the similarity matrix as a tensor of shape (speakers_per_batch, - utterances_per_speaker, speakers_per_batch) - """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Inclusive centroids (1 per speaker). Cloning is needed for reverse differentiation - centroids_incl = torch.mean(embeds, dim=1, keepdim=True) - centroids_incl = centroids_incl.clone() / (torch.norm(centroids_incl, dim=2, keepdim=True) + 1e-5) - - # Exclusive centroids (1 per utterance) - centroids_excl = (torch.sum(embeds, dim=1, keepdim=True) - embeds) - centroids_excl /= (utterances_per_speaker - 1) - centroids_excl = centroids_excl.clone() / (torch.norm(centroids_excl, dim=2, keepdim=True) + 1e-5) - - # Similarity matrix. The cosine similarity of already 2-normed vectors is simply the dot - # product of these vectors (which is just an element-wise multiplication reduced by a sum). - # We vectorize the computation for efficiency. - sim_matrix = torch.zeros(speakers_per_batch, utterances_per_speaker, - speakers_per_batch).to(self.loss_device) - mask_matrix = 1 - np.eye(speakers_per_batch, dtype=np.int) - for j in range(speakers_per_batch): - mask = np.where(mask_matrix[j])[0] - sim_matrix[mask, :, j] = (embeds[mask] * centroids_incl[j]).sum(dim=2) - sim_matrix[j, :, j] = (embeds[j] * centroids_excl[j]).sum(dim=1) - - ## Even more vectorized version (slower maybe because of transpose) - # sim_matrix2 = torch.zeros(speakers_per_batch, speakers_per_batch, utterances_per_speaker - # ).to(self.loss_device) - # eye = np.eye(speakers_per_batch, dtype=np.int) - # mask = np.where(1 - eye) - # sim_matrix2[mask] = (embeds[mask[0]] * centroids_incl[mask[1]]).sum(dim=2) - # mask = np.where(eye) - # sim_matrix2[mask] = (embeds * centroids_excl).sum(dim=2) - # sim_matrix2 = sim_matrix2.transpose(1, 2) - - sim_matrix = sim_matrix * self.similarity_weight + self.similarity_bias - return sim_matrix - - def loss(self, embeds): - """ - Computes the softmax loss according the section 2.1 of GE2E. - - :param embeds: the embeddings as a tensor of shape (speakers_per_batch, - utterances_per_speaker, embedding_size) - :return: the loss and the EER for this batch of embeddings. 
- """ - speakers_per_batch, utterances_per_speaker = embeds.shape[:2] - - # Loss - sim_matrix = self.similarity_matrix(embeds) - sim_matrix = sim_matrix.reshape((speakers_per_batch * utterances_per_speaker, - speakers_per_batch)) - ground_truth = np.repeat(np.arange(speakers_per_batch), utterances_per_speaker) - target = torch.from_numpy(ground_truth).long().to(self.loss_device) - loss = self.loss_fn(sim_matrix, target) - - # EER (not backpropagated) - with torch.no_grad(): - inv_argmax = lambda i: np.eye(1, speakers_per_batch, i, dtype=np.int)[0] - labels = np.array([inv_argmax(i) for i in ground_truth]) - preds = sim_matrix.detach().cpu().numpy() - - # Snippet from https://yangcha.github.io/EER-ROC/ - fpr, tpr, thresholds = roc_curve(labels.flatten(), preds.flatten()) - eer = brentq(lambda x: 1. - x - interp1d(fpr, tpr)(x), 0., 1.) - - return loss, eer diff --git a/spaces/joaogabriellima/Real-Time-Voice-Cloning/vocoder/distribution.py b/spaces/joaogabriellima/Real-Time-Voice-Cloning/vocoder/distribution.py deleted file mode 100644 index d3119a5ba1e77bc25a92d2664f83d366f12399c0..0000000000000000000000000000000000000000 --- a/spaces/joaogabriellima/Real-Time-Voice-Cloning/vocoder/distribution.py +++ /dev/null @@ -1,132 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F - - -def log_sum_exp(x): - """ numerically stable log_sum_exp implementation that prevents overflow """ - # TF ordering - axis = len(x.size()) - 1 - m, _ = torch.max(x, dim=axis) - m2, _ = torch.max(x, dim=axis, keepdim=True) - return m + torch.log(torch.sum(torch.exp(x - m2), dim=axis)) - - -# It is adapted from https://github.com/r9y9/wavenet_vocoder/blob/master/wavenet_vocoder/mixture.py -def discretized_mix_logistic_loss(y_hat, y, num_classes=65536, - log_scale_min=None, reduce=True): - if log_scale_min is None: - log_scale_min = float(np.log(1e-14)) - y_hat = y_hat.permute(0,2,1) - assert y_hat.dim() == 3 - assert y_hat.size(1) % 3 == 0 - nr_mix = y_hat.size(1) // 3 - - # (B x T x C) - y_hat = y_hat.transpose(1, 2) - - # unpack parameters. (B, T, num_mixtures) x 3 - logit_probs = y_hat[:, :, :nr_mix] - means = y_hat[:, :, nr_mix:2 * nr_mix] - log_scales = torch.clamp(y_hat[:, :, 2 * nr_mix:3 * nr_mix], min=log_scale_min) - - # B x T x 1 -> B x T x num_mixtures - y = y.expand_as(means) - - centered_y = y - means - inv_stdv = torch.exp(-log_scales) - plus_in = inv_stdv * (centered_y + 1. / (num_classes - 1)) - cdf_plus = torch.sigmoid(plus_in) - min_in = inv_stdv * (centered_y - 1. / (num_classes - 1)) - cdf_min = torch.sigmoid(min_in) - - # log probability for edge case of 0 (before scaling) - # equivalent: torch.log(F.sigmoid(plus_in)) - log_cdf_plus = plus_in - F.softplus(plus_in) - - # log probability for edge case of 255 (before scaling) - # equivalent: (1 - F.sigmoid(min_in)).log() - log_one_minus_cdf_min = -F.softplus(min_in) - - # probability for all other cases - cdf_delta = cdf_plus - cdf_min - - mid_in = inv_stdv * centered_y - # log probability in the center of the bin, to be used in extreme cases - # (not actually used in our code) - log_pdf_mid = mid_in - log_scales - 2. * F.softplus(mid_in) - - # tf equivalent - """ - log_probs = tf.where(x < -0.999, log_cdf_plus, - tf.where(x > 0.999, log_one_minus_cdf_min, - tf.where(cdf_delta > 1e-5, - tf.log(tf.maximum(cdf_delta, 1e-12)), - log_pdf_mid - np.log(127.5)))) - """ - # TODO: cdf_delta <= 1e-5 actually can happen. How can we choose the value - # for num_classes=65536 case? 1e-7? not sure.. 
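    # The nested tf.where from the reference implementation above is reproduced with float
    # masks: inner_inner_cond keeps log(cdf_delta) wherever the bin probability is clearly
    # positive and otherwise falls back to the log-pdf at the bin centre; inner_cond and
    # cond then overwrite the saturated edges, using log(1 - cdf_min) for y > 0.999 and
    # log(cdf_plus) for y < -0.999.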
- inner_inner_cond = (cdf_delta > 1e-5).float() - - inner_inner_out = inner_inner_cond * \ - torch.log(torch.clamp(cdf_delta, min=1e-12)) + \ - (1. - inner_inner_cond) * (log_pdf_mid - np.log((num_classes - 1) / 2)) - inner_cond = (y > 0.999).float() - inner_out = inner_cond * log_one_minus_cdf_min + (1. - inner_cond) * inner_inner_out - cond = (y < -0.999).float() - log_probs = cond * log_cdf_plus + (1. - cond) * inner_out - - log_probs = log_probs + F.log_softmax(logit_probs, -1) - - if reduce: - return -torch.mean(log_sum_exp(log_probs)) - else: - return -log_sum_exp(log_probs).unsqueeze(-1) - - -def sample_from_discretized_mix_logistic(y, log_scale_min=None): - """ - Sample from discretized mixture of logistic distributions - Args: - y (Tensor): B x C x T - log_scale_min (float): Log scale minimum value - Returns: - Tensor: sample in range of [-1, 1]. - """ - if log_scale_min is None: - log_scale_min = float(np.log(1e-14)) - assert y.size(1) % 3 == 0 - nr_mix = y.size(1) // 3 - - # B x T x C - y = y.transpose(1, 2) - logit_probs = y[:, :, :nr_mix] - - # sample mixture indicator from softmax - temp = logit_probs.data.new(logit_probs.size()).uniform_(1e-5, 1.0 - 1e-5) - temp = logit_probs.data - torch.log(- torch.log(temp)) - _, argmax = temp.max(dim=-1) - - # (B, T) -> (B, T, nr_mix) - one_hot = to_one_hot(argmax, nr_mix) - # select logistic parameters - means = torch.sum(y[:, :, nr_mix:2 * nr_mix] * one_hot, dim=-1) - log_scales = torch.clamp(torch.sum( - y[:, :, 2 * nr_mix:3 * nr_mix] * one_hot, dim=-1), min=log_scale_min) - # sample from logistic & clip to interval - # we don't actually round to the nearest 8bit value when sampling - u = means.data.new(means.size()).uniform_(1e-5, 1.0 - 1e-5) - x = means + torch.exp(log_scales) * (torch.log(u) - torch.log(1. - u)) - - x = torch.clamp(torch.clamp(x, min=-1.), max=1.) - - return x - - -def to_one_hot(tensor, n, fill_with=1.): - # we perform one hot encore with respect to the last axis - one_hot = torch.FloatTensor(tensor.size() + (n,)).zero_() - if tensor.is_cuda: - one_hot = one_hot.cuda() - one_hot.scatter_(len(tensor.size()), tensor.unsqueeze(-1), fill_with) - return one_hot diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Hash/test_SHA1.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Hash/test_SHA1.py deleted file mode 100644 index a883a44b5075e1536f3e177d6bd74b7980ce88fd..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/Crypto/SelfTest/Hash/test_SHA1.py +++ /dev/null @@ -1,84 +0,0 @@ -# -*- coding: utf-8 -*- -# -# SelfTest/Hash/SHA1.py: Self-test for the SHA-1 hash function -# -# Written in 2008 by Dwayne C. Litzenberger <dlitz@dlitz.net> -# -# =================================================================== -# The contents of this file are dedicated to the public domain. To -# the extent that dedication to the public domain is not available, -# everyone is granted a worldwide, perpetual, royalty-free, -# non-exclusive license to exercise all rights associated with the -# contents of this file for any purpose whatsoever. -# No rights are reserved. -# -# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -# EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -# MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -# NONINFRINGEMENT. 
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS -# BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN -# ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN -# CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE -# SOFTWARE. -# =================================================================== - -"""Self-test suite for Crypto.Hash.SHA""" - -from binascii import hexlify - -from Crypto.SelfTest.loader import load_test_vectors - -# Test vectors from various sources -# This is a list of (expected_result, input[, description]) tuples. -test_data_various = [ - # FIPS PUB 180-2, A.1 - "One-Block Message" - ('a9993e364706816aba3e25717850c26c9cd0d89d', 'abc'), - - # FIPS PUB 180-2, A.2 - "Multi-Block Message" - ('84983e441c3bd26ebaae4aa1f95129e5e54670f1', - 'abcdbcdecdefdefgefghfghighijhijkijkljklmklmnlmnomnopnopq'), - - # FIPS PUB 180-2, A.3 - "Long Message" -# ('34aa973cd4c4daa4f61eeb2bdbad27316534016f', -# 'a' * 10**6, -# '"a" * 10**6'), - - # RFC 3174: Section 7.3, "TEST4" (multiple of 512 bits) - ('dea356a2cddd90c7a7ecedc5ebb563934f460452', - '01234567' * 80, - '"01234567" * 80'), -] - -def get_tests(config={}): - from Crypto.Hash import SHA1 - from .common import make_hash_tests - - tests = [] - - test_vectors = load_test_vectors(("Hash", "SHA1"), - "SHA1ShortMsg.rsp", - "KAT SHA-1", - { "len" : lambda x: int(x) } ) or [] - - test_data = test_data_various[:] - for tv in test_vectors: - try: - if tv.startswith('['): - continue - except AttributeError: - pass - if tv.len == 0: - tv.msg = b"" - test_data.append((hexlify(tv.md), tv.msg, tv.desc)) - - tests = make_hash_tests(SHA1, "SHA1", test_data, - digest_size=20, - oid="1.3.14.3.2.26") - return tests - -if __name__ == '__main__': - import unittest - suite = lambda: unittest.TestSuite(get_tests()) - unittest.main(defaultTest='suite') - -# vim:set ts=4 sw=4 sts=4 expandtab: diff --git a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/regex.py b/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/regex.py deleted file mode 100644 index fe852fdfce07211446ba480d4acdaf7f6f2a4542..0000000000000000000000000000000000000000 --- a/spaces/joaopereirajp/livvieChatBot/venv/lib/python3.9/site-packages/bson/regex.py +++ /dev/null @@ -1,134 +0,0 @@ -# Copyright 2013-present MongoDB, Inc. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. 
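# The Regex class below stores a raw pattern together with an integer flag bitmask so that
# patterns valid in MongoDB's PCRE dialect but not in Python's re module can still
# round-trip through BSON; Regex.from_native() builds one from a compiled Python pattern
# and try_compile() attempts the reverse conversion.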
- -"""Tools for representing MongoDB regular expressions.""" - -import re -from typing import Any, Generic, Pattern, Type, TypeVar, Union - -from bson._helpers import _getstate_slots, _setstate_slots -from bson.son import RE_TYPE - - -def str_flags_to_int(str_flags: str) -> int: - flags = 0 - if "i" in str_flags: - flags |= re.IGNORECASE - if "l" in str_flags: - flags |= re.LOCALE - if "m" in str_flags: - flags |= re.MULTILINE - if "s" in str_flags: - flags |= re.DOTALL - if "u" in str_flags: - flags |= re.UNICODE - if "x" in str_flags: - flags |= re.VERBOSE - - return flags - - -_T = TypeVar("_T", str, bytes) - - -class Regex(Generic[_T]): - """BSON regular expression data.""" - - __slots__ = ("pattern", "flags") - - __getstate__ = _getstate_slots - __setstate__ = _setstate_slots - - _type_marker = 11 - - @classmethod - def from_native(cls: Type["Regex"], regex: "Pattern[_T]") -> "Regex[_T]": - """Convert a Python regular expression into a ``Regex`` instance. - - Note that in Python 3, a regular expression compiled from a - :class:`str` has the ``re.UNICODE`` flag set. If it is undesirable - to store this flag in a BSON regular expression, unset it first:: - - >>> pattern = re.compile('.*') - >>> regex = Regex.from_native(pattern) - >>> regex.flags ^= re.UNICODE - >>> db.collection.insert_one({'pattern': regex}) - - :Parameters: - - `regex`: A regular expression object from ``re.compile()``. - - .. warning:: - Python regular expressions use a different syntax and different - set of flags than MongoDB, which uses `PCRE`_. A regular - expression retrieved from the server may not compile in - Python, or may match a different set of strings in Python than - when used in a MongoDB query. - - .. _PCRE: http://www.pcre.org/ - """ - if not isinstance(regex, RE_TYPE): - raise TypeError("regex must be a compiled regular expression, not %s" % type(regex)) - - return Regex(regex.pattern, regex.flags) - - def __init__(self, pattern: _T, flags: Union[str, int] = 0) -> None: - """BSON regular expression data. - - This class is useful to store and retrieve regular expressions that are - incompatible with Python's regular expression dialect. - - :Parameters: - - `pattern`: string - - `flags`: (optional) an integer bitmask, or a string of flag - characters like "im" for IGNORECASE and MULTILINE - """ - if not isinstance(pattern, (str, bytes)): - raise TypeError("pattern must be a string, not %s" % type(pattern)) - self.pattern: _T = pattern - - if isinstance(flags, str): - self.flags = str_flags_to_int(flags) - elif isinstance(flags, int): - self.flags = flags - else: - raise TypeError("flags must be a string or int, not %s" % type(flags)) - - def __eq__(self, other: Any) -> bool: - if isinstance(other, Regex): - return self.pattern == other.pattern and self.flags == other.flags - else: - return NotImplemented - - __hash__ = None # type: ignore - - def __ne__(self, other: Any) -> bool: - return not self == other - - def __repr__(self) -> str: - return f"Regex({self.pattern!r}, {self.flags!r})" - - def try_compile(self) -> "Pattern[_T]": - """Compile this :class:`Regex` as a Python regular expression. - - .. warning:: - Python regular expressions use a different syntax and different - set of flags than MongoDB, which uses `PCRE`_. A regular - expression retrieved from the server may not compile in - Python, or may match a different set of strings in Python than - when used in a MongoDB query. :meth:`try_compile()` may raise - :exc:`re.error`. - - .. 
_PCRE: http://www.pcre.org/ - """ - return re.compile(self.pattern, self.flags) diff --git a/spaces/jone/Music_Source_Separation/bytesep/plot_results/musdb18.py b/spaces/jone/Music_Source_Separation/bytesep/plot_results/musdb18.py deleted file mode 100644 index eb91faa60b79f0f34aba1bb4810c2be7be8438f3..0000000000000000000000000000000000000000 --- a/spaces/jone/Music_Source_Separation/bytesep/plot_results/musdb18.py +++ /dev/null @@ -1,198 +0,0 @@ -import argparse -import os -import pickle - -import matplotlib.pyplot as plt -import numpy as np - - -def load_sdrs(workspace, task_name, filename, config, gpus, source_type): - - stat_path = os.path.join( - workspace, - "statistics", - task_name, - filename, - "config={},gpus={}".format(config, gpus), - "statistics.pkl", - ) - - stat_dict = pickle.load(open(stat_path, 'rb')) - - median_sdrs = [e['median_sdr_dict'][source_type] for e in stat_dict['test']] - - return median_sdrs - - -def plot_statistics(args): - - # arguments & parameters - workspace = args.workspace - select = args.select - task_name = "musdb18" - filename = "train" - - # paths - fig_path = os.path.join('results', task_name, "sdr_{}.pdf".format(select)) - os.makedirs(os.path.dirname(fig_path), exist_ok=True) - - linewidth = 1 - lines = [] - fig, ax = plt.subplots(1, 1, figsize=(8, 6)) - - if select == '1a': - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='vocals-accompaniment,unet', - gpus=1, - source_type="vocals", - ) - (line,) = ax.plot(sdrs, label='UNet,l1_wav', linewidth=linewidth) - lines.append(line) - ylim = 15 - - elif select == '1b': - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='accompaniment-vocals,unet', - gpus=1, - source_type="accompaniment", - ) - (line,) = ax.plot(sdrs, label='UNet,l1_wav', linewidth=linewidth) - lines.append(line) - ylim = 20 - - if select == '1c': - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='vocals-accompaniment,unet', - gpus=1, - source_type="vocals", - ) - (line,) = ax.plot(sdrs, label='UNet,l1_wav', linewidth=linewidth) - lines.append(line) - - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='vocals-accompaniment,resunet', - gpus=2, - source_type="vocals", - ) - (line,) = ax.plot(sdrs, label='ResUNet_ISMIR2021,l1_wav', linewidth=linewidth) - lines.append(line) - - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='vocals-accompaniment,unet_subbandtime', - gpus=1, - source_type="vocals", - ) - (line,) = ax.plot(sdrs, label='unet_subband,l1_wav', linewidth=linewidth) - lines.append(line) - - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='vocals-accompaniment,resunet_subbandtime', - gpus=1, - source_type="vocals", - ) - (line,) = ax.plot(sdrs, label='resunet_subband,l1_wav', linewidth=linewidth) - lines.append(line) - - ylim = 15 - - elif select == '1d': - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='accompaniment-vocals,unet', - gpus=1, - source_type="accompaniment", - ) - (line,) = ax.plot(sdrs, label='UNet,l1_wav', linewidth=linewidth) - lines.append(line) - - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='accompaniment-vocals,resunet', - gpus=2, - source_type="accompaniment", - ) - (line,) = ax.plot(sdrs, label='ResUNet_ISMIR2021,l1_wav', linewidth=linewidth) - lines.append(line) - - # sdrs = load_sdrs( - # workspace, - # task_name, - # filename, - # config='accompaniment-vocals,unet_subbandtime', - # gpus=1, - # source_type="accompaniment", - # ) - # (line,) = ax.plot(sdrs, 
label='UNet_subbtandtime,l1_wav', linewidth=linewidth) - # lines.append(line) - - sdrs = load_sdrs( - workspace, - task_name, - filename, - config='accompaniment-vocals,resunet_subbandtime', - gpus=1, - source_type="accompaniment", - ) - (line,) = ax.plot( - sdrs, label='ResUNet_subbtandtime,l1_wav', linewidth=linewidth - ) - lines.append(line) - - ylim = 20 - - else: - raise Exception('Error!') - - eval_every_iterations = 10000 - total_ticks = 50 - ticks_freq = 10 - - ax.set_ylim(0, ylim) - ax.set_xlim(0, total_ticks) - ax.xaxis.set_ticks(np.arange(0, total_ticks + 1, ticks_freq)) - ax.xaxis.set_ticklabels( - np.arange( - 0, - total_ticks * eval_every_iterations + 1, - ticks_freq * eval_every_iterations, - ) - ) - ax.yaxis.set_ticks(np.arange(ylim + 1)) - ax.yaxis.set_ticklabels(np.arange(ylim + 1)) - ax.grid(color='b', linestyle='solid', linewidth=0.3) - plt.legend(handles=lines, loc=4) - - plt.savefig(fig_path) - print('Save figure to {}'.format(fig_path)) - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--workspace', type=str, required=True) - parser.add_argument('--select', type=str, required=True) - - args = parser.parse_args() - - plot_statistics(args) diff --git a/spaces/katasou/Music-discord-bot/URL.py b/spaces/katasou/Music-discord-bot/URL.py deleted file mode 100644 index 2127a2df4565ddbd723db346856f26eaee8b68c4..0000000000000000000000000000000000000000 --- a/spaces/katasou/Music-discord-bot/URL.py +++ /dev/null @@ -1,134 +0,0 @@ -# Module that writes URLs to the playlist file -import requests -from bs4 import BeautifulSoup -import yt_dlp -from urllib.parse import urlparse, parse_qs -import re - - -def URLwrite(input,guild): - print("input:",input) - #if url_or_word(input) != "None": - if input.startswith("http"): - #url = url_or_word(input) - url = input - if is_youtube_url(url):# YouTube URL - if has_ampersand_in_url(url): # URL contains an '&' - id = extract_playlist_id(url)# get the playlist ID - url = "https://www.youtube.com/playlist?list="+id# get the playlist URL - if input.startswith("https://www.nicobox.jp"):# check whether it is a NicoBox (VocaColle) link - url = nicobox_to_nicovideo("https://www.nicobox.jp/share?mylist=59279076")# convert to a Niconico URL - if input.startswith("https://www.nicovideo.jp"): - url = input.split('?')[0] - print("URL:",url) - type = URLwrite_to_txt(url,guild)# write to the txt file - return url,type# done - - else:# treat the input as a search keyword - url =search_youtube(input) - type = URLwrite_to_txt(url,guild) - return url,type - -def url_or_word(input): # check whether the input is a URL - # findall() searches for substrings matching the regular expression - url = re.findall('https?://(?:[-\w.]|(?:%[\da-fA-F]{2}))+', input) - if len(url) != 0: - return url[0] - else: - return "None" - -def is_youtube_url(url):# check whether it is a YouTube URL - pattern = r'(?:https?:\/\/)?(?:www\.)?(?:youtube\.com|youtu\.be)\/(?:watch\?v=|embed\/|v\/)?([a-zA-Z0-9_-]+)' - match = re.match(pattern, url) - return match is not None - -def has_ampersand_in_url(url):# check whether it is a special YouTube playlist URL - if '&' in url: - return True - else: - return False - -def extract_playlist_id(url):# extract the playlist ID - # parse the URL and get its query parameters - query_params = parse_qs(urlparse(url).query) - # extract the value of the "list" parameter - playlist_id = query_params.get('list', [''])[0] - return playlist_id - -def URLwrite_to_txt(input,id):# write function - ydl_opts = { - 'dump_single_json': True, - 'extract_flat': True, - 'skip_download': True, - } - - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - try: - result = ydl.extract_info(input, download=False) - if 'entries' in result: - # playlist case - entries = result['entries'] - urls = [entry['url'] for entry in entries if 'url' in entry] - - for i in
range(len(urls)):# write each entry to the txt file - with open(id+".txt", 'a') as f: - print(urls[i], file=f) - - return "Playlist" - else: - # individual video case - url = result['webpage_url'] - with open(id+".txt", 'a') as f:# write to the txt file - print(url, file=f) - - return "Video" - except yt_dlp.DownloadError as e:# error - return "Error" - -def nicobox_to_nicovideo(url):# NicoBox (VocaColle) URL => Niconico URL - parsed_url = urlparse(url) - query_params = parse_qs(parsed_url.query) - id = query_params.get('mylist', [''])[0]# get the mylist id - - url = 'https://www.nicovideo.jp/mylist/'+id - - response = requests.get(url) - - if response.status_code == 200: - # parse the response body with BeautifulSoup - soup = BeautifulSoup(response.text, 'html.parser') - meta_tag = soup.find('meta', {'property': 'og:url'}) - - if meta_tag: - content = meta_tag.get('content') - return content - - -def search_youtube(query): - ydl_opts = { - 'quiet': True, - 'format': 'best', - 'noplaylist': True, - 'extract_flat': True, - } - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - search_results = ydl.extract_info(f"ytsearch:{query}", download=False) - first_video_url = search_results['entries'][0]['url'] - return first_video_url - -def get_url(input): - with yt_dlp.YoutubeDL({'dump_single_json': True,'extract_flat': True,'skip_download': True,'cachedir': False }) as ydl: - try: - result = ydl.extract_info(input, download=False) - title = result['title'] - print(title) - return str(title) - except yt_dlp.DownloadError as e:# error - return "Error 再生不能" - -from concurrent.futures import ThreadPoolExecutor - -def get_multiple_urls(inputs): - with ThreadPoolExecutor() as executor: - titles = list(executor.map(get_url, inputs)) - return titles \ No newline at end of file diff --git a/spaces/kcagle/AutoGPT/ui/api.py b/spaces/kcagle/AutoGPT/ui/api.py deleted file mode 100644 index 3b46ad32148b23f06c6eb64c88708fc2bf92e4dc..0000000000000000000000000000000000000000 --- a/spaces/kcagle/AutoGPT/ui/api.py +++ /dev/null @@ -1,146 +0,0 @@ -import os, sys -import utils -import uuid -import json -import subprocess, threading - -FILE_DIR = os.path.dirname(os.path.abspath(__file__)) -REPO_DIR = os.path.dirname(FILE_DIR) -STATE_DIR = os.path.join(FILE_DIR, "state") -sys.path.append(REPO_DIR) -if not os.path.exists(STATE_DIR): - os.mkdir(STATE_DIR) -import time - - -def get_openai_api_key(): - return os.getenv("OPENAI_API_KEY") - - -running_apis = [] - - -def get_state(state_file): - with open(state_file, "r") as f: - state = json.load(f) - return state - - -def set_state(state_file, state): - with open(state_file, "w") as f: - json.dump(state, f) - - -class AutoAPI: - def __init__(self, openai_key, ai_name, ai_role, top_5_goals): - self.openai_key = openai_key - hex = uuid.uuid4().hex - print(hex) - self.state_file = os.path.join(STATE_DIR, f"state_{hex}.json") - self.log_file = os.path.join(STATE_DIR, f"log_{hex}.json") - - newline = "\n" - with open(os.path.join(REPO_DIR, "ai_settings.yaml"), "w") as f: - f.write( - f"""ai_goals: -{newline.join([f'- {goal[0]}' for goal in top_5_goals if goal[0]])} -ai_name: {ai_name} -ai_role: {ai_role} -""" - ) - state = { - "pending_input": None, - "awaiting_input": False, - "messages": [], - "last_message_read_index": -1, - } - set_state(self.state_file, state) - - with open(self.log_file, "w") as f: - subprocess.Popen( - [ - "python", - os.path.join(REPO_DIR, "ui", "api.py"), - openai_key, - self.state_file, - ], - cwd=REPO_DIR, - stdout=f, - stderr=f, - ) - - def send_message(self, message="Y"): - state = get_state(self.state_file) - state["pending_input"] = message - state["awaiting_input"] = False
- set_state(self.state_file, state) - - def get_chatbot_response(self): - while True: - state = get_state(self.state_file) - if ( - state["awaiting_input"] - and state["last_message_read_index"] >= len(state["messages"]) - 1 - ): - break - if state["last_message_read_index"] >= len(state["messages"]) - 1: - time.sleep(1) - else: - state["last_message_read_index"] += 1 - title, content = state["messages"][state["last_message_read_index"]] - yield (f"**{title.strip()}** " if title else "") + utils.remove_color( - content - ).replace("\n", "<br />") - set_state(self.state_file, state) - - -if __name__ == "__main__": - print(sys.argv) - _, openai_key, state_file = sys.argv - os.environ["OPENAI_API_KEY"] = openai_key - import autogpt.config.config - from autogpt.logs import logger - from autogpt.cli import main - import autogpt.utils - from autogpt.spinner import Spinner - - def add_message(title, content): - state = get_state(state_file) - state["messages"].append((title, content)) - set_state(state_file, state) - - def typewriter_log(title="", title_color="", content="", *args, **kwargs): - add_message(title, content) - - def warn(message, title="", *args, **kwargs): - add_message(title, message) - - def error(title, message="", *args, **kwargs): - add_message(title, message) - - def clean_input(prompt=""): - add_message(None, prompt) - state = get_state(state_file) - state["awaiting_input"] = True - set_state(state_file, state) - while state["pending_input"] is None: - state = get_state(state_file) - print("Waiting for input...") - time.sleep(1) - print("Got input") - pending_input = state["pending_input"] - state["pending_input"] = None - set_state(state_file, state) - return pending_input - - def spinner_start(): - add_message(None, "Thinking...") - - logger.typewriter_log = typewriter_log - logger.warn = warn - logger.error = error - autogpt.utils.clean_input = clean_input - Spinner.spin = spinner_start - - sys.argv = sys.argv[:1] - main() diff --git a/spaces/kernelmachine/gpt3-quality-filter/score.py b/spaces/kernelmachine/gpt3-quality-filter/score.py deleted file mode 100644 index 81e0c07942b0463a37d73ea9775c813420675de3..0000000000000000000000000000000000000000 --- a/spaces/kernelmachine/gpt3-quality-filter/score.py +++ /dev/null @@ -1,23 +0,0 @@ -from tqdm.auto import tqdm -import numpy as np - - -def score_text(df, clf, clf_vectorizer, field='text'): - ## score text using quality filter - df['filter_output'] = clf.predict_proba(clf_vectorizer.transform(tqdm(df[field]))).tolist() - df['prob_low_quality'] = df.filter_output.apply(lambda x: x[0]) - df['prob_high_quality'] = df.filter_output.apply(lambda x: x[1]) - df = df.drop(['filter_output'], axis=1) - df['GPT3_included'] = df.prob_high_quality.apply(lambda x: np.random.pareto(9) > (1 - x)) - - return df - -def get_counts(df, field='text'): - # count number of whitespace tokens - tqdm.pandas() - df['num_tokens'] = df[field].progress_apply(lambda x: len(x.split())) - return df - -def score(x, clf, vectorizer): - # score a single document - return clf.predict_proba(vectorizer.transform([x])) \ No newline at end of file diff --git a/spaces/kevinwang676/Bark-Coqui/README.md b/spaces/kevinwang676/Bark-Coqui/README.md deleted file mode 100644 index 601169a2121646d7833c2b932be68fa14a92b39e..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/Bark-Coqui/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Coqui Bark Voice Cloning -emoji: 🐸🐶 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py 
-pinned: false -duplicated_from: fffiloni/instant-TTS-Bark-cloning ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/backbones/iresnet2060.py b/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/backbones/iresnet2060.py deleted file mode 100644 index 21d1122144d207637d2444cba1f68fe630c89f31..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/SadTalker/src/face3d/models/arcface_torch/backbones/iresnet2060.py +++ /dev/null @@ -1,176 +0,0 @@ -import torch -from torch import nn - -assert torch.__version__ >= "1.8.1" -from torch.utils.checkpoint import checkpoint_sequential - -__all__ = ['iresnet2060'] - - -def conv3x3(in_planes, out_planes, stride=1, groups=1, dilation=1): - """3x3 convolution with padding""" - return nn.Conv2d(in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=dilation, - groups=groups, - bias=False, - dilation=dilation) - - -def conv1x1(in_planes, out_planes, stride=1): - """1x1 convolution""" - return nn.Conv2d(in_planes, - out_planes, - kernel_size=1, - stride=stride, - bias=False) - - -class IBasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None, - groups=1, base_width=64, dilation=1): - super(IBasicBlock, self).__init__() - if groups != 1 or base_width != 64: - raise ValueError('BasicBlock only supports groups=1 and base_width=64') - if dilation > 1: - raise NotImplementedError("Dilation > 1 not supported in BasicBlock") - self.bn1 = nn.BatchNorm2d(inplanes, eps=1e-05, ) - self.conv1 = conv3x3(inplanes, planes) - self.bn2 = nn.BatchNorm2d(planes, eps=1e-05, ) - self.prelu = nn.PReLU(planes) - self.conv2 = conv3x3(planes, planes, stride) - self.bn3 = nn.BatchNorm2d(planes, eps=1e-05, ) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - identity = x - out = self.bn1(x) - out = self.conv1(out) - out = self.bn2(out) - out = self.prelu(out) - out = self.conv2(out) - out = self.bn3(out) - if self.downsample is not None: - identity = self.downsample(x) - out += identity - return out - - -class IResNet(nn.Module): - fc_scale = 7 * 7 - - def __init__(self, - block, layers, dropout=0, num_features=512, zero_init_residual=False, - groups=1, width_per_group=64, replace_stride_with_dilation=None, fp16=False): - super(IResNet, self).__init__() - self.fp16 = fp16 - self.inplanes = 64 - self.dilation = 1 - if replace_stride_with_dilation is None: - replace_stride_with_dilation = [False, False, False] - if len(replace_stride_with_dilation) != 3: - raise ValueError("replace_stride_with_dilation should be None " - "or a 3-element tuple, got {}".format(replace_stride_with_dilation)) - self.groups = groups - self.base_width = width_per_group - self.conv1 = nn.Conv2d(3, self.inplanes, kernel_size=3, stride=1, padding=1, bias=False) - self.bn1 = nn.BatchNorm2d(self.inplanes, eps=1e-05) - self.prelu = nn.PReLU(self.inplanes) - self.layer1 = self._make_layer(block, 64, layers[0], stride=2) - self.layer2 = self._make_layer(block, - 128, - layers[1], - stride=2, - dilate=replace_stride_with_dilation[0]) - self.layer3 = self._make_layer(block, - 256, - layers[2], - stride=2, - dilate=replace_stride_with_dilation[1]) - self.layer4 = self._make_layer(block, - 512, - layers[3], - stride=2, - dilate=replace_stride_with_dilation[2]) - self.bn2 = nn.BatchNorm2d(512 * block.expansion, eps=1e-05, ) - self.dropout = 
nn.Dropout(p=dropout, inplace=True) - self.fc = nn.Linear(512 * block.expansion * self.fc_scale, num_features) - self.features = nn.BatchNorm1d(num_features, eps=1e-05) - nn.init.constant_(self.features.weight, 1.0) - self.features.weight.requires_grad = False - - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.normal_(m.weight, 0, 0.1) - elif isinstance(m, (nn.BatchNorm2d, nn.GroupNorm)): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - - if zero_init_residual: - for m in self.modules(): - if isinstance(m, IBasicBlock): - nn.init.constant_(m.bn2.weight, 0) - - def _make_layer(self, block, planes, blocks, stride=1, dilate=False): - downsample = None - previous_dilation = self.dilation - if dilate: - self.dilation *= stride - stride = 1 - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - conv1x1(self.inplanes, planes * block.expansion, stride), - nn.BatchNorm2d(planes * block.expansion, eps=1e-05, ), - ) - layers = [] - layers.append( - block(self.inplanes, planes, stride, downsample, self.groups, - self.base_width, previous_dilation)) - self.inplanes = planes * block.expansion - for _ in range(1, blocks): - layers.append( - block(self.inplanes, - planes, - groups=self.groups, - base_width=self.base_width, - dilation=self.dilation)) - - return nn.Sequential(*layers) - - def checkpoint(self, func, num_seg, x): - if self.training: - return checkpoint_sequential(func, num_seg, x) - else: - return func(x) - - def forward(self, x): - with torch.cuda.amp.autocast(self.fp16): - x = self.conv1(x) - x = self.bn1(x) - x = self.prelu(x) - x = self.layer1(x) - x = self.checkpoint(self.layer2, 20, x) - x = self.checkpoint(self.layer3, 100, x) - x = self.layer4(x) - x = self.bn2(x) - x = torch.flatten(x, 1) - x = self.dropout(x) - x = self.fc(x.float() if self.fp16 else x) - x = self.features(x) - return x - - -def _iresnet(arch, block, layers, pretrained, progress, **kwargs): - model = IResNet(block, layers, **kwargs) - if pretrained: - raise ValueError() - return model - - -def iresnet2060(pretrained=False, progress=True, **kwargs): - return _iresnet('iresnet2060', IBasicBlock, [3, 128, 1024 - 128, 3], pretrained, progress, **kwargs) diff --git a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/utils/plot.py b/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/utils/plot.py deleted file mode 100644 index ccc588e5c01ca550b69c385aeb3fd139c59fb88a..0000000000000000000000000000000000000000 --- a/spaces/kevinwang676/VoiceChangers/src/face3d/models/arcface_torch/utils/plot.py +++ /dev/null @@ -1,72 +0,0 @@ -# coding: utf-8 - -import os -from pathlib import Path - -import matplotlib.pyplot as plt -import numpy as np -import pandas as pd -from menpo.visualize.viewmatplotlib import sample_colours_from_colourmap -from prettytable import PrettyTable -from sklearn.metrics import roc_curve, auc - -image_path = "/data/anxiang/IJB_release/IJBC" -files = [ - "./ms1mv3_arcface_r100/ms1mv3_arcface_r100/ijbc.npy" -] - - -def read_template_pair_list(path): - pairs = pd.read_csv(path, sep=' ', header=None).values - t1 = pairs[:, 0].astype(np.int) - t2 = pairs[:, 1].astype(np.int) - label = pairs[:, 2].astype(np.int) - return t1, t2, label - - -p1, p2, label = read_template_pair_list( - os.path.join('%s/meta' % image_path, - '%s_template_pair_label.txt' % 'ijbc')) - -methods = [] -scores = [] -for file in files: - methods.append(file.split('/')[-2]) - scores.append(np.load(file)) - -methods = np.array(methods) 
-scores = dict(zip(methods, scores)) -colours = dict( - zip(methods, sample_colours_from_colourmap(methods.shape[0], 'Set2'))) -x_labels = [10 ** -6, 10 ** -5, 10 ** -4, 10 ** -3, 10 ** -2, 10 ** -1] -tpr_fpr_table = PrettyTable(['Methods'] + [str(x) for x in x_labels]) -fig = plt.figure() -for method in methods: - fpr, tpr, _ = roc_curve(label, scores[method]) - roc_auc = auc(fpr, tpr) - fpr = np.flipud(fpr) - tpr = np.flipud(tpr) # select largest tpr at same fpr - plt.plot(fpr, - tpr, - color=colours[method], - lw=1, - label=('[%s (AUC = %0.4f %%)]' % - (method.split('-')[-1], roc_auc * 100))) - tpr_fpr_row = [] - tpr_fpr_row.append("%s-%s" % (method, "IJBC")) - for fpr_iter in np.arange(len(x_labels)): - _, min_index = min( - list(zip(abs(fpr - x_labels[fpr_iter]), range(len(fpr))))) - tpr_fpr_row.append('%.2f' % (tpr[min_index] * 100)) - tpr_fpr_table.add_row(tpr_fpr_row) -plt.xlim([10 ** -6, 0.1]) -plt.ylim([0.3, 1.0]) -plt.grid(linestyle='--', linewidth=1) -plt.xticks(x_labels) -plt.yticks(np.linspace(0.3, 1.0, 8, endpoint=True)) -plt.xscale('log') -plt.xlabel('False Positive Rate') -plt.ylabel('True Positive Rate') -plt.title('ROC on IJB') -plt.legend(loc="lower right") -print(tpr_fpr_table) diff --git a/spaces/kira4424/VITS-fast-fine-tuning/text/cantonese.py b/spaces/kira4424/VITS-fast-fine-tuning/text/cantonese.py deleted file mode 100644 index b66d12138b81b70b86f18217d24a08fce76305c0..0000000000000000000000000000000000000000 --- a/spaces/kira4424/VITS-fast-fine-tuning/text/cantonese.py +++ /dev/null @@ -1,59 +0,0 @@ -import re -import cn2an -import opencc - - -converter = opencc.OpenCC('jyutjyu') - -# List of (Latin alphabet, ipa) pairs: -_latin_to_ipa = [(re.compile('%s' % x[0]), x[1]) for x in [ - ('A', 'ei˥'), - ('B', 'biː˥'), - ('C', 'siː˥'), - ('D', 'tiː˥'), - ('E', 'iː˥'), - ('F', 'e˥fuː˨˩'), - ('G', 'tsiː˥'), - ('H', 'ɪk̚˥tsʰyː˨˩'), - ('I', 'ɐi˥'), - ('J', 'tsei˥'), - ('K', 'kʰei˥'), - ('L', 'e˥llou˨˩'), - ('M', 'ɛːm˥'), - ('N', 'ɛːn˥'), - ('O', 'ou˥'), - ('P', 'pʰiː˥'), - ('Q', 'kʰiːu˥'), - ('R', 'aː˥lou˨˩'), - ('S', 'ɛː˥siː˨˩'), - ('T', 'tʰiː˥'), - ('U', 'juː˥'), - ('V', 'wiː˥'), - ('W', 'tʊk̚˥piː˥juː˥'), - ('X', 'ɪk̚˥siː˨˩'), - ('Y', 'waːi˥'), - ('Z', 'iː˨sɛːt̚˥') -]] - - -def number_to_cantonese(text): - return re.sub(r'\d+(?:\.?\d+)?', lambda x: cn2an.an2cn(x.group()), text) - - -def latin_to_ipa(text): - for regex, replacement in _latin_to_ipa: - text = re.sub(regex, replacement, text) - return text - - -def cantonese_to_ipa(text): - text = number_to_cantonese(text.upper()) - text = converter.convert(text).replace('-','').replace('$',' ') - text = re.sub(r'[A-Z]', lambda x: latin_to_ipa(x.group())+' ', text) - text = re.sub(r'[、;:]', ',', text) - text = re.sub(r'\s*,\s*', ', ', text) - text = re.sub(r'\s*。\s*', '. ', text) - text = re.sub(r'\s*?\s*', '? ', text) - text = re.sub(r'\s*!\s*', '! ', text) - text = re.sub(r'\s*$', '', text) - return text diff --git a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/sampler_seed.py b/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/sampler_seed.py deleted file mode 100644 index ee0dc6bdd8df5775857028aaed5444c0f59caf80..0000000000000000000000000000000000000000 --- a/spaces/kirch/Text2Video-Zero/annotator/uniformer/mmcv/runner/hooks/sampler_seed.py +++ /dev/null @@ -1,20 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -from .hook import HOOKS, Hook - - -@HOOKS.register_module() -class DistSamplerSeedHook(Hook): - """Data-loading sampler for distributed training. 
- - When distributed training, it is only useful in conjunction with - :obj:`EpochBasedRunner`, while :obj:`IterBasedRunner` achieves the same - purpose with :obj:`IterLoader`. - """ - - def before_epoch(self, runner): - if hasattr(runner.data_loader.sampler, 'set_epoch'): - # in case the data loader uses `SequentialSampler` in Pytorch - runner.data_loader.sampler.set_epoch(runner.epoch) - elif hasattr(runner.data_loader.batch_sampler.sampler, 'set_epoch'): - # batch sampler in pytorch warps the sampler as its attributes. - runner.data_loader.batch_sampler.sampler.set_epoch(runner.epoch) diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/sstruct.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/sstruct.py deleted file mode 100644 index d35bc9a5c8c4b3eba0e14fc7fb009fc172432dd0..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/fontTools/misc/sstruct.py +++ /dev/null @@ -1,220 +0,0 @@ -"""sstruct.py -- SuperStruct - -Higher level layer on top of the struct module, enabling to -bind names to struct elements. The interface is similar to -struct, except the objects passed and returned are not tuples -(or argument lists), but dictionaries or instances. - -Just like struct, we use fmt strings to describe a data -structure, except we use one line per element. Lines are -separated by newlines or semi-colons. Each line contains -either one of the special struct characters ('@', '=', '<', -'>' or '!') or a 'name:formatchar' combo (eg. 'myFloat:f'). -Repetitions, like the struct module offers them are not useful -in this context, except for fixed length strings (eg. 'myInt:5h' -is not allowed but 'myString:5s' is). The 'x' fmt character -(pad byte) is treated as 'special', since it is by definition -anonymous. Extra whitespace is allowed everywhere. - -The sstruct module offers one feature that the "normal" struct -module doesn't: support for fixed point numbers. These are spelled -as "n.mF", where n is the number of bits before the point, and m -the number of bits after the point. Fixed point numbers get -converted to floats. - -pack(fmt, object): - 'object' is either a dictionary or an instance (or actually - anything that has a __dict__ attribute). If it is a dictionary, - its keys are used for names. If it is an instance, it's - attributes are used to grab struct elements from. Returns - a string containing the data. - -unpack(fmt, data, object=None) - If 'object' is omitted (or None), a new dictionary will be - returned. If 'object' is a dictionary, it will be used to add - struct elements to. If it is an instance (or in fact anything - that has a __dict__ attribute), an attribute will be added for - each struct element. In the latter two cases, 'object' itself - is returned. - -unpack2(fmt, data, object=None) - Convenience function. Same as unpack, except data may be longer - than needed. The returned value is a tuple: (object, leftoverdata). - -calcsize(fmt) - like struct.calcsize(), but uses our own fmt strings: - it returns the size of the data in bytes. 
-""" - -from fontTools.misc.fixedTools import fixedToFloat as fi2fl, floatToFixed as fl2fi -from fontTools.misc.textTools import tobytes, tostr -import struct -import re - -__version__ = "1.2" -__copyright__ = "Copyright 1998, Just van Rossum <just@letterror.com>" - - -class Error(Exception): - pass - - -def pack(fmt, obj): - formatstring, names, fixes = getformat(fmt, keep_pad_byte=True) - elements = [] - if not isinstance(obj, dict): - obj = obj.__dict__ - for name in names: - value = obj[name] - if name in fixes: - # fixed point conversion - value = fl2fi(value, fixes[name]) - elif isinstance(value, str): - value = tobytes(value) - elements.append(value) - data = struct.pack(*(formatstring,) + tuple(elements)) - return data - - -def unpack(fmt, data, obj=None): - if obj is None: - obj = {} - data = tobytes(data) - formatstring, names, fixes = getformat(fmt) - if isinstance(obj, dict): - d = obj - else: - d = obj.__dict__ - elements = struct.unpack(formatstring, data) - for i in range(len(names)): - name = names[i] - value = elements[i] - if name in fixes: - # fixed point conversion - value = fi2fl(value, fixes[name]) - elif isinstance(value, bytes): - try: - value = tostr(value) - except UnicodeDecodeError: - pass - d[name] = value - return obj - - -def unpack2(fmt, data, obj=None): - length = calcsize(fmt) - return unpack(fmt, data[:length], obj), data[length:] - - -def calcsize(fmt): - formatstring, names, fixes = getformat(fmt) - return struct.calcsize(formatstring) - - -# matches "name:formatchar" (whitespace is allowed) -_elementRE = re.compile( - r"\s*" # whitespace - r"([A-Za-z_][A-Za-z_0-9]*)" # name (python identifier) - r"\s*:\s*" # whitespace : whitespace - r"([xcbB?hHiIlLqQfd]|" # formatchar... - r"[0-9]+[ps]|" # ...formatchar... - r"([0-9]+)\.([0-9]+)(F))" # ...formatchar - r"\s*" # whitespace - r"(#.*)?$" # [comment] + end of string -) - -# matches the special struct fmt chars and 'x' (pad byte) -_extraRE = re.compile(r"\s*([x@=<>!])\s*(#.*)?$") - -# matches an "empty" string, possibly containing whitespace and/or a comment -_emptyRE = re.compile(r"\s*(#.*)?$") - -_fixedpointmappings = {8: "b", 16: "h", 32: "l"} - -_formatcache = {} - - -def getformat(fmt, keep_pad_byte=False): - fmt = tostr(fmt, encoding="ascii") - try: - formatstring, names, fixes = _formatcache[fmt] - except KeyError: - lines = re.split("[\n;]", fmt) - formatstring = "" - names = [] - fixes = {} - for line in lines: - if _emptyRE.match(line): - continue - m = _extraRE.match(line) - if m: - formatchar = m.group(1) - if formatchar != "x" and formatstring: - raise Error("a special fmt char must be first") - else: - m = _elementRE.match(line) - if not m: - raise Error("syntax error in fmt: '%s'" % line) - name = m.group(1) - formatchar = m.group(2) - if keep_pad_byte or formatchar != "x": - names.append(name) - if m.group(3): - # fixed point - before = int(m.group(3)) - after = int(m.group(4)) - bits = before + after - if bits not in [8, 16, 32]: - raise Error("fixed point must be 8, 16 or 32 bits long") - formatchar = _fixedpointmappings[bits] - assert m.group(5) == "F" - fixes[name] = after - formatstring = formatstring + formatchar - _formatcache[fmt] = formatstring, names, fixes - return formatstring, names, fixes - - -def _test(): - fmt = """ - # comments are allowed - > # big endian (see documentation for struct) - # empty lines are allowed: - - ashort: h - along: l - abyte: b # a byte - achar: c - astr: 5s - afloat: f; adouble: d # multiple "statements" are allowed - afixed: 16.16F - abool: ? 
- apad: x - """ - - print("size:", calcsize(fmt)) - - class foo(object): - pass - - i = foo() - - i.ashort = 0x7FFF - i.along = 0x7FFFFFFF - i.abyte = 0x7F - i.achar = "a" - i.astr = "12345" - i.afloat = 0.5 - i.adouble = 0.5 - i.afixed = 1.5 - i.abool = True - - data = pack(fmt, i) - print("data:", repr(data)) - print(unpack(fmt, data)) - i2 = foo() - unpack(fmt, data, i2) - print(vars(i2)) - - -if __name__ == "__main__": - _test() diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Empty-96265974.js b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Empty-96265974.js deleted file mode 100644 index 05a2e7227970e35ff1125c3d612d866782d44cef..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Empty-96265974.js +++ /dev/null @@ -1,2 +0,0 @@ -import{S as _,i as r,s as m,V as g,G as d,C as f,M as u,g as c,E as p,X as b,Y as v,Z as z,p as E,t as h,q as j}from"./index-7c0e54a6.js";import"./Button-661a0701.js";function q(n){let s,i,t;const o=n[3].default,a=g(o,n,n[2],null);return{c(){s=d("div"),i=d("div"),a&&a.c(),f(i,"class","icon svelte-1u5vjgs"),f(s,"class","empty svelte-1u5vjgs"),u(s,"small",n[0]==="small"),u(s,"large",n[0]==="large"),u(s,"unpadded_box",n[1])},m(e,l){c(e,s,l),p(s,i),a&&a.m(i,null),t=!0},p(e,[l]){a&&a.p&&(!t||l&4)&&b(a,o,e,e[2],t?z(o,e[2],l,null):v(e[2]),null),(!t||l&1)&&u(s,"small",e[0]==="small"),(!t||l&1)&&u(s,"large",e[0]==="large"),(!t||l&2)&&u(s,"unpadded_box",e[1])},i(e){t||(E(a,e),t=!0)},o(e){h(a,e),t=!1},d(e){e&&j(s),a&&a.d(e)}}}function C(n,s,i){let{$$slots:t={},$$scope:o}=s,{size:a="small"}=s,{unpadded_box:e=!1}=s;return n.$$set=l=>{"size"in l&&i(0,a=l.size),"unpadded_box"in l&&i(1,e=l.unpadded_box),"$$scope"in l&&i(2,o=l.$$scope)},[a,e,o,t]}class M extends _{constructor(s){super(),r(this,s,C,q,m,{size:0,unpadded_box:1})}}export{M as E}; -//# sourceMappingURL=Empty-96265974.js.map diff --git a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/contour.py b/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/contour.py deleted file mode 100644 index 42096958bb9392f717a9764bb873903b4d97094b..0000000000000000000000000000000000000000 --- a/spaces/ky2k/Toxicity_Classifier_POC/.venv/lib/python3.9/site-packages/matplotlib/contour.py +++ /dev/null @@ -1,1783 +0,0 @@ -""" -Classes to support contour plotting and labelling for the Axes class. -""" - -import functools -from numbers import Integral - -import numpy as np -from numpy import ma - -import matplotlib as mpl -from matplotlib import _api, _docstring -from matplotlib.backend_bases import MouseButton -from matplotlib.text import Text -import matplotlib.path as mpath -import matplotlib.ticker as ticker -import matplotlib.cm as cm -import matplotlib.colors as mcolors -import matplotlib.collections as mcoll -import matplotlib.font_manager as font_manager -import matplotlib.cbook as cbook -import matplotlib.patches as mpatches -import matplotlib.transforms as mtransforms - - -# We can't use a single line collection for contour because a line -# collection can have only a single line style, and we want to be able to have -# dashed negative contours, for example, and solid positive contours. -# We could use a single polygon collection for filled contours, but it -# seems better to keep line and filled contours similar, with one collection -# per level. 
- - -@_api.deprecated("3.7", alternative="Text.set_transform_rotates_text") -class ClabelText(Text): - """ - Unlike the ordinary text, the get_rotation returns an updated - angle in the pixel coordinate assuming that the input rotation is - an angle in data coordinate (or whatever transform set). - """ - - def get_rotation(self): - new_angle, = self.get_transform().transform_angles( - [super().get_rotation()], [self.get_position()]) - return new_angle - - -def _contour_labeler_event_handler(cs, inline, inline_spacing, event): - canvas = cs.axes.figure.canvas - is_button = event.name == "button_press_event" - is_key = event.name == "key_press_event" - # Quit (even if not in infinite mode; this is consistent with - # MATLAB and sometimes quite useful, but will require the user to - # test how many points were actually returned before using data). - if (is_button and event.button == MouseButton.MIDDLE - or is_key and event.key in ["escape", "enter"]): - canvas.stop_event_loop() - # Pop last click. - elif (is_button and event.button == MouseButton.RIGHT - or is_key and event.key in ["backspace", "delete"]): - # Unfortunately, if one is doing inline labels, then there is currently - # no way to fix the broken contour - once humpty-dumpty is broken, he - # can't be put back together. In inline mode, this does nothing. - if not inline: - cs.pop_label() - canvas.draw() - # Add new click. - elif (is_button and event.button == MouseButton.LEFT - # On macOS/gtk, some keys return None. - or is_key and event.key is not None): - if event.inaxes == cs.axes: - cs.add_label_near(event.x, event.y, transform=False, - inline=inline, inline_spacing=inline_spacing) - canvas.draw() - - -class ContourLabeler: - """Mixin to provide labelling capability to `.ContourSet`.""" - - def clabel(self, levels=None, *, - fontsize=None, inline=True, inline_spacing=5, fmt=None, - colors=None, use_clabeltext=False, manual=False, - rightside_up=True, zorder=None): - """ - Label a contour plot. - - Adds labels to line contours in this `.ContourSet` (which inherits from - this mixin class). - - Parameters - ---------- - levels : array-like, optional - A list of level values, that should be labeled. The list must be - a subset of ``cs.levels``. If not given, all levels are labeled. - - fontsize : str or float, default: :rc:`font.size` - Size in points or relative size e.g., 'smaller', 'x-large'. - See `.Text.set_size` for accepted string values. - - colors : color or colors or None, default: None - The label colors: - - - If *None*, the color of each label matches the color of - the corresponding contour. - - - If one string color, e.g., *colors* = 'r' or *colors* = - 'red', all labels will be plotted in this color. - - - If a tuple of colors (string, float, rgb, etc), different labels - will be plotted in different colors in the order specified. - - inline : bool, default: True - If ``True`` the underlying contour is removed where the label is - placed. - - inline_spacing : float, default: 5 - Space in pixels to leave on each side of label when placing inline. - - This spacing will be exact for labels at locations where the - contour is straight, less so for labels on curved contours. - - fmt : `.Formatter` or str or callable or dict, optional - How the levels are formatted: - - - If a `.Formatter`, it is used to format all levels at once, using - its `.Formatter.format_ticks` method. - - If a str, it is interpreted as a %-style format string. 
- - If a callable, it is called with one level at a time and should - return the corresponding label. - - If a dict, it should directly map levels to labels. - - The default is to use a standard `.ScalarFormatter`. - - manual : bool or iterable, default: False - If ``True``, contour labels will be placed manually using - mouse clicks. Click the first button near a contour to - add a label, click the second button (or potentially both - mouse buttons at once) to finish adding labels. The third - button can be used to remove the last label added, but - only if labels are not inline. Alternatively, the keyboard - can be used to select label locations (enter to end label - placement, delete or backspace act like the third mouse button, - and any other key will select a label location). - - *manual* can also be an iterable object of (x, y) tuples. - Contour labels will be created as if mouse is clicked at each - (x, y) position. - - rightside_up : bool, default: True - If ``True``, label rotations will always be plus - or minus 90 degrees from level. - - use_clabeltext : bool, default: False - If ``True``, use `.Text.set_transform_rotates_text` to ensure that - label rotation is updated whenever the axes aspect changes. - - zorder : float or None, default: ``(2 + contour.get_zorder())`` - zorder of the contour labels. - - Returns - ------- - labels - A list of `.Text` instances for the labels. - """ - - # clabel basically takes the input arguments and uses them to - # add a list of "label specific" attributes to the ContourSet - # object. These attributes are all of the form label* and names - # should be fairly self explanatory. - # - # Once these attributes are set, clabel passes control to the - # labels method (case of automatic label placement) or - # `BlockingContourLabeler` (case of manual label placement). - - if fmt is None: - fmt = ticker.ScalarFormatter(useOffset=False) - fmt.create_dummy_axis() - self.labelFmt = fmt - self._use_clabeltext = use_clabeltext - # Detect if manual selection is desired and remove from argument list. 
- self.labelManual = manual - self.rightside_up = rightside_up - if zorder is None: - self._clabel_zorder = 2+self._contour_zorder - else: - self._clabel_zorder = zorder - - if levels is None: - levels = self.levels - indices = list(range(len(self.cvalues))) - else: - levlabs = list(levels) - indices, levels = [], [] - for i, lev in enumerate(self.levels): - if lev in levlabs: - indices.append(i) - levels.append(lev) - if len(levels) < len(levlabs): - raise ValueError(f"Specified levels {levlabs} don't match " - f"available levels {self.levels}") - self.labelLevelList = levels - self.labelIndiceList = indices - - self._label_font_props = font_manager.FontProperties(size=fontsize) - - if colors is None: - self.labelMappable = self - self.labelCValueList = np.take(self.cvalues, self.labelIndiceList) - else: - cmap = mcolors.ListedColormap(colors, N=len(self.labelLevelList)) - self.labelCValueList = list(range(len(self.labelLevelList))) - self.labelMappable = cm.ScalarMappable(cmap=cmap, - norm=mcolors.NoNorm()) - - self.labelXYs = [] - - if np.iterable(manual): - for x, y in manual: - self.add_label_near(x, y, inline, inline_spacing) - elif manual: - print('Select label locations manually using first mouse button.') - print('End manual selection with second mouse button.') - if not inline: - print('Remove last label by clicking third mouse button.') - mpl._blocking_input.blocking_input_loop( - self.axes.figure, ["button_press_event", "key_press_event"], - timeout=-1, handler=functools.partial( - _contour_labeler_event_handler, - self, inline, inline_spacing)) - else: - self.labels(inline, inline_spacing) - - return cbook.silent_list('text.Text', self.labelTexts) - - @_api.deprecated("3.7", alternative="cs.labelTexts[0].get_font()") - @property - def labelFontProps(self): - return self._label_font_props - - @_api.deprecated("3.7", alternative=( - "[cs.labelTexts[0].get_font().get_size()] * len(cs.labelLevelList)")) - @property - def labelFontSizeList(self): - return [self._label_font_props.get_size()] * len(self.labelLevelList) - - @_api.deprecated("3.7", alternative="cs.labelTexts") - @property - def labelTextsList(self): - return cbook.silent_list('text.Text', self.labelTexts) - - def print_label(self, linecontour, labelwidth): - """Return whether a contour is long enough to hold a label.""" - return (len(linecontour) > 10 * labelwidth - or (np.ptp(linecontour, axis=0) > 1.2 * labelwidth).any()) - - def too_close(self, x, y, lw): - """Return whether a label is already near this location.""" - thresh = (1.2 * lw) ** 2 - return any((x - loc[0]) ** 2 + (y - loc[1]) ** 2 < thresh - for loc in self.labelXYs) - - def _get_nth_label_width(self, nth): - """Return the width of the *nth* label, in pixels.""" - fig = self.axes.figure - renderer = fig._get_renderer() - return (Text(0, 0, - self.get_text(self.labelLevelList[nth], self.labelFmt), - figure=fig, fontproperties=self._label_font_props) - .get_window_extent(renderer).width) - - @_api.deprecated("3.7", alternative="Artist.set") - def set_label_props(self, label, text, color): - """Set the label properties - color, fontsize, text.""" - label.set_text(text) - label.set_color(color) - label.set_fontproperties(self._label_font_props) - label.set_clip_box(self.axes.bbox) - - def get_text(self, lev, fmt): - """Get the text of the label.""" - if isinstance(lev, str): - return lev - elif isinstance(fmt, dict): - return fmt.get(lev, '%1.3f') - elif callable(getattr(fmt, "format_ticks", None)): - return fmt.format_ticks([*self.labelLevelList, lev])[-1] 
- elif callable(fmt): - return fmt(lev) - else: - return fmt % lev - - def locate_label(self, linecontour, labelwidth): - """ - Find good place to draw a label (relatively flat part of the contour). - """ - ctr_size = len(linecontour) - n_blocks = int(np.ceil(ctr_size / labelwidth)) if labelwidth > 1 else 1 - block_size = ctr_size if n_blocks == 1 else int(labelwidth) - # Split contour into blocks of length ``block_size``, filling the last - # block by cycling the contour start (per `np.resize` semantics). (Due - # to cycling, the index returned is taken modulo ctr_size.) - xx = np.resize(linecontour[:, 0], (n_blocks, block_size)) - yy = np.resize(linecontour[:, 1], (n_blocks, block_size)) - yfirst = yy[:, :1] - ylast = yy[:, -1:] - xfirst = xx[:, :1] - xlast = xx[:, -1:] - s = (yfirst - yy) * (xlast - xfirst) - (xfirst - xx) * (ylast - yfirst) - l = np.hypot(xlast - xfirst, ylast - yfirst) - # Ignore warning that divide by zero throws, as this is a valid option - with np.errstate(divide='ignore', invalid='ignore'): - distances = (abs(s) / l).sum(axis=-1) - # Labels are drawn in the middle of the block (``hbsize``) where the - # contour is the closest (per ``distances``) to a straight line, but - # not `too_close()` to a preexisting label. - hbsize = block_size // 2 - adist = np.argsort(distances) - # If all candidates are `too_close()`, go back to the straightest part - # (``adist[0]``). - for idx in np.append(adist, adist[0]): - x, y = xx[idx, hbsize], yy[idx, hbsize] - if not self.too_close(x, y, labelwidth): - break - return x, y, (idx * block_size + hbsize) % ctr_size - - def calc_label_rot_and_inline(self, slc, ind, lw, lc=None, spacing=5): - """ - Calculate the appropriate label rotation given the linecontour - coordinates in screen units, the index of the label location and the - label width. - - If *lc* is not None or empty, also break contours and compute - inlining. - - *spacing* is the empty space to leave around the label, in pixels. - - Both tasks are done together to avoid calculating path lengths - multiple times, which is relatively costly. - - The method used here involves computing the path length along the - contour in pixel coordinates and then looking approximately (label - width / 2) away from central point to determine rotation and then to - break contour if desired. - """ - - if lc is None: - lc = [] - # Half the label width - hlw = lw / 2.0 - - # Check if closed and, if so, rotate contour so label is at edge - closed = _is_closed_polygon(slc) - if closed: - slc = np.concatenate([slc[ind:-1], slc[:ind + 1]]) - if len(lc): # Rotate lc also if not empty - lc = np.concatenate([lc[ind:-1], lc[:ind + 1]]) - ind = 0 - - # Calculate path lengths - pl = np.zeros(slc.shape[0], dtype=float) - dx = np.diff(slc, axis=0) - pl[1:] = np.cumsum(np.hypot(dx[:, 0], dx[:, 1])) - pl = pl - pl[ind] - - # Use linear interpolation to get points around label - xi = np.array([-hlw, hlw]) - if closed: # Look at end also for closed contours - dp = np.array([pl[-1], 0]) - else: - dp = np.zeros_like(xi) - - # Get angle of vector between the two ends of the label - must be - # calculated in pixel space for text rotation to work correctly. 
- (dx,), (dy,) = (np.diff(np.interp(dp + xi, pl, slc_col)) - for slc_col in slc.T) - rotation = np.rad2deg(np.arctan2(dy, dx)) - - if self.rightside_up: - # Fix angle so text is never upside-down - rotation = (rotation + 90) % 180 - 90 - - # Break contour if desired - nlc = [] - if len(lc): - # Expand range by spacing - xi = dp + xi + np.array([-spacing, spacing]) - - # Get (integer) indices near points of interest; use -1 as marker - # for out of bounds. - I = np.interp(xi, pl, np.arange(len(pl)), left=-1, right=-1) - I = [np.floor(I[0]).astype(int), np.ceil(I[1]).astype(int)] - if I[0] != -1: - xy1 = [np.interp(xi[0], pl, lc_col) for lc_col in lc.T] - if I[1] != -1: - xy2 = [np.interp(xi[1], pl, lc_col) for lc_col in lc.T] - - # Actually break contours - if closed: - # This will remove contour if shorter than label - if all(i != -1 for i in I): - nlc.append(np.row_stack([xy2, lc[I[1]:I[0]+1], xy1])) - else: - # These will remove pieces of contour if they have length zero - if I[0] != -1: - nlc.append(np.row_stack([lc[:I[0]+1], xy1])) - if I[1] != -1: - nlc.append(np.row_stack([xy2, lc[I[1]:]])) - - # The current implementation removes contours completely - # covered by labels. Uncomment line below to keep - # original contour if this is the preferred behavior. - # if not len(nlc): nlc = [ lc ] - - return rotation, nlc - - def add_label(self, x, y, rotation, lev, cvalue): - """Add contour label without `.Text.set_transform_rotates_text`.""" - data_x, data_y = self.axes.transData.inverted().transform((x, y)) - t = Text( - data_x, data_y, - text=self.get_text(lev, self.labelFmt), - rotation=rotation, - horizontalalignment='center', verticalalignment='center', - zorder=self._clabel_zorder, - color=self.labelMappable.to_rgba(cvalue, alpha=self.alpha), - fontproperties=self._label_font_props, - clip_box=self.axes.bbox) - self.labelTexts.append(t) - self.labelCValues.append(cvalue) - self.labelXYs.append((x, y)) - # Add label to plot here - useful for manual mode label selection - self.axes.add_artist(t) - - def add_label_clabeltext(self, x, y, rotation, lev, cvalue): - """Add contour label with `.Text.set_transform_rotates_text`.""" - self.add_label(x, y, rotation, lev, cvalue) - # Grab the last added text, and reconfigure its rotation. - t = self.labelTexts[-1] - data_rotation, = self.axes.transData.inverted().transform_angles( - [rotation], [[x, y]]) - t.set(rotation=data_rotation, transform_rotates_text=True) - - def add_label_near(self, x, y, inline=True, inline_spacing=5, - transform=None): - """ - Add a label near the point ``(x, y)``. - - Parameters - ---------- - x, y : float - The approximate location of the label. - inline : bool, default: True - If *True* remove the segment of the contour beneath the label. - inline_spacing : int, default: 5 - Space in pixels to leave on each side of label when placing - inline. This spacing will be exact for labels at locations where - the contour is straight, less so for labels on curved contours. - transform : `.Transform` or `False`, default: ``self.axes.transData`` - A transform applied to ``(x, y)`` before labeling. The default - causes ``(x, y)`` to be interpreted as data coordinates. `False` - is a synonym for `.IdentityTransform`; i.e. ``(x, y)`` should be - interpreted as display coordinates. 
- """ - - if transform is None: - transform = self.axes.transData - if transform: - x, y = transform.transform((x, y)) - - # find the nearest contour _in screen units_ - conmin, segmin, imin, xmin, ymin = self.find_nearest_contour( - x, y, self.labelIndiceList)[:5] - - # calc_label_rot_and_inline() requires that (xmin, ymin) - # be a vertex in the path. So, if it isn't, add a vertex here - paths = self.collections[conmin].get_paths() # paths of correct coll. - lc = paths[segmin].vertices # vertices of correct segment - # Where should the new vertex be added in data-units? - xcmin = self.axes.transData.inverted().transform([xmin, ymin]) - if not np.allclose(xcmin, lc[imin]): - # No vertex is close enough, so add a new point in the vertices and - # replace the path by the new one. - lc = np.insert(lc, imin, xcmin, axis=0) - paths[segmin] = mpath.Path(lc) - - # Get index of nearest level in subset of levels used for labeling - lmin = self.labelIndiceList.index(conmin) - - # Get label width for rotating labels and breaking contours - lw = self._get_nth_label_width(lmin) - - # Figure out label rotation. - rotation, nlc = self.calc_label_rot_and_inline( - self.axes.transData.transform(lc), # to pixel space. - imin, lw, lc if inline else None, inline_spacing) - - self.add_label(xmin, ymin, rotation, self.labelLevelList[lmin], - self.labelCValueList[lmin]) - - if inline: - # Remove old, not looping over paths so we can do this up front - paths.pop(segmin) - - # Add paths if not empty or single point - paths.extend([mpath.Path(n) for n in nlc if len(n) > 1]) - - def pop_label(self, index=-1): - """Defaults to removing last label, but any index can be supplied""" - self.labelCValues.pop(index) - t = self.labelTexts.pop(index) - t.remove() - - def labels(self, inline, inline_spacing): - - if self._use_clabeltext: - add_label = self.add_label_clabeltext - else: - add_label = self.add_label - - for idx, (icon, lev, cvalue) in enumerate(zip( - self.labelIndiceList, - self.labelLevelList, - self.labelCValueList, - )): - - con = self.collections[icon] - trans = con.get_transform() - lw = self._get_nth_label_width(idx) - additions = [] - paths = con.get_paths() - for segNum, linepath in enumerate(paths): - lc = linepath.vertices # Line contour - slc = trans.transform(lc) # Line contour in screen coords - - # Check if long enough for a label - if self.print_label(slc, lw): - x, y, ind = self.locate_label(slc, lw) - - rotation, new = self.calc_label_rot_and_inline( - slc, ind, lw, lc if inline else None, inline_spacing) - - # Actually add the label - add_label(x, y, rotation, lev, cvalue) - - # If inline, add new contours - if inline: - for n in new: - # Add path if not empty or single point - if len(n) > 1: - additions.append(mpath.Path(n)) - else: # If not adding label, keep old path - additions.append(linepath) - - # After looping over all segments on a contour, replace old paths - # by new ones if inlining. - if inline: - paths[:] = additions - - def remove(self): - for text in self.labelTexts: - text.remove() - - -def _is_closed_polygon(X): - """ - Return whether first and last object in a sequence are the same. These are - presumably coordinates on a polygonal curve, in which case this function - tests if that curve is closed. - """ - return np.allclose(X[0], X[-1], rtol=1e-10, atol=1e-13) - - -def _find_closest_point_on_path(xys, p): - """ - Parameters - ---------- - xys : (N, 2) array-like - Coordinates of vertices. - p : (float, float) - Coordinates of point. 
- - Returns - ------- - d2min : float - Minimum square distance of *p* to *xys*. - proj : (float, float) - Projection of *p* onto *xys*. - imin : (int, int) - Consecutive indices of vertices of segment in *xys* where *proj* is. - Segments are considered as including their end-points; i.e. if the - closest point on the path is a node in *xys* with index *i*, this - returns ``(i-1, i)``. For the special case where *xys* is a single - point, this returns ``(0, 0)``. - """ - if len(xys) == 1: - return (((p - xys[0]) ** 2).sum(), xys[0], (0, 0)) - dxys = xys[1:] - xys[:-1] # Individual segment vectors. - norms = (dxys ** 2).sum(axis=1) - norms[norms == 0] = 1 # For zero-length segment, replace 0/0 by 0/1. - rel_projs = np.clip( # Project onto each segment in relative 0-1 coords. - ((p - xys[:-1]) * dxys).sum(axis=1) / norms, - 0, 1)[:, None] - projs = xys[:-1] + rel_projs * dxys # Projs. onto each segment, in (x, y). - d2s = ((projs - p) ** 2).sum(axis=1) # Squared distances. - imin = np.argmin(d2s) - return (d2s[imin], projs[imin], (imin, imin+1)) - - -_docstring.interpd.update(contour_set_attributes=r""" -Attributes ----------- -ax : `~matplotlib.axes.Axes` - The Axes object in which the contours are drawn. - -collections : `.silent_list` of `.PathCollection`\s - The `.Artist`\s representing the contour. This is a list of - `.PathCollection`\s for both line and filled contours. - -levels : array - The values of the contour levels. - -layers : array - Same as levels for line contours; half-way between - levels for filled contours. See ``ContourSet._process_colors``. -""") - - -@_docstring.dedent_interpd -class ContourSet(cm.ScalarMappable, ContourLabeler): - """ - Store a set of contour lines or filled regions. - - User-callable method: `~.Axes.clabel` - - Parameters - ---------- - ax : `~.axes.Axes` - - levels : [level0, level1, ..., leveln] - A list of floating point numbers indicating the contour levels. - - allsegs : [level0segs, level1segs, ...] - List of all the polygon segments for all the *levels*. - For contour lines ``len(allsegs) == len(levels)``, and for - filled contour regions ``len(allsegs) = len(levels)-1``. The lists - should look like :: - - level0segs = [polygon0, polygon1, ...] - polygon0 = [[x0, y0], [x1, y1], ...] - - allkinds : ``None`` or [level0kinds, level1kinds, ...] - Optional list of all the polygon vertex kinds (code types), as - described and used in Path. This is used to allow multiply- - connected paths such as holes within filled polygons. - If not ``None``, ``len(allkinds) == len(allsegs)``. The lists - should look like :: - - level0kinds = [polygon0kinds, ...] - polygon0kinds = [vertexcode0, vertexcode1, ...] - - If *allkinds* is not ``None``, usually all polygons for a - particular contour level are grouped together so that - ``level0segs = [polygon0]`` and ``level0kinds = [polygon0kinds]``. - - **kwargs - Keyword arguments are as described in the docstring of - `~.Axes.contour`. - - %(contour_set_attributes)s - """ - - def __init__(self, ax, *args, - levels=None, filled=False, linewidths=None, linestyles=None, - hatches=(None,), alpha=None, origin=None, extent=None, - cmap=None, colors=None, norm=None, vmin=None, vmax=None, - extend='neither', antialiased=None, nchunk=0, locator=None, - transform=None, negative_linestyles=None, - **kwargs): - """ - Draw contour lines or filled regions, depending on - whether keyword arg *filled* is ``False`` (default) or ``True``. 
- - Call signature:: - - ContourSet(ax, levels, allsegs, [allkinds], **kwargs) - - Parameters - ---------- - ax : `~.axes.Axes` - The `~.axes.Axes` object to draw on. - - levels : [level0, level1, ..., leveln] - A list of floating point numbers indicating the contour - levels. - - allsegs : [level0segs, level1segs, ...] - List of all the polygon segments for all the *levels*. - For contour lines ``len(allsegs) == len(levels)``, and for - filled contour regions ``len(allsegs) = len(levels)-1``. The lists - should look like :: - - level0segs = [polygon0, polygon1, ...] - polygon0 = [[x0, y0], [x1, y1], ...] - - allkinds : [level0kinds, level1kinds, ...], optional - Optional list of all the polygon vertex kinds (code types), as - described and used in Path. This is used to allow multiply- - connected paths such as holes within filled polygons. - If not ``None``, ``len(allkinds) == len(allsegs)``. The lists - should look like :: - - level0kinds = [polygon0kinds, ...] - polygon0kinds = [vertexcode0, vertexcode1, ...] - - If *allkinds* is not ``None``, usually all polygons for a - particular contour level are grouped together so that - ``level0segs = [polygon0]`` and ``level0kinds = [polygon0kinds]``. - - **kwargs - Keyword arguments are as described in the docstring of - `~.Axes.contour`. - """ - self.axes = ax - self.levels = levels - self.filled = filled - self.linewidths = linewidths - self.linestyles = linestyles - self.hatches = hatches - self.alpha = alpha - self.origin = origin - self.extent = extent - self.colors = colors - self.extend = extend - self.antialiased = antialiased - if self.antialiased is None and self.filled: - # Eliminate artifacts; we are not stroking the boundaries. - self.antialiased = False - # The default for line contours will be taken from the - # LineCollection default, which uses :rc:`lines.antialiased`. - - self.nchunk = nchunk - self.locator = locator - if (isinstance(norm, mcolors.LogNorm) - or isinstance(self.locator, ticker.LogLocator)): - self.logscale = True - if norm is None: - norm = mcolors.LogNorm() - else: - self.logscale = False - - _api.check_in_list([None, 'lower', 'upper', 'image'], origin=origin) - if self.extent is not None and len(self.extent) != 4: - raise ValueError( - "If given, 'extent' must be None or (x0, x1, y0, y1)") - if self.colors is not None and cmap is not None: - raise ValueError('Either colors or cmap must be None') - if self.origin == 'image': - self.origin = mpl.rcParams['image.origin'] - - self._transform = transform - - self.negative_linestyles = negative_linestyles - # If negative_linestyles was not defined as a keyword argument, define - # negative_linestyles with rcParams - if self.negative_linestyles is None: - self.negative_linestyles = \ - mpl.rcParams['contour.negative_linestyle'] - - kwargs = self._process_args(*args, **kwargs) - self._process_levels() - - self._extend_min = self.extend in ['min', 'both'] - self._extend_max = self.extend in ['max', 'both'] - if self.colors is not None: - ncolors = len(self.levels) - if self.filled: - ncolors -= 1 - i0 = 0 - - # Handle the case where colors are given for the extended - # parts of the contour. - - use_set_under_over = False - # if we are extending the lower end, and we've been given enough - # colors then skip the first color in the resulting cmap. For the - # extend_max case we don't need to worry about passing more colors - # than ncolors as ListedColormap will clip. 
- total_levels = (ncolors + - int(self._extend_min) + - int(self._extend_max)) - if (len(self.colors) == total_levels and - (self._extend_min or self._extend_max)): - use_set_under_over = True - if self._extend_min: - i0 = 1 - - cmap = mcolors.ListedColormap(self.colors[i0:None], N=ncolors) - - if use_set_under_over: - if self._extend_min: - cmap.set_under(self.colors[0]) - if self._extend_max: - cmap.set_over(self.colors[-1]) - - self.collections = cbook.silent_list(None) - - # label lists must be initialized here - self.labelTexts = [] - self.labelCValues = [] - - kw = {'cmap': cmap} - if norm is not None: - kw['norm'] = norm - # sets self.cmap, norm if needed; - cm.ScalarMappable.__init__(self, **kw) - if vmin is not None: - self.norm.vmin = vmin - if vmax is not None: - self.norm.vmax = vmax - self._process_colors() - - if getattr(self, 'allsegs', None) is None: - self.allsegs, self.allkinds = self._get_allsegs_and_allkinds() - elif self.allkinds is None: - # allsegs specified in constructor may or may not have allkinds as - # well. Must ensure allkinds can be zipped below. - self.allkinds = [None] * len(self.allsegs) - - if self.filled: - if self.linewidths is not None: - _api.warn_external('linewidths is ignored by contourf') - # Lower and upper contour levels. - lowers, uppers = self._get_lowers_and_uppers() - # Default zorder taken from Collection - self._contour_zorder = kwargs.pop('zorder', 1) - - self.collections[:] = [ - mcoll.PathCollection( - self._make_paths(segs, kinds), - antialiaseds=(self.antialiased,), - edgecolors='none', - alpha=self.alpha, - transform=self.get_transform(), - zorder=self._contour_zorder) - for level, level_upper, segs, kinds - in zip(lowers, uppers, self.allsegs, self.allkinds)] - else: - self.tlinewidths = tlinewidths = self._process_linewidths() - tlinestyles = self._process_linestyles() - aa = self.antialiased - if aa is not None: - aa = (self.antialiased,) - # Default zorder taken from LineCollection, which is higher than - # for filled contours so that lines are displayed on top. - self._contour_zorder = kwargs.pop('zorder', 2) - - self.collections[:] = [ - mcoll.PathCollection( - self._make_paths(segs, kinds), - facecolors="none", - antialiaseds=aa, - linewidths=width, - linestyles=[lstyle], - alpha=self.alpha, - transform=self.get_transform(), - zorder=self._contour_zorder, - label='_nolegend_') - for level, width, lstyle, segs, kinds - in zip(self.levels, tlinewidths, tlinestyles, self.allsegs, - self.allkinds)] - - for col in self.collections: - self.axes.add_collection(col, autolim=False) - col.sticky_edges.x[:] = [self._mins[0], self._maxs[0]] - col.sticky_edges.y[:] = [self._mins[1], self._maxs[1]] - self.axes.update_datalim([self._mins, self._maxs]) - self.axes.autoscale_view(tight=True) - - self.changed() # set the colors - - if kwargs: - _api.warn_external( - 'The following kwargs were not used by contour: ' + - ", ".join(map(repr, kwargs)) - ) - - def get_transform(self): - """Return the `.Transform` instance used by this ContourSet.""" - if self._transform is None: - self._transform = self.axes.transData - elif (not isinstance(self._transform, mtransforms.Transform) - and hasattr(self._transform, '_as_mpl_transform')): - self._transform = self._transform._as_mpl_transform(self.axes) - return self._transform - - def __getstate__(self): - state = self.__dict__.copy() - # the C object _contour_generator cannot currently be pickled. This - # isn't a big issue as it is not actually used once the contour has - # been calculated. 
- state['_contour_generator'] = None - return state - - def legend_elements(self, variable_name='x', str_format=str): - """ - Return a list of artists and labels suitable for passing through - to `~.Axes.legend` which represent this ContourSet. - - The labels have the form "0 < x <= 1" stating the data ranges which - the artists represent. - - Parameters - ---------- - variable_name : str - The string used inside the inequality used on the labels. - str_format : function: float -> str - Function used to format the numbers in the labels. - - Returns - ------- - artists : list[`.Artist`] - A list of the artists. - labels : list[str] - A list of the labels. - """ - artists = [] - labels = [] - - if self.filled: - lowers, uppers = self._get_lowers_and_uppers() - n_levels = len(self.collections) - - for i, (collection, lower, upper) in enumerate( - zip(self.collections, lowers, uppers)): - patch = mpatches.Rectangle( - (0, 0), 1, 1, - facecolor=collection.get_facecolor()[0], - hatch=collection.get_hatch(), - alpha=collection.get_alpha()) - artists.append(patch) - - lower = str_format(lower) - upper = str_format(upper) - - if i == 0 and self.extend in ('min', 'both'): - labels.append(fr'${variable_name} \leq {lower}s$') - elif i == n_levels - 1 and self.extend in ('max', 'both'): - labels.append(fr'${variable_name} > {upper}s$') - else: - labels.append(fr'${lower} < {variable_name} \leq {upper}$') - else: - for collection, level in zip(self.collections, self.levels): - - patch = mcoll.LineCollection(None) - patch.update_from(collection) - - artists.append(patch) - # format the level for insertion into the labels - level = str_format(level) - labels.append(fr'${variable_name} = {level}$') - - return artists, labels - - def _process_args(self, *args, **kwargs): - """ - Process *args* and *kwargs*; override in derived classes. - - Must set self.levels, self.zmin and self.zmax, and update axes limits. - """ - self.levels = args[0] - self.allsegs = args[1] - self.allkinds = args[2] if len(args) > 2 else None - self.zmax = np.max(self.levels) - self.zmin = np.min(self.levels) - - # Check lengths of levels and allsegs. - if self.filled: - if len(self.allsegs) != len(self.levels) - 1: - raise ValueError('must be one less number of segments as ' - 'levels') - else: - if len(self.allsegs) != len(self.levels): - raise ValueError('must be same number of segments as levels') - - # Check length of allkinds. - if (self.allkinds is not None and - len(self.allkinds) != len(self.allsegs)): - raise ValueError('allkinds has different length to allsegs') - - # Determine x, y bounds and update axes data limits. - flatseglist = [s for seg in self.allsegs for s in seg] - points = np.concatenate(flatseglist, axis=0) - self._mins = points.min(axis=0) - self._maxs = points.max(axis=0) - - return kwargs - - def _get_allsegs_and_allkinds(self): - """Compute ``allsegs`` and ``allkinds`` using C extension.""" - allsegs = [] - allkinds = [] - if self.filled: - lowers, uppers = self._get_lowers_and_uppers() - for level, level_upper in zip(lowers, uppers): - vertices, kinds = \ - self._contour_generator.create_filled_contour( - level, level_upper) - allsegs.append(vertices) - allkinds.append(kinds) - else: - for level in self.levels: - vertices, kinds = self._contour_generator.create_contour(level) - allsegs.append(vertices) - allkinds.append(kinds) - return allsegs, allkinds - - def _get_lowers_and_uppers(self): - """ - Return ``(lowers, uppers)`` for filled contours. 
- """ - lowers = self._levels[:-1] - if self.zmin == lowers[0]: - # Include minimum values in lowest interval - lowers = lowers.copy() # so we don't change self._levels - if self.logscale: - lowers[0] = 0.99 * self.zmin - else: - lowers[0] -= 1 - uppers = self._levels[1:] - return (lowers, uppers) - - def _make_paths(self, segs, kinds): - """ - Create and return Path objects for the specified segments and optional - kind codes. *segs* is a list of numpy arrays, each array is either a - closed line loop or open line strip of 2D points with a shape of - (npoints, 2). *kinds* is either None or a list (with the same length - as *segs*) of numpy arrays, each array is of shape (npoints,) and - contains the kind codes for the corresponding line in *segs*. If - *kinds* is None then the Path constructor creates the kind codes - assuming that the line is an open strip. - """ - if kinds is None: - return [mpath.Path(seg) for seg in segs] - else: - return [mpath.Path(seg, codes=kind) for seg, kind - in zip(segs, kinds)] - - def changed(self): - if not hasattr(self, "cvalues"): - # Just return after calling the super() changed function - cm.ScalarMappable.changed(self) - return - # Force an autoscale immediately because self.to_rgba() calls - # autoscale_None() internally with the data passed to it, - # so if vmin/vmax are not set yet, this would override them with - # content from *cvalues* rather than levels like we want - self.norm.autoscale_None(self.levels) - tcolors = [(tuple(rgba),) - for rgba in self.to_rgba(self.cvalues, alpha=self.alpha)] - self.tcolors = tcolors - hatches = self.hatches * len(tcolors) - for color, hatch, collection in zip(tcolors, hatches, - self.collections): - if self.filled: - collection.set_facecolor(color) - # update the collection's hatch (may be None) - collection.set_hatch(hatch) - else: - collection.set_edgecolor(color) - for label, cv in zip(self.labelTexts, self.labelCValues): - label.set_alpha(self.alpha) - label.set_color(self.labelMappable.to_rgba(cv)) - # add label colors - cm.ScalarMappable.changed(self) - - def _autolev(self, N): - """ - Select contour levels to span the data. - - The target number of levels, *N*, is used only when the - scale is not log and default locator is used. - - We need two more levels for filled contours than for - line contours, because for the latter we need to specify - the lower and upper boundary of each range. For example, - a single contour boundary, say at z = 0, requires only - one contour line, but two filled regions, and therefore - three levels to provide boundaries for both regions. - """ - if self.locator is None: - if self.logscale: - self.locator = ticker.LogLocator() - else: - self.locator = ticker.MaxNLocator(N + 1, min_n_ticks=1) - - lev = self.locator.tick_values(self.zmin, self.zmax) - - try: - if self.locator._symmetric: - return lev - except AttributeError: - pass - - # Trim excess levels the locator may have supplied. - under = np.nonzero(lev < self.zmin)[0] - i0 = under[-1] if len(under) else 0 - over = np.nonzero(lev > self.zmax)[0] - i1 = over[0] + 1 if len(over) else len(lev) - if self.extend in ('min', 'both'): - i0 += 1 - if self.extend in ('max', 'both'): - i1 -= 1 - - if i1 - i0 < 3: - i0, i1 = 0, len(lev) - - return lev[i0:i1] - - def _process_contour_level_args(self, args, z_dtype): - """ - Determine the contour levels and store in self.levels. 
- """ - if self.levels is None: - if args: - levels_arg = args[0] - elif np.issubdtype(z_dtype, bool): - if self.filled: - levels_arg = [0, .5, 1] - else: - levels_arg = [.5] - else: - levels_arg = 7 # Default, hard-wired. - else: - levels_arg = self.levels - if isinstance(levels_arg, Integral): - self.levels = self._autolev(levels_arg) - else: - self.levels = np.asarray(levels_arg, np.float64) - if self.filled and len(self.levels) < 2: - raise ValueError("Filled contours require at least 2 levels.") - if len(self.levels) > 1 and np.min(np.diff(self.levels)) <= 0.0: - raise ValueError("Contour levels must be increasing") - - def _process_levels(self): - """ - Assign values to :attr:`layers` based on :attr:`levels`, - adding extended layers as needed if contours are filled. - - For line contours, layers simply coincide with levels; - a line is a thin layer. No extended levels are needed - with line contours. - """ - # Make a private _levels to include extended regions; we - # want to leave the original levels attribute unchanged. - # (Colorbar needs this even for line contours.) - self._levels = list(self.levels) - - if self.logscale: - lower, upper = 1e-250, 1e250 - else: - lower, upper = -1e250, 1e250 - - if self.extend in ('both', 'min'): - self._levels.insert(0, lower) - if self.extend in ('both', 'max'): - self._levels.append(upper) - self._levels = np.asarray(self._levels) - - if not self.filled: - self.layers = self.levels - return - - # Layer values are mid-way between levels in screen space. - if self.logscale: - # Avoid overflow by taking sqrt before multiplying. - self.layers = (np.sqrt(self._levels[:-1]) - * np.sqrt(self._levels[1:])) - else: - self.layers = 0.5 * (self._levels[:-1] + self._levels[1:]) - - def _process_colors(self): - """ - Color argument processing for contouring. - - Note that we base the colormapping on the contour levels - and layers, not on the actual range of the Z values. This - means we don't have to worry about bad values in Z, and we - always have the full dynamic range available for the selected - levels. - - The color is based on the midpoint of the layer, except for - extended end layers. By default, the norm vmin and vmax - are the extreme values of the non-extended levels. Hence, - the layer color extremes are not the extreme values of - the colormap itself, but approach those values as the number - of levels increases. An advantage of this scheme is that - line contours, when added to filled contours, take on - colors that are consistent with those of the filled regions; - for example, a contour line on the boundary between two - regions will have a color intermediate between those - of the regions. - - """ - self.monochrome = self.cmap.monochrome - if self.colors is not None: - # Generate integers for direct indexing. 
- i0, i1 = 0, len(self.levels) - if self.filled: - i1 -= 1 - # Out of range indices for over and under: - if self.extend in ('both', 'min'): - i0 -= 1 - if self.extend in ('both', 'max'): - i1 += 1 - self.cvalues = list(range(i0, i1)) - self.set_norm(mcolors.NoNorm()) - else: - self.cvalues = self.layers - self.set_array(self.levels) - self.autoscale_None() - if self.extend in ('both', 'max', 'min'): - self.norm.clip = False - - # self.tcolors are set by the "changed" method - - def _process_linewidths(self): - linewidths = self.linewidths - Nlev = len(self.levels) - if linewidths is None: - default_linewidth = mpl.rcParams['contour.linewidth'] - if default_linewidth is None: - default_linewidth = mpl.rcParams['lines.linewidth'] - tlinewidths = [(default_linewidth,)] * Nlev - else: - if not np.iterable(linewidths): - linewidths = [linewidths] * Nlev - else: - linewidths = list(linewidths) - if len(linewidths) < Nlev: - nreps = int(np.ceil(Nlev / len(linewidths))) - linewidths = linewidths * nreps - if len(linewidths) > Nlev: - linewidths = linewidths[:Nlev] - tlinewidths = [(w,) for w in linewidths] - return tlinewidths - - def _process_linestyles(self): - linestyles = self.linestyles - Nlev = len(self.levels) - if linestyles is None: - tlinestyles = ['solid'] * Nlev - if self.monochrome: - eps = - (self.zmax - self.zmin) * 1e-15 - for i, lev in enumerate(self.levels): - if lev < eps: - tlinestyles[i] = self.negative_linestyles - else: - if isinstance(linestyles, str): - tlinestyles = [linestyles] * Nlev - elif np.iterable(linestyles): - tlinestyles = list(linestyles) - if len(tlinestyles) < Nlev: - nreps = int(np.ceil(Nlev / len(linestyles))) - tlinestyles = tlinestyles * nreps - if len(tlinestyles) > Nlev: - tlinestyles = tlinestyles[:Nlev] - else: - raise ValueError("Unrecognized type for linestyles kwarg") - return tlinestyles - - def get_alpha(self): - """Return alpha to be applied to all ContourSet artists.""" - return self.alpha - - def set_alpha(self, alpha): - """ - Set the alpha blending value for all ContourSet artists. - *alpha* must be between 0 (transparent) and 1 (opaque). - """ - self.alpha = alpha - self.changed() - - def find_nearest_contour(self, x, y, indices=None, pixel=True): - """ - Find the point in the contour plot that is closest to ``(x, y)``. - - This method does not support filled contours. - - Parameters - ---------- - x, y : float - The reference point. - indices : list of int or None, default: None - Indices of contour levels to consider. If None (the default), all - levels are considered. - pixel : bool, default: True - If *True*, measure distance in pixel (screen) space, which is - useful for manual contour labeling; else, measure distance in axes - space. - - Returns - ------- - contour : `.Collection` - The contour that is closest to ``(x, y)``. - segment : int - The index of the `.Path` in *contour* that is closest to - ``(x, y)``. - index : int - The index of the path segment in *segment* that is closest to - ``(x, y)``. - xmin, ymin : float - The point in the contour plot that is closest to ``(x, y)``. - d2 : float - The squared distance from ``(xmin, ymin)`` to ``(x, y)``. - """ - - # This function uses a method that is probably quite - # inefficient based on converting each contour segment to - # pixel coordinates and then comparing the given point to - # those coordinates for each contour. This will probably be - # quite slow for complex contours, but for normal use it works - # sufficiently well that the time is not noticeable. 
- # Nonetheless, improvements could probably be made. - - if self.filled: - raise ValueError("Method does not support filled contours.") - - if indices is None: - indices = range(len(self.collections)) - - d2min = np.inf - conmin = None - segmin = None - imin = None - xmin = None - ymin = None - - point = np.array([x, y]) - - for icon in indices: - con = self.collections[icon] - trans = con.get_transform() - paths = con.get_paths() - - for segNum, linepath in enumerate(paths): - lc = linepath.vertices - # transfer all data points to screen coordinates if desired - if pixel: - lc = trans.transform(lc) - - d2, xc, leg = _find_closest_point_on_path(lc, point) - if d2 < d2min: - d2min = d2 - conmin = icon - segmin = segNum - imin = leg[1] - xmin = xc[0] - ymin = xc[1] - - return (conmin, segmin, imin, xmin, ymin, d2min) - - def remove(self): - super().remove() - for coll in self.collections: - coll.remove() - - -@_docstring.dedent_interpd -class QuadContourSet(ContourSet): - """ - Create and store a set of contour lines or filled regions. - - This class is typically not instantiated directly by the user but by - `~.Axes.contour` and `~.Axes.contourf`. - - %(contour_set_attributes)s - """ - - def _process_args(self, *args, corner_mask=None, algorithm=None, **kwargs): - """ - Process args and kwargs. - """ - if isinstance(args[0], QuadContourSet): - if self.levels is None: - self.levels = args[0].levels - self.zmin = args[0].zmin - self.zmax = args[0].zmax - self._corner_mask = args[0]._corner_mask - contour_generator = args[0]._contour_generator - self._mins = args[0]._mins - self._maxs = args[0]._maxs - self._algorithm = args[0]._algorithm - else: - import contourpy - - if algorithm is None: - algorithm = mpl.rcParams['contour.algorithm'] - mpl.rcParams.validate["contour.algorithm"](algorithm) - self._algorithm = algorithm - - if corner_mask is None: - if self._algorithm == "mpl2005": - # mpl2005 does not support corner_mask=True so if not - # specifically requested then disable it. 
- corner_mask = False - else: - corner_mask = mpl.rcParams['contour.corner_mask'] - self._corner_mask = corner_mask - - x, y, z = self._contour_args(args, kwargs) - - contour_generator = contourpy.contour_generator( - x, y, z, name=self._algorithm, corner_mask=self._corner_mask, - line_type=contourpy.LineType.SeparateCode, - fill_type=contourpy.FillType.OuterCode, - chunk_size=self.nchunk) - - t = self.get_transform() - - # if the transform is not trans data, and some part of it - # contains transData, transform the xs and ys to data coordinates - if (t != self.axes.transData and - any(t.contains_branch_seperately(self.axes.transData))): - trans_to_data = t - self.axes.transData - pts = np.vstack([x.flat, y.flat]).T - transformed_pts = trans_to_data.transform(pts) - x = transformed_pts[..., 0] - y = transformed_pts[..., 1] - - self._mins = [ma.min(x), ma.min(y)] - self._maxs = [ma.max(x), ma.max(y)] - - self._contour_generator = contour_generator - - return kwargs - - def _contour_args(self, args, kwargs): - if self.filled: - fn = 'contourf' - else: - fn = 'contour' - nargs = len(args) - if nargs <= 2: - z, *args = args - z = ma.asarray(z) - x, y = self._initialize_x_y(z) - elif nargs <= 4: - x, y, z_orig, *args = args - x, y, z = self._check_xyz(x, y, z_orig, kwargs) - else: - raise _api.nargs_error(fn, takes="from 1 to 4", given=nargs) - z = ma.masked_invalid(z, copy=False) - self.zmax = float(z.max()) - self.zmin = float(z.min()) - if self.logscale and self.zmin <= 0: - z = ma.masked_where(z <= 0, z) - _api.warn_external('Log scale: values of z <= 0 have been masked') - self.zmin = float(z.min()) - self._process_contour_level_args(args, z.dtype) - return (x, y, z) - - def _check_xyz(self, x, y, z, kwargs): - """ - Check that the shapes of the input arrays match; if x and y are 1D, - convert them to 2D using meshgrid. - """ - x, y = self.axes._process_unit_info([("x", x), ("y", y)], kwargs) - - x = np.asarray(x, dtype=np.float64) - y = np.asarray(y, dtype=np.float64) - z = ma.asarray(z) - - if z.ndim != 2: - raise TypeError(f"Input z must be 2D, not {z.ndim}D") - if z.shape[0] < 2 or z.shape[1] < 2: - raise TypeError(f"Input z must be at least a (2, 2) shaped array, " - f"but has shape {z.shape}") - Ny, Nx = z.shape - - if x.ndim != y.ndim: - raise TypeError(f"Number of dimensions of x ({x.ndim}) and y " - f"({y.ndim}) do not match") - if x.ndim == 1: - nx, = x.shape - ny, = y.shape - if nx != Nx: - raise TypeError(f"Length of x ({nx}) must match number of " - f"columns in z ({Nx})") - if ny != Ny: - raise TypeError(f"Length of y ({ny}) must match number of " - f"rows in z ({Ny})") - x, y = np.meshgrid(x, y) - elif x.ndim == 2: - if x.shape != z.shape: - raise TypeError( - f"Shapes of x {x.shape} and z {z.shape} do not match") - if y.shape != z.shape: - raise TypeError( - f"Shapes of y {y.shape} and z {z.shape} do not match") - else: - raise TypeError(f"Inputs x and y must be 1D or 2D, not {x.ndim}D") - - return x, y, z - - def _initialize_x_y(self, z): - """ - Return X, Y arrays such that contour(Z) will match imshow(Z) - if origin is not None. - The center of pixel Z[i, j] depends on origin: - if origin is None, x = j, y = i; - if origin is 'lower', x = j + 0.5, y = i + 0.5; - if origin is 'upper', x = j + 0.5, y = Nrows - i - 0.5 - If extent is not None, x and y will be scaled to match, - as in imshow. - If origin is None and extent is not None, then extent - will give the minimum and maximum values of x and y. 
- """ - if z.ndim != 2: - raise TypeError(f"Input z must be 2D, not {z.ndim}D") - elif z.shape[0] < 2 or z.shape[1] < 2: - raise TypeError(f"Input z must be at least a (2, 2) shaped array, " - f"but has shape {z.shape}") - else: - Ny, Nx = z.shape - if self.origin is None: # Not for image-matching. - if self.extent is None: - return np.meshgrid(np.arange(Nx), np.arange(Ny)) - else: - x0, x1, y0, y1 = self.extent - x = np.linspace(x0, x1, Nx) - y = np.linspace(y0, y1, Ny) - return np.meshgrid(x, y) - # Match image behavior: - if self.extent is None: - x0, x1, y0, y1 = (0, Nx, 0, Ny) - else: - x0, x1, y0, y1 = self.extent - dx = (x1 - x0) / Nx - dy = (y1 - y0) / Ny - x = x0 + (np.arange(Nx) + 0.5) * dx - y = y0 + (np.arange(Ny) + 0.5) * dy - if self.origin == 'upper': - y = y[::-1] - return np.meshgrid(x, y) - - -_docstring.interpd.update(contour_doc=""" -`.contour` and `.contourf` draw contour lines and filled contours, -respectively. Except as noted, function signatures and return values -are the same for both versions. - -Parameters ----------- -X, Y : array-like, optional - The coordinates of the values in *Z*. - - *X* and *Y* must both be 2D with the same shape as *Z* (e.g. - created via `numpy.meshgrid`), or they must both be 1-D such - that ``len(X) == N`` is the number of columns in *Z* and - ``len(Y) == M`` is the number of rows in *Z*. - - *X* and *Y* must both be ordered monotonically. - - If not given, they are assumed to be integer indices, i.e. - ``X = range(N)``, ``Y = range(M)``. - -Z : (M, N) array-like - The height values over which the contour is drawn. Color-mapping is - controlled by *cmap*, *norm*, *vmin*, and *vmax*. - -levels : int or array-like, optional - Determines the number and positions of the contour lines / regions. - - If an int *n*, use `~matplotlib.ticker.MaxNLocator`, which tries - to automatically choose no more than *n+1* "nice" contour levels - between minimum and maximum numeric values of *Z*. - - If array-like, draw contour lines at the specified levels. - The values must be in increasing order. - -Returns -------- -`~.contour.QuadContourSet` - -Other Parameters ----------------- -corner_mask : bool, default: :rc:`contour.corner_mask` - Enable/disable corner masking, which only has an effect if *Z* is - a masked array. If ``False``, any quad touching a masked point is - masked out. If ``True``, only the triangular corners of quads - nearest those points are always masked out, other triangular - corners comprising three unmasked points are contoured as usual. - -colors : color string or sequence of colors, optional - The colors of the levels, i.e. the lines for `.contour` and the - areas for `.contourf`. - - The sequence is cycled for the levels in ascending order. If the - sequence is shorter than the number of levels, it's repeated. - - As a shortcut, single color strings may be used in place of - one-element lists, i.e. ``'red'`` instead of ``['red']`` to color - all levels with the same color. This shortcut does only work for - color strings, not for other ways of specifying colors. - - By default (value *None*), the colormap specified by *cmap* - will be used. - -alpha : float, default: 1 - The alpha blending value, between 0 (transparent) and 1 (opaque). - -%(cmap_doc)s - - This parameter is ignored if *colors* is set. - -%(norm_doc)s - - This parameter is ignored if *colors* is set. - -%(vmin_vmax_doc)s - - If *vmin* or *vmax* are not given, the default color scaling is based on - *levels*. - - This parameter is ignored if *colors* is set. 
- -origin : {*None*, 'upper', 'lower', 'image'}, default: None - Determines the orientation and exact position of *Z* by specifying - the position of ``Z[0, 0]``. This is only relevant, if *X*, *Y* - are not given. - - - *None*: ``Z[0, 0]`` is at X=0, Y=0 in the lower left corner. - - 'lower': ``Z[0, 0]`` is at X=0.5, Y=0.5 in the lower left corner. - - 'upper': ``Z[0, 0]`` is at X=N+0.5, Y=0.5 in the upper left - corner. - - 'image': Use the value from :rc:`image.origin`. - -extent : (x0, x1, y0, y1), optional - If *origin* is not *None*, then *extent* is interpreted as in - `.imshow`: it gives the outer pixel boundaries. In this case, the - position of Z[0, 0] is the center of the pixel, not a corner. If - *origin* is *None*, then (*x0*, *y0*) is the position of Z[0, 0], - and (*x1*, *y1*) is the position of Z[-1, -1]. - - This argument is ignored if *X* and *Y* are specified in the call - to contour. - -locator : ticker.Locator subclass, optional - The locator is used to determine the contour levels if they - are not given explicitly via *levels*. - Defaults to `~.ticker.MaxNLocator`. - -extend : {'neither', 'both', 'min', 'max'}, default: 'neither' - Determines the ``contourf``-coloring of values that are outside the - *levels* range. - - If 'neither', values outside the *levels* range are not colored. - If 'min', 'max' or 'both', color the values below, above or below - and above the *levels* range. - - Values below ``min(levels)`` and above ``max(levels)`` are mapped - to the under/over values of the `.Colormap`. Note that most - colormaps do not have dedicated colors for these by default, so - that the over and under values are the edge values of the colormap. - You may want to set these values explicitly using - `.Colormap.set_under` and `.Colormap.set_over`. - - .. note:: - - An existing `.QuadContourSet` does not get notified if - properties of its colormap are changed. Therefore, an explicit - call `.QuadContourSet.changed()` is needed after modifying the - colormap. The explicit call can be left out, if a colorbar is - assigned to the `.QuadContourSet` because it internally calls - `.QuadContourSet.changed()`. - - Example:: - - x = np.arange(1, 10) - y = x.reshape(-1, 1) - h = x * y - - cs = plt.contourf(h, levels=[10, 30, 50], - colors=['#808080', '#A0A0A0', '#C0C0C0'], extend='both') - cs.cmap.set_over('red') - cs.cmap.set_under('blue') - cs.changed() - -xunits, yunits : registered units, optional - Override axis units by specifying an instance of a - :class:`matplotlib.units.ConversionInterface`. - -antialiased : bool, optional - Enable antialiasing, overriding the defaults. For - filled contours, the default is *True*. For line contours, - it is taken from :rc:`lines.antialiased`. - -nchunk : int >= 0, optional - If 0, no subdivision of the domain. Specify a positive integer to - divide the domain into subdomains of *nchunk* by *nchunk* quads. - Chunking reduces the maximum length of polygons generated by the - contouring algorithm which reduces the rendering workload passed - on to the backend and also requires slightly less RAM. It can - however introduce rendering artifacts at chunk boundaries depending - on the backend, the *antialiased* flag and value of *alpha*. - -linewidths : float or array-like, default: :rc:`contour.linewidth` - *Only applies to* `.contour`. - - The line width of the contour lines. - - If a number, all levels will be plotted with this linewidth. 
- - If a sequence, the levels in ascending order will be plotted with - the linewidths in the order specified. - - If None, this falls back to :rc:`lines.linewidth`. - -linestyles : {*None*, 'solid', 'dashed', 'dashdot', 'dotted'}, optional - *Only applies to* `.contour`. - - If *linestyles* is *None*, the default is 'solid' unless the lines are - monochrome. In that case, negative contours will instead take their - linestyle from the *negative_linestyles* argument. - - *linestyles* can also be an iterable of the above strings specifying a set - of linestyles to be used. If this iterable is shorter than the number of - contour levels it will be repeated as necessary. - -negative_linestyles : {*None*, 'solid', 'dashed', 'dashdot', 'dotted'}, \ - optional - *Only applies to* `.contour`. - - If *linestyles* is *None* and the lines are monochrome, this argument - specifies the line style for negative contours. - - If *negative_linestyles* is *None*, the default is taken from - :rc:`contour.negative_linestyles`. - - *negative_linestyles* can also be an iterable of the above strings - specifying a set of linestyles to be used. If this iterable is shorter than - the number of contour levels it will be repeated as necessary. - -hatches : list[str], optional - *Only applies to* `.contourf`. - - A list of cross hatch patterns to use on the filled areas. - If None, no hatching will be added to the contour. - Hatching is supported in the PostScript, PDF, SVG and Agg - backends only. - -algorithm : {'mpl2005', 'mpl2014', 'serial', 'threaded'}, optional - Which contouring algorithm to use to calculate the contour lines and - polygons. The algorithms are implemented in - `ContourPy <https://github.com/contourpy/contourpy>`_, consult the - `ContourPy documentation <https://contourpy.readthedocs.io>`_ for - further information. - - The default is taken from :rc:`contour.algorithm`. - -data : indexable object, optional - DATA_PARAMETER_PLACEHOLDER - -Notes ------ -1. `.contourf` differs from the MATLAB version in that it does not draw - the polygon edges. To draw edges, add line contours with calls to - `.contour`. - -2. `.contourf` fills intervals that are closed at the top; that is, for - boundaries *z1* and *z2*, the filled region is:: - - z1 < Z <= z2 - - except for the lowest interval, which is closed on both sides (i.e. - it includes the lowest value). - -3. `.contour` and `.contourf` use a `marching squares - <https://en.wikipedia.org/wiki/Marching_squares>`_ algorithm to - compute contour locations. More information can be found in - `ContourPy documentation <https://contourpy.readthedocs.io>`_. 
-""" % _docstring.interpd.params) diff --git a/spaces/laurabarreda/genre_prediction/app.py b/spaces/laurabarreda/genre_prediction/app.py deleted file mode 100644 index 8eed504a28b5b33e044bfcf5a7de0f29179a3bff..0000000000000000000000000000000000000000 --- a/spaces/laurabarreda/genre_prediction/app.py +++ /dev/null @@ -1,32 +0,0 @@ -import streamlit as st -from functions import * -from extract_electronic import * -from predict_electronic import * -from predict_all import * -from extract_all import * - -config_page() -edm() - -st.markdown(""" -<style> -.big-font { - font-size:24px !important; - font-weight:600 !important; - font-family:Roboto !important; -} -</style> -""", unsafe_allow_html=True) - -st.markdown('<p class="big-font">FIND THE GENRE</p>', unsafe_allow_html=True) - -menu = st.sidebar.selectbox('Selecciona la página', ['EDM', 'All genres']) - -if menu == 'EDM': - input = st.text_input('Enter track url', '') - Predict_electronic(input) - - -elif menu == 'All genres': - input = st.text_input('Enter track name', '') - Predict_all(input) diff --git a/spaces/lethalhames/Phind-Phind-CodeLlama-34B-v2/README.md b/spaces/lethalhames/Phind-Phind-CodeLlama-34B-v2/README.md deleted file mode 100644 index ace140e5c8409520c3a1d5dbf2e402739df46743..0000000000000000000000000000000000000000 --- a/spaces/lethalhames/Phind-Phind-CodeLlama-34B-v2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Phind Phind CodeLlama 34B V2 -emoji: 🚀 -colorFrom: red -colorTo: green -sdk: gradio -sdk_version: 3.41.2 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lewiswu1209/MockingBird/ppg2mel/utils/abs_model.py b/spaces/lewiswu1209/MockingBird/ppg2mel/utils/abs_model.py deleted file mode 100644 index b6d27a6df74c6988dd4355cbef149ed90f3a36cf..0000000000000000000000000000000000000000 --- a/spaces/lewiswu1209/MockingBird/ppg2mel/utils/abs_model.py +++ /dev/null @@ -1,23 +0,0 @@ -from abc import ABC -from abc import abstractmethod - -import torch - -class AbsMelDecoder(torch.nn.Module, ABC): - """The abstract PPG-based voice conversion class - This "model" is one of mediator objects for "Task" class. - - """ - - @abstractmethod - def forward( - self, - bottle_neck_features: torch.Tensor, - feature_lengths: torch.Tensor, - speech: torch.Tensor, - speech_lengths: torch.Tensor, - logf0_uv: torch.Tensor = None, - spembs: torch.Tensor = None, - styleembs: torch.Tensor = None, - ) -> torch.Tensor: - raise NotImplementedError diff --git a/spaces/lighdow/anime-cute-tts/__init__.py b/spaces/lighdow/anime-cute-tts/__init__.py deleted file mode 100644 index 4e69c354dd24e3243980236eca962cd5945a92fc..0000000000000000000000000000000000000000 --- a/spaces/lighdow/anime-cute-tts/__init__.py +++ /dev/null @@ -1,32 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners - - -def text_to_sequence(text, symbols, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. 
- Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - _symbol_to_id = {s: i for i, s in enumerate(symbols)} - - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/lightli/bingo-newbing/src/components/ui/icons.tsx b/spaces/lightli/bingo-newbing/src/components/ui/icons.tsx deleted file mode 100644 index 742b489b50437c5b64c86082f2ebc712eeb6a2b0..0000000000000000000000000000000000000000 --- a/spaces/lightli/bingo-newbing/src/components/ui/icons.tsx +++ /dev/null @@ -1,504 +0,0 @@ -'use client' - -import * as React from 'react' - -import { cn } from '@/lib/utils' - -function IconNextChat({ - className, - inverted, - ...props -}: React.ComponentProps<'svg'> & { inverted?: boolean }) { - const id = React.useId() - - return ( - <svg - viewBox="0 0 17 17" - fill="none" - xmlns="http://www.w3.org/2000/svg" - className={cn('h-4 w-4', className)} - {...props} - > - <defs> - <linearGradient - id={`gradient-${id}-1`} - x1="10.6889" - y1="10.3556" - x2="13.8445" - y2="14.2667" - gradientUnits="userSpaceOnUse" - > - <stop stopColor={inverted ? 'white' : 'black'} /> - <stop - offset={1} - stopColor={inverted ? 'white' : 'black'} - stopOpacity={0} - /> - </linearGradient> - <linearGradient - id={`gradient-${id}-2`} - x1="11.7555" - y1="4.8" - x2="11.7376" - y2="9.50002" - gradientUnits="userSpaceOnUse" - > - <stop stopColor={inverted ? 'white' : 'black'} /> - <stop - offset={1} - stopColor={inverted ? 'white' : 'black'} - stopOpacity={0} - /> - </linearGradient> - </defs> - <path - d="M1 16L2.58314 11.2506C1.83084 9.74642 1.63835 8.02363 2.04013 6.39052C2.4419 4.75741 3.41171 3.32057 4.776 2.33712C6.1403 1.35367 7.81003 0.887808 9.4864 1.02289C11.1628 1.15798 12.7364 1.8852 13.9256 3.07442C15.1148 4.26363 15.842 5.83723 15.9771 7.5136C16.1122 9.18997 15.6463 10.8597 14.6629 12.224C13.6794 13.5883 12.2426 14.5581 10.6095 14.9599C8.97637 15.3616 7.25358 15.1692 5.74942 14.4169L1 16Z" - fill={inverted ? 'black' : 'white'} - stroke={inverted ? 'black' : 'white'} - strokeWidth={2} - strokeLinecap="round" - strokeLinejoin="round" - /> - <mask - id="mask0_91_2047" - style={{ maskType: 'alpha' }} - maskUnits="userSpaceOnUse" - x={1} - y={0} - width={16} - height={16} - > - <circle cx={9} cy={8} r={8} fill={inverted ? 'black' : 'white'} /> - </mask> - <g mask="url(#mask0_91_2047)"> - <circle cx={9} cy={8} r={8} fill={inverted ? 
'black' : 'white'} /> - <path - d="M14.2896 14.0018L7.146 4.8H5.80005V11.1973H6.87681V6.16743L13.4444 14.6529C13.7407 14.4545 14.0231 14.2369 14.2896 14.0018Z" - fill={`url(#gradient-${id}-1)`} - /> - <rect - x="11.2222" - y="4.8" - width="1.06667" - height="6.4" - fill={`url(#gradient-${id}-2)`} - /> - </g> - </svg> - ) -} - -function IconOpenAI({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - <svg - fill="currentColor" - viewBox="0 0 24 24" - role="img" - xmlns="http://www.w3.org/2000/svg" - className={cn('h-4 w-4', className)} - {...props} - > - <title>OpenAI icon - - - ) -} - -function IconGitHub({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - GitHub - - - ) -} - -function IconSeparator({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - ) -} - -function IconArrowDown({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowRight({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUser({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconPlus({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconArrowElbow({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSpinner({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMessage({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconTrash({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMore({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconRefresh({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconStop({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSidebar({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconMoon({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconSun({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCopy({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconCheck({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconDownload({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconClose({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconEdit({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconShare({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconUsers({ className, ...props }: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconExternalLink({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -function IconChevronUpDown({ - className, - ...props -}: React.ComponentProps<'svg'>) { - return ( - - - - ) -} - -export { - IconEdit, - IconNextChat, - IconOpenAI, - IconGitHub, - IconSeparator, - IconArrowDown, - IconArrowRight, - IconUser, - IconPlus, - IconArrowElbow, - IconSpinner, - IconMessage, - IconTrash, - IconMore, - IconRefresh, - IconStop, - IconSidebar, - IconMoon, - IconSun, - IconCopy, - IconCheck, - IconDownload, - IconClose, - IconShare, 
- IconUsers, - IconExternalLink, - IconChevronUpDown -} diff --git a/spaces/lijk20/ClueAI-ChatYuan-large-v1/app.py b/spaces/lijk20/ClueAI-ChatYuan-large-v1/app.py deleted file mode 100644 index a5b254e51e851ca68111cf420679fe5cf11356b2..0000000000000000000000000000000000000000 --- a/spaces/lijk20/ClueAI-ChatYuan-large-v1/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/ClueAI/ChatYuan-large-v1").launch() \ No newline at end of file diff --git a/spaces/liliyRehtina/color/models/clusterkit.py b/spaces/liliyRehtina/color/models/clusterkit.py deleted file mode 100644 index b9e42a9ca5d9dce5dd44ea8f24e9a2e3d153ca74..0000000000000000000000000000000000000000 --- a/spaces/liliyRehtina/color/models/clusterkit.py +++ /dev/null @@ -1,291 +0,0 @@ -import torch -import torch.nn as nn -import torch.nn.functional as F - -from functools import partial -import numpy as np -import torch -from tqdm import tqdm -import math, random -#from sklearn.cluster import KMeans, kmeans_plusplus, MeanShift, estimate_bandwidth - - -def tensor_kmeans_sklearn(data_vecs, n_clusters=7, metric='euclidean', need_layer_masks=False, max_iters=20): - N,C,H,W = data_vecs.shape - assert N == 1, 'only support singe image tensor' - ## (1,C,H,W) -> (HW,C) - data_vecs = data_vecs.permute(0,2,3,1).view(-1,C) - ## convert tensor to array - data_vecs_np = data_vecs.squeeze().detach().to("cpu").numpy() - km = KMeans(n_clusters=n_clusters, init='k-means++', n_init=10, max_iter=300) - pred = km.fit_predict(data_vecs_np) - cluster_ids_x = torch.from_numpy(km.labels_).to(data_vecs.device) - id_maps = cluster_ids_x.reshape(1,1,H,W).long() - if need_layer_masks: - one_hot_labels = F.one_hot(id_maps.squeeze(1), num_classes=n_clusters).float() - cluster_mask = one_hot_labels.permute(0,3,1,2) - return cluster_mask - return id_maps - - -def tensor_kmeans_pytorch(data_vecs, n_clusters=7, metric='euclidean', need_layer_masks=False, max_iters=20): - N,C,H,W = data_vecs.shape - assert N == 1, 'only support singe image tensor' - - ## (1,C,H,W) -> (HW,C) - data_vecs = data_vecs.permute(0,2,3,1).view(-1,C) - ## cosine | euclidean - #cluster_ids_x, cluster_centers = kmeans(X=data_vecs, num_clusters=n_clusters, distance=metric, device=data_vecs.device) - cluster_ids_x, cluster_centers = kmeans(X=data_vecs, num_clusters=n_clusters, distance=metric,\ - tqdm_flag=False, iter_limit=max_iters, device=data_vecs.device) - id_maps = cluster_ids_x.reshape(1,1,H,W) - if need_layer_masks: - one_hot_labels = F.one_hot(id_maps.squeeze(1), num_classes=n_clusters).float() - cluster_mask = one_hot_labels.permute(0,3,1,2) - return cluster_mask - return id_maps - - -def batch_kmeans_pytorch(data_vecs, n_clusters=7, metric='euclidean', use_sklearn_kmeans=False): - N,C,H,W = data_vecs.shape - sample_list = [] - for idx in range(N): - if use_sklearn_kmeans: - cluster_mask = tensor_kmeans_sklearn(data_vecs[idx:idx+1,:,:,:], n_clusters, metric, True) - else: - cluster_mask = tensor_kmeans_pytorch(data_vecs[idx:idx+1,:,:,:], n_clusters, metric, True) - sample_list.append(cluster_mask) - return torch.cat(sample_list, dim=0) - - -def get_centroid_candidates(data_vecs, n_clusters=7, metric='euclidean', max_iters=20): - N,C,H,W = data_vecs.shape - data_vecs = data_vecs.permute(0,2,3,1).view(-1,C) - cluster_ids_x, cluster_centers = kmeans(X=data_vecs, num_clusters=n_clusters, distance=metric,\ - tqdm_flag=False, iter_limit=max_iters, device=data_vecs.device) - return cluster_centers - - -def find_distinctive_elements(data_tensor, n_clusters=7, topk=3, 
metric='euclidean'): - N,C,H,W = data_tensor.shape - centroid_list = [] - for idx in range(N): - cluster_centers = get_centroid_candidates(data_tensor[idx:idx+1,:,:,:], n_clusters, metric) - centroid_list.append(cluster_centers) - - batch_centroids = torch.stack(centroid_list, dim=0) - data_vecs = data_tensor.flatten(2) - ## distance matrix: (N,K,HW) = (N,K,C) x (N,C,HW) - AtB = torch.matmul(batch_centroids, data_vecs) - AtA = torch.matmul(batch_centroids, batch_centroids.permute(0,2,1)) - BtB = torch.matmul(data_vecs.permute(0,2,1), data_vecs) - diag_A = torch.diagonal(AtA, dim1=-2, dim2=-1) - diag_B = torch.diagonal(BtB, dim1=-2, dim2=-1) - A2 = diag_A.unsqueeze(2).repeat(1,1,H*W) - B2 = diag_B.unsqueeze(1).repeat(1,n_clusters,1) - distance_map = A2 - 2*AtB + B2 - values, indices = distance_map.topk(topk, dim=2, largest=False, sorted=True) - cluster_mask = torch.where(distance_map <= values[:,:,topk-1:], torch.ones_like(distance_map), torch.zeros_like(distance_map)) - cluster_mask = cluster_mask.view(N,n_clusters,H,W) - return cluster_mask - - -##--------------------------------------------------------------------------------- -''' - resource from github: https://github.com/subhadarship/kmeans_pytorch -''' -##--------------------------------------------------------------------------------- - -def initialize(X, num_clusters): - """ - initialize cluster centers - :param X: (torch.tensor) matrix - :param num_clusters: (int) number of clusters - :return: (np.array) initial state - """ - np.random.seed(1) - num_samples = len(X) - indices = np.random.choice(num_samples, num_clusters, replace=False) - initial_state = X[indices] - return initial_state - - -def kmeans( - X, - num_clusters, - distance='euclidean', - cluster_centers=[], - tol=1e-4, - tqdm_flag=True, - iter_limit=0, - device=torch.device('cpu'), - gamma_for_soft_dtw=0.001 -): - """ - perform kmeans - :param X: (torch.tensor) matrix - :param num_clusters: (int) number of clusters - :param distance: (str) distance [options: 'euclidean', 'cosine'] [default: 'euclidean'] - :param tol: (float) threshold [default: 0.0001] - :param device: (torch.device) device [default: cpu] - :param tqdm_flag: Allows to turn logs on and off - :param iter_limit: hard limit for max number of iterations - :param gamma_for_soft_dtw: approaches to (hard) DTW as gamma -> 0 - :return: (torch.tensor, torch.tensor) cluster ids, cluster centers - """ - if tqdm_flag: - print(f'running k-means on {device}..') - - if distance == 'euclidean': - pairwise_distance_function = partial(pairwise_distance, device=device, tqdm_flag=tqdm_flag) - elif distance == 'cosine': - pairwise_distance_function = partial(pairwise_cosine, device=device) - else: - raise NotImplementedError - - # convert to float - X = X.float() - - # transfer to device - X = X.to(device) - - # initialize - if type(cluster_centers) == list: # ToDo: make this less annoyingly weird - initial_state = initialize(X, num_clusters) - else: - if tqdm_flag: - print('resuming') - # find data point closest to the initial cluster center - initial_state = cluster_centers - dis = pairwise_distance_function(X, initial_state) - choice_points = torch.argmin(dis, dim=0) - initial_state = X[choice_points] - initial_state = initial_state.to(device) - - iteration = 0 - if tqdm_flag: - tqdm_meter = tqdm(desc='[running kmeans]') - while True: - - dis = pairwise_distance_function(X, initial_state) - - choice_cluster = torch.argmin(dis, dim=1) - - initial_state_pre = initial_state.clone() - - for index in range(num_clusters): - 
selected = torch.nonzero(choice_cluster == index).squeeze().to(device) - - selected = torch.index_select(X, 0, selected) - - # https://github.com/subhadarship/kmeans_pytorch/issues/16 - if selected.shape[0] == 0: - selected = X[torch.randint(len(X), (1,))] - - initial_state[index] = selected.mean(dim=0) - - center_shift = torch.sum( - torch.sqrt( - torch.sum((initial_state - initial_state_pre) ** 2, dim=1) - )) - - # increment iteration - iteration = iteration + 1 - - # update tqdm meter - if tqdm_flag: - tqdm_meter.set_postfix( - iteration=f'{iteration}', - center_shift=f'{center_shift ** 2:0.6f}', - tol=f'{tol:0.6f}' - ) - tqdm_meter.update() - if center_shift ** 2 < tol: - break - if iter_limit != 0 and iteration >= iter_limit: - #print('hello, there!') - break - - return choice_cluster.to(device), initial_state.to(device) - - -def kmeans_predict( - X, - cluster_centers, - distance='euclidean', - device=torch.device('cpu'), - gamma_for_soft_dtw=0.001, - tqdm_flag=True -): - """ - predict using cluster centers - :param X: (torch.tensor) matrix - :param cluster_centers: (torch.tensor) cluster centers - :param distance: (str) distance [options: 'euclidean', 'cosine'] [default: 'euclidean'] - :param device: (torch.device) device [default: 'cpu'] - :param gamma_for_soft_dtw: approaches to (hard) DTW as gamma -> 0 - :return: (torch.tensor) cluster ids - """ - if tqdm_flag: - print(f'predicting on {device}..') - - if distance == 'euclidean': - pairwise_distance_function = partial(pairwise_distance, device=device, tqdm_flag=tqdm_flag) - elif distance == 'cosine': - pairwise_distance_function = partial(pairwise_cosine, device=device) - elif distance == 'soft_dtw': - sdtw = SoftDTW(use_cuda=device.type == 'cuda', gamma=gamma_for_soft_dtw) - pairwise_distance_function = partial(pairwise_soft_dtw, sdtw=sdtw, device=device) - else: - raise NotImplementedError - - # convert to float - X = X.float() - - # transfer to device - X = X.to(device) - - dis = pairwise_distance_function(X, cluster_centers) - choice_cluster = torch.argmin(dis, dim=1) - - return choice_cluster.cpu() - - -def pairwise_distance(data1, data2, device=torch.device('cpu'), tqdm_flag=True): - if tqdm_flag: - print(f'device is :{device}') - - # transfer to device - data1, data2 = data1.to(device), data2.to(device) - - # N*1*M - A = data1.unsqueeze(dim=1) - - # 1*N*M - B = data2.unsqueeze(dim=0) - - dis = (A - B) ** 2.0 - # return N*N matrix for pairwise distance - dis = dis.sum(dim=-1).squeeze() - return dis - - -def pairwise_cosine(data1, data2, device=torch.device('cpu')): - # transfer to device - data1, data2 = data1.to(device), data2.to(device) - - # N*1*M - A = data1.unsqueeze(dim=1) - - # 1*N*M - B = data2.unsqueeze(dim=0) - - # normalize the points | [0.3, 0.4] -> [0.3/sqrt(0.09 + 0.16), 0.4/sqrt(0.09 + 0.16)] = [0.3/0.5, 0.4/0.5] - A_normalized = A / A.norm(dim=-1, keepdim=True) - B_normalized = B / B.norm(dim=-1, keepdim=True) - - cosine = A_normalized * B_normalized - - # return N*N matrix for pairwise distance - cosine_dis = 1 - cosine.sum(dim=-1).squeeze() - return cosine_dis \ No newline at end of file diff --git a/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/diffusion_onnx.py b/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/diffusion_onnx.py deleted file mode 100644 index 1c1e80321de162b5233801efa3423739f7f92bdc..0000000000000000000000000000000000000000 --- a/spaces/lllqqq/so-vits-svc-models-pcr/diffusion/diffusion_onnx.py +++ /dev/null @@ -1,612 +0,0 @@ -from collections import deque -from functools import partial 
-from inspect import isfunction -import torch.nn.functional as F -import librosa.sequence -import numpy as np -from torch.nn import Conv1d -from torch.nn import Mish -import torch -from torch import nn -from tqdm import tqdm -import math - - -def exists(x): - return x is not None - - -def default(val, d): - if exists(val): - return val - return d() if isfunction(d) else d - - -def extract(a, t): - return a[t].reshape((1, 1, 1, 1)) - - -def noise_like(shape, device, repeat=False): - repeat_noise = lambda: torch.randn((1, *shape[1:]), device=device).repeat(shape[0], *((1,) * (len(shape) - 1))) - noise = lambda: torch.randn(shape, device=device) - return repeat_noise() if repeat else noise() - - -def linear_beta_schedule(timesteps, max_beta=0.02): - """ - linear schedule - """ - betas = np.linspace(1e-4, max_beta, timesteps) - return betas - - -def cosine_beta_schedule(timesteps, s=0.008): - """ - cosine schedule - as proposed in https://openreview.net/forum?id=-NEXDKk8gZ - """ - steps = timesteps + 1 - x = np.linspace(0, steps, steps) - alphas_cumprod = np.cos(((x / steps) + s) / (1 + s) * np.pi * 0.5) ** 2 - alphas_cumprod = alphas_cumprod / alphas_cumprod[0] - betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1]) - return np.clip(betas, a_min=0, a_max=0.999) - - -beta_schedule = { - "cosine": cosine_beta_schedule, - "linear": linear_beta_schedule, -} - - -def extract_1(a, t): - return a[t].reshape((1, 1, 1, 1)) - - -def predict_stage0(noise_pred, noise_pred_prev): - return (noise_pred + noise_pred_prev) / 2 - - -def predict_stage1(noise_pred, noise_list): - return (noise_pred * 3 - - noise_list[-1]) / 2 - - -def predict_stage2(noise_pred, noise_list): - return (noise_pred * 23 - - noise_list[-1] * 16 - + noise_list[-2] * 5) / 12 - - -def predict_stage3(noise_pred, noise_list): - return (noise_pred * 55 - - noise_list[-1] * 59 - + noise_list[-2] * 37 - - noise_list[-3] * 9) / 24 - - -class SinusoidalPosEmb(nn.Module): - def __init__(self, dim): - super().__init__() - self.dim = dim - self.half_dim = dim // 2 - self.emb = 9.21034037 / (self.half_dim - 1) - self.emb = torch.exp(torch.arange(self.half_dim) * torch.tensor(-self.emb)).unsqueeze(0) - self.emb = self.emb.cpu() - - def forward(self, x): - emb = self.emb * x - emb = torch.cat((emb.sin(), emb.cos()), dim=-1) - return emb - - -class ResidualBlock(nn.Module): - def __init__(self, encoder_hidden, residual_channels, dilation): - super().__init__() - self.residual_channels = residual_channels - self.dilated_conv = Conv1d(residual_channels, 2 * residual_channels, 3, padding=dilation, dilation=dilation) - self.diffusion_projection = nn.Linear(residual_channels, residual_channels) - self.conditioner_projection = Conv1d(encoder_hidden, 2 * residual_channels, 1) - self.output_projection = Conv1d(residual_channels, 2 * residual_channels, 1) - - def forward(self, x, conditioner, diffusion_step): - diffusion_step = self.diffusion_projection(diffusion_step).unsqueeze(-1) - conditioner = self.conditioner_projection(conditioner) - y = x + diffusion_step - y = self.dilated_conv(y) + conditioner - - gate, filter_1 = torch.split(y, [self.residual_channels, self.residual_channels], dim=1) - - y = torch.sigmoid(gate) * torch.tanh(filter_1) - y = self.output_projection(y) - - residual, skip = torch.split(y, [self.residual_channels, self.residual_channels], dim=1) - - return (x + residual) / 1.41421356, skip - - -class DiffNet(nn.Module): - def __init__(self, in_dims, n_layers, n_chans, n_hidden): - super().__init__() - self.encoder_hidden = n_hidden - 
self.residual_layers = n_layers - self.residual_channels = n_chans - self.input_projection = Conv1d(in_dims, self.residual_channels, 1) - self.diffusion_embedding = SinusoidalPosEmb(self.residual_channels) - dim = self.residual_channels - self.mlp = nn.Sequential( - nn.Linear(dim, dim * 4), - Mish(), - nn.Linear(dim * 4, dim) - ) - self.residual_layers = nn.ModuleList([ - ResidualBlock(self.encoder_hidden, self.residual_channels, 1) - for i in range(self.residual_layers) - ]) - self.skip_projection = Conv1d(self.residual_channels, self.residual_channels, 1) - self.output_projection = Conv1d(self.residual_channels, in_dims, 1) - nn.init.zeros_(self.output_projection.weight) - - def forward(self, spec, diffusion_step, cond): - x = spec.squeeze(0) - x = self.input_projection(x) # x [B, residual_channel, T] - x = F.relu(x) - # skip = torch.randn_like(x) - diffusion_step = diffusion_step.float() - diffusion_step = self.diffusion_embedding(diffusion_step) - diffusion_step = self.mlp(diffusion_step) - - x, skip = self.residual_layers[0](x, cond, diffusion_step) - # noinspection PyTypeChecker - for layer in self.residual_layers[1:]: - x, skip_connection = layer.forward(x, cond, diffusion_step) - skip = skip + skip_connection - x = skip / math.sqrt(len(self.residual_layers)) - x = self.skip_projection(x) - x = F.relu(x) - x = self.output_projection(x) # [B, 80, T] - return x.unsqueeze(1) - - -class AfterDiffusion(nn.Module): - def __init__(self, spec_max, spec_min, v_type='a'): - super().__init__() - self.spec_max = spec_max - self.spec_min = spec_min - self.type = v_type - - def forward(self, x): - x = x.squeeze(1).permute(0, 2, 1) - mel_out = (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min - if self.type == 'nsf-hifigan-log10': - mel_out = mel_out * 0.434294 - return mel_out.transpose(2, 1) - - -class Pred(nn.Module): - def __init__(self, alphas_cumprod): - super().__init__() - self.alphas_cumprod = alphas_cumprod - - def forward(self, x_1, noise_t, t_1, t_prev): - a_t = extract(self.alphas_cumprod, t_1).cpu() - a_prev = extract(self.alphas_cumprod, t_prev).cpu() - a_t_sq, a_prev_sq = a_t.sqrt().cpu(), a_prev.sqrt().cpu() - x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x_1 - 1 / ( - a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t) - x_pred = x_1 + x_delta.cpu() - - return x_pred - - -class GaussianDiffusion(nn.Module): - def __init__(self, - out_dims=128, - n_layers=20, - n_chans=384, - n_hidden=256, - timesteps=1000, - k_step=1000, - max_beta=0.02, - spec_min=-12, - spec_max=2): - super().__init__() - self.denoise_fn = DiffNet(out_dims, n_layers, n_chans, n_hidden) - self.out_dims = out_dims - self.mel_bins = out_dims - self.n_hidden = n_hidden - betas = beta_schedule['linear'](timesteps, max_beta=max_beta) - - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.k_step = k_step - - self.noise_list = deque(maxlen=4) - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. 
- alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod - 1))) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = betas * (1. - alphas_cumprod_prev) / (1. - alphas_cumprod) - # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))) - - self.register_buffer('spec_min', torch.FloatTensor([spec_min])[None, None, :out_dims]) - self.register_buffer('spec_max', torch.FloatTensor([spec_max])[None, None, :out_dims]) - self.ad = AfterDiffusion(self.spec_max, self.spec_min) - self.xp = Pred(self.alphas_cumprod) - - def q_mean_variance(self, x_start, t): - mean = extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start - variance = extract(1. - self.alphas_cumprod, t, x_start.shape) - log_variance = extract(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, cond): - noise_pred = self.denoise_fn(x, t, cond=cond) - x_recon = self.predict_start_from_noise(x, t=t, noise=noise_pred) - - x_recon.clamp_(-1., 1.) - - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, t, cond, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, cond=cond) - noise = noise_like(x.shape, device, repeat_noise) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def p_sample_plms(self, x, t, interval, cond, clip_denoised=True, repeat_noise=False): - """ - Use the PLMS method from - [Pseudo Numerical Methods for Diffusion Models on Manifolds](https://arxiv.org/abs/2202.09778). 
- """ - - def get_x_pred(x, noise_t, t): - a_t = extract(self.alphas_cumprod, t) - a_prev = extract(self.alphas_cumprod, torch.max(t - interval, torch.zeros_like(t))) - a_t_sq, a_prev_sq = a_t.sqrt(), a_prev.sqrt() - - x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x - 1 / ( - a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t) - x_pred = x + x_delta - - return x_pred - - noise_list = self.noise_list - noise_pred = self.denoise_fn(x, t, cond=cond) - - if len(noise_list) == 0: - x_pred = get_x_pred(x, noise_pred, t) - noise_pred_prev = self.denoise_fn(x_pred, max(t - interval, 0), cond=cond) - noise_pred_prime = (noise_pred + noise_pred_prev) / 2 - elif len(noise_list) == 1: - noise_pred_prime = (3 * noise_pred - noise_list[-1]) / 2 - elif len(noise_list) == 2: - noise_pred_prime = (23 * noise_pred - 16 * noise_list[-1] + 5 * noise_list[-2]) / 12 - else: - noise_pred_prime = (55 * noise_pred - 59 * noise_list[-1] + 37 * noise_list[-2] - 9 * noise_list[-3]) / 24 - - x_prev = get_x_pred(x, noise_pred_prime, t) - noise_list.append(noise_pred) - - return x_prev - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return ( - extract(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise - ) - - def p_losses(self, x_start, t, cond, noise=None, loss_type='l2'): - noise = default(noise, lambda: torch.randn_like(x_start)) - - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - x_recon = self.denoise_fn(x_noisy, t, cond) - - if loss_type == 'l1': - loss = (noise - x_recon).abs().mean() - elif loss_type == 'l2': - loss = F.mse_loss(noise, x_recon) - else: - raise NotImplementedError() - - return loss - - def org_forward(self, - condition, - init_noise=None, - gt_spec=None, - infer=True, - infer_speedup=100, - method='pndm', - k_step=1000, - use_tqdm=True): - """ - conditioning diffusion, use fastspeech2 encoder output as the condition - """ - cond = condition - b, device = condition.shape[0], condition.device - if not infer: - spec = self.norm_spec(gt_spec) - t = torch.randint(0, self.k_step, (b,), device=device).long() - norm_spec = spec.transpose(1, 2)[:, None, :, :] # [B, 1, M, T] - return self.p_losses(norm_spec, t, cond=cond) - else: - shape = (cond.shape[0], 1, self.out_dims, cond.shape[2]) - - if gt_spec is None: - t = self.k_step - if init_noise is None: - x = torch.randn(shape, device=device) - else: - x = init_noise - else: - t = k_step - norm_spec = self.norm_spec(gt_spec) - norm_spec = norm_spec.transpose(1, 2)[:, None, :, :] - x = self.q_sample(x_start=norm_spec, t=torch.tensor([t - 1], device=device).long()) - - if method is not None and infer_speedup > 1: - if method == 'dpm-solver': - from .dpm_solver_pytorch import NoiseScheduleVP, model_wrapper, DPM_Solver - # 1. Define the noise schedule. - noise_schedule = NoiseScheduleVP(schedule='discrete', betas=self.betas[:t]) - - # 2. Convert your discrete-time `model` to the continuous-time - # noise prediction model. Here is an example for a diffusion model - # `model` with the noise prediction type ("noise") . - def my_wrapper(fn): - def wrapped(x, t, **kwargs): - ret = fn(x, t, **kwargs) - if use_tqdm: - self.bar.update(1) - return ret - - return wrapped - - model_fn = model_wrapper( - my_wrapper(self.denoise_fn), - noise_schedule, - model_type="noise", # or "x_start" or "v" or "score" - model_kwargs={"cond": cond} - ) - - # 3. 
Define dpm-solver and sample by singlestep DPM-Solver. - # (We recommend singlestep DPM-Solver for unconditional sampling) - # You can adjust the `steps` to balance the computation - # costs and the sample quality. - dpm_solver = DPM_Solver(model_fn, noise_schedule) - - steps = t // infer_speedup - if use_tqdm: - self.bar = tqdm(desc="sample time step", total=steps) - x = dpm_solver.sample( - x, - steps=steps, - order=3, - skip_type="time_uniform", - method="singlestep", - ) - if use_tqdm: - self.bar.close() - elif method == 'pndm': - self.noise_list = deque(maxlen=4) - if use_tqdm: - for i in tqdm( - reversed(range(0, t, infer_speedup)), desc='sample time step', - total=t // infer_speedup, - ): - x = self.p_sample_plms( - x, torch.full((b,), i, device=device, dtype=torch.long), - infer_speedup, cond=cond - ) - else: - for i in reversed(range(0, t, infer_speedup)): - x = self.p_sample_plms( - x, torch.full((b,), i, device=device, dtype=torch.long), - infer_speedup, cond=cond - ) - else: - raise NotImplementedError(method) - else: - if use_tqdm: - for i in tqdm(reversed(range(0, t)), desc='sample time step', total=t): - x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond) - else: - for i in reversed(range(0, t)): - x = self.p_sample(x, torch.full((b,), i, device=device, dtype=torch.long), cond) - x = x.squeeze(1).transpose(1, 2) # [B, T, M] - return self.denorm_spec(x).transpose(2, 1) - - def norm_spec(self, x): - return (x - self.spec_min) / (self.spec_max - self.spec_min) * 2 - 1 - - def denorm_spec(self, x): - return (x + 1) / 2 * (self.spec_max - self.spec_min) + self.spec_min - - def get_x_pred(self, x_1, noise_t, t_1, t_prev): - a_t = extract(self.alphas_cumprod, t_1) - a_prev = extract(self.alphas_cumprod, t_prev) - a_t_sq, a_prev_sq = a_t.sqrt(), a_prev.sqrt() - x_delta = (a_prev - a_t) * ((1 / (a_t_sq * (a_t_sq + a_prev_sq))) * x_1 - 1 / ( - a_t_sq * (((1 - a_prev) * a_t).sqrt() + ((1 - a_t) * a_prev).sqrt())) * noise_t) - x_pred = x_1 + x_delta - return x_pred - - def OnnxExport(self, project_name=None, init_noise=None, hidden_channels=256, export_denoise=True, export_pred=True, export_after=True): - cond = torch.randn([1, self.n_hidden, 10]).cpu() - if init_noise is None: - x = torch.randn((1, 1, self.mel_bins, cond.shape[2]), dtype=torch.float32).cpu() - else: - x = init_noise - pndms = 100 - - org_y_x = self.org_forward(cond, init_noise=x) - - device = cond.device - n_frames = cond.shape[2] - step_range = torch.arange(0, self.k_step, pndms, dtype=torch.long, device=device).flip(0) - plms_noise_stage = torch.tensor(0, dtype=torch.long, device=device) - noise_list = torch.zeros((0, 1, 1, self.mel_bins, n_frames), device=device) - - ot = step_range[0] - ot_1 = torch.full((1,), ot, device=device, dtype=torch.long) - if export_denoise: - torch.onnx.export( - self.denoise_fn, - (x.cpu(), ot_1.cpu(), cond.cpu()), - f"{project_name}_denoise.onnx", - input_names=["noise", "time", "condition"], - output_names=["noise_pred"], - dynamic_axes={ - "noise": [3], - "condition": [2] - }, - opset_version=16 - ) - - for t in step_range: - t_1 = torch.full((1,), t, device=device, dtype=torch.long) - noise_pred = self.denoise_fn(x, t_1, cond) - t_prev = t_1 - pndms - t_prev = t_prev * (t_prev > 0) - if plms_noise_stage == 0: - if export_pred: - torch.onnx.export( - self.xp, - (x.cpu(), noise_pred.cpu(), t_1.cpu(), t_prev.cpu()), - f"{project_name}_pred.onnx", - input_names=["noise", "noise_pred", "time", "time_prev"], - output_names=["noise_pred_o"], - dynamic_axes={ - 
"noise": [3], - "noise_pred": [3] - }, - opset_version=16 - ) - - x_pred = self.get_x_pred(x, noise_pred, t_1, t_prev) - noise_pred_prev = self.denoise_fn(x_pred, t_prev, cond=cond) - noise_pred_prime = predict_stage0(noise_pred, noise_pred_prev) - - elif plms_noise_stage == 1: - noise_pred_prime = predict_stage1(noise_pred, noise_list) - - elif plms_noise_stage == 2: - noise_pred_prime = predict_stage2(noise_pred, noise_list) - - else: - noise_pred_prime = predict_stage3(noise_pred, noise_list) - - noise_pred = noise_pred.unsqueeze(0) - - if plms_noise_stage < 3: - noise_list = torch.cat((noise_list, noise_pred), dim=0) - plms_noise_stage = plms_noise_stage + 1 - - else: - noise_list = torch.cat((noise_list[-2:], noise_pred), dim=0) - - x = self.get_x_pred(x, noise_pred_prime, t_1, t_prev) - if export_after: - torch.onnx.export( - self.ad, - x.cpu(), - f"{project_name}_after.onnx", - input_names=["x"], - output_names=["mel_out"], - dynamic_axes={ - "x": [3] - }, - opset_version=16 - ) - x = self.ad(x) - - print((x == org_y_x).all()) - return x - - def forward(self, condition=None, init_noise=None, pndms=None, k_step=None): - cond = condition - x = init_noise - - device = cond.device - n_frames = cond.shape[2] - step_range = torch.arange(0, k_step.item(), pndms.item(), dtype=torch.long, device=device).flip(0) - plms_noise_stage = torch.tensor(0, dtype=torch.long, device=device) - noise_list = torch.zeros((0, 1, 1, self.mel_bins, n_frames), device=device) - - ot = step_range[0] - ot_1 = torch.full((1,), ot, device=device, dtype=torch.long) - - for t in step_range: - t_1 = torch.full((1,), t, device=device, dtype=torch.long) - noise_pred = self.denoise_fn(x, t_1, cond) - t_prev = t_1 - pndms - t_prev = t_prev * (t_prev > 0) - if plms_noise_stage == 0: - x_pred = self.get_x_pred(x, noise_pred, t_1, t_prev) - noise_pred_prev = self.denoise_fn(x_pred, t_prev, cond=cond) - noise_pred_prime = predict_stage0(noise_pred, noise_pred_prev) - - elif plms_noise_stage == 1: - noise_pred_prime = predict_stage1(noise_pred, noise_list) - - elif plms_noise_stage == 2: - noise_pred_prime = predict_stage2(noise_pred, noise_list) - - else: - noise_pred_prime = predict_stage3(noise_pred, noise_list) - - noise_pred = noise_pred.unsqueeze(0) - - if plms_noise_stage < 3: - noise_list = torch.cat((noise_list, noise_pred), dim=0) - plms_noise_stage = plms_noise_stage + 1 - - else: - noise_list = torch.cat((noise_list[-2:], noise_pred), dim=0) - - x = self.get_x_pred(x, noise_pred_prime, t_1, t_prev) - x = self.ad(x) - return x diff --git a/spaces/lojban/text-to-speech/nix_tts_simple/inference.py b/spaces/lojban/text-to-speech/nix_tts_simple/inference.py deleted file mode 100644 index cab8c993b4a91dc3aaf37b97f92fd3e10d7b6f4a..0000000000000000000000000000000000000000 --- a/spaces/lojban/text-to-speech/nix_tts_simple/inference.py +++ /dev/null @@ -1,55 +0,0 @@ -import os -import pickle -import timeit - -import numpy as np -import onnxruntime as ort - -from .tokenizer_lojban import NixTokenizerEN - -class NixTTSInference: - - def __init__( - self, - model_dir, - ): - # Load tokenizer - self.tokenizer = NixTokenizerEN(pickle.load(open(os.path.join(model_dir, "tokenizer_state.pkl"), "rb"))) - # Load TTS model - self.encoder = ort.InferenceSession(os.path.join(model_dir, "encoder.onnx")) - self.decoder = ort.InferenceSession(os.path.join(model_dir, "decoder.onnx")) - - def tokenize( - self, - text, - ): - # Tokenize input text - c, c_lengths, phonemes = self.tokenizer([text]) - - return np.array(c, dtype = np.int64), 
np.array(c_lengths, dtype = np.int64), phonemes - - def vocalize( - self, - c, - c_lengths, - ): - """ - Single-batch TTS inference - """ - # Infer latent samples from encoder - z = self.encoder.run( - None, - { - "c": c, - "c_lengths": c_lengths, - } - )[2] - # Decode raw audio with decoder - xw = self.decoder.run( - None, - { - "z": z, - } - )[0] - - return xw diff --git a/spaces/lojban/text-to-speech/vits/monotonic_align/core.c b/spaces/lojban/text-to-speech/vits/monotonic_align/core.c deleted file mode 100644 index b5761db6c3cf2399ff5236638de747501c9ee2af..0000000000000000000000000000000000000000 --- a/spaces/lojban/text-to-speech/vits/monotonic_align/core.c +++ /dev/null @@ -1,21608 +0,0 @@ -/* Generated by Cython 0.29.32 */ - -/* BEGIN: Cython Metadata -{ - "distutils": { - "name": "vits.monotonic_align.core", - "sources": [ - "core.pyx" - ] - }, - "module_name": "vits.monotonic_align.core" -} -END: Cython Metadata */ - -#ifndef PY_SSIZE_T_CLEAN -#define PY_SSIZE_T_CLEAN -#endif /* PY_SSIZE_T_CLEAN */ -#include "Python.h" -#ifndef Py_PYTHON_H - #error Python headers needed to compile C extensions, please install development version of Python. -#elif PY_VERSION_HEX < 0x02060000 || (0x03000000 <= PY_VERSION_HEX && PY_VERSION_HEX < 0x03030000) - #error Cython requires Python 2.6+ or Python 3.3+. -#else -#define CYTHON_ABI "0_29_32" -#define CYTHON_HEX_VERSION 0x001D20F0 -#define CYTHON_FUTURE_DIVISION 0 -#include -#ifndef offsetof - #define offsetof(type, member) ( (size_t) & ((type*)0) -> member ) -#endif -#if !defined(WIN32) && !defined(MS_WINDOWS) - #ifndef __stdcall - #define __stdcall - #endif - #ifndef __cdecl - #define __cdecl - #endif - #ifndef __fastcall - #define __fastcall - #endif -#endif -#ifndef DL_IMPORT - #define DL_IMPORT(t) t -#endif -#ifndef DL_EXPORT - #define DL_EXPORT(t) t -#endif -#define __PYX_COMMA , -#ifndef HAVE_LONG_LONG - #if PY_VERSION_HEX >= 0x02070000 - #define HAVE_LONG_LONG - #endif -#endif -#ifndef PY_LONG_LONG - #define PY_LONG_LONG LONG_LONG -#endif -#ifndef Py_HUGE_VAL - #define Py_HUGE_VAL HUGE_VAL -#endif -#ifdef PYPY_VERSION - #define CYTHON_COMPILING_IN_PYPY 1 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #undef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 0 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #if PY_VERSION_HEX < 0x03050000 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #undef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 0 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #undef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 1 - #undef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 0 - #undef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 0 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef 
CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC (PYPY_VERSION_HEX >= 0x07030900) - #endif -#elif defined(PYSTON_VERSION) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 1 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #undef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 0 - #undef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 0 - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 0 - #endif -#elif defined(PY_NOGIL) - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 0 - #define CYTHON_COMPILING_IN_NOGIL 1 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #ifndef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #undef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 0 - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #undef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL 0 - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT 1 - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE 1 - #endif - #undef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS 0 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 -#else - #define CYTHON_COMPILING_IN_PYPY 0 - #define CYTHON_COMPILING_IN_PYSTON 0 - #define CYTHON_COMPILING_IN_CPYTHON 1 - #define CYTHON_COMPILING_IN_NOGIL 0 - #ifndef CYTHON_USE_TYPE_SLOTS - #define CYTHON_USE_TYPE_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYTYPE_LOOKUP - #define CYTHON_USE_PYTYPE_LOOKUP 0 - #elif !defined(CYTHON_USE_PYTYPE_LOOKUP) - #define CYTHON_USE_PYTYPE_LOOKUP 1 - #endif - #if PY_MAJOR_VERSION < 3 - #undef CYTHON_USE_ASYNC_SLOTS - #define CYTHON_USE_ASYNC_SLOTS 0 - #elif !defined(CYTHON_USE_ASYNC_SLOTS) - 
#define CYTHON_USE_ASYNC_SLOTS 1 - #endif - #if PY_VERSION_HEX < 0x02070000 - #undef CYTHON_USE_PYLONG_INTERNALS - #define CYTHON_USE_PYLONG_INTERNALS 0 - #elif !defined(CYTHON_USE_PYLONG_INTERNALS) - #define CYTHON_USE_PYLONG_INTERNALS 1 - #endif - #ifndef CYTHON_USE_PYLIST_INTERNALS - #define CYTHON_USE_PYLIST_INTERNALS 1 - #endif - #ifndef CYTHON_USE_UNICODE_INTERNALS - #define CYTHON_USE_UNICODE_INTERNALS 1 - #endif - #if PY_VERSION_HEX < 0x030300F0 || PY_VERSION_HEX >= 0x030B00A2 - #undef CYTHON_USE_UNICODE_WRITER - #define CYTHON_USE_UNICODE_WRITER 0 - #elif !defined(CYTHON_USE_UNICODE_WRITER) - #define CYTHON_USE_UNICODE_WRITER 1 - #endif - #ifndef CYTHON_AVOID_BORROWED_REFS - #define CYTHON_AVOID_BORROWED_REFS 0 - #endif - #ifndef CYTHON_ASSUME_SAFE_MACROS - #define CYTHON_ASSUME_SAFE_MACROS 1 - #endif - #ifndef CYTHON_UNPACK_METHODS - #define CYTHON_UNPACK_METHODS 1 - #endif - #if PY_VERSION_HEX >= 0x030B00A4 - #undef CYTHON_FAST_THREAD_STATE - #define CYTHON_FAST_THREAD_STATE 0 - #elif !defined(CYTHON_FAST_THREAD_STATE) - #define CYTHON_FAST_THREAD_STATE 1 - #endif - #ifndef CYTHON_FAST_PYCALL - #define CYTHON_FAST_PYCALL (PY_VERSION_HEX < 0x030A0000) - #endif - #ifndef CYTHON_PEP489_MULTI_PHASE_INIT - #define CYTHON_PEP489_MULTI_PHASE_INIT (PY_VERSION_HEX >= 0x03050000) - #endif - #ifndef CYTHON_USE_TP_FINALIZE - #define CYTHON_USE_TP_FINALIZE (PY_VERSION_HEX >= 0x030400a1) - #endif - #ifndef CYTHON_USE_DICT_VERSIONS - #define CYTHON_USE_DICT_VERSIONS (PY_VERSION_HEX >= 0x030600B1) - #endif - #if PY_VERSION_HEX >= 0x030B00A4 - #undef CYTHON_USE_EXC_INFO_STACK - #define CYTHON_USE_EXC_INFO_STACK 0 - #elif !defined(CYTHON_USE_EXC_INFO_STACK) - #define CYTHON_USE_EXC_INFO_STACK (PY_VERSION_HEX >= 0x030700A3) - #endif - #ifndef CYTHON_UPDATE_DESCRIPTOR_DOC - #define CYTHON_UPDATE_DESCRIPTOR_DOC 1 - #endif -#endif -#if !defined(CYTHON_FAST_PYCCALL) -#define CYTHON_FAST_PYCCALL (CYTHON_FAST_PYCALL && PY_VERSION_HEX >= 0x030600B1) -#endif -#if CYTHON_USE_PYLONG_INTERNALS - #if PY_MAJOR_VERSION < 3 - #include "longintrepr.h" - #endif - #undef SHIFT - #undef BASE - #undef MASK - #ifdef SIZEOF_VOID_P - enum { __pyx_check_sizeof_voidp = 1 / (int)(SIZEOF_VOID_P == sizeof(void*)) }; - #endif -#endif -#ifndef __has_attribute - #define __has_attribute(x) 0 -#endif -#ifndef __has_cpp_attribute - #define __has_cpp_attribute(x) 0 -#endif -#ifndef CYTHON_RESTRICT - #if defined(__GNUC__) - #define CYTHON_RESTRICT __restrict__ - #elif defined(_MSC_VER) && _MSC_VER >= 1400 - #define CYTHON_RESTRICT __restrict - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_RESTRICT restrict - #else - #define CYTHON_RESTRICT - #endif -#endif -#ifndef CYTHON_UNUSED -# if defined(__GNUC__) -# if !(defined(__cplusplus)) || (__GNUC__ > 3 || (__GNUC__ == 3 && __GNUC_MINOR__ >= 4)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -# elif defined(__ICC) || (defined(__INTEL_COMPILER) && !defined(_MSC_VER)) -# define CYTHON_UNUSED __attribute__ ((__unused__)) -# else -# define CYTHON_UNUSED -# endif -#endif -#ifndef CYTHON_MAYBE_UNUSED_VAR -# if defined(__cplusplus) - template void CYTHON_MAYBE_UNUSED_VAR( const T& ) { } -# else -# define CYTHON_MAYBE_UNUSED_VAR(x) (void)(x) -# endif -#endif -#ifndef CYTHON_NCP_UNUSED -# if CYTHON_COMPILING_IN_CPYTHON -# define CYTHON_NCP_UNUSED -# else -# define CYTHON_NCP_UNUSED CYTHON_UNUSED -# endif -#endif -#define __Pyx_void_to_None(void_result) ((void)(void_result), Py_INCREF(Py_None), Py_None) -#ifdef 
_MSC_VER - #ifndef _MSC_STDINT_H_ - #if _MSC_VER < 1300 - typedef unsigned char uint8_t; - typedef unsigned int uint32_t; - #else - typedef unsigned __int8 uint8_t; - typedef unsigned __int32 uint32_t; - #endif - #endif -#else - #include -#endif -#ifndef CYTHON_FALLTHROUGH - #if defined(__cplusplus) && __cplusplus >= 201103L - #if __has_cpp_attribute(fallthrough) - #define CYTHON_FALLTHROUGH [[fallthrough]] - #elif __has_cpp_attribute(clang::fallthrough) - #define CYTHON_FALLTHROUGH [[clang::fallthrough]] - #elif __has_cpp_attribute(gnu::fallthrough) - #define CYTHON_FALLTHROUGH [[gnu::fallthrough]] - #endif - #endif - #ifndef CYTHON_FALLTHROUGH - #if __has_attribute(fallthrough) - #define CYTHON_FALLTHROUGH __attribute__((fallthrough)) - #else - #define CYTHON_FALLTHROUGH - #endif - #endif - #if defined(__clang__ ) && defined(__apple_build_version__) - #if __apple_build_version__ < 7000000 - #undef CYTHON_FALLTHROUGH - #define CYTHON_FALLTHROUGH - #endif - #endif -#endif - -#ifndef CYTHON_INLINE - #if defined(__clang__) - #define CYTHON_INLINE __inline__ __attribute__ ((__unused__)) - #elif defined(__GNUC__) - #define CYTHON_INLINE __inline__ - #elif defined(_MSC_VER) - #define CYTHON_INLINE __inline - #elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 199901L - #define CYTHON_INLINE inline - #else - #define CYTHON_INLINE - #endif -#endif - -#if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX < 0x02070600 && !defined(Py_OptimizeFlag) - #define Py_OptimizeFlag 0 -#endif -#define __PYX_BUILD_PY_SSIZE_T "n" -#define CYTHON_FORMAT_SSIZE_T "z" -#if PY_MAJOR_VERSION < 3 - #define __Pyx_BUILTIN_MODULE_NAME "__builtin__" - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a+k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) - #define __Pyx_DefaultClassType PyClass_Type -#else - #define __Pyx_BUILTIN_MODULE_NAME "builtins" - #define __Pyx_DefaultClassType PyType_Type -#if PY_VERSION_HEX >= 0x030B00A1 - static CYTHON_INLINE PyCodeObject* __Pyx_PyCode_New(int a, int k, int l, int s, int f, - PyObject *code, PyObject *c, PyObject* n, PyObject *v, - PyObject *fv, PyObject *cell, PyObject* fn, - PyObject *name, int fline, PyObject *lnos) { - PyObject *kwds=NULL, *argcount=NULL, *posonlyargcount=NULL, *kwonlyargcount=NULL; - PyObject *nlocals=NULL, *stacksize=NULL, *flags=NULL, *replace=NULL, *call_result=NULL, *empty=NULL; - const char *fn_cstr=NULL; - const char *name_cstr=NULL; - PyCodeObject* co=NULL; - PyObject *type, *value, *traceback; - PyErr_Fetch(&type, &value, &traceback); - if (!(kwds=PyDict_New())) goto end; - if (!(argcount=PyLong_FromLong(a))) goto end; - if (PyDict_SetItemString(kwds, "co_argcount", argcount) != 0) goto end; - if (!(posonlyargcount=PyLong_FromLong(0))) goto end; - if (PyDict_SetItemString(kwds, "co_posonlyargcount", posonlyargcount) != 0) goto end; - if (!(kwonlyargcount=PyLong_FromLong(k))) goto end; - if (PyDict_SetItemString(kwds, "co_kwonlyargcount", kwonlyargcount) != 0) goto end; - if (!(nlocals=PyLong_FromLong(l))) goto end; - if (PyDict_SetItemString(kwds, "co_nlocals", nlocals) != 0) goto end; - if (!(stacksize=PyLong_FromLong(s))) goto end; - if (PyDict_SetItemString(kwds, "co_stacksize", stacksize) != 0) goto end; - if (!(flags=PyLong_FromLong(f))) goto end; - if (PyDict_SetItemString(kwds, "co_flags", flags) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_code", code) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_consts", c) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_names", n) 
!= 0) goto end; - if (PyDict_SetItemString(kwds, "co_varnames", v) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_freevars", fv) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_cellvars", cell) != 0) goto end; - if (PyDict_SetItemString(kwds, "co_linetable", lnos) != 0) goto end; - if (!(fn_cstr=PyUnicode_AsUTF8AndSize(fn, NULL))) goto end; - if (!(name_cstr=PyUnicode_AsUTF8AndSize(name, NULL))) goto end; - if (!(co = PyCode_NewEmpty(fn_cstr, name_cstr, fline))) goto end; - if (!(replace = PyObject_GetAttrString((PyObject*)co, "replace"))) goto cleanup_code_too; - if (!(empty = PyTuple_New(0))) goto cleanup_code_too; // unfortunately __pyx_empty_tuple isn't available here - if (!(call_result = PyObject_Call(replace, empty, kwds))) goto cleanup_code_too; - Py_XDECREF((PyObject*)co); - co = (PyCodeObject*)call_result; - call_result = NULL; - if (0) { - cleanup_code_too: - Py_XDECREF((PyObject*)co); - co = NULL; - } - end: - Py_XDECREF(kwds); - Py_XDECREF(argcount); - Py_XDECREF(posonlyargcount); - Py_XDECREF(kwonlyargcount); - Py_XDECREF(nlocals); - Py_XDECREF(stacksize); - Py_XDECREF(replace); - Py_XDECREF(call_result); - Py_XDECREF(empty); - if (type) { - PyErr_Restore(type, value, traceback); - } - return co; - } -#else - #define __Pyx_PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos)\ - PyCode_New(a, k, l, s, f, code, c, n, v, fv, cell, fn, name, fline, lnos) -#endif - #define __Pyx_DefaultClassType PyType_Type -#endif -#ifndef Py_TPFLAGS_CHECKTYPES - #define Py_TPFLAGS_CHECKTYPES 0 -#endif -#ifndef Py_TPFLAGS_HAVE_INDEX - #define Py_TPFLAGS_HAVE_INDEX 0 -#endif -#ifndef Py_TPFLAGS_HAVE_NEWBUFFER - #define Py_TPFLAGS_HAVE_NEWBUFFER 0 -#endif -#ifndef Py_TPFLAGS_HAVE_FINALIZE - #define Py_TPFLAGS_HAVE_FINALIZE 0 -#endif -#ifndef METH_STACKLESS - #define METH_STACKLESS 0 -#endif -#if PY_VERSION_HEX <= 0x030700A3 || !defined(METH_FASTCALL) - #ifndef METH_FASTCALL - #define METH_FASTCALL 0x80 - #endif - typedef PyObject *(*__Pyx_PyCFunctionFast) (PyObject *self, PyObject *const *args, Py_ssize_t nargs); - typedef PyObject *(*__Pyx_PyCFunctionFastWithKeywords) (PyObject *self, PyObject *const *args, - Py_ssize_t nargs, PyObject *kwnames); -#else - #define __Pyx_PyCFunctionFast _PyCFunctionFast - #define __Pyx_PyCFunctionFastWithKeywords _PyCFunctionFastWithKeywords -#endif -#if CYTHON_FAST_PYCCALL -#define __Pyx_PyFastCFunction_Check(func)\ - ((PyCFunction_Check(func) && (METH_FASTCALL == (PyCFunction_GET_FLAGS(func) & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))))) -#else -#define __Pyx_PyFastCFunction_Check(func) 0 -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Malloc) - #define PyObject_Malloc(s) PyMem_Malloc(s) - #define PyObject_Free(p) PyMem_Free(p) - #define PyObject_Realloc(p) PyMem_Realloc(p) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX < 0x030400A1 - #define PyMem_RawMalloc(n) PyMem_Malloc(n) - #define PyMem_RawRealloc(p, n) PyMem_Realloc(p, n) - #define PyMem_RawFree(p) PyMem_Free(p) -#endif -#if CYTHON_COMPILING_IN_PYSTON - #define __Pyx_PyCode_HasFreeVars(co) PyCode_HasFreeVars(co) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) PyFrame_SetLineNumber(frame, lineno) -#else - #define __Pyx_PyCode_HasFreeVars(co) (PyCode_GetNumFree(co) > 0) - #define __Pyx_PyFrame_SetLineNumber(frame, lineno) (frame)->f_lineno = (lineno) -#endif -#if !CYTHON_FAST_THREAD_STATE || PY_VERSION_HEX < 0x02070000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#elif PY_VERSION_HEX >= 0x03060000 - 
#define __Pyx_PyThreadState_Current _PyThreadState_UncheckedGet() -#elif PY_VERSION_HEX >= 0x03000000 - #define __Pyx_PyThreadState_Current PyThreadState_GET() -#else - #define __Pyx_PyThreadState_Current _PyThreadState_Current -#endif -#if PY_VERSION_HEX < 0x030700A2 && !defined(PyThread_tss_create) && !defined(Py_tss_NEEDS_INIT) -#include "pythread.h" -#define Py_tss_NEEDS_INIT 0 -typedef int Py_tss_t; -static CYTHON_INLINE int PyThread_tss_create(Py_tss_t *key) { - *key = PyThread_create_key(); - return 0; -} -static CYTHON_INLINE Py_tss_t * PyThread_tss_alloc(void) { - Py_tss_t *key = (Py_tss_t *)PyObject_Malloc(sizeof(Py_tss_t)); - *key = Py_tss_NEEDS_INIT; - return key; -} -static CYTHON_INLINE void PyThread_tss_free(Py_tss_t *key) { - PyObject_Free(key); -} -static CYTHON_INLINE int PyThread_tss_is_created(Py_tss_t *key) { - return *key != Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE void PyThread_tss_delete(Py_tss_t *key) { - PyThread_delete_key(*key); - *key = Py_tss_NEEDS_INIT; -} -static CYTHON_INLINE int PyThread_tss_set(Py_tss_t *key, void *value) { - return PyThread_set_key_value(*key, value); -} -static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) { - return PyThread_get_key_value(*key); -} -#endif -#if CYTHON_COMPILING_IN_CPYTHON || defined(_PyDict_NewPresized) -#define __Pyx_PyDict_NewPresized(n) ((n <= 8) ? PyDict_New() : _PyDict_NewPresized(n)) -#else -#define __Pyx_PyDict_NewPresized(n) PyDict_New() -#endif -#if PY_MAJOR_VERSION >= 3 || CYTHON_FUTURE_DIVISION - #define __Pyx_PyNumber_Divide(x,y) PyNumber_TrueDivide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceTrueDivide(x,y) -#else - #define __Pyx_PyNumber_Divide(x,y) PyNumber_Divide(x,y) - #define __Pyx_PyNumber_InPlaceDivide(x,y) PyNumber_InPlaceDivide(x,y) -#endif -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 && CYTHON_USE_UNICODE_INTERNALS -#define __Pyx_PyDict_GetItemStr(dict, name) _PyDict_GetItem_KnownHash(dict, name, ((PyASCIIObject *) name)->hash) -#else -#define __Pyx_PyDict_GetItemStr(dict, name) PyDict_GetItem(dict, name) -#endif -#if PY_VERSION_HEX > 0x03030000 && defined(PyUnicode_KIND) - #define CYTHON_PEP393_ENABLED 1 - #if defined(PyUnicode_IS_READY) - #define __Pyx_PyUnicode_READY(op) (likely(PyUnicode_IS_READY(op)) ?\ - 0 : _PyUnicode_Ready((PyObject *)(op))) - #else - #define __Pyx_PyUnicode_READY(op) (0) - #endif - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_LENGTH(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) PyUnicode_READ_CHAR(u, i) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) PyUnicode_MAX_CHAR_VALUE(u) - #define __Pyx_PyUnicode_KIND(u) PyUnicode_KIND(u) - #define __Pyx_PyUnicode_DATA(u) PyUnicode_DATA(u) - #define __Pyx_PyUnicode_READ(k, d, i) PyUnicode_READ(k, d, i) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) PyUnicode_WRITE(k, d, i, ch) - #if defined(PyUnicode_IS_READY) && defined(PyUnicode_GET_SIZE) - #if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x03090000 - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? PyUnicode_GET_LENGTH(u) : ((PyCompactUnicodeObject *)(u))->wstr_length)) - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != (likely(PyUnicode_IS_READY(u)) ? 
PyUnicode_GET_LENGTH(u) : PyUnicode_GET_SIZE(u))) - #endif - #else - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_LENGTH(u)) - #endif -#else - #define CYTHON_PEP393_ENABLED 0 - #define PyUnicode_1BYTE_KIND 1 - #define PyUnicode_2BYTE_KIND 2 - #define PyUnicode_4BYTE_KIND 4 - #define __Pyx_PyUnicode_READY(op) (0) - #define __Pyx_PyUnicode_GET_LENGTH(u) PyUnicode_GET_SIZE(u) - #define __Pyx_PyUnicode_READ_CHAR(u, i) ((Py_UCS4)(PyUnicode_AS_UNICODE(u)[i])) - #define __Pyx_PyUnicode_MAX_CHAR_VALUE(u) ((sizeof(Py_UNICODE) == 2) ? 65535 : 1114111) - #define __Pyx_PyUnicode_KIND(u) (sizeof(Py_UNICODE)) - #define __Pyx_PyUnicode_DATA(u) ((void*)PyUnicode_AS_UNICODE(u)) - #define __Pyx_PyUnicode_READ(k, d, i) ((void)(k), (Py_UCS4)(((Py_UNICODE*)d)[i])) - #define __Pyx_PyUnicode_WRITE(k, d, i, ch) (((void)(k)), ((Py_UNICODE*)d)[i] = ch) - #define __Pyx_PyUnicode_IS_TRUE(u) (0 != PyUnicode_GET_SIZE(u)) -#endif -#if CYTHON_COMPILING_IN_PYPY - #define __Pyx_PyUnicode_Concat(a, b) PyNumber_Add(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) PyNumber_Add(a, b) -#else - #define __Pyx_PyUnicode_Concat(a, b) PyUnicode_Concat(a, b) - #define __Pyx_PyUnicode_ConcatSafe(a, b) ((unlikely((a) == Py_None) || unlikely((b) == Py_None)) ?\ - PyNumber_Add(a, b) : __Pyx_PyUnicode_Concat(a, b)) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyUnicode_Contains) - #define PyUnicode_Contains(u, s) PySequence_Contains(u, s) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyByteArray_Check) - #define PyByteArray_Check(obj) PyObject_TypeCheck(obj, &PyByteArray_Type) -#endif -#if CYTHON_COMPILING_IN_PYPY && !defined(PyObject_Format) - #define PyObject_Format(obj, fmt) PyObject_CallMethod(obj, "__format__", "O", fmt) -#endif -#define __Pyx_PyString_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyString_Check(b) && !PyString_CheckExact(b)))) ? PyNumber_Remainder(a, b) : __Pyx_PyString_Format(a, b)) -#define __Pyx_PyUnicode_FormatSafe(a, b) ((unlikely((a) == Py_None || (PyUnicode_Check(b) && !PyUnicode_CheckExact(b)))) ? 
PyNumber_Remainder(a, b) : PyUnicode_Format(a, b)) -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyString_Format(a, b) PyUnicode_Format(a, b) -#else - #define __Pyx_PyString_Format(a, b) PyString_Format(a, b) -#endif -#if PY_MAJOR_VERSION < 3 && !defined(PyObject_ASCII) - #define PyObject_ASCII(o) PyObject_Repr(o) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBaseString_Type PyUnicode_Type - #define PyStringObject PyUnicodeObject - #define PyString_Type PyUnicode_Type - #define PyString_Check PyUnicode_Check - #define PyString_CheckExact PyUnicode_CheckExact -#ifndef PyObject_Unicode - #define PyObject_Unicode PyObject_Str -#endif -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyBaseString_Check(obj) PyUnicode_Check(obj) - #define __Pyx_PyBaseString_CheckExact(obj) PyUnicode_CheckExact(obj) -#else - #define __Pyx_PyBaseString_Check(obj) (PyString_Check(obj) || PyUnicode_Check(obj)) - #define __Pyx_PyBaseString_CheckExact(obj) (PyString_CheckExact(obj) || PyUnicode_CheckExact(obj)) -#endif -#ifndef PySet_CheckExact - #define PySet_CheckExact(obj) (Py_TYPE(obj) == &PySet_Type) -#endif -#if PY_VERSION_HEX >= 0x030900A4 - #define __Pyx_SET_REFCNT(obj, refcnt) Py_SET_REFCNT(obj, refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SET_SIZE(obj, size) -#else - #define __Pyx_SET_REFCNT(obj, refcnt) Py_REFCNT(obj) = (refcnt) - #define __Pyx_SET_SIZE(obj, size) Py_SIZE(obj) = (size) -#endif -#if CYTHON_ASSUME_SAFE_MACROS - #define __Pyx_PySequence_SIZE(seq) Py_SIZE(seq) -#else - #define __Pyx_PySequence_SIZE(seq) PySequence_Size(seq) -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyIntObject PyLongObject - #define PyInt_Type PyLong_Type - #define PyInt_Check(op) PyLong_Check(op) - #define PyInt_CheckExact(op) PyLong_CheckExact(op) - #define PyInt_FromString PyLong_FromString - #define PyInt_FromUnicode PyLong_FromUnicode - #define PyInt_FromLong PyLong_FromLong - #define PyInt_FromSize_t PyLong_FromSize_t - #define PyInt_FromSsize_t PyLong_FromSsize_t - #define PyInt_AsLong PyLong_AsLong - #define PyInt_AS_LONG PyLong_AS_LONG - #define PyInt_AsSsize_t PyLong_AsSsize_t - #define PyInt_AsUnsignedLongMask PyLong_AsUnsignedLongMask - #define PyInt_AsUnsignedLongLongMask PyLong_AsUnsignedLongLongMask - #define PyNumber_Int PyNumber_Long -#endif -#if PY_MAJOR_VERSION >= 3 - #define PyBoolObject PyLongObject -#endif -#if PY_MAJOR_VERSION >= 3 && CYTHON_COMPILING_IN_PYPY - #ifndef PyUnicode_InternFromString - #define PyUnicode_InternFromString(s) PyUnicode_FromString(s) - #endif -#endif -#if PY_VERSION_HEX < 0x030200A4 - typedef long Py_hash_t; - #define __Pyx_PyInt_FromHash_t PyInt_FromLong - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsHash_t -#else - #define __Pyx_PyInt_FromHash_t PyInt_FromSsize_t - #define __Pyx_PyInt_AsHash_t __Pyx_PyIndex_AsSsize_t -#endif -#if PY_MAJOR_VERSION >= 3 - #define __Pyx_PyMethod_New(func, self, klass) ((self) ? 
((void)(klass), PyMethod_New(func, self)) : __Pyx_NewRef(func)) -#else - #define __Pyx_PyMethod_New(func, self, klass) PyMethod_New(func, self, klass) -#endif -#if CYTHON_USE_ASYNC_SLOTS - #if PY_VERSION_HEX >= 0x030500B1 - #define __Pyx_PyAsyncMethodsStruct PyAsyncMethods - #define __Pyx_PyType_AsAsync(obj) (Py_TYPE(obj)->tp_as_async) - #else - #define __Pyx_PyType_AsAsync(obj) ((__Pyx_PyAsyncMethodsStruct*) (Py_TYPE(obj)->tp_reserved)) - #endif -#else - #define __Pyx_PyType_AsAsync(obj) NULL -#endif -#ifndef __Pyx_PyAsyncMethodsStruct - typedef struct { - unaryfunc am_await; - unaryfunc am_aiter; - unaryfunc am_anext; - } __Pyx_PyAsyncMethodsStruct; -#endif - -#if defined(_WIN32) || defined(WIN32) || defined(MS_WINDOWS) - #if !defined(_USE_MATH_DEFINES) - #define _USE_MATH_DEFINES - #endif -#endif -#include -#ifdef NAN -#define __PYX_NAN() ((float) NAN) -#else -static CYTHON_INLINE float __PYX_NAN() { - float value; - memset(&value, 0xFF, sizeof(value)); - return value; -} -#endif -#if defined(__CYGWIN__) && defined(_LDBL_EQ_DBL) -#define __Pyx_truncl trunc -#else -#define __Pyx_truncl truncl -#endif - -#define __PYX_MARK_ERR_POS(f_index, lineno) \ - { __pyx_filename = __pyx_f[f_index]; (void)__pyx_filename; __pyx_lineno = lineno; (void)__pyx_lineno; __pyx_clineno = __LINE__; (void)__pyx_clineno; } -#define __PYX_ERR(f_index, lineno, Ln_error) \ - { __PYX_MARK_ERR_POS(f_index, lineno) goto Ln_error; } - -#ifndef __PYX_EXTERN_C - #ifdef __cplusplus - #define __PYX_EXTERN_C extern "C" - #else - #define __PYX_EXTERN_C extern - #endif -#endif - -#define __PYX_HAVE__vits__monotonic_align__core -#define __PYX_HAVE_API__vits__monotonic_align__core -/* Early includes */ -#include "pythread.h" -#include -#include -#include -#include "pystate.h" -#ifdef _OPENMP -#include -#endif /* _OPENMP */ - -#if defined(PYREX_WITHOUT_ASSERTIONS) && !defined(CYTHON_WITHOUT_ASSERTIONS) -#define CYTHON_WITHOUT_ASSERTIONS -#endif - -typedef struct {PyObject **p; const char *s; const Py_ssize_t n; const char* encoding; - const char is_unicode; const char is_str; const char intern; } __Pyx_StringTabEntry; - -#define __PYX_DEFAULT_STRING_ENCODING_IS_ASCII 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_UTF8 0 -#define __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT (PY_MAJOR_VERSION >= 3 && __PYX_DEFAULT_STRING_ENCODING_IS_UTF8) -#define __PYX_DEFAULT_STRING_ENCODING "" -#define __Pyx_PyObject_FromString __Pyx_PyBytes_FromString -#define __Pyx_PyObject_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#define __Pyx_uchar_cast(c) ((unsigned char)c) -#define __Pyx_long_cast(x) ((long)x) -#define __Pyx_fits_Py_ssize_t(v, type, is_signed) (\ - (sizeof(type) < sizeof(Py_ssize_t)) ||\ - (sizeof(type) > sizeof(Py_ssize_t) &&\ - likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX) &&\ - (!is_signed || likely(v > (type)PY_SSIZE_T_MIN ||\ - v == (type)PY_SSIZE_T_MIN))) ||\ - (sizeof(type) == sizeof(Py_ssize_t) &&\ - (is_signed || likely(v < (type)PY_SSIZE_T_MAX ||\ - v == (type)PY_SSIZE_T_MAX))) ) -static CYTHON_INLINE int __Pyx_is_valid_index(Py_ssize_t i, Py_ssize_t limit) { - return (size_t) i < (size_t) limit; -} -#if defined (__cplusplus) && __cplusplus >= 201103L - #include - #define __Pyx_sst_abs(value) std::abs(value) -#elif SIZEOF_INT >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) abs(value) -#elif SIZEOF_LONG >= SIZEOF_SIZE_T - #define __Pyx_sst_abs(value) labs(value) -#elif defined (_MSC_VER) - #define __Pyx_sst_abs(value) ((Py_ssize_t)_abs64(value)) -#elif defined (__STDC_VERSION__) && __STDC_VERSION__ >= 
199901L - #define __Pyx_sst_abs(value) llabs(value) -#elif defined (__GNUC__) - #define __Pyx_sst_abs(value) __builtin_llabs(value) -#else - #define __Pyx_sst_abs(value) ((value<0) ? -value : value) -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject*); -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject*, Py_ssize_t* length); -#define __Pyx_PyByteArray_FromString(s) PyByteArray_FromStringAndSize((const char*)s, strlen((const char*)s)) -#define __Pyx_PyByteArray_FromStringAndSize(s, l) PyByteArray_FromStringAndSize((const char*)s, l) -#define __Pyx_PyBytes_FromString PyBytes_FromString -#define __Pyx_PyBytes_FromStringAndSize PyBytes_FromStringAndSize -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char*); -#if PY_MAJOR_VERSION < 3 - #define __Pyx_PyStr_FromString __Pyx_PyBytes_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyBytes_FromStringAndSize -#else - #define __Pyx_PyStr_FromString __Pyx_PyUnicode_FromString - #define __Pyx_PyStr_FromStringAndSize __Pyx_PyUnicode_FromStringAndSize -#endif -#define __Pyx_PyBytes_AsWritableString(s) ((char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableSString(s) ((signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsWritableUString(s) ((unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsString(s) ((const char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsSString(s) ((const signed char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyBytes_AsUString(s) ((const unsigned char*) PyBytes_AS_STRING(s)) -#define __Pyx_PyObject_AsWritableString(s) ((char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableSString(s) ((signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsWritableUString(s) ((unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsSString(s) ((const signed char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_AsUString(s) ((const unsigned char*) __Pyx_PyObject_AsString(s)) -#define __Pyx_PyObject_FromCString(s) __Pyx_PyObject_FromString((const char*)s) -#define __Pyx_PyBytes_FromCString(s) __Pyx_PyBytes_FromString((const char*)s) -#define __Pyx_PyByteArray_FromCString(s) __Pyx_PyByteArray_FromString((const char*)s) -#define __Pyx_PyStr_FromCString(s) __Pyx_PyStr_FromString((const char*)s) -#define __Pyx_PyUnicode_FromCString(s) __Pyx_PyUnicode_FromString((const char*)s) -static CYTHON_INLINE size_t __Pyx_Py_UNICODE_strlen(const Py_UNICODE *u) { - const Py_UNICODE *u_end = u; - while (*u_end++) ; - return (size_t)(u_end - u - 1); -} -#define __Pyx_PyUnicode_FromUnicode(u) PyUnicode_FromUnicode(u, __Pyx_Py_UNICODE_strlen(u)) -#define __Pyx_PyUnicode_FromUnicodeAndLength PyUnicode_FromUnicode -#define __Pyx_PyUnicode_AsUnicode PyUnicode_AsUnicode -#define __Pyx_NewRef(obj) (Py_INCREF(obj), obj) -#define __Pyx_Owned_Py_None(b) __Pyx_NewRef(Py_None) -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b); -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject*); -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject*); -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x); -#define __Pyx_PySequence_Tuple(obj)\ - (likely(PyTuple_CheckExact(obj)) ? 
__Pyx_NewRef(obj) : PySequence_Tuple(obj)) -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject*); -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t); -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject*); -#if CYTHON_ASSUME_SAFE_MACROS -#define __pyx_PyFloat_AsDouble(x) (PyFloat_CheckExact(x) ? PyFloat_AS_DOUBLE(x) : PyFloat_AsDouble(x)) -#else -#define __pyx_PyFloat_AsDouble(x) PyFloat_AsDouble(x) -#endif -#define __pyx_PyFloat_AsFloat(x) ((float) __pyx_PyFloat_AsDouble(x)) -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyNumber_Int(x) (PyLong_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Long(x)) -#else -#define __Pyx_PyNumber_Int(x) (PyInt_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Int(x)) -#endif -#define __Pyx_PyNumber_Float(x) (PyFloat_CheckExact(x) ? __Pyx_NewRef(x) : PyNumber_Float(x)) -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII -static int __Pyx_sys_getdefaultencoding_not_ascii; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - PyObject* ascii_chars_u = NULL; - PyObject* ascii_chars_b = NULL; - const char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - if (strcmp(default_encoding_c, "ascii") == 0) { - __Pyx_sys_getdefaultencoding_not_ascii = 0; - } else { - char ascii_chars[128]; - int c; - for (c = 0; c < 128; c++) { - ascii_chars[c] = c; - } - __Pyx_sys_getdefaultencoding_not_ascii = 1; - ascii_chars_u = PyUnicode_DecodeASCII(ascii_chars, 128, NULL); - if (!ascii_chars_u) goto bad; - ascii_chars_b = PyUnicode_AsEncodedString(ascii_chars_u, default_encoding_c, NULL); - if (!ascii_chars_b || !PyBytes_Check(ascii_chars_b) || memcmp(ascii_chars, PyBytes_AS_STRING(ascii_chars_b), 128) != 0) { - PyErr_Format( - PyExc_ValueError, - "This module compiled with c_string_encoding=ascii, but default encoding '%.200s' is not a superset of ascii.", - default_encoding_c); - goto bad; - } - Py_DECREF(ascii_chars_u); - Py_DECREF(ascii_chars_b); - } - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - Py_XDECREF(ascii_chars_u); - Py_XDECREF(ascii_chars_b); - return -1; -} -#endif -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT && PY_MAJOR_VERSION >= 3 -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_DecodeUTF8(c_str, size, NULL) -#else -#define __Pyx_PyUnicode_FromStringAndSize(c_str, size) PyUnicode_Decode(c_str, size, __PYX_DEFAULT_STRING_ENCODING, NULL) -#if __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -static char* __PYX_DEFAULT_STRING_ENCODING; -static int __Pyx_init_sys_getdefaultencoding_params(void) { - PyObject* sys; - PyObject* default_encoding = NULL; - char* default_encoding_c; - sys = PyImport_ImportModule("sys"); - if (!sys) goto bad; - default_encoding = PyObject_CallMethod(sys, (char*) (const char*) "getdefaultencoding", NULL); - Py_DECREF(sys); - if (!default_encoding) goto bad; - default_encoding_c = PyBytes_AsString(default_encoding); - if (!default_encoding_c) goto bad; - __PYX_DEFAULT_STRING_ENCODING = (char*) malloc(strlen(default_encoding_c) + 1); - if (!__PYX_DEFAULT_STRING_ENCODING) goto bad; - strcpy(__PYX_DEFAULT_STRING_ENCODING, default_encoding_c); - Py_DECREF(default_encoding); - return 0; -bad: - Py_XDECREF(default_encoding); - return -1; -} 
-#endif -#endif - - -/* Test for GCC > 2.95 */ -#if defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))) - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) -#else /* !__GNUC__ or GCC < 2.95 */ - #define likely(x) (x) - #define unlikely(x) (x) -#endif /* __GNUC__ */ -static CYTHON_INLINE void __Pyx_pretend_to_initialize(void* ptr) { (void)ptr; } - -static PyObject *__pyx_m = NULL; -static PyObject *__pyx_d; -static PyObject *__pyx_b; -static PyObject *__pyx_cython_runtime = NULL; -static PyObject *__pyx_empty_tuple; -static PyObject *__pyx_empty_bytes; -static PyObject *__pyx_empty_unicode; -static int __pyx_lineno; -static int __pyx_clineno = 0; -static const char * __pyx_cfilenm= __FILE__; -static const char *__pyx_filename; - - -static const char *__pyx_f[] = { - "core.pyx", - "stringsource", -}; -/* NoFastGil.proto */ -#define __Pyx_PyGILState_Ensure PyGILState_Ensure -#define __Pyx_PyGILState_Release PyGILState_Release -#define __Pyx_FastGIL_Remember() -#define __Pyx_FastGIL_Forget() -#define __Pyx_FastGilFuncInit() - -/* MemviewSliceStruct.proto */ -struct __pyx_memoryview_obj; -typedef struct { - struct __pyx_memoryview_obj *memview; - char *data; - Py_ssize_t shape[8]; - Py_ssize_t strides[8]; - Py_ssize_t suboffsets[8]; -} __Pyx_memviewslice; -#define __Pyx_MemoryView_Len(m) (m.shape[0]) - -/* Atomics.proto */ -#include <pythread.h> -#ifndef CYTHON_ATOMICS - #define CYTHON_ATOMICS 1 -#endif -#define __PYX_CYTHON_ATOMICS_ENABLED() CYTHON_ATOMICS -#define __pyx_atomic_int_type int -#if CYTHON_ATOMICS && (__GNUC__ >= 5 || (__GNUC__ == 4 &&\ - (__GNUC_MINOR__ > 1 ||\ - (__GNUC_MINOR__ == 1 && __GNUC_PATCHLEVEL__ >= 2)))) - #define __pyx_atomic_incr_aligned(value) __sync_fetch_and_add(value, 1) - #define __pyx_atomic_decr_aligned(value) __sync_fetch_and_sub(value, 1) - #ifdef __PYX_DEBUG_ATOMICS - #warning "Using GNU atomics" - #endif -#elif CYTHON_ATOMICS && defined(_MSC_VER) && CYTHON_COMPILING_IN_NOGIL - #include <intrin.h> - #undef __pyx_atomic_int_type - #define __pyx_atomic_int_type long - #pragma intrinsic (_InterlockedExchangeAdd) - #define __pyx_atomic_incr_aligned(value) _InterlockedExchangeAdd(value, 1) - #define __pyx_atomic_decr_aligned(value) _InterlockedExchangeAdd(value, -1) - #ifdef __PYX_DEBUG_ATOMICS - #pragma message ("Using MSVC atomics") - #endif -#else - #undef CYTHON_ATOMICS - #define CYTHON_ATOMICS 0 - #ifdef __PYX_DEBUG_ATOMICS - #warning "Not using atomics" - #endif -#endif -typedef volatile __pyx_atomic_int_type __pyx_atomic_int; -#if CYTHON_ATOMICS - #define __pyx_add_acquisition_count(memview)\ - __pyx_atomic_incr_aligned(__pyx_get_slice_count_pointer(memview)) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_atomic_decr_aligned(__pyx_get_slice_count_pointer(memview)) -#else - #define __pyx_add_acquisition_count(memview)\ - __pyx_add_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) - #define __pyx_sub_acquisition_count(memview)\ - __pyx_sub_acquisition_count_locked(__pyx_get_slice_count_pointer(memview), memview->lock) -#endif - -/* ForceInitThreads.proto */ -#ifndef __PYX_FORCE_INIT_THREADS - #define __PYX_FORCE_INIT_THREADS 0 -#endif - -/* BufferFormatStructs.proto */ -#define IS_UNSIGNED(type) (((type) -1) > 0) -struct __Pyx_StructField_; -#define __PYX_BUF_FLAGS_PACKED_STRUCT (1 << 0) -typedef struct { - const char* name; - struct __Pyx_StructField_* fields; - size_t size; - size_t arraysize[8]; - int ndim; - char typegroup; - char is_unsigned; - int flags; -}
__Pyx_TypeInfo; -typedef struct __Pyx_StructField_ { - __Pyx_TypeInfo* type; - const char* name; - size_t offset; -} __Pyx_StructField; -typedef struct { - __Pyx_StructField* field; - size_t parent_offset; -} __Pyx_BufFmt_StackElem; -typedef struct { - __Pyx_StructField root; - __Pyx_BufFmt_StackElem* head; - size_t fmt_offset; - size_t new_count, enc_count; - size_t struct_alignment; - int is_complex; - char enc_type; - char new_packmode; - char enc_packmode; - char is_valid_array; -} __Pyx_BufFmt_Context; - - -/*--- Type declarations ---*/ -struct __pyx_array_obj; -struct __pyx_MemviewEnum_obj; -struct __pyx_memoryview_obj; -struct __pyx_memoryviewslice_obj; -struct __pyx_opt_args_4vits_15monotonic_align_4core_maximum_path_each; - -/* "vits/monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ -struct __pyx_opt_args_4vits_15monotonic_align_4core_maximum_path_each { - int __pyx_n; - float max_neg_val; -}; - -/* "View.MemoryView":106 - * - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ -struct __pyx_array_obj { - PyObject_HEAD - struct __pyx_vtabstruct_array *__pyx_vtab; - char *data; - Py_ssize_t len; - char *format; - int ndim; - Py_ssize_t *_shape; - Py_ssize_t *_strides; - Py_ssize_t itemsize; - PyObject *mode; - PyObject *_format; - void (*callback_free_data)(void *); - int free_data; - int dtype_is_object; -}; - - -/* "View.MemoryView":280 - * - * @cname('__pyx_MemviewEnum') - * cdef class Enum(object): # <<<<<<<<<<<<<< - * cdef object name - * def __init__(self, name): - */ -struct __pyx_MemviewEnum_obj { - PyObject_HEAD - PyObject *name; -}; - - -/* "View.MemoryView":331 - * - * @cname('__pyx_memoryview') - * cdef class memoryview(object): # <<<<<<<<<<<<<< - * - * cdef object obj - */ -struct __pyx_memoryview_obj { - PyObject_HEAD - struct __pyx_vtabstruct_memoryview *__pyx_vtab; - PyObject *obj; - PyObject *_size; - PyObject *_array_interface; - PyThread_type_lock lock; - __pyx_atomic_int acquisition_count[2]; - __pyx_atomic_int *acquisition_count_aligned_p; - Py_buffer view; - int flags; - int dtype_is_object; - __Pyx_TypeInfo *typeinfo; -}; - - -/* "View.MemoryView":967 - * - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ -struct __pyx_memoryviewslice_obj { - struct __pyx_memoryview_obj __pyx_base; - __Pyx_memviewslice from_slice; - PyObject *from_object; - PyObject *(*to_object_func)(char *); - int (*to_dtype_func)(char *, PyObject *); -}; - - - -/* "View.MemoryView":106 - * - * @cname("__pyx_array") - * cdef class array: # <<<<<<<<<<<<<< - * - * cdef: - */ - -struct __pyx_vtabstruct_array { - PyObject *(*get_memview)(struct __pyx_array_obj *); -}; -static struct __pyx_vtabstruct_array *__pyx_vtabptr_array; - - -/* "View.MemoryView":331 - * - * @cname('__pyx_memoryview') - * cdef class memoryview(object): # <<<<<<<<<<<<<< - * - * cdef object obj - */ - -struct __pyx_vtabstruct_memoryview { - char *(*get_item_pointer)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*is_slice)(struct __pyx_memoryview_obj *, PyObject *); - PyObject *(*setitem_slice_assignment)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*setitem_slice_assign_scalar)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, 
PyObject *); - PyObject *(*setitem_indexed)(struct __pyx_memoryview_obj *, PyObject *, PyObject *); - PyObject *(*convert_item_to_object)(struct __pyx_memoryview_obj *, char *); - PyObject *(*assign_item_from_object)(struct __pyx_memoryview_obj *, char *, PyObject *); -}; -static struct __pyx_vtabstruct_memoryview *__pyx_vtabptr_memoryview; - - -/* "View.MemoryView":967 - * - * @cname('__pyx_memoryviewslice') - * cdef class _memoryviewslice(memoryview): # <<<<<<<<<<<<<< - * "Internal class for passing memoryview slices to Python" - * - */ - -struct __pyx_vtabstruct__memoryviewslice { - struct __pyx_vtabstruct_memoryview __pyx_base; -}; -static struct __pyx_vtabstruct__memoryviewslice *__pyx_vtabptr__memoryviewslice; - -/* --- Runtime support code (head) --- */ -/* Refnanny.proto */ -#ifndef CYTHON_REFNANNY - #define CYTHON_REFNANNY 0 -#endif -#if CYTHON_REFNANNY - typedef struct { - void (*INCREF)(void*, PyObject*, int); - void (*DECREF)(void*, PyObject*, int); - void (*GOTREF)(void*, PyObject*, int); - void (*GIVEREF)(void*, PyObject*, int); - void* (*SetupContext)(const char*, int, const char*); - void (*FinishContext)(void**); - } __Pyx_RefNannyAPIStruct; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNanny = NULL; - static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname); - #define __Pyx_RefNannyDeclarations void *__pyx_refnanny = NULL; -#ifdef WITH_THREAD - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - if (acquire_gil) {\ - PyGILState_STATE __pyx_gilstate_save = PyGILState_Ensure();\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - PyGILState_Release(__pyx_gilstate_save);\ - } else {\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__);\ - } -#else - #define __Pyx_RefNannySetupContext(name, acquire_gil)\ - __pyx_refnanny = __Pyx_RefNanny->SetupContext((name), __LINE__, __FILE__) -#endif - #define __Pyx_RefNannyFinishContext()\ - __Pyx_RefNanny->FinishContext(&__pyx_refnanny) - #define __Pyx_INCREF(r) __Pyx_RefNanny->INCREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_DECREF(r) __Pyx_RefNanny->DECREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GOTREF(r) __Pyx_RefNanny->GOTREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_GIVEREF(r) __Pyx_RefNanny->GIVEREF(__pyx_refnanny, (PyObject *)(r), __LINE__) - #define __Pyx_XINCREF(r) do { if((r) != NULL) {__Pyx_INCREF(r); }} while(0) - #define __Pyx_XDECREF(r) do { if((r) != NULL) {__Pyx_DECREF(r); }} while(0) - #define __Pyx_XGOTREF(r) do { if((r) != NULL) {__Pyx_GOTREF(r); }} while(0) - #define __Pyx_XGIVEREF(r) do { if((r) != NULL) {__Pyx_GIVEREF(r);}} while(0) -#else - #define __Pyx_RefNannyDeclarations - #define __Pyx_RefNannySetupContext(name, acquire_gil) - #define __Pyx_RefNannyFinishContext() - #define __Pyx_INCREF(r) Py_INCREF(r) - #define __Pyx_DECREF(r) Py_DECREF(r) - #define __Pyx_GOTREF(r) - #define __Pyx_GIVEREF(r) - #define __Pyx_XINCREF(r) Py_XINCREF(r) - #define __Pyx_XDECREF(r) Py_XDECREF(r) - #define __Pyx_XGOTREF(r) - #define __Pyx_XGIVEREF(r) -#endif -#define __Pyx_XDECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_XDECREF(tmp);\ - } while (0) -#define __Pyx_DECREF_SET(r, v) do {\ - PyObject *tmp = (PyObject *) r;\ - r = v; __Pyx_DECREF(tmp);\ - } while (0) -#define __Pyx_CLEAR(r) do { PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);} while(0) -#define __Pyx_XCLEAR(r) do { if((r) != NULL) {PyObject* tmp = ((PyObject*)(r)); r = NULL; __Pyx_DECREF(tmp);}} while(0) - 
-/* PyObjectGetAttrStr.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GetAttrStr(o,n) PyObject_GetAttr(o,n) -#endif - -/* GetBuiltinName.proto */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name); - -/* MemviewSliceInit.proto */ -#define __Pyx_BUF_MAX_NDIMS %(BUF_MAX_NDIMS)d -#define __Pyx_MEMVIEW_DIRECT 1 -#define __Pyx_MEMVIEW_PTR 2 -#define __Pyx_MEMVIEW_FULL 4 -#define __Pyx_MEMVIEW_CONTIG 8 -#define __Pyx_MEMVIEW_STRIDED 16 -#define __Pyx_MEMVIEW_FOLLOW 32 -#define __Pyx_IS_C_CONTIG 1 -#define __Pyx_IS_F_CONTIG 2 -static int __Pyx_init_memviewslice( - struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference); -static CYTHON_INLINE int __pyx_add_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); -static CYTHON_INLINE int __pyx_sub_acquisition_count_locked( - __pyx_atomic_int *acquisition_count, PyThread_type_lock lock); -#define __pyx_get_slice_count_pointer(memview) (memview->acquisition_count_aligned_p) -#define __pyx_get_slice_count(memview) (*__pyx_get_slice_count_pointer(memview)) -#define __PYX_INC_MEMVIEW(slice, have_gil) __Pyx_INC_MEMVIEW(slice, have_gil, __LINE__) -#define __PYX_XDEC_MEMVIEW(slice, have_gil) __Pyx_XDEC_MEMVIEW(slice, have_gil, __LINE__) -static CYTHON_INLINE void __Pyx_INC_MEMVIEW(__Pyx_memviewslice *, int, int); -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *, int, int); - -/* RaiseArgTupleInvalid.proto */ -static void __Pyx_RaiseArgtupleInvalid(const char* func_name, int exact, - Py_ssize_t num_min, Py_ssize_t num_max, Py_ssize_t num_found); - -/* RaiseDoubleKeywords.proto */ -static void __Pyx_RaiseDoubleKeywordsError(const char* func_name, PyObject* kw_name); - -/* ParseKeywords.proto */ -static int __Pyx_ParseOptionalKeywords(PyObject *kwds, PyObject **argnames[],\ - PyObject *kwds2, PyObject *values[], Py_ssize_t num_pos_args,\ - const char* function_name); - -/* None.proto */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname); - -/* ArgTypeTest.proto */ -#define __Pyx_ArgTypeTest(obj, type, none_allowed, name, exact)\ - ((likely((Py_TYPE(obj) == type) | (none_allowed && (obj == Py_None)))) ? 
1 :\ - __Pyx__ArgTypeTest(obj, type, name, exact)) -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact); - -/* PyObjectCall.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw); -#else -#define __Pyx_PyObject_Call(func, arg, kw) PyObject_Call(func, arg, kw) -#endif - -/* PyThreadStateGet.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyThreadState_declare PyThreadState *__pyx_tstate; -#define __Pyx_PyThreadState_assign __pyx_tstate = __Pyx_PyThreadState_Current; -#define __Pyx_PyErr_Occurred() __pyx_tstate->curexc_type -#else -#define __Pyx_PyThreadState_declare -#define __Pyx_PyThreadState_assign -#define __Pyx_PyErr_Occurred() PyErr_Occurred() -#endif - -/* PyErrFetchRestore.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_Clear() __Pyx_ErrRestore(NULL, NULL, NULL) -#define __Pyx_ErrRestoreWithState(type, value, tb) __Pyx_ErrRestoreInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) __Pyx_ErrFetchInState(PyThreadState_GET(), type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) __Pyx_ErrRestoreInState(__pyx_tstate, type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) __Pyx_ErrFetchInState(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_PyErr_SetNone(exc) (Py_INCREF(exc), __Pyx_ErrRestore((exc), NULL, NULL)) -#else -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#endif -#else -#define __Pyx_PyErr_Clear() PyErr_Clear() -#define __Pyx_PyErr_SetNone(exc) PyErr_SetNone(exc) -#define __Pyx_ErrRestoreWithState(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchWithState(type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestoreInState(tstate, type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetchInState(tstate, type, value, tb) PyErr_Fetch(type, value, tb) -#define __Pyx_ErrRestore(type, value, tb) PyErr_Restore(type, value, tb) -#define __Pyx_ErrFetch(type, value, tb) PyErr_Fetch(type, value, tb) -#endif - -/* RaiseException.proto */ -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause); - -/* PyCFunctionFastCall.proto */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject *__Pyx_PyCFunction_FastCall(PyObject *func, PyObject **args, Py_ssize_t nargs); -#else -#define __Pyx_PyCFunction_FastCall(func, args, nargs) (assert(0), NULL) -#endif - -/* PyFunctionFastCall.proto */ -#if CYTHON_FAST_PYCALL -#define __Pyx_PyFunction_FastCall(func, args, nargs)\ - __Pyx_PyFunction_FastCallDict((func), (args), (nargs), NULL) -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs); -#else -#define __Pyx_PyFunction_FastCallDict(func, args, nargs, kwargs) _PyFunction_FastCallDict(func, args, nargs, kwargs) -#endif -#define __Pyx_BUILD_ASSERT_EXPR(cond)\ - (sizeof(char [1 - 2*!(cond)]) - 1) -#ifndef Py_MEMBER_SIZE -#define Py_MEMBER_SIZE(type, member) sizeof(((type *)0)->member) -#endif -#if CYTHON_FAST_PYCALL - static size_t __pyx_pyframe_localsplus_offset = 0; - #include "frameobject.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - 
#define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif - #define __Pxy_PyFrame_Initialize_Offsets()\ - ((void)__Pyx_BUILD_ASSERT_EXPR(sizeof(PyFrameObject) == offsetof(PyFrameObject, f_localsplus) + Py_MEMBER_SIZE(PyFrameObject, f_localsplus)),\ - (void)(__pyx_pyframe_localsplus_offset = ((size_t)PyFrame_Type.tp_basicsize) - Py_MEMBER_SIZE(PyFrameObject, f_localsplus))) - #define __Pyx_PyFrame_GetLocalsplus(frame)\ - (assert(__pyx_pyframe_localsplus_offset), (PyObject **)(((char *)(frame)) + __pyx_pyframe_localsplus_offset)) -#endif // CYTHON_FAST_PYCALL -#endif - -/* PyObjectCall2Args.proto */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2); - -/* PyObjectCallMethO.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg); -#endif - -/* PyObjectCallOneArg.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg); - -/* IncludeStringH.proto */ -#include <string.h> - -/* BytesEquals.proto */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals); - -/* UnicodeEquals.proto */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals); - -/* StrEquals.proto */ -#if PY_MAJOR_VERSION >= 3 -#define __Pyx_PyString_Equals __Pyx_PyUnicode_Equals -#else -#define __Pyx_PyString_Equals __Pyx_PyBytes_Equals -#endif - -/* DivInt[Py_ssize_t].proto */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t, Py_ssize_t); - -/* UnaryNegOverflows.proto */ -#define UNARY_NEG_WOULD_OVERFLOW(x)\ - (((x) < 0) & ((unsigned long)(x) == 0-(unsigned long)(x))) - -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *); /*proto*/ -/* GetAttr.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *, PyObject *); - -/* GetItemInt.proto */ -#define __Pyx_GetItemInt(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Fast(o, (Py_ssize_t)i, is_list, wraparound, boundscheck) :\ - (is_list ?
(PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL) :\ - __Pyx_GetItemInt_Generic(o, to_py_func(i)))) -#define __Pyx_GetItemInt_List(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_List_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "list index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -#define __Pyx_GetItemInt_Tuple(o, i, type, is_signed, to_py_func, is_list, wraparound, boundscheck)\ - (__Pyx_fits_Py_ssize_t(i, type, is_signed) ?\ - __Pyx_GetItemInt_Tuple_Fast(o, (Py_ssize_t)i, wraparound, boundscheck) :\ - (PyErr_SetString(PyExc_IndexError, "tuple index out of range"), (PyObject*)NULL)) -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - int wraparound, int boundscheck); -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j); -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, - int is_list, int wraparound, int boundscheck); - -/* ObjectGetItem.proto */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key); -#else -#define __Pyx_PyObject_GetItem(obj, key) PyObject_GetItem(obj, key) -#endif - -/* decode_c_string_utf16.proto */ -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 0; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16LE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = -1; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} -static CYTHON_INLINE PyObject *__Pyx_PyUnicode_DecodeUTF16BE(const char *s, Py_ssize_t size, const char *errors) { - int byteorder = 1; - return PyUnicode_DecodeUTF16(s, size, errors, &byteorder); -} - -/* decode_c_string.proto */ -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)); - -/* PyErrExceptionMatches.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_PyErr_ExceptionMatches(err) __Pyx_PyErr_ExceptionMatchesInState(__pyx_tstate, err) -static CYTHON_INLINE int __Pyx_PyErr_ExceptionMatchesInState(PyThreadState* tstate, PyObject* err); -#else -#define __Pyx_PyErr_ExceptionMatches(err) PyErr_ExceptionMatches(err) -#endif - -/* GetAttr3.proto */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *, PyObject *, PyObject *); - -/* PyDictVersioning.proto */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -#define __PYX_DICT_VERSION_INIT ((PY_UINT64_T) -1) -#define __PYX_GET_DICT_VERSION(dict) (((PyDictObject*)(dict))->ma_version_tag) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var)\ - (version_var) = __PYX_GET_DICT_VERSION(dict);\ - (cache_var) = (value); -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - if (likely(__PYX_GET_DICT_VERSION(DICT) == __pyx_dict_version)) {\ - (VAR) = __pyx_dict_cached_value;\ - } else {\ - (VAR) = __pyx_dict_cached_value = (LOOKUP);\ - __pyx_dict_version = __PYX_GET_DICT_VERSION(DICT);\ - }\ -} -static 
CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj); -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj); -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version); -#else -#define __PYX_GET_DICT_VERSION(dict) (0) -#define __PYX_UPDATE_DICT_CACHE(dict, value, cache_var, version_var) -#define __PYX_PY_DICT_LOOKUP_IF_MODIFIED(VAR, DICT, LOOKUP) (VAR) = (LOOKUP); -#endif - -/* GetModuleGlobalName.proto */ -#if CYTHON_USE_DICT_VERSIONS -#define __Pyx_GetModuleGlobalName(var, name) {\ - static PY_UINT64_T __pyx_dict_version = 0;\ - static PyObject *__pyx_dict_cached_value = NULL;\ - (var) = (likely(__pyx_dict_version == __PYX_GET_DICT_VERSION(__pyx_d))) ?\ - (likely(__pyx_dict_cached_value) ? __Pyx_NewRef(__pyx_dict_cached_value) : __Pyx_GetBuiltinName(name)) :\ - __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -#define __Pyx_GetModuleGlobalNameUncached(var, name) {\ - PY_UINT64_T __pyx_dict_version;\ - PyObject *__pyx_dict_cached_value;\ - (var) = __Pyx__GetModuleGlobalName(name, &__pyx_dict_version, &__pyx_dict_cached_value);\ -} -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value); -#else -#define __Pyx_GetModuleGlobalName(var, name) (var) = __Pyx__GetModuleGlobalName(name) -#define __Pyx_GetModuleGlobalNameUncached(var, name) (var) = __Pyx__GetModuleGlobalName(name) -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name); -#endif - -/* RaiseTooManyValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected); - -/* RaiseNeedMoreValuesToUnpack.proto */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index); - -/* RaiseNoneIterError.proto */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void); - -/* ExtTypeTest.proto */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type); - -/* GetTopmostException.proto */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * __Pyx_PyErr_GetTopmostException(PyThreadState *tstate); -#endif - -/* SaveResetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSave(type, value, tb) __Pyx__ExceptionSave(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#define __Pyx_ExceptionReset(type, value, tb) __Pyx__ExceptionReset(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb); -#else -#define __Pyx_ExceptionSave(type, value, tb) PyErr_GetExcInfo(type, value, tb) -#define __Pyx_ExceptionReset(type, value, tb) PyErr_SetExcInfo(type, value, tb) -#endif - -/* GetException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_GetException(type, value, tb) __Pyx__GetException(__pyx_tstate, type, value, tb) -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* SwapException.proto */ -#if CYTHON_FAST_THREAD_STATE -#define __Pyx_ExceptionSwap(type, value, tb) __Pyx__ExceptionSwap(__pyx_tstate, type, value, tb) -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb); -#else -static CYTHON_INLINE void 
__Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb); -#endif - -/* Import.proto */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level); - -/* FastTypeChecks.proto */ -#if CYTHON_COMPILING_IN_CPYTHON -#define __Pyx_TypeCheck(obj, type) __Pyx_IsSubtype(Py_TYPE(obj), (PyTypeObject *)type) -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject *type); -static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *type1, PyObject *type2); -#else -#define __Pyx_TypeCheck(obj, type) PyObject_TypeCheck(obj, (PyTypeObject *)type) -#define __Pyx_PyErr_GivenExceptionMatches(err, type) PyErr_GivenExceptionMatches(err, type) -#define __Pyx_PyErr_GivenExceptionMatches2(err, type1, type2) (PyErr_GivenExceptionMatches(err, type1) || PyErr_GivenExceptionMatches(err, type2)) -#endif -#define __Pyx_PyException_Check(obj) __Pyx_TypeCheck(obj, PyExc_Exception) - -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -/* ListCompAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_ListComp_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len)) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_ListComp_Append(L,x) PyList_Append(L,x) -#endif - -/* PyIntBinop.proto */ -#if !CYTHON_COMPILING_IN_PYPY -static PyObject* __Pyx_PyInt_AddObjC(PyObject *op1, PyObject *op2, long intval, int inplace, int zerodivision_check); -#else -#define __Pyx_PyInt_AddObjC(op1, op2, intval, inplace, zerodivision_check)\ - (inplace ? PyNumber_InPlaceAdd(op1, op2) : PyNumber_Add(op1, op2)) -#endif - -/* ListExtend.proto */ -static CYTHON_INLINE int __Pyx_PyList_Extend(PyObject* L, PyObject* v) { -#if CYTHON_COMPILING_IN_CPYTHON - PyObject* none = _PyList_Extend((PyListObject*)L, v); - if (unlikely(!none)) - return -1; - Py_DECREF(none); - return 0; -#else - return PyList_SetSlice(L, PY_SSIZE_T_MAX, PY_SSIZE_T_MAX, v); -#endif -} - -/* ListAppend.proto */ -#if CYTHON_USE_PYLIST_INTERNALS && CYTHON_ASSUME_SAFE_MACROS -static CYTHON_INLINE int __Pyx_PyList_Append(PyObject* list, PyObject* x) { - PyListObject* L = (PyListObject*) list; - Py_ssize_t len = Py_SIZE(list); - if (likely(L->allocated > len) & likely(len > (L->allocated >> 1))) { - Py_INCREF(x); - PyList_SET_ITEM(list, len, x); - __Pyx_SET_SIZE(list, len + 1); - return 0; - } - return PyList_Append(list, x); -} -#else -#define __Pyx_PyList_Append(L,x) PyList_Append(L,x) -#endif - -/* DivInt[long].proto */ -static CYTHON_INLINE long __Pyx_div_long(long, long); - -/* PySequenceContains.proto */ -static CYTHON_INLINE int __Pyx_PySequence_ContainsTF(PyObject* item, PyObject* seq, int eq) { - int result = PySequence_Contains(seq, item); - return unlikely(result < 0) ? 
result : (result == (eq == Py_EQ)); -} - -/* ImportFrom.proto */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name); - -/* HasAttr.proto */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *, PyObject *); - -/* PyObject_GenericGetAttrNoDict.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttrNoDict PyObject_GenericGetAttr -#endif - -/* PyObject_GenericGetAttr.proto */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name); -#else -#define __Pyx_PyObject_GenericGetAttr PyObject_GenericGetAttr -#endif - -/* SetVTable.proto */ -static int __Pyx_SetVtable(PyObject *dict, void *vtable); - -/* PyObjectGetAttrStrNoError.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name); - -/* SetupReduce.proto */ -static int __Pyx_setup_reduce(PyObject* type_obj); - -/* CLineInTraceback.proto */ -#ifdef CYTHON_CLINE_IN_TRACEBACK -#define __Pyx_CLineForTraceback(tstate, c_line) (((CYTHON_CLINE_IN_TRACEBACK)) ? c_line : 0) -#else -static int __Pyx_CLineForTraceback(PyThreadState *tstate, int c_line); -#endif - -/* CodeObjectCache.proto */ -typedef struct { - PyCodeObject* code_object; - int code_line; -} __Pyx_CodeObjectCacheEntry; -struct __Pyx_CodeObjectCache { - int count; - int max_count; - __Pyx_CodeObjectCacheEntry* entries; -}; -static struct __Pyx_CodeObjectCache __pyx_code_cache = {0,0,NULL}; -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line); -static PyCodeObject *__pyx_find_code_object(int code_line); -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object); - -/* AddTraceback.proto */ -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename); - -#if PY_MAJOR_VERSION < 3 - static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags); - static void __Pyx_ReleaseBuffer(Py_buffer *view); -#else - #define __Pyx_GetBuffer PyObject_GetBuffer - #define __Pyx_ReleaseBuffer PyBuffer_Release -#endif - - -/* BufferStructDeclare.proto */ -typedef struct { - Py_ssize_t shape, strides, suboffsets; -} __Pyx_Buf_DimInfo; -typedef struct { - size_t refcount; - Py_buffer pybuffer; -} __Pyx_Buffer; -typedef struct { - __Pyx_Buffer *rcbuffer; - char *data; - __Pyx_Buf_DimInfo diminfo[8]; -} __Pyx_LocalBuf_ND; - -/* MemviewSliceIsContig.proto */ -static int __pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim); - -/* OverlappingSlices.proto */ -static int __pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize); - -/* Capsule.proto */ -static CYTHON_INLINE PyObject *__pyx_capsule_create(void *p, const char *sig); - -/* IsLittleEndian.proto */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void); - -/* BufferFormatCheck.proto */ -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts); -static void __Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type); - -/* TypeInfoCompare.proto */ -static int __pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b); - -/* MemviewSliceValidateAndInit.proto */ -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int 
c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *, int writable_flag); - -/* ObjectToMemviewSlice.proto */ -static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *, int writable_flag); - -/* GCCDiagnostics.proto */ -#if defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 6)) -#define __Pyx_HAS_GCC_DIAGNOSTIC -#endif - -/* MemviewSliceCopyTemplate.proto */ -static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value); - -/* CIntFromPy.proto */ -static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *); - -/* CIntToPy.proto */ -static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value); - -/* CIntFromPy.proto */ -static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *); - -/* CIntFromPy.proto */ -static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *); - -/* CheckBinaryVersion.proto */ -static int __Pyx_check_binary_version(void); - -/* InitStrings.proto */ -static int __Pyx_InitStrings(__Pyx_StringTabEntry *t); - -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self); /* proto*/ -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto*/ -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src); /* proto*/ -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp); /* proto*/ -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value); /* proto*/ - -/* Module declarations from 'cython.view' */ - -/* Module declarations from 'cython' */ - -/* Module declarations from 'vits.monotonic_align.core' */ -static PyTypeObject *__pyx_array_type = 0; -static PyTypeObject *__pyx_MemviewEnum_type = 0; -static PyTypeObject *__pyx_memoryview_type = 0; -static PyTypeObject *__pyx_memoryviewslice_type = 0; -static PyObject *generic = 0; -static PyObject *strided = 0; -static PyObject *indirect = 0; -static PyObject *contiguous = 0; -static 
PyObject *indirect_contiguous = 0; -static int __pyx_memoryview_thread_locks_used; -static PyThread_type_lock __pyx_memoryview_thread_locks[8]; -static void __pyx_f_4vits_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice, __Pyx_memviewslice, int, int, struct __pyx_opt_args_4vits_15monotonic_align_4core_maximum_path_each *__pyx_optional_args); /*proto*/ -static void __pyx_f_4vits_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, __Pyx_memviewslice, int __pyx_skip_dispatch); /*proto*/ -static struct __pyx_array_obj *__pyx_array_new(PyObject *, Py_ssize_t, char *, char *, char *); /*proto*/ -static void *__pyx_align_pointer(void *, size_t); /*proto*/ -static PyObject *__pyx_memoryview_new(PyObject *, int, int, __Pyx_TypeInfo *); /*proto*/ -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *); /*proto*/ -static PyObject *_unellipsify(PyObject *, int); /*proto*/ -static PyObject *assert_direct_dimensions(Py_ssize_t *, int); /*proto*/ -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *, PyObject *); /*proto*/ -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int *, Py_ssize_t, Py_ssize_t, Py_ssize_t, int, int, int, int); /*proto*/ -static char *__pyx_pybuffer_index(Py_buffer *, char *, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memslice_transpose(__Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice, int, PyObject *(*)(char *), int (*)(char *, PyObject *), int); /*proto*/ -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *); /*proto*/ -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *, __Pyx_memviewslice *); /*proto*/ -static Py_ssize_t abs_py_ssize_t(Py_ssize_t); /*proto*/ -static char __pyx_get_best_slice_order(__Pyx_memviewslice *, int); /*proto*/ -static void _copy_strided_to_strided(char *, Py_ssize_t *, char *, Py_ssize_t *, Py_ssize_t *, Py_ssize_t *, int, size_t); /*proto*/ -static void copy_strided_to_strided(__Pyx_memviewslice *, __Pyx_memviewslice *, int, size_t); /*proto*/ -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *, int); /*proto*/ -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *, Py_ssize_t *, Py_ssize_t, int, char); /*proto*/ -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *, __Pyx_memviewslice *, char, int); /*proto*/ -static int __pyx_memoryview_err_extents(int, Py_ssize_t, Py_ssize_t); /*proto*/ -static int __pyx_memoryview_err_dim(PyObject *, char *, int); /*proto*/ -static int __pyx_memoryview_err(PyObject *, char *); /*proto*/ -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice, __Pyx_memviewslice, int, int, int); /*proto*/ -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *, int, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void __pyx_memoryview_refcount_objects_in_slice(char *, Py_ssize_t *, Py_ssize_t *, int, int); /*proto*/ -static void 
__pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *, int, size_t, void *, int); /*proto*/ -static void __pyx_memoryview__slice_assign_scalar(char *, Py_ssize_t *, Py_ssize_t *, int, size_t, void *); /*proto*/ -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *, PyObject *); /*proto*/ -static __Pyx_TypeInfo __Pyx_TypeInfo_int = { "int", NULL, sizeof(int), { 0 }, 0, IS_UNSIGNED(int) ? 'U' : 'I', IS_UNSIGNED(int), 0 }; -static __Pyx_TypeInfo __Pyx_TypeInfo_float = { "float", NULL, sizeof(float), { 0 }, 0, 'R', 0, 0 }; -#define __Pyx_MODULE_NAME "vits.monotonic_align.core" -extern int __pyx_module_is_main_vits__monotonic_align__core; -int __pyx_module_is_main_vits__monotonic_align__core = 0; - -/* Implementation of 'vits.monotonic_align.core' */ -static PyObject *__pyx_builtin_range; -static PyObject *__pyx_builtin_ValueError; -static PyObject *__pyx_builtin_MemoryError; -static PyObject *__pyx_builtin_enumerate; -static PyObject *__pyx_builtin_TypeError; -static PyObject *__pyx_builtin_Ellipsis; -static PyObject *__pyx_builtin_id; -static PyObject *__pyx_builtin_IndexError; -static const char __pyx_k_O[] = "O"; -static const char __pyx_k_c[] = "c"; -static const char __pyx_k_id[] = "id"; -static const char __pyx_k_new[] = "__new__"; -static const char __pyx_k_obj[] = "obj"; -static const char __pyx_k_base[] = "base"; -static const char __pyx_k_dict[] = "__dict__"; -static const char __pyx_k_main[] = "__main__"; -static const char __pyx_k_mode[] = "mode"; -static const char __pyx_k_name[] = "name"; -static const char __pyx_k_ndim[] = "ndim"; -static const char __pyx_k_pack[] = "pack"; -static const char __pyx_k_size[] = "size"; -static const char __pyx_k_step[] = "step"; -static const char __pyx_k_stop[] = "stop"; -static const char __pyx_k_t_xs[] = "t_xs"; -static const char __pyx_k_t_ys[] = "t_ys"; -static const char __pyx_k_test[] = "__test__"; -static const char __pyx_k_ASCII[] = "ASCII"; -static const char __pyx_k_class[] = "__class__"; -static const char __pyx_k_error[] = "error"; -static const char __pyx_k_flags[] = "flags"; -static const char __pyx_k_paths[] = "paths"; -static const char __pyx_k_range[] = "range"; -static const char __pyx_k_shape[] = "shape"; -static const char __pyx_k_start[] = "start"; -static const char __pyx_k_encode[] = "encode"; -static const char __pyx_k_format[] = "format"; -static const char __pyx_k_import[] = "__import__"; -static const char __pyx_k_name_2[] = "__name__"; -static const char __pyx_k_pickle[] = "pickle"; -static const char __pyx_k_reduce[] = "__reduce__"; -static const char __pyx_k_struct[] = "struct"; -static const char __pyx_k_unpack[] = "unpack"; -static const char __pyx_k_update[] = "update"; -static const char __pyx_k_values[] = "values"; -static const char __pyx_k_fortran[] = "fortran"; -static const char __pyx_k_memview[] = "memview"; -static const char __pyx_k_Ellipsis[] = "Ellipsis"; -static const char __pyx_k_getstate[] = "__getstate__"; -static const char __pyx_k_itemsize[] = "itemsize"; -static const char __pyx_k_pyx_type[] = "__pyx_type"; -static const char __pyx_k_setstate[] = "__setstate__"; -static const char __pyx_k_TypeError[] = "TypeError"; -static const char __pyx_k_enumerate[] = "enumerate"; -static const char __pyx_k_pyx_state[] = "__pyx_state"; -static const char __pyx_k_reduce_ex[] = "__reduce_ex__"; -static const char __pyx_k_IndexError[] = "IndexError"; -static const char __pyx_k_ValueError[] = "ValueError"; -static const char __pyx_k_pyx_result[] = "__pyx_result"; -static const char 
__pyx_k_pyx_vtable[] = "__pyx_vtable__"; -static const char __pyx_k_MemoryError[] = "MemoryError"; -static const char __pyx_k_PickleError[] = "PickleError"; -static const char __pyx_k_pyx_checksum[] = "__pyx_checksum"; -static const char __pyx_k_stringsource[] = "stringsource"; -static const char __pyx_k_pyx_getbuffer[] = "__pyx_getbuffer"; -static const char __pyx_k_reduce_cython[] = "__reduce_cython__"; -static const char __pyx_k_View_MemoryView[] = "View.MemoryView"; -static const char __pyx_k_allocate_buffer[] = "allocate_buffer"; -static const char __pyx_k_dtype_is_object[] = "dtype_is_object"; -static const char __pyx_k_pyx_PickleError[] = "__pyx_PickleError"; -static const char __pyx_k_setstate_cython[] = "__setstate_cython__"; -static const char __pyx_k_pyx_unpickle_Enum[] = "__pyx_unpickle_Enum"; -static const char __pyx_k_cline_in_traceback[] = "cline_in_traceback"; -static const char __pyx_k_strided_and_direct[] = "<strided and direct>"; -static const char __pyx_k_strided_and_indirect[] = "<strided and indirect>"; -static const char __pyx_k_contiguous_and_direct[] = "<contiguous and direct>"; -static const char __pyx_k_MemoryView_of_r_object[] = "<MemoryView of %r object>"; -static const char __pyx_k_MemoryView_of_r_at_0x_x[] = "<MemoryView of %r at 0x%x>"; -static const char __pyx_k_contiguous_and_indirect[] = "<contiguous and indirect>"; -static const char __pyx_k_Cannot_index_with_type_s[] = "Cannot index with type '%s'"; -static const char __pyx_k_Invalid_shape_in_axis_d_d[] = "Invalid shape in axis %d: %d."; -static const char __pyx_k_itemsize_0_for_cython_array[] = "itemsize <= 0 for cython.array"; -static const char __pyx_k_unable_to_allocate_array_data[] = "unable to allocate array data."; -static const char __pyx_k_strided_and_direct_or_indirect[] = "<strided and direct or indirect>"; -static const char __pyx_k_Buffer_view_does_not_expose_stri[] = "Buffer view does not expose strides"; -static const char __pyx_k_Can_only_create_a_buffer_that_is[] = "Can only create a buffer that is contiguous in memory."; -static const char __pyx_k_Cannot_assign_to_read_only_memor[] = "Cannot assign to read-only memoryview"; -static const char __pyx_k_Cannot_create_writable_memory_vi[] = "Cannot create writable memory view from read-only memoryview"; -static const char __pyx_k_Empty_shape_tuple_for_cython_arr[] = "Empty shape tuple for cython.array"; -static const char __pyx_k_Incompatible_checksums_0x_x_vs_0[] = "Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))"; -static const char __pyx_k_Indirect_dimensions_not_supporte[] = "Indirect dimensions not supported"; -static const char __pyx_k_Invalid_mode_expected_c_or_fortr[] = "Invalid mode, expected 'c' or 'fortran', got %s"; -static const char __pyx_k_Out_of_bounds_on_buffer_access_a[] = "Out of bounds on buffer access (axis %d)"; -static const char __pyx_k_Unable_to_convert_item_to_object[] = "Unable to convert item to object"; -static const char __pyx_k_got_differing_extents_in_dimensi[] = "got differing extents in dimension %d (got %d and %d)"; -static const char __pyx_k_no_default___reduce___due_to_non[] = "no default __reduce__ due to non-trivial __cinit__"; -static const char __pyx_k_unable_to_allocate_shape_and_str[] = "unable to allocate shape and strides."; -static PyObject *__pyx_n_s_ASCII; -static PyObject *__pyx_kp_s_Buffer_view_does_not_expose_stri; -static PyObject *__pyx_kp_s_Can_only_create_a_buffer_that_is; -static PyObject *__pyx_kp_s_Cannot_assign_to_read_only_memor; -static PyObject *__pyx_kp_s_Cannot_create_writable_memory_vi; -static PyObject *__pyx_kp_s_Cannot_index_with_type_s; -static PyObject *__pyx_n_s_Ellipsis; -static PyObject
*__pyx_kp_s_Empty_shape_tuple_for_cython_arr; -static PyObject *__pyx_kp_s_Incompatible_checksums_0x_x_vs_0; -static PyObject *__pyx_n_s_IndexError; -static PyObject *__pyx_kp_s_Indirect_dimensions_not_supporte; -static PyObject *__pyx_kp_s_Invalid_mode_expected_c_or_fortr; -static PyObject *__pyx_kp_s_Invalid_shape_in_axis_d_d; -static PyObject *__pyx_n_s_MemoryError; -static PyObject *__pyx_kp_s_MemoryView_of_r_at_0x_x; -static PyObject *__pyx_kp_s_MemoryView_of_r_object; -static PyObject *__pyx_n_b_O; -static PyObject *__pyx_kp_s_Out_of_bounds_on_buffer_access_a; -static PyObject *__pyx_n_s_PickleError; -static PyObject *__pyx_n_s_TypeError; -static PyObject *__pyx_kp_s_Unable_to_convert_item_to_object; -static PyObject *__pyx_n_s_ValueError; -static PyObject *__pyx_n_s_View_MemoryView; -static PyObject *__pyx_n_s_allocate_buffer; -static PyObject *__pyx_n_s_base; -static PyObject *__pyx_n_s_c; -static PyObject *__pyx_n_u_c; -static PyObject *__pyx_n_s_class; -static PyObject *__pyx_n_s_cline_in_traceback; -static PyObject *__pyx_kp_s_contiguous_and_direct; -static PyObject *__pyx_kp_s_contiguous_and_indirect; -static PyObject *__pyx_n_s_dict; -static PyObject *__pyx_n_s_dtype_is_object; -static PyObject *__pyx_n_s_encode; -static PyObject *__pyx_n_s_enumerate; -static PyObject *__pyx_n_s_error; -static PyObject *__pyx_n_s_flags; -static PyObject *__pyx_n_s_format; -static PyObject *__pyx_n_s_fortran; -static PyObject *__pyx_n_u_fortran; -static PyObject *__pyx_n_s_getstate; -static PyObject *__pyx_kp_s_got_differing_extents_in_dimensi; -static PyObject *__pyx_n_s_id; -static PyObject *__pyx_n_s_import; -static PyObject *__pyx_n_s_itemsize; -static PyObject *__pyx_kp_s_itemsize_0_for_cython_array; -static PyObject *__pyx_n_s_main; -static PyObject *__pyx_n_s_memview; -static PyObject *__pyx_n_s_mode; -static PyObject *__pyx_n_s_name; -static PyObject *__pyx_n_s_name_2; -static PyObject *__pyx_n_s_ndim; -static PyObject *__pyx_n_s_new; -static PyObject *__pyx_kp_s_no_default___reduce___due_to_non; -static PyObject *__pyx_n_s_obj; -static PyObject *__pyx_n_s_pack; -static PyObject *__pyx_n_s_paths; -static PyObject *__pyx_n_s_pickle; -static PyObject *__pyx_n_s_pyx_PickleError; -static PyObject *__pyx_n_s_pyx_checksum; -static PyObject *__pyx_n_s_pyx_getbuffer; -static PyObject *__pyx_n_s_pyx_result; -static PyObject *__pyx_n_s_pyx_state; -static PyObject *__pyx_n_s_pyx_type; -static PyObject *__pyx_n_s_pyx_unpickle_Enum; -static PyObject *__pyx_n_s_pyx_vtable; -static PyObject *__pyx_n_s_range; -static PyObject *__pyx_n_s_reduce; -static PyObject *__pyx_n_s_reduce_cython; -static PyObject *__pyx_n_s_reduce_ex; -static PyObject *__pyx_n_s_setstate; -static PyObject *__pyx_n_s_setstate_cython; -static PyObject *__pyx_n_s_shape; -static PyObject *__pyx_n_s_size; -static PyObject *__pyx_n_s_start; -static PyObject *__pyx_n_s_step; -static PyObject *__pyx_n_s_stop; -static PyObject *__pyx_kp_s_strided_and_direct; -static PyObject *__pyx_kp_s_strided_and_direct_or_indirect; -static PyObject *__pyx_kp_s_strided_and_indirect; -static PyObject *__pyx_kp_s_stringsource; -static PyObject *__pyx_n_s_struct; -static PyObject *__pyx_n_s_t_xs; -static PyObject *__pyx_n_s_t_ys; -static PyObject *__pyx_n_s_test; -static PyObject *__pyx_kp_s_unable_to_allocate_array_data; -static PyObject *__pyx_kp_s_unable_to_allocate_shape_and_str; -static PyObject *__pyx_n_s_unpack; -static PyObject *__pyx_n_s_update; -static PyObject *__pyx_n_s_values; -static PyObject 
*__pyx_pf_4vits_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr); /* proto */ -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item); /* proto */ -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /* proto */ -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name); /* proto */ -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object); /* proto */ -static void __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /* proto */ -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self); 
/* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self); /* proto */ -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state); /* proto */ -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k); /*proto*/ -static PyObject *__pyx_int_0; -static 
PyObject *__pyx_int_1; -static PyObject *__pyx_int_112105877; -static PyObject *__pyx_int_136983863; -static PyObject *__pyx_int_184977713; -static PyObject *__pyx_int_neg_1; -static float __pyx_k_; -static PyObject *__pyx_tuple__2; -static PyObject *__pyx_tuple__3; -static PyObject *__pyx_tuple__4; -static PyObject *__pyx_tuple__5; -static PyObject *__pyx_tuple__6; -static PyObject *__pyx_tuple__7; -static PyObject *__pyx_tuple__8; -static PyObject *__pyx_tuple__9; -static PyObject *__pyx_slice__16; -static PyObject *__pyx_tuple__10; -static PyObject *__pyx_tuple__11; -static PyObject *__pyx_tuple__12; -static PyObject *__pyx_tuple__13; -static PyObject *__pyx_tuple__14; -static PyObject *__pyx_tuple__15; -static PyObject *__pyx_tuple__17; -static PyObject *__pyx_tuple__18; -static PyObject *__pyx_tuple__19; -static PyObject *__pyx_tuple__20; -static PyObject *__pyx_tuple__21; -static PyObject *__pyx_tuple__22; -static PyObject *__pyx_tuple__23; -static PyObject *__pyx_tuple__24; -static PyObject *__pyx_tuple__25; -static PyObject *__pyx_tuple__26; -static PyObject *__pyx_codeobj__27; -/* Late includes */ - -/* "vits/monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - -static void __pyx_f_4vits_15monotonic_align_4core_maximum_path_each(__Pyx_memviewslice __pyx_v_path, __Pyx_memviewslice __pyx_v_value, int __pyx_v_t_y, int __pyx_v_t_x, struct __pyx_opt_args_4vits_15monotonic_align_4core_maximum_path_each *__pyx_optional_args) { - float __pyx_v_max_neg_val = __pyx_k_; - int __pyx_v_x; - int __pyx_v_y; - float __pyx_v_v_prev; - float __pyx_v_v_cur; - int __pyx_v_index; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - long __pyx_t_4; - int __pyx_t_5; - long __pyx_t_6; - long __pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - Py_ssize_t __pyx_t_10; - float __pyx_t_11; - float __pyx_t_12; - float __pyx_t_13; - int __pyx_t_14; - Py_ssize_t __pyx_t_15; - Py_ssize_t __pyx_t_16; - if (__pyx_optional_args) { - if (__pyx_optional_args->__pyx_n > 0) { - __pyx_v_max_neg_val = __pyx_optional_args->max_neg_val; - } - } - - /* "vits/monotonic_align/core.pyx":13 - * cdef float v_cur - * cdef float tmp - * cdef int index = t_x - 1 # <<<<<<<<<<<<<< - * - * for y in range(t_y): - */ - __pyx_v_index = (__pyx_v_t_x - 1); - - /* "vits/monotonic_align/core.pyx":15 - * cdef int index = t_x - 1 - * - * for y in range(t_y): # <<<<<<<<<<<<<< - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - */ - __pyx_t_1 = __pyx_v_t_y; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_y = __pyx_t_3; - - /* "vits/monotonic_align/core.pyx":16 - * - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): # <<<<<<<<<<<<<< - * if x == y: - * v_cur = max_neg_val - */ - __pyx_t_4 = (__pyx_v_y + 1); - __pyx_t_5 = __pyx_v_t_x; - if (((__pyx_t_4 < __pyx_t_5) != 0)) { - __pyx_t_6 = __pyx_t_4; - } else { - __pyx_t_6 = __pyx_t_5; - } - __pyx_t_4 = __pyx_t_6; - __pyx_t_5 = ((__pyx_v_t_x + __pyx_v_y) - __pyx_v_t_y); - __pyx_t_6 = 0; - if (((__pyx_t_5 > __pyx_t_6) != 0)) { - __pyx_t_7 = __pyx_t_5; - } else { - __pyx_t_7 = __pyx_t_6; - } - __pyx_t_6 = __pyx_t_4; - for (__pyx_t_5 = __pyx_t_7; __pyx_t_5 < __pyx_t_6; __pyx_t_5+=1) { - __pyx_v_x = __pyx_t_5; - - /* "vits/monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in 
range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - __pyx_t_8 = ((__pyx_v_x == __pyx_v_y) != 0); - if (__pyx_t_8) { - - /* "vits/monotonic_align/core.pyx":18 - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: - * v_cur = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_cur = value[y-1, x] - */ - __pyx_v_v_cur = __pyx_v_max_neg_val; - - /* "vits/monotonic_align/core.pyx":17 - * for y in range(t_y): - * for x in range(max(0, t_x + y - t_y), min(t_x, y + 1)): - * if x == y: # <<<<<<<<<<<<<< - * v_cur = max_neg_val - * else: - */ - goto __pyx_L7; - } - - /* "vits/monotonic_align/core.pyx":20 - * v_cur = max_neg_val - * else: - * v_cur = value[y-1, x] # <<<<<<<<<<<<<< - * if x == 0: - * if y == 0: - */ - /*else*/ { - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_x; - __pyx_v_v_cur = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))); - } - __pyx_L7:; - - /* "vits/monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. - */ - __pyx_t_8 = ((__pyx_v_x == 0) != 0); - if (__pyx_t_8) { - - /* "vits/monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - __pyx_t_8 = ((__pyx_v_y == 0) != 0); - if (__pyx_t_8) { - - /* "vits/monotonic_align/core.pyx":23 - * if x == 0: - * if y == 0: - * v_prev = 0. # <<<<<<<<<<<<<< - * else: - * v_prev = max_neg_val - */ - __pyx_v_v_prev = 0.; - - /* "vits/monotonic_align/core.pyx":22 - * v_cur = value[y-1, x] - * if x == 0: - * if y == 0: # <<<<<<<<<<<<<< - * v_prev = 0. - * else: - */ - goto __pyx_L9; - } - - /* "vits/monotonic_align/core.pyx":25 - * v_prev = 0. - * else: - * v_prev = max_neg_val # <<<<<<<<<<<<<< - * else: - * v_prev = value[y-1, x-1] - */ - /*else*/ { - __pyx_v_v_prev = __pyx_v_max_neg_val; - } - __pyx_L9:; - - /* "vits/monotonic_align/core.pyx":21 - * else: - * v_cur = value[y-1, x] - * if x == 0: # <<<<<<<<<<<<<< - * if y == 0: - * v_prev = 0. 
- */ - goto __pyx_L8; - } - - /* "vits/monotonic_align/core.pyx":27 - * v_prev = max_neg_val - * else: - * v_prev = value[y-1, x-1] # <<<<<<<<<<<<<< - * value[y, x] += max(v_prev, v_cur) - * - */ - /*else*/ { - __pyx_t_10 = (__pyx_v_y - 1); - __pyx_t_9 = (__pyx_v_x - 1); - __pyx_v_v_prev = (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_10 * __pyx_v_value.strides[0]) )) + __pyx_t_9)) ))); - } - __pyx_L8:; - - /* "vits/monotonic_align/core.pyx":28 - * else: - * v_prev = value[y-1, x-1] - * value[y, x] += max(v_prev, v_cur) # <<<<<<<<<<<<<< - * - * for y in range(t_y - 1, -1, -1): - */ - __pyx_t_11 = __pyx_v_v_cur; - __pyx_t_12 = __pyx_v_v_prev; - if (((__pyx_t_11 > __pyx_t_12) != 0)) { - __pyx_t_13 = __pyx_t_11; - } else { - __pyx_t_13 = __pyx_t_12; - } - __pyx_t_9 = __pyx_v_y; - __pyx_t_10 = __pyx_v_x; - *((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) )) += __pyx_t_13; - } - } - - /* "vits/monotonic_align/core.pyx":30 - * value[y, x] += max(v_prev, v_cur) - * - * for y in range(t_y - 1, -1, -1): # <<<<<<<<<<<<<< - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - */ - for (__pyx_t_1 = (__pyx_v_t_y - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_y = __pyx_t_1; - - /* "vits/monotonic_align/core.pyx":31 - * - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 # <<<<<<<<<<<<<< - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 - */ - __pyx_t_10 = __pyx_v_y; - __pyx_t_9 = __pyx_v_index; - *((int *) ( /* dim=1 */ ((char *) (((int *) ( /* dim=0 */ (__pyx_v_path.data + __pyx_t_10 * __pyx_v_path.strides[0]) )) + __pyx_t_9)) )) = 1; - - /* "vits/monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - __pyx_t_14 = ((__pyx_v_index != 0) != 0); - if (__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_14 = ((__pyx_v_index == __pyx_v_y) != 0); - if (!__pyx_t_14) { - } else { - __pyx_t_8 = __pyx_t_14; - goto __pyx_L13_bool_binop_done; - } - __pyx_t_9 = (__pyx_v_y - 1); - __pyx_t_10 = __pyx_v_index; - __pyx_t_15 = (__pyx_v_y - 1); - __pyx_t_16 = (__pyx_v_index - 1); - __pyx_t_14 = (((*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_9 * __pyx_v_value.strides[0]) )) + __pyx_t_10)) ))) < (*((float *) ( /* dim=1 */ ((char *) (((float *) ( /* dim=0 */ (__pyx_v_value.data + __pyx_t_15 * __pyx_v_value.strides[0]) )) + __pyx_t_16)) )))) != 0); - __pyx_t_8 = __pyx_t_14; - __pyx_L13_bool_binop_done:; - if (__pyx_t_8) { - - /* "vits/monotonic_align/core.pyx":33 - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): - * index = index - 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_index = (__pyx_v_index - 1); - - /* "vits/monotonic_align/core.pyx":32 - * for y in range(t_y - 1, -1, -1): - * path[y, index] = 1 - * if index != 0 and (index == y or value[y-1, index] < value[y-1, index-1]): # <<<<<<<<<<<<<< - * index = index - 1 - * - */ - } - } - - /* "vits/monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int 
x - * cdef int y - */ - - /* function exit code */ -} - -/* "vits/monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - -static PyObject *__pyx_pw_4vits_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static void __pyx_f_4vits_15monotonic_align_4core_maximum_path_c(__Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs, CYTHON_UNUSED int __pyx_skip_dispatch) { - CYTHON_UNUSED int __pyx_v_b; - int __pyx_v_i; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - __Pyx_memviewslice __pyx_t_4 = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_t_5 = { 0, 0, { 0 }, { 0 }, { 0 } }; - Py_ssize_t __pyx_t_6; - Py_ssize_t __pyx_t_7; - - /* "vits/monotonic_align/core.pyx":39 - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: - * cdef int b = paths.shape[0] # <<<<<<<<<<<<<< - * cdef int i - * for i in prange(b, nogil=True): - */ - __pyx_v_b = (__pyx_v_paths.shape[0]); - - /* "vits/monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - { - #ifdef WITH_THREAD - PyThreadState *_save; - Py_UNBLOCK_THREADS - __Pyx_FastGIL_Remember(); - #endif - /*try:*/ { - __pyx_t_1 = __pyx_v_b; - if ((1 == 0)) abort(); - { - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) (x) - #define unlikely(x) (x) - #endif - __pyx_t_3 = (__pyx_t_1 - 0 + 1 - 1/abs(1)) / 1; - if (__pyx_t_3 > 0) - { - #ifdef _OPENMP - #pragma omp parallel private(__pyx_t_6, __pyx_t_7) firstprivate(__pyx_t_4, __pyx_t_5) - #endif /* _OPENMP */ - { - #ifdef _OPENMP - #pragma omp for firstprivate(__pyx_v_i) lastprivate(__pyx_v_i) - #endif /* _OPENMP */ - for (__pyx_t_2 = 0; __pyx_t_2 < __pyx_t_3; __pyx_t_2++){ - { - __pyx_v_i = (int)(0 + 1 * __pyx_t_2); - - /* "vits/monotonic_align/core.pyx":42 - * cdef int i - * for i in prange(b, nogil=True): - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) # <<<<<<<<<<<<<< - */ - __pyx_t_4.data = __pyx_v_paths.data; - __pyx_t_4.memview = __pyx_v_paths.memview; - __PYX_INC_MEMVIEW(&__pyx_t_4, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_paths.strides[0]; - __pyx_t_4.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_4.shape[0] = __pyx_v_paths.shape[1]; -__pyx_t_4.strides[0] = __pyx_v_paths.strides[1]; - __pyx_t_4.suboffsets[0] = -1; - -__pyx_t_4.shape[1] = __pyx_v_paths.shape[2]; -__pyx_t_4.strides[1] = __pyx_v_paths.strides[2]; - __pyx_t_4.suboffsets[1] = -1; - -__pyx_t_5.data = __pyx_v_values.data; - __pyx_t_5.memview = __pyx_v_values.memview; - __PYX_INC_MEMVIEW(&__pyx_t_5, 0); - { - Py_ssize_t __pyx_tmp_idx = __pyx_v_i; - Py_ssize_t __pyx_tmp_stride = __pyx_v_values.strides[0]; - __pyx_t_5.data += __pyx_tmp_idx * __pyx_tmp_stride; -} - -__pyx_t_5.shape[0] = __pyx_v_values.shape[1]; -__pyx_t_5.strides[0] = __pyx_v_values.strides[1]; - __pyx_t_5.suboffsets[0] = -1; - -__pyx_t_5.shape[1] = __pyx_v_values.shape[2]; 
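/* The temporaries __pyx_t_4 and __pyx_t_5 are the i-th 2-D [t_y, t_x] slices of the 3-D
   `paths` and `values` batches; once their shapes and strides are filled in they are passed
   to maximum_path_each(), which runs the monotonic-alignment dynamic programme for that
   batch element (forward accumulation of value[y, x] += max(v_prev, v_cur), then a
   backtracking pass that writes 1s into path). Each prange iteration touches only its own
   batch element, which is what makes the OpenMP parallelisation safe. */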
-__pyx_t_5.strides[1] = __pyx_v_values.strides[2]; - __pyx_t_5.suboffsets[1] = -1; - -__pyx_t_6 = __pyx_v_i; - __pyx_t_7 = __pyx_v_i; - __pyx_f_4vits_15monotonic_align_4core_maximum_path_each(__pyx_t_4, __pyx_t_5, (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_ys.data) + __pyx_t_6)) ))), (*((int *) ( /* dim=0 */ ((char *) (((int *) __pyx_v_t_xs.data) + __pyx_t_7)) ))), NULL); - __PYX_XDEC_MEMVIEW(&__pyx_t_4, 0); - __pyx_t_4.memview = NULL; - __pyx_t_4.data = NULL; - __PYX_XDEC_MEMVIEW(&__pyx_t_5, 0); - __pyx_t_5.memview = NULL; - __pyx_t_5.data = NULL; - } - } - } - } - } - #if ((defined(__APPLE__) || defined(__OSX__)) && (defined(__GNUC__) && (__GNUC__ > 2 || (__GNUC__ == 2 && (__GNUC_MINOR__ > 95))))) - #undef likely - #undef unlikely - #define likely(x) __builtin_expect(!!(x), 1) - #define unlikely(x) __builtin_expect(!!(x), 0) - #endif - } - - /* "vits/monotonic_align/core.pyx":41 - * cdef int b = paths.shape[0] - * cdef int i - * for i in prange(b, nogil=True): # <<<<<<<<<<<<<< - * maximum_path_each(paths[i], values[i], t_ys[i], t_xs[i]) - */ - /*finally:*/ { - /*normal exit:*/{ - #ifdef WITH_THREAD - __Pyx_FastGIL_Forget(); - Py_BLOCK_THREADS - #endif - goto __pyx_L5; - } - __pyx_L5:; - } - } - - /* "vits/monotonic_align/core.pyx":38 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cpdef void maximum_path_c(int[:,:,::1] paths, float[:,:,::1] values, int[::1] t_ys, int[::1] t_xs) nogil: # <<<<<<<<<<<<<< - * cdef int b = paths.shape[0] - * cdef int i - */ - - /* function exit code */ -} - -/* Python wrapper */ -static PyObject *__pyx_pw_4vits_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static PyObject *__pyx_pw_4vits_15monotonic_align_4core_1maximum_path_c(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - __Pyx_memviewslice __pyx_v_paths = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_values = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_ys = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_memviewslice __pyx_v_t_xs = { 0, 0, { 0 }, { 0 }, { 0 } }; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("maximum_path_c (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_paths,&__pyx_n_s_values,&__pyx_n_s_t_ys,&__pyx_n_s_t_xs,0}; - PyObject* values[4] = {0,0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_paths)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_values)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 1); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_ys)) != 0)) kw_args--; - 
else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 2); __PYX_ERR(0, 38, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (likely((values[3] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_t_xs)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, 3); __PYX_ERR(0, 38, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "maximum_path_c") < 0)) __PYX_ERR(0, 38, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 4) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - } - __pyx_v_paths = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(values[0], PyBUF_WRITABLE); if (unlikely(!__pyx_v_paths.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_values = __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(values[1], PyBUF_WRITABLE); if (unlikely(!__pyx_v_values.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_ys = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[2], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_ys.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_v_t_xs = __Pyx_PyObject_to_MemoryviewSlice_dc_int(values[3], PyBUF_WRITABLE); if (unlikely(!__pyx_v_t_xs.memview)) __PYX_ERR(0, 38, __pyx_L3_error) - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("maximum_path_c", 1, 4, 4, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(0, 38, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("vits.monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_4vits_15monotonic_align_4core_maximum_path_c(__pyx_self, __pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_4vits_15monotonic_align_4core_maximum_path_c(CYTHON_UNUSED PyObject *__pyx_self, __Pyx_memviewslice __pyx_v_paths, __Pyx_memviewslice __pyx_v_values, __Pyx_memviewslice __pyx_v_t_ys, __Pyx_memviewslice __pyx_v_t_xs) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("maximum_path_c", 0); - __Pyx_XDECREF(__pyx_r); - if (unlikely(!__pyx_v_paths.memview)) { __Pyx_RaiseUnboundLocalError("paths"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_values.memview)) { __Pyx_RaiseUnboundLocalError("values"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_ys.memview)) { __Pyx_RaiseUnboundLocalError("t_ys"); __PYX_ERR(0, 38, __pyx_L1_error) } - if (unlikely(!__pyx_v_t_xs.memview)) { __Pyx_RaiseUnboundLocalError("t_xs"); __PYX_ERR(0, 38, __pyx_L1_error) } - __pyx_t_1 = __Pyx_void_to_None(__pyx_f_4vits_15monotonic_align_4core_maximum_path_c(__pyx_v_paths, __pyx_v_values, __pyx_v_t_ys, __pyx_v_t_xs, 0)); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 38, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("vits.monotonic_align.core.maximum_path_c", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - 
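/* Common exit path for the Python-level wrapper: the memoryview references acquired while
   unpacking the four arguments are released below, and __pyx_r (a None object on success,
   NULL with an exception set on error) is handed back to the caller. */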
__PYX_XDEC_MEMVIEW(&__pyx_v_paths, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_values, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_t_ys, 1); - __PYX_XDEC_MEMVIEW(&__pyx_v_t_xs, 1); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":123 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - -/* Python wrapper */ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_array___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_shape = 0; - Py_ssize_t __pyx_v_itemsize; - PyObject *__pyx_v_format = 0; - PyObject *__pyx_v_mode = 0; - int __pyx_v_allocate_buffer; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_shape,&__pyx_n_s_itemsize,&__pyx_n_s_format,&__pyx_n_s_mode,&__pyx_n_s_allocate_buffer,0}; - PyObject* values[5] = {0,0,0,0,0}; - values[3] = ((PyObject *)__pyx_n_s_c); - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_shape)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_itemsize)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 1); __PYX_ERR(1, 123, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_format)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, 2); __PYX_ERR(1, 123, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 3: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_mode); - if (value) { values[3] = value; kw_args--; } - } - CYTHON_FALLTHROUGH; - case 4: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_allocate_buffer); - if (value) { values[4] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 123, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 5: values[4] = PyTuple_GET_ITEM(__pyx_args, 4); - CYTHON_FALLTHROUGH; - case 4: values[3] = PyTuple_GET_ITEM(__pyx_args, 3); - CYTHON_FALLTHROUGH; - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_shape = ((PyObject*)values[0]); - __pyx_v_itemsize = 
__Pyx_PyIndex_AsSsize_t(values[1]); if (unlikely((__pyx_v_itemsize == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 123, __pyx_L3_error) - __pyx_v_format = values[2]; - __pyx_v_mode = values[3]; - if (values[4]) { - __pyx_v_allocate_buffer = __Pyx_PyObject_IsTrue(values[4]); if (unlikely((__pyx_v_allocate_buffer == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 124, __pyx_L3_error) - } else { - - /* "View.MemoryView":124 - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, - * mode="c", bint allocate_buffer=True): # <<<<<<<<<<<<<< - * - * cdef int idx - */ - __pyx_v_allocate_buffer = ((int)1); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 3, 5, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 123, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - if (unlikely(!__Pyx_ArgTypeTest(((PyObject *)__pyx_v_shape), (&PyTuple_Type), 1, "shape", 1))) __PYX_ERR(1, 123, __pyx_L1_error) - if (unlikely(((PyObject *)__pyx_v_format) == Py_None)) { - PyErr_Format(PyExc_TypeError, "Argument '%.200s' must not be None", "format"); __PYX_ERR(1, 123, __pyx_L1_error) - } - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(((struct __pyx_array_obj *)__pyx_v_self), __pyx_v_shape, __pyx_v_itemsize, __pyx_v_format, __pyx_v_mode, __pyx_v_allocate_buffer); - - /* "View.MemoryView":123 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - goto __pyx_L0; - __pyx_L1_error:; - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array___cinit__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, PyObject *__pyx_v_format, PyObject *__pyx_v_mode, int __pyx_v_allocate_buffer) { - int __pyx_v_idx; - Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_dim; - PyObject **__pyx_v_p; - char __pyx_v_order; - int __pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - char *__pyx_t_7; - int __pyx_t_8; - Py_ssize_t __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - Py_ssize_t __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - __Pyx_INCREF(__pyx_v_format); - - /* "View.MemoryView":130 - * cdef PyObject **p - * - * self.ndim = len(shape) # <<<<<<<<<<<<<< - * self.itemsize = itemsize - * - */ - if (unlikely(__pyx_v_shape == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 130, __pyx_L1_error) - } - __pyx_t_1 = PyTuple_GET_SIZE(__pyx_v_shape); if (unlikely(__pyx_t_1 == ((Py_ssize_t)-1))) __PYX_ERR(1, 130, __pyx_L1_error) - __pyx_v_self->ndim = ((int)__pyx_t_1); - - /* "View.MemoryView":131 - * - * self.ndim = len(shape) - * self.itemsize = itemsize # <<<<<<<<<<<<<< - * - * if not self.ndim: - */ - __pyx_v_self->itemsize = __pyx_v_itemsize; - - /* "View.MemoryView":133 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError("Empty shape tuple for cython.array") - * - */ 
- __pyx_t_2 = ((!(__pyx_v_self->ndim != 0)) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":134 - * - * if not self.ndim: - * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<< - * - * if itemsize <= 0: - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__2, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 134, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 134, __pyx_L1_error) - - /* "View.MemoryView":133 - * self.itemsize = itemsize - * - * if not self.ndim: # <<<<<<<<<<<<<< - * raise ValueError("Empty shape tuple for cython.array") - * - */ - } - - /* "View.MemoryView":136 - * raise ValueError("Empty shape tuple for cython.array") - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError("itemsize <= 0 for cython.array") - * - */ - __pyx_t_2 = ((__pyx_v_itemsize <= 0) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":137 - * - * if itemsize <= 0: - * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__3, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 137, __pyx_L1_error) - - /* "View.MemoryView":136 - * raise ValueError("Empty shape tuple for cython.array") - * - * if itemsize <= 0: # <<<<<<<<<<<<<< - * raise ValueError("itemsize <= 0 for cython.array") - * - */ - } - - /* "View.MemoryView":139 - * raise ValueError("itemsize <= 0 for cython.array") - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - __pyx_t_2 = PyBytes_Check(__pyx_v_format); - __pyx_t_4 = ((!(__pyx_t_2 != 0)) != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":140 - * - * if not isinstance(format, bytes): - * format = format.encode('ASCII') # <<<<<<<<<<<<<< - * self._format = format # keep a reference to the byte string - * self.format = self._format - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_format, __pyx_n_s_encode); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 140, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - } - } - __pyx_t_3 = (__pyx_t_6) ? 
__Pyx_PyObject_Call2Args(__pyx_t_5, __pyx_t_6, __pyx_n_s_ASCII) : __Pyx_PyObject_CallOneArg(__pyx_t_5, __pyx_n_s_ASCII); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 140, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF_SET(__pyx_v_format, __pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":139 - * raise ValueError("itemsize <= 0 for cython.array") - * - * if not isinstance(format, bytes): # <<<<<<<<<<<<<< - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - */ - } - - /* "View.MemoryView":141 - * if not isinstance(format, bytes): - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string # <<<<<<<<<<<<<< - * self.format = self._format - * - */ - if (!(likely(PyBytes_CheckExact(__pyx_v_format))||((__pyx_v_format) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_v_format)->tp_name), 0))) __PYX_ERR(1, 141, __pyx_L1_error) - __pyx_t_3 = __pyx_v_format; - __Pyx_INCREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_3); - __Pyx_GOTREF(__pyx_v_self->_format); - __Pyx_DECREF(__pyx_v_self->_format); - __pyx_v_self->_format = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":142 - * format = format.encode('ASCII') - * self._format = format # keep a reference to the byte string - * self.format = self._format # <<<<<<<<<<<<<< - * - * - */ - if (unlikely(__pyx_v_self->_format == Py_None)) { - PyErr_SetString(PyExc_TypeError, "expected bytes, NoneType found"); - __PYX_ERR(1, 142, __pyx_L1_error) - } - __pyx_t_7 = __Pyx_PyBytes_AsWritableString(__pyx_v_self->_format); if (unlikely((!__pyx_t_7) && PyErr_Occurred())) __PYX_ERR(1, 142, __pyx_L1_error) - __pyx_v_self->format = __pyx_t_7; - - /* "View.MemoryView":145 - * - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) # <<<<<<<<<<<<<< - * self._strides = self._shape + self.ndim - * - */ - __pyx_v_self->_shape = ((Py_ssize_t *)PyObject_Malloc((((sizeof(Py_ssize_t)) * __pyx_v_self->ndim) * 2))); - - /* "View.MemoryView":146 - * - * self._shape = PyObject_Malloc(sizeof(Py_ssize_t)*self.ndim*2) - * self._strides = self._shape + self.ndim # <<<<<<<<<<<<<< - * - * if not self._shape: - */ - __pyx_v_self->_strides = (__pyx_v_self->_shape + __pyx_v_self->ndim); - - /* "View.MemoryView":148 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate shape and strides.") - * - */ - __pyx_t_4 = ((!(__pyx_v_self->_shape != 0)) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":149 - * - * if not self._shape: - * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__4, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 149, __pyx_L1_error) - - /* "View.MemoryView":148 - * self._strides = self._shape + self.ndim - * - * if not self._shape: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate shape and strides.") - * - */ - } - - /* "View.MemoryView":152 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." 
% (idx, dim)) - */ - __pyx_t_8 = 0; - __pyx_t_3 = __pyx_v_shape; __Pyx_INCREF(__pyx_t_3); __pyx_t_1 = 0; - for (;;) { - if (__pyx_t_1 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_1); __Pyx_INCREF(__pyx_t_5); __pyx_t_1++; if (unlikely(0 < 0)) __PYX_ERR(1, 152, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_3, __pyx_t_1); __pyx_t_1++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 152, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_5); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 152, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_9; - __pyx_v_idx = __pyx_t_8; - __pyx_t_8 = (__pyx_t_8 + 1); - - /* "View.MemoryView":153 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim - */ - __pyx_t_4 = ((__pyx_v_dim <= 0) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":154 - * for idx, dim in enumerate(shape): - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) # <<<<<<<<<<<<<< - * self._shape[idx] = dim - * - */ - __pyx_t_5 = __Pyx_PyInt_From_int(__pyx_v_idx); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_10 = PyTuple_New(2); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_10, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_10, 1, __pyx_t_6); - __pyx_t_5 = 0; - __pyx_t_6 = 0; - __pyx_t_6 = __Pyx_PyString_Format(__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_t_10); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_6); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 154, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 154, __pyx_L1_error) - - /* "View.MemoryView":153 - * - * for idx, dim in enumerate(shape): - * if dim <= 0: # <<<<<<<<<<<<<< - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim - */ - } - - /* "View.MemoryView":155 - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." % (idx, dim)) - * self._shape[idx] = dim # <<<<<<<<<<<<<< - * - * cdef char order - */ - (__pyx_v_self->_shape[__pyx_v_idx]) = __pyx_v_dim; - - /* "View.MemoryView":152 - * - * - * for idx, dim in enumerate(shape): # <<<<<<<<<<<<<< - * if dim <= 0: - * raise ValueError("Invalid shape in axis %d: %d." 
% (idx, dim)) - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":158 - * - * cdef char order - * if mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_fortran, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 158, __pyx_L1_error) - if (__pyx_t_4) { - - /* "View.MemoryView":159 - * cdef char order - * if mode == 'fortran': - * order = b'F' # <<<<<<<<<<<<<< - * self.mode = u'fortran' - * elif mode == 'c': - */ - __pyx_v_order = 'F'; - - /* "View.MemoryView":160 - * if mode == 'fortran': - * order = b'F' - * self.mode = u'fortran' # <<<<<<<<<<<<<< - * elif mode == 'c': - * order = b'C' - */ - __Pyx_INCREF(__pyx_n_u_fortran); - __Pyx_GIVEREF(__pyx_n_u_fortran); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_fortran; - - /* "View.MemoryView":158 - * - * cdef char order - * if mode == 'fortran': # <<<<<<<<<<<<<< - * order = b'F' - * self.mode = u'fortran' - */ - goto __pyx_L10; - } - - /* "View.MemoryView":161 - * order = b'F' - * self.mode = u'fortran' - * elif mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - __pyx_t_4 = (__Pyx_PyString_Equals(__pyx_v_mode, __pyx_n_s_c, Py_EQ)); if (unlikely(__pyx_t_4 < 0)) __PYX_ERR(1, 161, __pyx_L1_error) - if (likely(__pyx_t_4)) { - - /* "View.MemoryView":162 - * self.mode = u'fortran' - * elif mode == 'c': - * order = b'C' # <<<<<<<<<<<<<< - * self.mode = u'c' - * else: - */ - __pyx_v_order = 'C'; - - /* "View.MemoryView":163 - * elif mode == 'c': - * order = b'C' - * self.mode = u'c' # <<<<<<<<<<<<<< - * else: - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) - */ - __Pyx_INCREF(__pyx_n_u_c); - __Pyx_GIVEREF(__pyx_n_u_c); - __Pyx_GOTREF(__pyx_v_self->mode); - __Pyx_DECREF(__pyx_v_self->mode); - __pyx_v_self->mode = __pyx_n_u_c; - - /* "View.MemoryView":161 - * order = b'F' - * self.mode = u'fortran' - * elif mode == 'c': # <<<<<<<<<<<<<< - * order = b'C' - * self.mode = u'c' - */ - goto __pyx_L10; - } - - /* "View.MemoryView":165 - * self.mode = u'c' - * else: - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) # <<<<<<<<<<<<<< - * - * self.len = fill_contig_strides_array(self._shape, self._strides, - */ - /*else*/ { - __pyx_t_3 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_v_mode); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 165, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_10 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 165, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 165, __pyx_L1_error) - } - __pyx_L10:; - - /* "View.MemoryView":167 - * raise ValueError("Invalid mode, expected 'c' or 'fortran', got %s" % mode) - * - * self.len = fill_contig_strides_array(self._shape, self._strides, # <<<<<<<<<<<<<< - * itemsize, self.ndim, order) - * - */ - __pyx_v_self->len = __pyx_fill_contig_strides_array(__pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_itemsize, __pyx_v_self->ndim, __pyx_v_order); - - /* "View.MemoryView":170 - * itemsize, self.ndim, order) - * - * self.free_data = allocate_buffer # <<<<<<<<<<<<<< - * self.dtype_is_object = format == b'O' - * if allocate_buffer: - */ - __pyx_v_self->free_data = __pyx_v_allocate_buffer; - - /* "View.MemoryView":171 - * - * 
self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' # <<<<<<<<<<<<<< - * if allocate_buffer: - * - */ - __pyx_t_10 = PyObject_RichCompare(__pyx_v_format, __pyx_n_b_O, Py_EQ); __Pyx_XGOTREF(__pyx_t_10); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 171, __pyx_L1_error) - __pyx_t_4 = __Pyx_PyObject_IsTrue(__pyx_t_10); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 171, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __pyx_v_self->dtype_is_object = __pyx_t_4; - - /* "View.MemoryView":172 - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' - * if allocate_buffer: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_4 = (__pyx_v_allocate_buffer != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":175 - * - * - * self.data = malloc(self.len) # <<<<<<<<<<<<<< - * if not self.data: - * raise MemoryError("unable to allocate array data.") - */ - __pyx_v_self->data = ((char *)malloc(__pyx_v_self->len)); - - /* "View.MemoryView":176 - * - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate array data.") - * - */ - __pyx_t_4 = ((!(__pyx_v_self->data != 0)) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":177 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __pyx_t_10 = __Pyx_PyObject_Call(__pyx_builtin_MemoryError, __pyx_tuple__5, NULL); if (unlikely(!__pyx_t_10)) __PYX_ERR(1, 177, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_10); - __Pyx_Raise(__pyx_t_10, 0, 0, 0); - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - __PYX_ERR(1, 177, __pyx_L1_error) - - /* "View.MemoryView":176 - * - * self.data = malloc(self.len) - * if not self.data: # <<<<<<<<<<<<<< - * raise MemoryError("unable to allocate array data.") - * - */ - } - - /* "View.MemoryView":179 - * raise MemoryError("unable to allocate array data.") - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len / itemsize): - */ - __pyx_t_4 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":180 - * - * if self.dtype_is_object: - * p = self.data # <<<<<<<<<<<<<< - * for i in range(self.len / itemsize): - * p[i] = Py_None - */ - __pyx_v_p = ((PyObject **)__pyx_v_self->data); - - /* "View.MemoryView":181 - * if self.dtype_is_object: - * p = self.data - * for i in range(self.len / itemsize): # <<<<<<<<<<<<<< - * p[i] = Py_None - * Py_INCREF(Py_None) - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 181, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_self->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 181, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_div_Py_ssize_t(__pyx_v_self->len, __pyx_v_itemsize); - __pyx_t_9 = __pyx_t_1; - for (__pyx_t_11 = 0; __pyx_t_11 < __pyx_t_9; __pyx_t_11+=1) { - __pyx_v_i = __pyx_t_11; - - /* "View.MemoryView":182 - * p = self.data - * for i in range(self.len / itemsize): - * p[i] = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - (__pyx_v_p[__pyx_v_i]) = Py_None; - - /* "View.MemoryView":183 - * for i in range(self.len / itemsize): - * p[i] = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - 
Py_INCREF(Py_None); - } - - /* "View.MemoryView":179 - * raise MemoryError("unable to allocate array data.") - * - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * p = self.data - * for i in range(self.len / itemsize): - */ - } - - /* "View.MemoryView":172 - * self.free_data = allocate_buffer - * self.dtype_is_object = format == b'O' - * if allocate_buffer: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":123 - * cdef bint dtype_is_object - * - * def __cinit__(array self, tuple shape, Py_ssize_t itemsize, format not None, # <<<<<<<<<<<<<< - * mode="c", bint allocate_buffer=True): - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_10); - __Pyx_AddTraceback("View.MemoryView.array.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_format); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":186 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * cdef int bufmode = -1 - * if self.mode == u"c": - */ - -/* Python wrapper */ -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static CYTHON_UNUSED int __pyx_array_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(((struct __pyx_array_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_2__getbuffer__(struct __pyx_array_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_v_bufmode; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - char *__pyx_t_4; - Py_ssize_t __pyx_t_5; - int __pyx_t_6; - Py_ssize_t *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (__pyx_v_info == NULL) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":187 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 # <<<<<<<<<<<<<< - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = -1; - - /* "View.MemoryView":188 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - __pyx_t_1 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_c, Py_EQ)); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 188, __pyx_L1_error) - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":189 - * cdef int bufmode = -1 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - */ - __pyx_v_bufmode = 
(PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":188 - * def __getbuffer__(self, Py_buffer *info, int flags): - * cdef int bufmode = -1 - * if self.mode == u"c": # <<<<<<<<<<<<<< - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - */ - goto __pyx_L3; - } - - /* "View.MemoryView":190 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - __pyx_t_2 = (__Pyx_PyUnicode_Equals(__pyx_v_self->mode, __pyx_n_u_fortran, Py_EQ)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 190, __pyx_L1_error) - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":191 - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS # <<<<<<<<<<<<<< - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") - */ - __pyx_v_bufmode = (PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS); - - /* "View.MemoryView":190 - * if self.mode == u"c": - * bufmode = PyBUF_C_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * elif self.mode == u"fortran": # <<<<<<<<<<<<<< - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - */ - } - __pyx_L3:; - - /* "View.MemoryView":192 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - */ - __pyx_t_1 = ((!((__pyx_v_flags & __pyx_v_bufmode) != 0)) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":193 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__6, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 193, __pyx_L1_error) - - /* "View.MemoryView":192 - * elif self.mode == u"fortran": - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): # <<<<<<<<<<<<<< - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - */ - } - - /* "View.MemoryView":194 - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data # <<<<<<<<<<<<<< - * info.len = self.len - * info.ndim = self.ndim - */ - __pyx_t_4 = __pyx_v_self->data; - __pyx_v_info->buf = __pyx_t_4; - - /* "View.MemoryView":195 - * raise ValueError("Can only create a buffer that is contiguous in memory.") - * info.buf = self.data - * info.len = self.len # <<<<<<<<<<<<<< - * info.ndim = self.ndim - * info.shape = self._shape - */ - __pyx_t_5 = __pyx_v_self->len; - __pyx_v_info->len = __pyx_t_5; - - /* "View.MemoryView":196 - * info.buf = self.data - * info.len = self.len - * info.ndim = self.ndim # <<<<<<<<<<<<<< - * info.shape = self._shape - * info.strides = self._strides - */ - __pyx_t_6 = __pyx_v_self->ndim; - __pyx_v_info->ndim = __pyx_t_6; - - /* "View.MemoryView":197 - * info.len = self.len - * info.ndim = self.ndim - * 
info.shape = self._shape # <<<<<<<<<<<<<< - * info.strides = self._strides - * info.suboffsets = NULL - */ - __pyx_t_7 = __pyx_v_self->_shape; - __pyx_v_info->shape = __pyx_t_7; - - /* "View.MemoryView":198 - * info.ndim = self.ndim - * info.shape = self._shape - * info.strides = self._strides # <<<<<<<<<<<<<< - * info.suboffsets = NULL - * info.itemsize = self.itemsize - */ - __pyx_t_7 = __pyx_v_self->_strides; - __pyx_v_info->strides = __pyx_t_7; - - /* "View.MemoryView":199 - * info.shape = self._shape - * info.strides = self._strides - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * info.itemsize = self.itemsize - * info.readonly = 0 - */ - __pyx_v_info->suboffsets = NULL; - - /* "View.MemoryView":200 - * info.strides = self._strides - * info.suboffsets = NULL - * info.itemsize = self.itemsize # <<<<<<<<<<<<<< - * info.readonly = 0 - * - */ - __pyx_t_5 = __pyx_v_self->itemsize; - __pyx_v_info->itemsize = __pyx_t_5; - - /* "View.MemoryView":201 - * info.suboffsets = NULL - * info.itemsize = self.itemsize - * info.readonly = 0 # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - __pyx_v_info->readonly = 0; - - /* "View.MemoryView":203 - * info.readonly = 0 - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":204 - * - * if flags & PyBUF_FORMAT: - * info.format = self.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_4 = __pyx_v_self->format; - __pyx_v_info->format = __pyx_t_4; - - /* "View.MemoryView":203 - * info.readonly = 0 - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.format - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":206 - * info.format = self.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.obj = self - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L5:; - - /* "View.MemoryView":208 - * info.format = NULL - * - * info.obj = self # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":186 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * cdef int bufmode = -1 - * if self.mode == u"c": - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":212 - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - -/* Python wrapper */ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_array___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - 
__Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_array___pyx_pf_15View_dot_MemoryView_5array_4__dealloc__(struct __pyx_array_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":213 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data: - */ - __pyx_t_1 = ((__pyx_v_self->callback_free_data != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":214 - * def __dealloc__(array self): - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) # <<<<<<<<<<<<<< - * elif self.free_data: - * if self.dtype_is_object: - */ - __pyx_v_self->callback_free_data(__pyx_v_self->data); - - /* "View.MemoryView":213 - * - * def __dealloc__(array self): - * if self.callback_free_data != NULL: # <<<<<<<<<<<<<< - * self.callback_free_data(self.data) - * elif self.free_data: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":215 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, - */ - __pyx_t_1 = (__pyx_v_self->free_data != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":216 - * self.callback_free_data(self.data) - * elif self.free_data: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - */ - __pyx_t_1 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":217 - * elif self.free_data: - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, # <<<<<<<<<<<<<< - * self._strides, self.ndim, False) - * free(self.data) - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_self->data, __pyx_v_self->_shape, __pyx_v_self->_strides, __pyx_v_self->ndim, 0); - - /* "View.MemoryView":216 - * self.callback_free_data(self.data) - * elif self.free_data: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - */ - } - - /* "View.MemoryView":219 - * refcount_objects_in_slice(self.data, self._shape, - * self._strides, self.ndim, False) - * free(self.data) # <<<<<<<<<<<<<< - * PyObject_Free(self._shape) - * - */ - free(__pyx_v_self->data); - - /* "View.MemoryView":215 - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - * elif self.free_data: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * refcount_objects_in_slice(self.data, self._shape, - */ - } - __pyx_L3:; - - /* "View.MemoryView":220 - * self._strides, self.ndim, False) - * free(self.data) - * PyObject_Free(self._shape) # <<<<<<<<<<<<<< - * - * @property - */ - PyObject_Free(__pyx_v_self->_shape); - - /* "View.MemoryView":212 - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") - * - * def __dealloc__(array self): # <<<<<<<<<<<<<< - * if self.callback_free_data != NULL: - * self.callback_free_data(self.data) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":223 - * - * @property - * def memview(self): # <<<<<<<<<<<<<< - * return self.get_memview() - * - */ - -/* Python wrapper */ -static 
PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_5array_7memview___get__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_5array_7memview___get__(struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":224 - * @property - * def memview(self): - * return self.get_memview() # <<<<<<<<<<<<<< - * - * @cname('get_memview') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = ((struct __pyx_vtabstruct_array *)__pyx_v_self->__pyx_vtab)->get_memview(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 224, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":223 - * - * @property - * def memview(self): # <<<<<<<<<<<<<< - * return self.get_memview() - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.memview.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":227 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - -static PyObject *__pyx_array_get_memview(struct __pyx_array_obj *__pyx_v_self) { - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_memview", 0); - - /* "View.MemoryView":228 - * @cname('get_memview') - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE # <<<<<<<<<<<<<< - * return memoryview(self, flags, self.dtype_is_object) - * - */ - __pyx_v_flags = ((PyBUF_ANY_CONTIGUOUS | PyBUF_FORMAT) | PyBUF_WRITABLE); - - /* "View.MemoryView":229 - * cdef get_memview(self): - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - PyTuple_SET_ITEM(__pyx_t_3, 0, ((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - 
__pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 229, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":227 - * - * @cname('get_memview') - * cdef get_memview(self): # <<<<<<<<<<<<<< - * flags = PyBUF_ANY_CONTIGUOUS|PyBUF_FORMAT|PyBUF_WRITABLE - * return memoryview(self, flags, self.dtype_is_object) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.array.get_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":231 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_array___len__(PyObject *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_array___pyx_pf_15View_dot_MemoryView_5array_6__len__(struct __pyx_array_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":232 - * - * def __len__(self): - * return self._shape[0] # <<<<<<<<<<<<<< - * - * def __getattr__(self, attr): - */ - __pyx_r = (__pyx_v_self->_shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":231 - * return memoryview(self, flags, self.dtype_is_object) - * - * def __len__(self): # <<<<<<<<<<<<<< - * return self._shape[0] - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":234 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr); /*proto*/ -static PyObject *__pyx_array___getattr__(PyObject *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getattr__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_attr)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_8__getattr__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_attr) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getattr__", 0); - - /* "View.MemoryView":235 - * - * def __getattr__(self, attr): - * return getattr(self.memview, attr) # <<<<<<<<<<<<<< - * - * def __getitem__(self, item): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), 
__pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_GetAttr(__pyx_t_1, __pyx_v_attr); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 235, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":234 - * return self._shape[0] - * - * def __getattr__(self, attr): # <<<<<<<<<<<<<< - * return getattr(self.memview, attr) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getattr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":237 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - -/* Python wrapper */ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item); /*proto*/ -static PyObject *__pyx_array___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_array___pyx_pf_15View_dot_MemoryView_5array_10__getitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":238 - * - * def __getitem__(self, item): - * return self.memview[item] # <<<<<<<<<<<<<< - * - * def __setitem__(self, item, value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetItem(__pyx_t_1, __pyx_v_item); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 238, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":237 - * return getattr(self.memview, attr) - * - * def __getitem__(self, item): # <<<<<<<<<<<<<< - * return self.memview[item] - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.array.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":240 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - -/* Python wrapper */ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_array___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - 
__pyx_r = __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v_item), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_array___pyx_pf_15View_dot_MemoryView_5array_12__setitem__(struct __pyx_array_obj *__pyx_v_self, PyObject *__pyx_v_item, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - - /* "View.MemoryView":241 - * - * def __setitem__(self, item, value): - * self.memview[item] = value # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_memview); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 241, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (unlikely(PyObject_SetItem(__pyx_t_1, __pyx_v_item, __pyx_v_value) < 0)) __PYX_ERR(1, 241, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":240 - * return self.memview[item] - * - * def __setitem__(self, item, value): # <<<<<<<<<<<<<< - * self.memview[item] = value - * - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_array_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_array___reduce_cython__(((struct __pyx_array_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array___reduce_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__7, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__reduce_cython__", 
__pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_array_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_array_2__setstate_cython__(((struct __pyx_array_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_array_2__setstate_cython__(CYTHON_UNUSED struct __pyx_array_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__8, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.array.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":245 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<< - * char *mode, char *buf): - * cdef array result - */ - -static struct __pyx_array_obj *__pyx_array_new(PyObject *__pyx_v_shape, Py_ssize_t __pyx_v_itemsize, char *__pyx_v_format, char *__pyx_v_mode, char *__pyx_v_buf) { - struct __pyx_array_obj *__pyx_v_result = 0; - struct __pyx_array_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("array_cwrapper", 0); - - /* "View.MemoryView":249 - * cdef array result - * - * if buf == NULL: # <<<<<<<<<<<<<< - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - */ - __pyx_t_1 = ((__pyx_v_buf == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":250 - * - * if buf == 
NULL: - * result = array(shape, itemsize, format, mode.decode('ASCII')) # <<<<<<<<<<<<<< - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 250, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 250, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 250, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(4); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 250, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 3, __pyx_t_4); - __pyx_t_2 = 0; - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_5, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 250, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":249 - * cdef array result - * - * if buf == NULL: # <<<<<<<<<<<<<< - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":252 - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<< - * allocate_buffer=False) - * result.data = buf - */ - /*else*/ { - __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_itemsize); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = __Pyx_PyBytes_FromString(__pyx_v_format); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_mode, 0, strlen(__pyx_v_mode), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_2 = PyTuple_New(4); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_INCREF(__pyx_v_shape); - __Pyx_GIVEREF(__pyx_v_shape); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_v_shape); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_2, 1, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_2, 2, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_2, 3, __pyx_t_3); - __pyx_t_4 = 0; - __pyx_t_5 = 0; - __pyx_t_3 = 0; - - /* "View.MemoryView":253 - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), - * allocate_buffer=False) # <<<<<<<<<<<<<< - * result.data = buf - * - */ - __pyx_t_3 = __Pyx_PyDict_NewPresized(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 253, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (PyDict_SetItem(__pyx_t_3, __pyx_n_s_allocate_buffer, Py_False) < 0) __PYX_ERR(1, 253, __pyx_L1_error) - - /* "View.MemoryView":252 - * result = array(shape, itemsize, format, mode.decode('ASCII')) - * else: - * result = array(shape, itemsize, format, mode.decode('ASCII'), # <<<<<<<<<<<<<< - * allocate_buffer=False) - * result.data = buf - */ - __pyx_t_5 = 
__Pyx_PyObject_Call(((PyObject *)__pyx_array_type), __pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 252, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_array_obj *)__pyx_t_5); - __pyx_t_5 = 0; - - /* "View.MemoryView":254 - * result = array(shape, itemsize, format, mode.decode('ASCII'), - * allocate_buffer=False) - * result.data = buf # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->data = __pyx_v_buf; - } - __pyx_L3:; - - /* "View.MemoryView":256 - * result.data = buf - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(((PyObject *)__pyx_r)); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":245 - * - * @cname("__pyx_array_new") - * cdef array array_cwrapper(tuple shape, Py_ssize_t itemsize, char *format, # <<<<<<<<<<<<<< - * char *mode, char *buf): - * cdef array result - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.array_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":282 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - -/* Python wrapper */ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_MemviewEnum___init__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_name = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_name,0}; - PyObject* values[1] = {0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_name)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__init__") < 0)) __PYX_ERR(1, 282, __pyx_L3_error) - } - } else if (PyTuple_GET_SIZE(__pyx_args) != 1) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - } - __pyx_v_name = values[0]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__init__", 1, 1, 1, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 282, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.Enum.__init__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), __pyx_v_name); - - /* function 
exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum___init__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v_name) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__init__", 0); - - /* "View.MemoryView":283 - * cdef object name - * def __init__(self, name): - * self.name = name # <<<<<<<<<<<<<< - * def __repr__(self): - * return self.name - */ - __Pyx_INCREF(__pyx_v_name); - __Pyx_GIVEREF(__pyx_v_name); - __Pyx_GOTREF(__pyx_v_self->name); - __Pyx_DECREF(__pyx_v_self->name); - __pyx_v_self->name = __pyx_v_name; - - /* "View.MemoryView":282 - * cdef class Enum(object): - * cdef object name - * def __init__(self, name): # <<<<<<<<<<<<<< - * self.name = name - * def __repr__(self): - */ - - /* function exit code */ - __pyx_r = 0; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":284 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - -/* Python wrapper */ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_MemviewEnum___repr__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_MemviewEnum___pyx_pf_15View_dot_MemoryView_4Enum_2__repr__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":285 - * self.name = name - * def __repr__(self): - * return self.name # <<<<<<<<<<<<<< - * - * cdef generic = Enum("") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->name); - __pyx_r = __pyx_v_self->name; - goto __pyx_L0; - - /* "View.MemoryView":284 - * def __init__(self, name): - * self.name = name - * def __repr__(self): # <<<<<<<<<<<<<< - * return self.name - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_MemviewEnum___reduce_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum___reduce_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self) { - PyObject *__pyx_v_state = 0; - PyObject *__pyx_v__dict = 0; - int __pyx_v_use_setstate; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 
0); - - /* "(tree fragment)":5 - * cdef object _dict - * cdef bint use_setstate - * state = (self.name,) # <<<<<<<<<<<<<< - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v_self->name); - __Pyx_GIVEREF(__pyx_v_self->name); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v_self->name); - __pyx_v_state = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "(tree fragment)":6 - * cdef bint use_setstate - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) # <<<<<<<<<<<<<< - * if _dict is not None: - * state += (_dict,) - */ - __pyx_t_1 = __Pyx_GetAttr3(((PyObject *)__pyx_v_self), __pyx_n_s_dict, Py_None); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v__dict = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - __pyx_t_2 = (__pyx_v__dict != Py_None); - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":8 - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: - * state += (_dict,) # <<<<<<<<<<<<<< - * use_setstate = True - * else: - */ - __pyx_t_1 = PyTuple_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_v__dict); - __Pyx_GIVEREF(__pyx_v__dict); - PyTuple_SET_ITEM(__pyx_t_1, 0, __pyx_v__dict); - __pyx_t_4 = PyNumber_InPlaceAdd(__pyx_v_state, __pyx_t_1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 8, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF_SET(__pyx_v_state, ((PyObject*)__pyx_t_4)); - __pyx_t_4 = 0; - - /* "(tree fragment)":9 - * if _dict is not None: - * state += (_dict,) - * use_setstate = True # <<<<<<<<<<<<<< - * else: - * use_setstate = self.name is not None - */ - __pyx_v_use_setstate = 1; - - /* "(tree fragment)":7 - * state = (self.name,) - * _dict = getattr(self, '__dict__', None) - * if _dict is not None: # <<<<<<<<<<<<<< - * state += (_dict,) - * use_setstate = True - */ - goto __pyx_L3; - } - - /* "(tree fragment)":11 - * use_setstate = True - * else: - * use_setstate = self.name is not None # <<<<<<<<<<<<<< - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_self->name != Py_None); - __pyx_v_use_setstate = __pyx_t_3; - } - __pyx_L3:; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - */ - __pyx_t_3 = (__pyx_v_use_setstate != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":13 - * use_setstate = self.name is not None - * if use_setstate: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state # <<<<<<<<<<<<<< - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_4, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - 
PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_184977713); - __Pyx_GIVEREF(__pyx_int_184977713); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_1, 2, Py_None); - __pyx_t_5 = PyTuple_New(3); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 13, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_5, 1, __pyx_t_1); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_5, 2, __pyx_v_state); - __pyx_t_4 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "(tree fragment)":12 - * else: - * use_setstate = self.name is not None - * if use_setstate: # <<<<<<<<<<<<<< - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - */ - } - - /* "(tree fragment)":15 - * return __pyx_unpickle_Enum, (type(self), 0xb068931, None), state - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __Pyx_GetModuleGlobalName(__pyx_t_5, __pyx_n_s_pyx_unpickle_Enum); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_1 = PyTuple_New(3); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_GIVEREF(((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - PyTuple_SET_ITEM(__pyx_t_1, 0, ((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self)))); - __Pyx_INCREF(__pyx_int_184977713); - __Pyx_GIVEREF(__pyx_int_184977713); - PyTuple_SET_ITEM(__pyx_t_1, 1, __pyx_int_184977713); - __Pyx_INCREF(__pyx_v_state); - __Pyx_GIVEREF(__pyx_v_state); - PyTuple_SET_ITEM(__pyx_t_1, 2, __pyx_v_state); - __pyx_t_4 = PyTuple_New(2); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 15, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_5); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_5); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_1); - __pyx_t_5 = 0; - __pyx_t_1 = 0; - __pyx_r = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L0; - } - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * cdef tuple state - * cdef object _dict - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.Enum.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_state); - __Pyx_XDECREF(__pyx_v__dict); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_MemviewEnum_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ 
(wrapper)", 0); - __pyx_r = __pyx_pf___pyx_MemviewEnum_2__setstate_cython__(((struct __pyx_MemviewEnum_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_MemviewEnum_2__setstate_cython__(struct __pyx_MemviewEnum_obj *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":17 - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): - * __pyx_unpickle_Enum__set_state(self, __pyx_state) # <<<<<<<<<<<<<< - */ - if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 17, __pyx_L1_error) - __pyx_t_1 = __pyx_unpickle_Enum__set_state(__pyx_v_self, ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 17, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":16 - * else: - * return __pyx_unpickle_Enum, (type(self), 0xb068931, state) - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state(self, __pyx_state) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.Enum.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":299 - * - * @cname('__pyx_align_pointer') - * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<< - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory - */ - -static void *__pyx_align_pointer(void *__pyx_v_memory, size_t __pyx_v_alignment) { - Py_intptr_t __pyx_v_aligned_p; - size_t __pyx_v_offset; - void *__pyx_r; - int __pyx_t_1; - - /* "View.MemoryView":301 - * cdef void *align_pointer(void *memory, size_t alignment) nogil: - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory # <<<<<<<<<<<<<< - * cdef size_t offset - * - */ - __pyx_v_aligned_p = ((Py_intptr_t)__pyx_v_memory); - - /* "View.MemoryView":305 - * - * with cython.cdivision(True): - * offset = aligned_p % alignment # <<<<<<<<<<<<<< - * - * if offset > 0: - */ - __pyx_v_offset = (__pyx_v_aligned_p % __pyx_v_alignment); - - /* "View.MemoryView":307 - * offset = aligned_p % alignment - * - * if offset > 0: # <<<<<<<<<<<<<< - * aligned_p += alignment - offset - * - */ - __pyx_t_1 = ((__pyx_v_offset > 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":308 - * - * if offset > 0: - * aligned_p += alignment - offset # <<<<<<<<<<<<<< - * - * return aligned_p - */ - __pyx_v_aligned_p = (__pyx_v_aligned_p + (__pyx_v_alignment - __pyx_v_offset)); - - /* "View.MemoryView":307 - * offset = aligned_p % alignment - * - * if offset > 0: # <<<<<<<<<<<<<< - * aligned_p += alignment - offset - * - */ - } - - /* "View.MemoryView":310 - * aligned_p += alignment - offset - * - * return aligned_p # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = ((void *)__pyx_v_aligned_p); - goto __pyx_L0; - 
- /* "View.MemoryView":299 - * - * @cname('__pyx_align_pointer') - * cdef void *align_pointer(void *memory, size_t alignment) nogil: # <<<<<<<<<<<<<< - * "Align pointer memory on a given boundary" - * cdef Py_intptr_t aligned_p = memory - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":346 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - -/* Python wrapper */ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static int __pyx_memoryview___cinit__(PyObject *__pyx_v_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v_obj = 0; - int __pyx_v_flags; - int __pyx_v_dtype_is_object; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__cinit__ (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_obj,&__pyx_n_s_flags,&__pyx_n_s_dtype_is_object,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_obj)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_flags)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, 1); __PYX_ERR(1, 346, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (kw_args > 0) { - PyObject* value = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_dtype_is_object); - if (value) { values[2] = value; kw_args--; } - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__cinit__") < 0)) __PYX_ERR(1, 346, __pyx_L3_error) - } - } else { - switch (PyTuple_GET_SIZE(__pyx_args)) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - break; - default: goto __pyx_L5_argtuple_error; - } - } - __pyx_v_obj = values[0]; - __pyx_v_flags = __Pyx_PyInt_As_int(values[1]); if (unlikely((__pyx_v_flags == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 346, __pyx_L3_error) - if (values[2]) { - __pyx_v_dtype_is_object = __Pyx_PyObject_IsTrue(values[2]); if (unlikely((__pyx_v_dtype_is_object == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 346, __pyx_L3_error) - } else { - __pyx_v_dtype_is_object = ((int)0); - } - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__cinit__", 0, 2, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 346, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return -1; - __pyx_L4_argument_unpacking_done:; - __pyx_r = 
__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_obj, __pyx_v_flags, __pyx_v_dtype_is_object); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview___cinit__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj, int __pyx_v_flags, int __pyx_v_dtype_is_object) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__cinit__", 0); - - /* "View.MemoryView":347 - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj # <<<<<<<<<<<<<< - * self.flags = flags - * if type(self) is memoryview or obj is not None: - */ - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - __Pyx_GOTREF(__pyx_v_self->obj); - __Pyx_DECREF(__pyx_v_self->obj); - __pyx_v_self->obj = __pyx_v_obj; - - /* "View.MemoryView":348 - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): - * self.obj = obj - * self.flags = flags # <<<<<<<<<<<<<< - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - */ - __pyx_v_self->flags = __pyx_v_flags; - - /* "View.MemoryView":349 - * self.obj = obj - * self.flags = flags - * if type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - __pyx_t_2 = (((PyObject *)Py_TYPE(((PyObject *)__pyx_v_self))) == ((PyObject *)__pyx_memoryview_type)); - __pyx_t_3 = (__pyx_t_2 != 0); - if (!__pyx_t_3) { - } else { - __pyx_t_1 = __pyx_t_3; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_3 = (__pyx_v_obj != Py_None); - __pyx_t_2 = (__pyx_t_3 != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (__pyx_t_1) { - - /* "View.MemoryView":350 - * self.flags = flags - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) # <<<<<<<<<<<<<< - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - */ - __pyx_t_4 = __Pyx_GetBuffer(__pyx_v_obj, (&__pyx_v_self->view), __pyx_v_flags); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 350, __pyx_L1_error) - - /* "View.MemoryView":351 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_t_1 = ((((PyObject *)__pyx_v_self->view.obj) == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":352 - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = Py_None; - - /* "View.MemoryView":353 - * if self.view.obj == NULL: - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * if not __PYX_CYTHON_ATOMICS_ENABLED(): - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":351 - * if type(self) is memoryview or obj is not None: - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &self.view).obj = Py_None - * Py_INCREF(Py_None) - */ - } - - /* "View.MemoryView":349 - * self.obj = obj - * self.flags = flags - * if 
type(self) is memoryview or obj is not None: # <<<<<<<<<<<<<< - * __Pyx_GetBuffer(obj, &self.view, flags) - * if self.view.obj == NULL: - */ - } - - /* "View.MemoryView":355 - * Py_INCREF(Py_None) - * - * if not __PYX_CYTHON_ATOMICS_ENABLED(): # <<<<<<<<<<<<<< - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - */ - __pyx_t_1 = ((!(__PYX_CYTHON_ATOMICS_ENABLED() != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":357 - * if not __PYX_CYTHON_ATOMICS_ENABLED(): - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - __pyx_t_1 = ((__pyx_memoryview_thread_locks_used < 8) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":358 - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - */ - __pyx_v_self->lock = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - - /* "View.MemoryView":359 - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 # <<<<<<<<<<<<<< - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used + 1); - - /* "View.MemoryView":357 - * if not __PYX_CYTHON_ATOMICS_ENABLED(): - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: # <<<<<<<<<<<<<< - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - */ - } - - /* "View.MemoryView":360 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":361 - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() # <<<<<<<<<<<<<< - * if self.lock is NULL: - * raise MemoryError - */ - __pyx_v_self->lock = PyThread_allocate_lock(); - - /* "View.MemoryView":362 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - __pyx_t_1 = ((__pyx_v_self->lock == NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":363 - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - PyErr_NoMemory(); __PYX_ERR(1, 363, __pyx_L1_error) - - /* "View.MemoryView":362 - * if self.lock is NULL: - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * - */ - } - - /* "View.MemoryView":360 - * self.lock = __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] - * __pyx_memoryview_thread_locks_used += 1 - * if self.lock is NULL: # <<<<<<<<<<<<<< - * self.lock = PyThread_allocate_lock() - * if self.lock is NULL: - */ - 
} - - /* "View.MemoryView":355 - * Py_INCREF(Py_None) - * - * if not __PYX_CYTHON_ATOMICS_ENABLED(): # <<<<<<<<<<<<<< - * global __pyx_memoryview_thread_locks_used - * if __pyx_memoryview_thread_locks_used < THREAD_LOCKS_PREALLOCATED: - */ - } - - /* "View.MemoryView":365 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":366 - * - * if flags & PyBUF_FORMAT: - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') # <<<<<<<<<<<<<< - * else: - * self.dtype_is_object = dtype_is_object - */ - __pyx_t_2 = (((__pyx_v_self->view.format[0]) == 'O') != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L12_bool_binop_done; - } - __pyx_t_2 = (((__pyx_v_self->view.format[1]) == '\x00') != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L12_bool_binop_done:; - __pyx_v_self->dtype_is_object = __pyx_t_1; - - /* "View.MemoryView":365 - * raise MemoryError - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - */ - goto __pyx_L11; - } - - /* "View.MemoryView":368 - * self.dtype_is_object = (self.view.format[0] == b'O' and self.view.format[1] == b'\0') - * else: - * self.dtype_is_object = dtype_is_object # <<<<<<<<<<<<<< - * - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( - */ - /*else*/ { - __pyx_v_self->dtype_is_object = __pyx_v_dtype_is_object; - } - __pyx_L11:; - - /* "View.MemoryView":370 - * self.dtype_is_object = dtype_is_object - * - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( # <<<<<<<<<<<<<< - * &self.acquisition_count[0], sizeof(__pyx_atomic_int)) - * self.typeinfo = NULL - */ - __pyx_v_self->acquisition_count_aligned_p = ((__pyx_atomic_int *)__pyx_align_pointer(((void *)(&(__pyx_v_self->acquisition_count[0]))), (sizeof(__pyx_atomic_int)))); - - /* "View.MemoryView":372 - * self.acquisition_count_aligned_p = <__pyx_atomic_int *> align_pointer( - * &self.acquisition_count[0], sizeof(__pyx_atomic_int)) - * self.typeinfo = NULL # <<<<<<<<<<<<<< - * - * def __dealloc__(memoryview self): - */ - __pyx_v_self->typeinfo = NULL; - - /* "View.MemoryView":346 - * cdef __Pyx_TypeInfo *typeinfo - * - * def __cinit__(memoryview self, object obj, int flags, bint dtype_is_object=False): # <<<<<<<<<<<<<< - * self.obj = obj - * self.flags = flags - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_AddTraceback("View.MemoryView.memoryview.__cinit__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":374 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - -/* Python wrapper */ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryview___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void 
__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_2__dealloc__(struct __pyx_memoryview_obj *__pyx_v_self) { - int __pyx_v_i; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - PyThread_type_lock __pyx_t_6; - PyThread_type_lock __pyx_t_7; - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":375 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - __pyx_t_1 = (__pyx_v_self->obj != Py_None); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":376 - * def __dealloc__(memoryview self): - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) # <<<<<<<<<<<<<< - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - */ - __Pyx_ReleaseBuffer((&__pyx_v_self->view)); - - /* "View.MemoryView":375 - * - * def __dealloc__(memoryview self): - * if self.obj is not None: # <<<<<<<<<<<<<< - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":377 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - __pyx_t_2 = ((((Py_buffer *)(&__pyx_v_self->view))->obj == Py_None) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":379 - * elif (<__pyx_buffer *> &self.view).obj == Py_None: - * - * (<__pyx_buffer *> &self.view).obj = NULL # <<<<<<<<<<<<<< - * Py_DECREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_self->view))->obj = NULL; - - /* "View.MemoryView":380 - * - * (<__pyx_buffer *> &self.view).obj = NULL - * Py_DECREF(Py_None) # <<<<<<<<<<<<<< - * - * cdef int i - */ - Py_DECREF(Py_None); - - /* "View.MemoryView":377 - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - * elif (<__pyx_buffer *> &self.view).obj == Py_None: # <<<<<<<<<<<<<< - * - * (<__pyx_buffer *> &self.view).obj = NULL - */ - } - __pyx_L3:; - - /* "View.MemoryView":384 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - __pyx_t_2 = ((__pyx_v_self->lock != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":385 - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): # <<<<<<<<<<<<<< - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - */ - __pyx_t_3 = __pyx_memoryview_thread_locks_used; - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_5 = 0; __pyx_t_5 < __pyx_t_4; __pyx_t_5+=1) { - __pyx_v_i = __pyx_t_5; - - /* "View.MemoryView":386 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - __pyx_t_2 = (((__pyx_memoryview_thread_locks[__pyx_v_i]) == __pyx_v_self->lock) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":387 - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 # <<<<<<<<<<<<<< - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], 
__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - */ - __pyx_memoryview_thread_locks_used = (__pyx_memoryview_thread_locks_used - 1); - - /* "View.MemoryView":388 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - __pyx_t_2 = ((__pyx_v_i != __pyx_memoryview_thread_locks_used) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":390 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) # <<<<<<<<<<<<<< - * break - * else: - */ - __pyx_t_6 = (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]); - __pyx_t_7 = (__pyx_memoryview_thread_locks[__pyx_v_i]); - - /* "View.MemoryView":389 - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break - */ - (__pyx_memoryview_thread_locks[__pyx_v_i]) = __pyx_t_6; - (__pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used]) = __pyx_t_7; - - /* "View.MemoryView":388 - * if __pyx_memoryview_thread_locks[i] is self.lock: - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - */ - } - - /* "View.MemoryView":391 - * __pyx_memoryview_thread_locks[i], __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used] = ( - * __pyx_memoryview_thread_locks[__pyx_memoryview_thread_locks_used], __pyx_memoryview_thread_locks[i]) - * break # <<<<<<<<<<<<<< - * else: - * PyThread_free_lock(self.lock) - */ - goto __pyx_L6_break; - - /* "View.MemoryView":386 - * if self.lock != NULL: - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: # <<<<<<<<<<<<<< - * __pyx_memoryview_thread_locks_used -= 1 - * if i != __pyx_memoryview_thread_locks_used: - */ - } - } - /*else*/ { - - /* "View.MemoryView":393 - * break - * else: - * PyThread_free_lock(self.lock) # <<<<<<<<<<<<<< - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - */ - PyThread_free_lock(__pyx_v_self->lock); - } - __pyx_L6_break:; - - /* "View.MemoryView":384 - * cdef int i - * global __pyx_memoryview_thread_locks_used - * if self.lock != NULL: # <<<<<<<<<<<<<< - * for i in range(__pyx_memoryview_thread_locks_used): - * if __pyx_memoryview_thread_locks[i] is self.lock: - */ - } - - /* "View.MemoryView":374 - * self.typeinfo = NULL - * - * def __dealloc__(memoryview self): # <<<<<<<<<<<<<< - * if self.obj is not None: - * __Pyx_ReleaseBuffer(&self.view) - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":395 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview 
self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - -static char *__pyx_memoryview_get_item_pointer(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - Py_ssize_t __pyx_v_dim; - char *__pyx_v_itemp; - PyObject *__pyx_v_idx = NULL; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t __pyx_t_3; - PyObject *(*__pyx_t_4)(PyObject *); - PyObject *__pyx_t_5 = NULL; - Py_ssize_t __pyx_t_6; - char *__pyx_t_7; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_item_pointer", 0); - - /* "View.MemoryView":397 - * cdef char *get_item_pointer(memoryview self, object index) except NULL: - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf # <<<<<<<<<<<<<< - * - * for dim, idx in enumerate(index): - */ - __pyx_v_itemp = ((char *)__pyx_v_self->view.buf); - - /* "View.MemoryView":399 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - __pyx_t_1 = 0; - if (likely(PyList_CheckExact(__pyx_v_index)) || PyTuple_CheckExact(__pyx_v_index)) { - __pyx_t_2 = __pyx_v_index; __Pyx_INCREF(__pyx_t_2); __pyx_t_3 = 0; - __pyx_t_4 = NULL; - } else { - __pyx_t_3 = -1; __pyx_t_2 = PyObject_GetIter(__pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 399, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = Py_TYPE(__pyx_t_2)->tp_iternext; if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 399, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_4)) { - if (likely(PyList_CheckExact(__pyx_t_2))) { - if (__pyx_t_3 >= PyList_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyList_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 399, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 399, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } else { - if (__pyx_t_3 >= PyTuple_GET_SIZE(__pyx_t_2)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_5 = PyTuple_GET_ITEM(__pyx_t_2, __pyx_t_3); __Pyx_INCREF(__pyx_t_5); __pyx_t_3++; if (unlikely(0 < 0)) __PYX_ERR(1, 399, __pyx_L1_error) - #else - __pyx_t_5 = PySequence_ITEM(__pyx_t_2, __pyx_t_3); __pyx_t_3++; if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 399, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - } - } else { - __pyx_t_5 = __pyx_t_4(__pyx_t_2); - if (unlikely(!__pyx_t_5)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 399, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_5); - } - __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_5); - __pyx_t_5 = 0; - __pyx_v_dim = __pyx_t_1; - __pyx_t_1 = (__pyx_t_1 + 1); - - /* "View.MemoryView":400 - * - * for dim, idx in enumerate(index): - * itemp = pybuffer_index(&self.view, itemp, idx, dim) # <<<<<<<<<<<<<< - * - * return itemp - */ - __pyx_t_6 = __Pyx_PyIndex_AsSsize_t(__pyx_v_idx); if (unlikely((__pyx_t_6 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 400, __pyx_L1_error) - __pyx_t_7 = __pyx_pybuffer_index((&__pyx_v_self->view), __pyx_v_itemp, __pyx_t_6, __pyx_v_dim); if (unlikely(__pyx_t_7 == ((char *)NULL))) __PYX_ERR(1, 400, __pyx_L1_error) - __pyx_v_itemp = 
__pyx_t_7; - - /* "View.MemoryView":399 - * cdef char *itemp = self.view.buf - * - * for dim, idx in enumerate(index): # <<<<<<<<<<<<<< - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - */ - } - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":402 - * itemp = pybuffer_index(&self.view, itemp, idx, dim) - * - * return itemp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_itemp; - goto __pyx_L0; - - /* "View.MemoryView":395 - * PyThread_free_lock(self.lock) - * - * cdef char *get_item_pointer(memoryview self, object index) except NULL: # <<<<<<<<<<<<<< - * cdef Py_ssize_t dim - * cdef char *itemp = self.view.buf - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.get_item_pointer", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":405 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index); /*proto*/ -static PyObject *__pyx_memoryview___getitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_4__getitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_indices = NULL; - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - char *__pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__getitem__", 0); - - /* "View.MemoryView":406 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - __pyx_t_1 = (__pyx_v_index == __pyx_builtin_Ellipsis); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":407 - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: - * return self # <<<<<<<<<<<<<< - * - * have_slices, indices = _unellipsify(index, self.view.ndim) - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __pyx_r = ((PyObject *)__pyx_v_self); - goto __pyx_L0; - - /* "View.MemoryView":406 - * - * def __getitem__(memoryview self, object index): - * if index is Ellipsis: # <<<<<<<<<<<<<< - * return self - * - */ - } - - /* "View.MemoryView":409 - * return self - * - * have_slices, indices = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * cdef char *itemp - */ - __pyx_t_3 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 409, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (likely(__pyx_t_3 != Py_None)) { - PyObject* sequence = __pyx_t_3; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if 
(size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 409, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_5 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - #else - __pyx_t_4 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 409, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 409, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - #endif - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 409, __pyx_L1_error) - } - __pyx_v_have_slices = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_v_indices = __pyx_t_5; - __pyx_t_5 = 0; - - /* "View.MemoryView":412 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - __pyx_t_2 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 412, __pyx_L1_error) - if (__pyx_t_2) { - - /* "View.MemoryView":413 - * cdef char *itemp - * if have_slices: - * return memview_slice(self, indices) # <<<<<<<<<<<<<< - * else: - * itemp = self.get_item_pointer(indices) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = ((PyObject *)__pyx_memview_slice(__pyx_v_self, __pyx_v_indices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 413, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":412 - * - * cdef char *itemp - * if have_slices: # <<<<<<<<<<<<<< - * return memview_slice(self, indices) - * else: - */ - } - - /* "View.MemoryView":415 - * return memview_slice(self, indices) - * else: - * itemp = self.get_item_pointer(indices) # <<<<<<<<<<<<<< - * return self.convert_item_to_object(itemp) - * - */ - /*else*/ { - __pyx_t_6 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_indices); if (unlikely(__pyx_t_6 == ((char *)NULL))) __PYX_ERR(1, 415, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_6; - - /* "View.MemoryView":416 - * else: - * itemp = self.get_item_pointer(indices) - * return self.convert_item_to_object(itemp) # <<<<<<<<<<<<<< - * - * def __setitem__(memoryview self, object index, object value): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->convert_item_to_object(__pyx_v_self, __pyx_v_itemp); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 416, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":405 - * - * - * def __getitem__(memoryview self, object index): # <<<<<<<<<<<<<< - * if index is Ellipsis: - * return self - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_indices); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":418 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") - */ - -/* Python wrapper */ 
-static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value); /*proto*/ -static int __pyx_memoryview___setitem__(PyObject *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setitem__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v_index), ((PyObject *)__pyx_v_value)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_6__setitem__(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - PyObject *__pyx_v_have_slices = NULL; - PyObject *__pyx_v_obj = NULL; - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setitem__", 0); - __Pyx_INCREF(__pyx_v_index); - - /* "View.MemoryView":419 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError("Cannot assign to read-only memoryview") - * - */ - __pyx_t_1 = (__pyx_v_self->view.readonly != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":420 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__9, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 420, __pyx_L1_error) - - /* "View.MemoryView":419 - * - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: # <<<<<<<<<<<<<< - * raise TypeError("Cannot assign to read-only memoryview") - * - */ - } - - /* "View.MemoryView":422 - * raise TypeError("Cannot assign to read-only memoryview") - * - * have_slices, index = _unellipsify(index, self.view.ndim) # <<<<<<<<<<<<<< - * - * if have_slices: - */ - __pyx_t_2 = _unellipsify(__pyx_v_index, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 422, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (likely(__pyx_t_2 != Py_None)) { - PyObject* sequence = __pyx_t_2; - Py_ssize_t size = __Pyx_PySequence_SIZE(sequence); - if (unlikely(size != 2)) { - if (size > 2) __Pyx_RaiseTooManyValuesError(2); - else if (size >= 0) __Pyx_RaiseNeedMoreValuesError(size); - __PYX_ERR(1, 422, __pyx_L1_error) - } - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_3 = PyTuple_GET_ITEM(sequence, 0); - __pyx_t_4 = PyTuple_GET_ITEM(sequence, 1); - __Pyx_INCREF(__pyx_t_3); - __Pyx_INCREF(__pyx_t_4); - #else - __pyx_t_3 = PySequence_ITEM(sequence, 0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 422, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PySequence_ITEM(sequence, 1); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 422, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - #endif - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } else { - __Pyx_RaiseNoneNotIterableError(); __PYX_ERR(1, 422, __pyx_L1_error) - } - __pyx_v_have_slices = 
__pyx_t_3; - __pyx_t_3 = 0; - __Pyx_DECREF_SET(__pyx_v_index, __pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":424 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_have_slices); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 424, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":425 - * - * if have_slices: - * obj = self.is_slice(value) # <<<<<<<<<<<<<< - * if obj: - * self.setitem_slice_assignment(self[index], obj) - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->is_slice(__pyx_v_self, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 425, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_v_obj = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":426 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_v_obj); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 426, __pyx_L1_error) - if (__pyx_t_1) { - - /* "View.MemoryView":427 - * obj = self.is_slice(value) - * if obj: - * self.setitem_slice_assignment(self[index], obj) # <<<<<<<<<<<<<< - * else: - * self.setitem_slice_assign_scalar(self[index], value) - */ - __pyx_t_2 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assignment(__pyx_v_self, __pyx_t_2, __pyx_v_obj); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 427, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "View.MemoryView":426 - * if have_slices: - * obj = self.is_slice(value) - * if obj: # <<<<<<<<<<<<<< - * self.setitem_slice_assignment(self[index], obj) - * else: - */ - goto __pyx_L5; - } - - /* "View.MemoryView":429 - * self.setitem_slice_assignment(self[index], obj) - * else: - * self.setitem_slice_assign_scalar(self[index], value) # <<<<<<<<<<<<<< - * else: - * self.setitem_indexed(index, value) - */ - /*else*/ { - __pyx_t_4 = __Pyx_PyObject_GetItem(((PyObject *)__pyx_v_self), __pyx_v_index); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - if (!(likely(((__pyx_t_4) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_4, __pyx_memoryview_type))))) __PYX_ERR(1, 429, __pyx_L1_error) - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_slice_assign_scalar(__pyx_v_self, ((struct __pyx_memoryview_obj *)__pyx_t_4), __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 429, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_L5:; - - /* "View.MemoryView":424 - * have_slices, index = _unellipsify(index, self.view.ndim) - * - * if have_slices: # <<<<<<<<<<<<<< - * obj = self.is_slice(value) - * if obj: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":431 - * self.setitem_slice_assign_scalar(self[index], value) - * else: - * self.setitem_indexed(index, value) # <<<<<<<<<<<<<< - * - * cdef is_slice(self, obj): - */ - /*else*/ { - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->setitem_indexed(__pyx_v_self, __pyx_v_index, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 431, __pyx_L1_error) - 
__Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_L4:; - - /* "View.MemoryView":418 - * return self.convert_item_to_object(itemp) - * - * def __setitem__(memoryview self, object index, object value): # <<<<<<<<<<<<<< - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setitem__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_have_slices); - __Pyx_XDECREF(__pyx_v_obj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":433 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - -static PyObject *__pyx_memoryview_is_slice(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_obj) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_slice", 0); - __Pyx_INCREF(__pyx_v_obj); - - /* "View.MemoryView":434 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_obj, __pyx_memoryview_type); - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":435 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_3, &__pyx_t_4, &__pyx_t_5); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_5); - /*try:*/ { - - /* "View.MemoryView":436 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_6 = __Pyx_PyInt_From_int(((__pyx_v_self->flags & (~PyBUF_WRITABLE)) | PyBUF_ANY_CONTIGUOUS)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 436, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":437 - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) # <<<<<<<<<<<<<< - * except TypeError: - * return None - */ - __pyx_t_7 = __Pyx_PyBool_FromLong(__pyx_v_self->dtype_is_object); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 437, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - - /* "View.MemoryView":436 - * if not isinstance(obj, memoryview): - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, # <<<<<<<<<<<<<< - * self.dtype_is_object) - * except TypeError: - */ - __pyx_t_8 = PyTuple_New(3); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 436, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_8); - __Pyx_INCREF(__pyx_v_obj); - __Pyx_GIVEREF(__pyx_v_obj); - PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_v_obj); - 
__Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_8, 1, __pyx_t_6); - __Pyx_GIVEREF(__pyx_t_7); - PyTuple_SET_ITEM(__pyx_t_8, 2, __pyx_t_7); - __pyx_t_6 = 0; - __pyx_t_7 = 0; - __pyx_t_7 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_8, NULL); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 436, __pyx_L4_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF_SET(__pyx_v_obj, __pyx_t_7); - __pyx_t_7 = 0; - - /* "View.MemoryView":435 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - } - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_XDECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - goto __pyx_L9_try_end; - __pyx_L4_error:; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - - /* "View.MemoryView":438 - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - * except TypeError: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_9 = __Pyx_PyErr_ExceptionMatches(__pyx_builtin_TypeError); - if (__pyx_t_9) { - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_6) < 0) __PYX_ERR(1, 438, __pyx_L6_except_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_GOTREF(__pyx_t_8); - __Pyx_GOTREF(__pyx_t_6); - - /* "View.MemoryView":439 - * self.dtype_is_object) - * except TypeError: - * return None # <<<<<<<<<<<<<< - * - * return obj - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - goto __pyx_L7_except_return; - } - goto __pyx_L6_except_error; - __pyx_L6_except_error:; - - /* "View.MemoryView":435 - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): - * try: # <<<<<<<<<<<<<< - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - * self.dtype_is_object) - */ - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L1_error; - __pyx_L7_except_return:; - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_XGIVEREF(__pyx_t_5); - __Pyx_ExceptionReset(__pyx_t_3, __pyx_t_4, __pyx_t_5); - goto __pyx_L0; - __pyx_L9_try_end:; - } - - /* "View.MemoryView":434 - * - * cdef is_slice(self, obj): - * if not isinstance(obj, memoryview): # <<<<<<<<<<<<<< - * try: - * obj = memoryview(obj, self.flags & ~PyBUF_WRITABLE | PyBUF_ANY_CONTIGUOUS, - */ - } - - /* "View.MemoryView":441 - * return None - * - * return obj # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assignment(self, dst, src): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_obj); - __pyx_r = __pyx_v_obj; - goto __pyx_L0; - - /* "View.MemoryView":433 - * self.setitem_indexed(index, value) - * - * cdef is_slice(self, obj): # <<<<<<<<<<<<<< - * if not isinstance(obj, memoryview): - * try: - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_obj); - 
__Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":443 - * return obj - * - * cdef setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - -static PyObject *__pyx_memoryview_setitem_slice_assignment(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_dst, PyObject *__pyx_v_src) { - __Pyx_memviewslice __pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_src_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - __Pyx_memviewslice *__pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assignment", 0); - - /* "View.MemoryView":447 - * cdef __Pyx_memviewslice src_slice - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<< - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) - */ - if (!(likely(((__pyx_v_src) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_src, __pyx_memoryview_type))))) __PYX_ERR(1, 447, __pyx_L1_error) - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_src), (&__pyx_v_src_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 447, __pyx_L1_error) - - /* "View.MemoryView":448 - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], - * get_slice_from_memview(dst, &dst_slice)[0], # <<<<<<<<<<<<<< - * src.ndim, dst.ndim, self.dtype_is_object) - * - */ - if (!(likely(((__pyx_v_dst) == Py_None) || likely(__Pyx_TypeTest(__pyx_v_dst, __pyx_memoryview_type))))) __PYX_ERR(1, 448, __pyx_L1_error) - __pyx_t_2 = __pyx_memoryview_get_slice_from_memoryview(((struct __pyx_memoryview_obj *)__pyx_v_dst), (&__pyx_v_dst_slice)); if (unlikely(__pyx_t_2 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 448, __pyx_L1_error) - - /* "View.MemoryView":449 - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) # <<<<<<<<<<<<<< - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - */ - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_src, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 449, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_4 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 449, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_GetAttrStr(__pyx_v_dst, __pyx_n_s_ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 449, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = __Pyx_PyInt_As_int(__pyx_t_3); if (unlikely((__pyx_t_5 == (int)-1) && PyErr_Occurred())) __PYX_ERR(1, 449, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":447 - * cdef __Pyx_memviewslice src_slice - * - * memoryview_copy_contents(get_slice_from_memview(src, &src_slice)[0], # <<<<<<<<<<<<<< - * get_slice_from_memview(dst, &dst_slice)[0], - * src.ndim, dst.ndim, self.dtype_is_object) - */ - __pyx_t_6 = __pyx_memoryview_copy_contents((__pyx_t_1[0]), (__pyx_t_2[0]), __pyx_t_4, __pyx_t_5, __pyx_v_self->dtype_is_object); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 447, __pyx_L1_error) - - /* "View.MemoryView":443 - * return obj - * - * cdef 
setitem_slice_assignment(self, dst, src): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice dst_slice - * cdef __Pyx_memviewslice src_slice - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assignment", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":451 - * src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - -static PyObject *__pyx_memoryview_setitem_slice_assign_scalar(struct __pyx_memoryview_obj *__pyx_v_self, struct __pyx_memoryview_obj *__pyx_v_dst, PyObject *__pyx_v_value) { - int __pyx_v_array[0x80]; - void *__pyx_v_tmp; - void *__pyx_v_item; - __Pyx_memviewslice *__pyx_v_dst_slice; - __Pyx_memviewslice __pyx_v_tmp_slice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_t_4; - int __pyx_t_5; - char const *__pyx_t_6; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - PyObject *__pyx_t_9 = NULL; - PyObject *__pyx_t_10 = NULL; - PyObject *__pyx_t_11 = NULL; - PyObject *__pyx_t_12 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_slice_assign_scalar", 0); - - /* "View.MemoryView":453 - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): - * cdef int array[128] - * cdef void *tmp = NULL # <<<<<<<<<<<<<< - * cdef void *item - * - */ - __pyx_v_tmp = NULL; - - /* "View.MemoryView":458 - * cdef __Pyx_memviewslice *dst_slice - * cdef __Pyx_memviewslice tmp_slice - * dst_slice = get_slice_from_memview(dst, &tmp_slice) # <<<<<<<<<<<<<< - * - * if self.view.itemsize > sizeof(array): - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_dst, (&__pyx_v_tmp_slice)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 458, __pyx_L1_error) - __pyx_v_dst_slice = __pyx_t_1; - - /* "View.MemoryView":460 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - __pyx_t_2 = ((((size_t)__pyx_v_self->view.itemsize) > (sizeof(__pyx_v_array))) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":461 - * - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) # <<<<<<<<<<<<<< - * if tmp == NULL: - * raise MemoryError - */ - __pyx_v_tmp = PyMem_Malloc(__pyx_v_self->view.itemsize); - - /* "View.MemoryView":462 - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - __pyx_t_2 = ((__pyx_v_tmp == NULL) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":463 - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - * raise MemoryError # <<<<<<<<<<<<<< - * item = tmp - * else: - */ - PyErr_NoMemory(); __PYX_ERR(1, 463, __pyx_L1_error) - - /* "View.MemoryView":462 - * if self.view.itemsize > sizeof(array): - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: # <<<<<<<<<<<<<< - * raise MemoryError - * item = tmp - */ - } - - /* "View.MemoryView":464 - * if tmp == NULL: - * raise 
MemoryError - * item = tmp # <<<<<<<<<<<<<< - * else: - * item = array - */ - __pyx_v_item = __pyx_v_tmp; - - /* "View.MemoryView":460 - * dst_slice = get_slice_from_memview(dst, &tmp_slice) - * - * if self.view.itemsize > sizeof(array): # <<<<<<<<<<<<<< - * tmp = PyMem_Malloc(self.view.itemsize) - * if tmp == NULL: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":466 - * item = tmp - * else: - * item = array # <<<<<<<<<<<<<< - * - * try: - */ - /*else*/ { - __pyx_v_item = ((void *)__pyx_v_array); - } - __pyx_L3:; - - /* "View.MemoryView":468 - * item = array - * - * try: # <<<<<<<<<<<<<< - * if self.dtype_is_object: - * ( item)[0] = value - */ - /*try:*/ { - - /* "View.MemoryView":469 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * ( item)[0] = value - * else: - */ - __pyx_t_2 = (__pyx_v_self->dtype_is_object != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":470 - * try: - * if self.dtype_is_object: - * ( item)[0] = value # <<<<<<<<<<<<<< - * else: - * self.assign_item_from_object( item, value) - */ - (((PyObject **)__pyx_v_item)[0]) = ((PyObject *)__pyx_v_value); - - /* "View.MemoryView":469 - * - * try: - * if self.dtype_is_object: # <<<<<<<<<<<<<< - * ( item)[0] = value - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":472 - * ( item)[0] = value - * else: - * self.assign_item_from_object( item, value) # <<<<<<<<<<<<<< - * - * - */ - /*else*/ { - __pyx_t_3 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, ((char *)__pyx_v_item), __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 472, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L8:; - - /* "View.MemoryView":476 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - __pyx_t_2 = ((__pyx_v_self->view.suboffsets != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":477 - * - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) # <<<<<<<<<<<<<< - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - * item, self.dtype_is_object) - */ - __pyx_t_3 = assert_direct_dimensions(__pyx_v_self->view.suboffsets, __pyx_v_self->view.ndim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 477, __pyx_L6_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":476 - * - * - * if self.view.suboffsets != NULL: # <<<<<<<<<<<<<< - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, - */ - } - - /* "View.MemoryView":478 - * if self.view.suboffsets != NULL: - * assert_direct_dimensions(self.view.suboffsets, self.view.ndim) - * slice_assign_scalar(dst_slice, dst.view.ndim, self.view.itemsize, # <<<<<<<<<<<<<< - * item, self.dtype_is_object) - * finally: - */ - __pyx_memoryview_slice_assign_scalar(__pyx_v_dst_slice, __pyx_v_dst->view.ndim, __pyx_v_self->view.itemsize, __pyx_v_item, __pyx_v_self->dtype_is_object); - } - - /* "View.MemoryView":481 - * item, self.dtype_is_object) - * finally: - * PyMem_Free(tmp) # <<<<<<<<<<<<<< - * - * cdef setitem_indexed(self, index, value): - */ - /*finally:*/ { - /*normal exit:*/{ - PyMem_Free(__pyx_v_tmp); - goto __pyx_L7; - } - __pyx_L6_error:; - /*exception exit:*/{ - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __pyx_t_7 = 0; __pyx_t_8 = 0; 
__pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __Pyx_XDECREF(__pyx_t_3); __pyx_t_3 = 0; - if (PY_MAJOR_VERSION >= 3) __Pyx_ExceptionSwap(&__pyx_t_10, &__pyx_t_11, &__pyx_t_12); - if ((PY_MAJOR_VERSION < 3) || unlikely(__Pyx_GetException(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9) < 0)) __Pyx_ErrFetch(&__pyx_t_7, &__pyx_t_8, &__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_7); - __Pyx_XGOTREF(__pyx_t_8); - __Pyx_XGOTREF(__pyx_t_9); - __Pyx_XGOTREF(__pyx_t_10); - __Pyx_XGOTREF(__pyx_t_11); - __Pyx_XGOTREF(__pyx_t_12); - __pyx_t_4 = __pyx_lineno; __pyx_t_5 = __pyx_clineno; __pyx_t_6 = __pyx_filename; - { - PyMem_Free(__pyx_v_tmp); - } - if (PY_MAJOR_VERSION >= 3) { - __Pyx_XGIVEREF(__pyx_t_10); - __Pyx_XGIVEREF(__pyx_t_11); - __Pyx_XGIVEREF(__pyx_t_12); - __Pyx_ExceptionReset(__pyx_t_10, __pyx_t_11, __pyx_t_12); - } - __Pyx_XGIVEREF(__pyx_t_7); - __Pyx_XGIVEREF(__pyx_t_8); - __Pyx_XGIVEREF(__pyx_t_9); - __Pyx_ErrRestore(__pyx_t_7, __pyx_t_8, __pyx_t_9); - __pyx_t_7 = 0; __pyx_t_8 = 0; __pyx_t_9 = 0; __pyx_t_10 = 0; __pyx_t_11 = 0; __pyx_t_12 = 0; - __pyx_lineno = __pyx_t_4; __pyx_clineno = __pyx_t_5; __pyx_filename = __pyx_t_6; - goto __pyx_L1_error; - } - __pyx_L7:; - } - - /* "View.MemoryView":451 - * src.ndim, dst.ndim, self.dtype_is_object) - * - * cdef setitem_slice_assign_scalar(self, memoryview dst, value): # <<<<<<<<<<<<<< - * cdef int array[128] - * cdef void *tmp = NULL - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_slice_assign_scalar", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":483 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) - */ - -static PyObject *__pyx_memoryview_setitem_indexed(struct __pyx_memoryview_obj *__pyx_v_self, PyObject *__pyx_v_index, PyObject *__pyx_v_value) { - char *__pyx_v_itemp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - char *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("setitem_indexed", 0); - - /* "View.MemoryView":484 - * - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) # <<<<<<<<<<<<<< - * self.assign_item_from_object(itemp, value) - * - */ - __pyx_t_1 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->get_item_pointer(__pyx_v_self, __pyx_v_index); if (unlikely(__pyx_t_1 == ((char *)NULL))) __PYX_ERR(1, 484, __pyx_L1_error) - __pyx_v_itemp = __pyx_t_1; - - /* "View.MemoryView":485 - * cdef setitem_indexed(self, index, value): - * cdef char *itemp = self.get_item_pointer(index) - * self.assign_item_from_object(itemp, value) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __pyx_t_2 = ((struct __pyx_vtabstruct_memoryview *)__pyx_v_self->__pyx_vtab)->assign_item_from_object(__pyx_v_self, __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 485, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":483 - * PyMem_Free(tmp) - * - * cdef setitem_indexed(self, index, value): # <<<<<<<<<<<<<< - * cdef char *itemp = self.get_item_pointer(index) - * 
self.assign_item_from_object(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.setitem_indexed", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":487 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_convert_item_to_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_v_struct = NULL; - PyObject *__pyx_v_bytesitem = 0; - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - int __pyx_t_8; - PyObject *__pyx_t_9 = NULL; - size_t __pyx_t_10; - int __pyx_t_11; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":490 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef bytes bytesitem - * - */ - __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 490, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":493 - * cdef bytes bytesitem - * - * bytesitem = itemp[:self.view.itemsize] # <<<<<<<<<<<<<< - * try: - * result = struct.unpack(self.view.format, bytesitem) - */ - __pyx_t_1 = __Pyx_PyBytes_FromStringAndSize(__pyx_v_itemp + 0, __pyx_v_self->view.itemsize - 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 493, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_bytesitem = ((PyObject*)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":494 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ExceptionSave(&__pyx_t_2, &__pyx_t_3, &__pyx_t_4); - __Pyx_XGOTREF(__pyx_t_2); - __Pyx_XGOTREF(__pyx_t_3); - __Pyx_XGOTREF(__pyx_t_4); - /*try:*/ { - - /* "View.MemoryView":495 - * bytesitem = itemp[:self.view.itemsize] - * try: - * result = struct.unpack(self.view.format, bytesitem) # <<<<<<<<<<<<<< - * except struct.error: - * raise ValueError("Unable to convert item to object") - */ - __pyx_t_5 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_unpack); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 495, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_t_6 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 495, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = NULL; - __pyx_t_8 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_5))) { - __pyx_t_7 = PyMethod_GET_SELF(__pyx_t_5); - if (likely(__pyx_t_7)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_5); - __Pyx_INCREF(__pyx_t_7); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_5, function); - __pyx_t_8 = 1; - } - } - 
#if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_5)) { - PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 495, __pyx_L3_error) - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_5)) { - PyObject *__pyx_temp[3] = {__pyx_t_7, __pyx_t_6, __pyx_v_bytesitem}; - __pyx_t_1 = __Pyx_PyCFunction_FastCall(__pyx_t_5, __pyx_temp+1-__pyx_t_8, 2+__pyx_t_8); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 495, __pyx_L3_error) - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } else - #endif - { - __pyx_t_9 = PyTuple_New(2+__pyx_t_8); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 495, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_9); - if (__pyx_t_7) { - __Pyx_GIVEREF(__pyx_t_7); PyTuple_SET_ITEM(__pyx_t_9, 0, __pyx_t_7); __pyx_t_7 = NULL; - } - __Pyx_GIVEREF(__pyx_t_6); - PyTuple_SET_ITEM(__pyx_t_9, 0+__pyx_t_8, __pyx_t_6); - __Pyx_INCREF(__pyx_v_bytesitem); - __Pyx_GIVEREF(__pyx_v_bytesitem); - PyTuple_SET_ITEM(__pyx_t_9, 1+__pyx_t_8, __pyx_v_bytesitem); - __pyx_t_6 = 0; - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_t_5, __pyx_t_9, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 495, __pyx_L3_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __pyx_v_result = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":494 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - } - - /* "View.MemoryView":499 - * raise ValueError("Unable to convert item to object") - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - /*else:*/ { - __pyx_t_10 = strlen(__pyx_v_self->view.format); - __pyx_t_11 = ((__pyx_t_10 == 1) != 0); - if (__pyx_t_11) { - - /* "View.MemoryView":500 - * else: - * if len(self.view.format) == 1: - * return result[0] # <<<<<<<<<<<<<< - * return result - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_GetItemInt(__pyx_v_result, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 500, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L6_except_return; - - /* "View.MemoryView":499 - * raise ValueError("Unable to convert item to object") - * else: - * if len(self.view.format) == 1: # <<<<<<<<<<<<<< - * return result[0] - * return result - */ - } - - /* "View.MemoryView":501 - * if len(self.view.format) == 1: - * return result[0] - * return result # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_result); - __pyx_r = __pyx_v_result; - goto __pyx_L6_except_return; - } - __pyx_L3_error:; - __Pyx_XDECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_XDECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_XDECREF(__pyx_t_9); __pyx_t_9 = 0; - - /* "View.MemoryView":496 - * try: - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: # <<<<<<<<<<<<<< - * raise ValueError("Unable to convert item to object") - * else: - */ - __Pyx_ErrFetch(&__pyx_t_1, &__pyx_t_5, &__pyx_t_9); - __pyx_t_6 = 
__Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_error); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 496, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = __Pyx_PyErr_GivenExceptionMatches(__pyx_t_1, __pyx_t_6); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_ErrRestore(__pyx_t_1, __pyx_t_5, __pyx_t_9); - __pyx_t_1 = 0; __pyx_t_5 = 0; __pyx_t_9 = 0; - if (__pyx_t_8) { - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - if (__Pyx_GetException(&__pyx_t_9, &__pyx_t_5, &__pyx_t_1) < 0) __PYX_ERR(1, 496, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_9); - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GOTREF(__pyx_t_1); - - /* "View.MemoryView":497 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError("Unable to convert item to object") # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __pyx_t_6 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__10, NULL); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 497, __pyx_L5_except_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_Raise(__pyx_t_6, 0, 0, 0); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - __PYX_ERR(1, 497, __pyx_L5_except_error) - } - goto __pyx_L5_except_error; - __pyx_L5_except_error:; - - /* "View.MemoryView":494 - * - * bytesitem = itemp[:self.view.itemsize] - * try: # <<<<<<<<<<<<<< - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - */ - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L1_error; - __pyx_L6_except_return:; - __Pyx_XGIVEREF(__pyx_t_2); - __Pyx_XGIVEREF(__pyx_t_3); - __Pyx_XGIVEREF(__pyx_t_4); - __Pyx_ExceptionReset(__pyx_t_2, __pyx_t_3, __pyx_t_4); - goto __pyx_L0; - } - - /* "View.MemoryView":487 - * self.assign_item_from_object(itemp, value) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("View.MemoryView.memoryview.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesitem); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":503 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - -static PyObject *__pyx_memoryview_assign_item_from_object(struct __pyx_memoryview_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_v_struct = NULL; - char __pyx_v_c; - PyObject *__pyx_v_bytesvalue = 0; - Py_ssize_t __pyx_v_i; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_t_7; - PyObject *__pyx_t_8 = NULL; - Py_ssize_t __pyx_t_9; - PyObject *__pyx_t_10 = NULL; - char *__pyx_t_11; - char *__pyx_t_12; - char *__pyx_t_13; - char *__pyx_t_14; - int __pyx_lineno = 0; - 
const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":506 - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - * import struct # <<<<<<<<<<<<<< - * cdef char c - * cdef bytes bytesvalue - */ - __pyx_t_1 = __Pyx_Import(__pyx_n_s_struct, 0, 0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 506, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_v_struct = __pyx_t_1; - __pyx_t_1 = 0; - - /* "View.MemoryView":511 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - __pyx_t_2 = PyTuple_Check(__pyx_v_value); - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "View.MemoryView":512 - * - * if isinstance(value, tuple): - * bytesvalue = struct.pack(self.view.format, *value) # <<<<<<<<<<<<<< - * else: - * bytesvalue = struct.pack(self.view.format, value) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_4 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_5 = PyTuple_New(1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_5, 0, __pyx_t_4); - __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PySequence_Tuple(__pyx_v_value); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = PyNumber_Add(__pyx_t_5, __pyx_t_4); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_1, __pyx_t_6, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 512, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 512, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - - /* "View.MemoryView":511 - * cdef Py_ssize_t i - * - * if isinstance(value, tuple): # <<<<<<<<<<<<<< - * bytesvalue = struct.pack(self.view.format, *value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":514 - * bytesvalue = struct.pack(self.view.format, *value) - * else: - * bytesvalue = struct.pack(self.view.format, value) # <<<<<<<<<<<<<< - * - * for i, c in enumerate(bytesvalue): - */ - /*else*/ { - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(__pyx_v_struct, __pyx_n_s_pack); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_1 = __Pyx_PyBytes_FromString(__pyx_v_self->view.format); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = NULL; - __pyx_t_7 = 0; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_6))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_6); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_6); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_6, function); - __pyx_t_7 = 1; - } - } - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(__pyx_t_6)) { - 
PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value}; - __pyx_t_4 = __Pyx_PyFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(__pyx_t_6)) { - PyObject *__pyx_temp[3] = {__pyx_t_5, __pyx_t_1, __pyx_v_value}; - __pyx_t_4 = __Pyx_PyCFunction_FastCall(__pyx_t_6, __pyx_temp+1-__pyx_t_7, 2+__pyx_t_7); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - } else - #endif - { - __pyx_t_8 = PyTuple_New(2+__pyx_t_7); if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_8); - if (__pyx_t_5) { - __Pyx_GIVEREF(__pyx_t_5); PyTuple_SET_ITEM(__pyx_t_8, 0, __pyx_t_5); __pyx_t_5 = NULL; - } - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_8, 0+__pyx_t_7, __pyx_t_1); - __Pyx_INCREF(__pyx_v_value); - __Pyx_GIVEREF(__pyx_v_value); - PyTuple_SET_ITEM(__pyx_t_8, 1+__pyx_t_7, __pyx_v_value); - __pyx_t_1 = 0; - __pyx_t_4 = __Pyx_PyObject_Call(__pyx_t_6, __pyx_t_8, NULL); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 514, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_8); __pyx_t_8 = 0; - } - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (!(likely(PyBytes_CheckExact(__pyx_t_4))||((__pyx_t_4) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "bytes", Py_TYPE(__pyx_t_4)->tp_name), 0))) __PYX_ERR(1, 514, __pyx_L1_error) - __pyx_v_bytesvalue = ((PyObject*)__pyx_t_4); - __pyx_t_4 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":516 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_9 = 0; - if (unlikely(__pyx_v_bytesvalue == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' is not iterable"); - __PYX_ERR(1, 516, __pyx_L1_error) - } - __Pyx_INCREF(__pyx_v_bytesvalue); - __pyx_t_10 = __pyx_v_bytesvalue; - __pyx_t_12 = PyBytes_AS_STRING(__pyx_t_10); - __pyx_t_13 = (__pyx_t_12 + PyBytes_GET_SIZE(__pyx_t_10)); - for (__pyx_t_14 = __pyx_t_12; __pyx_t_14 < __pyx_t_13; __pyx_t_14++) { - __pyx_t_11 = __pyx_t_14; - __pyx_v_c = (__pyx_t_11[0]); - - /* "View.MemoryView":517 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - __pyx_v_i = __pyx_t_9; - - /* "View.MemoryView":516 - * bytesvalue = struct.pack(self.view.format, value) - * - * for i, c in enumerate(bytesvalue): # <<<<<<<<<<<<<< - * itemp[i] = c - * - */ - __pyx_t_9 = (__pyx_t_9 + 1); - - /* "View.MemoryView":517 - * - * for i, c in enumerate(bytesvalue): - * itemp[i] = c # <<<<<<<<<<<<<< - * - * @cname('getbuffer') - */ - (__pyx_v_itemp[__pyx_v_i]) = __pyx_v_c; - } - __Pyx_DECREF(__pyx_t_10); __pyx_t_10 = 0; - - /* "View.MemoryView":503 - * return result - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * """Only used if instantiated manually by the user, or if Cython doesn't - * know how to convert the type""" - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_XDECREF(__pyx_t_10); - 
__Pyx_AddTraceback("View.MemoryView.memoryview.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_struct); - __Pyx_XDECREF(__pyx_v_bytesvalue); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":520 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") - */ - -/* Python wrapper */ -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags); /*proto*/ -static CYTHON_UNUSED int __pyx_memoryview_getbuffer(PyObject *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__getbuffer__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((Py_buffer *)__pyx_v_info), ((int)__pyx_v_flags)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static int __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_8__getbuffer__(struct __pyx_memoryview_obj *__pyx_v_self, Py_buffer *__pyx_v_info, int __pyx_v_flags) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - Py_ssize_t *__pyx_t_4; - char *__pyx_t_5; - void *__pyx_t_6; - int __pyx_t_7; - Py_ssize_t __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - if (__pyx_v_info == NULL) { - PyErr_SetString(PyExc_BufferError, "PyObject_GetBuffer: view==NULL argument is obsolete"); - return -1; - } - __Pyx_RefNannySetupContext("__getbuffer__", 0); - __pyx_v_info->obj = Py_None; __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(__pyx_v_info->obj); - - /* "View.MemoryView":521 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - */ - __pyx_t_2 = ((__pyx_v_flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_2 = (__pyx_v_self->view.readonly != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L4_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":522 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __pyx_t_3 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__11, NULL); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 522, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 522, __pyx_L1_error) - - /* "View.MemoryView":521 - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: # <<<<<<<<<<<<<< - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - */ - } - - /* "View.MemoryView":524 - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = 
self.view.shape - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_ND) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":525 - * - * if flags & PyBUF_ND: - * info.shape = self.view.shape # <<<<<<<<<<<<<< - * else: - * info.shape = NULL - */ - __pyx_t_4 = __pyx_v_self->view.shape; - __pyx_v_info->shape = __pyx_t_4; - - /* "View.MemoryView":524 - * raise ValueError("Cannot create writable memory view from read-only memoryview") - * - * if flags & PyBUF_ND: # <<<<<<<<<<<<<< - * info.shape = self.view.shape - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":527 - * info.shape = self.view.shape - * else: - * info.shape = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_STRIDES: - */ - /*else*/ { - __pyx_v_info->shape = NULL; - } - __pyx_L6:; - - /* "View.MemoryView":529 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_STRIDES) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":530 - * - * if flags & PyBUF_STRIDES: - * info.strides = self.view.strides # <<<<<<<<<<<<<< - * else: - * info.strides = NULL - */ - __pyx_t_4 = __pyx_v_self->view.strides; - __pyx_v_info->strides = __pyx_t_4; - - /* "View.MemoryView":529 - * info.shape = NULL - * - * if flags & PyBUF_STRIDES: # <<<<<<<<<<<<<< - * info.strides = self.view.strides - * else: - */ - goto __pyx_L7; - } - - /* "View.MemoryView":532 - * info.strides = self.view.strides - * else: - * info.strides = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_INDIRECT: - */ - /*else*/ { - __pyx_v_info->strides = NULL; - } - __pyx_L7:; - - /* "View.MemoryView":534 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_INDIRECT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":535 - * - * if flags & PyBUF_INDIRECT: - * info.suboffsets = self.view.suboffsets # <<<<<<<<<<<<<< - * else: - * info.suboffsets = NULL - */ - __pyx_t_4 = __pyx_v_self->view.suboffsets; - __pyx_v_info->suboffsets = __pyx_t_4; - - /* "View.MemoryView":534 - * info.strides = NULL - * - * if flags & PyBUF_INDIRECT: # <<<<<<<<<<<<<< - * info.suboffsets = self.view.suboffsets - * else: - */ - goto __pyx_L8; - } - - /* "View.MemoryView":537 - * info.suboffsets = self.view.suboffsets - * else: - * info.suboffsets = NULL # <<<<<<<<<<<<<< - * - * if flags & PyBUF_FORMAT: - */ - /*else*/ { - __pyx_v_info->suboffsets = NULL; - } - __pyx_L8:; - - /* "View.MemoryView":539 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - __pyx_t_1 = ((__pyx_v_flags & PyBUF_FORMAT) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":540 - * - * if flags & PyBUF_FORMAT: - * info.format = self.view.format # <<<<<<<<<<<<<< - * else: - * info.format = NULL - */ - __pyx_t_5 = __pyx_v_self->view.format; - __pyx_v_info->format = __pyx_t_5; - - /* "View.MemoryView":539 - * info.suboffsets = NULL - * - * if flags & PyBUF_FORMAT: # <<<<<<<<<<<<<< - * info.format = self.view.format - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":542 - * info.format = self.view.format - * else: - * info.format = NULL # <<<<<<<<<<<<<< - * - * info.buf = self.view.buf - */ - /*else*/ { - __pyx_v_info->format = NULL; - } - __pyx_L9:; - - /* "View.MemoryView":544 - * info.format = NULL - * - * info.buf = self.view.buf # <<<<<<<<<<<<<< - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - */ - __pyx_t_6 = 
__pyx_v_self->view.buf; - __pyx_v_info->buf = __pyx_t_6; - - /* "View.MemoryView":545 - * - * info.buf = self.view.buf - * info.ndim = self.view.ndim # <<<<<<<<<<<<<< - * info.itemsize = self.view.itemsize - * info.len = self.view.len - */ - __pyx_t_7 = __pyx_v_self->view.ndim; - __pyx_v_info->ndim = __pyx_t_7; - - /* "View.MemoryView":546 - * info.buf = self.view.buf - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize # <<<<<<<<<<<<<< - * info.len = self.view.len - * info.readonly = self.view.readonly - */ - __pyx_t_8 = __pyx_v_self->view.itemsize; - __pyx_v_info->itemsize = __pyx_t_8; - - /* "View.MemoryView":547 - * info.ndim = self.view.ndim - * info.itemsize = self.view.itemsize - * info.len = self.view.len # <<<<<<<<<<<<<< - * info.readonly = self.view.readonly - * info.obj = self - */ - __pyx_t_8 = __pyx_v_self->view.len; - __pyx_v_info->len = __pyx_t_8; - - /* "View.MemoryView":548 - * info.itemsize = self.view.itemsize - * info.len = self.view.len - * info.readonly = self.view.readonly # <<<<<<<<<<<<<< - * info.obj = self - * - */ - __pyx_t_1 = __pyx_v_self->view.readonly; - __pyx_v_info->readonly = __pyx_t_1; - - /* "View.MemoryView":549 - * info.len = self.view.len - * info.readonly = self.view.readonly - * info.obj = self # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_INCREF(((PyObject *)__pyx_v_self)); - __Pyx_GIVEREF(((PyObject *)__pyx_v_self)); - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); - __pyx_v_info->obj = ((PyObject *)__pyx_v_self); - - /* "View.MemoryView":520 - * - * @cname('getbuffer') - * def __getbuffer__(self, Py_buffer *info, int flags): # <<<<<<<<<<<<<< - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") - */ - - /* function exit code */ - __pyx_r = 0; - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__getbuffer__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - if (__pyx_v_info->obj != NULL) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - goto __pyx_L2; - __pyx_L0:; - if (__pyx_v_info->obj == Py_None) { - __Pyx_GOTREF(__pyx_v_info->obj); - __Pyx_DECREF(__pyx_v_info->obj); __pyx_v_info->obj = 0; - } - __pyx_L2:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":555 - * - * @property - * def T(self): # <<<<<<<<<<<<<< - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_1T___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int 
__pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":556 - * @property - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) # <<<<<<<<<<<<<< - * transpose_memslice(&result.from_slice) - * return result - */ - __pyx_t_1 = __pyx_memoryview_copy_object(__pyx_v_self); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 556, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (!(likely(((__pyx_t_1) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_1, __pyx_memoryviewslice_type))))) __PYX_ERR(1, 556, __pyx_L1_error) - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":557 - * def T(self): - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_t_2 = __pyx_memslice_transpose((&__pyx_v_result->from_slice)); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 557, __pyx_L1_error) - - /* "View.MemoryView":558 - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - * return result # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":555 - * - * @property - * def T(self): # <<<<<<<<<<<<<< - * cdef _memoryviewslice result = memoryview_copy(self) - * transpose_memslice(&result.from_slice) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.T.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":561 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4base___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":562 - * @property - * def base(self): - * return self.obj # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->obj); - __pyx_r = __pyx_v_self->obj; - goto __pyx_L0; - - /* "View.MemoryView":561 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.obj - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":565 - * - * @property - * def shape(self): # <<<<<<<<<<<<<< - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(PyObject 
*__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_5shape___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_length; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":566 - * @property - * def shape(self): - * return tuple([length for length in self.view.shape[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyList_New(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 566, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_3 = (__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_4 = __pyx_v_self->view.shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_length = (__pyx_t_2[0]); - __pyx_t_5 = PyInt_FromSsize_t(__pyx_v_length); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 566, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_1, (PyObject*)__pyx_t_5))) __PYX_ERR(1, 566, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - } - __pyx_t_5 = PyList_AsTuple(((PyObject*)__pyx_t_1)); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 566, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":565 - * - * @property - * def shape(self): # <<<<<<<<<<<<<< - * return tuple([length for length in self.view.shape[:self.view.ndim]]) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview.shape.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":569 - * - * @property - * def strides(self): # <<<<<<<<<<<<<< - * if self.view.strides == NULL: - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_7strides___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_stride; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* 
"View.MemoryView":570 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError("Buffer view does not expose strides") - */ - __pyx_t_1 = ((__pyx_v_self->view.strides == NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":572 - * if self.view.strides == NULL: - * - * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __pyx_t_2 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__12, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 572, __pyx_L1_error) - - /* "View.MemoryView":570 - * @property - * def strides(self): - * if self.view.strides == NULL: # <<<<<<<<<<<<<< - * - * raise ValueError("Buffer view does not expose strides") - */ - } - - /* "View.MemoryView":574 - * raise ValueError("Buffer view does not expose strides") - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = PyList_New(0); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 574, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_4 = (__pyx_v_self->view.strides + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.strides; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_v_stride = (__pyx_t_3[0]); - __pyx_t_6 = PyInt_FromSsize_t(__pyx_v_stride); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 574, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_2, (PyObject*)__pyx_t_6))) __PYX_ERR(1, 574, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - } - __pyx_t_6 = PyList_AsTuple(((PyObject*)__pyx_t_2)); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 574, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_6; - __pyx_t_6 = 0; - goto __pyx_L0; - - /* "View.MemoryView":569 - * - * @property - * def strides(self): # <<<<<<<<<<<<<< - * if self.view.strides == NULL: - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.strides.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":577 - * - * @property - * def suboffsets(self): # <<<<<<<<<<<<<< - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_10suboffsets___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - 
Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - Py_ssize_t *__pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":578 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - __pyx_t_1 = ((__pyx_v_self->view.suboffsets == NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":579 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_tuple__13, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":578 - * @property - * def suboffsets(self): - * if self.view.suboffsets == NULL: # <<<<<<<<<<<<<< - * return (-1,) * self.view.ndim - * - */ - } - - /* "View.MemoryView":581 - * return (-1,) * self.view.ndim - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 581, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_5 = (__pyx_v_self->view.suboffsets + __pyx_v_self->view.ndim); - for (__pyx_t_6 = __pyx_v_self->view.suboffsets; __pyx_t_6 < __pyx_t_5; __pyx_t_6++) { - __pyx_t_4 = __pyx_t_6; - __pyx_v_suboffset = (__pyx_t_4[0]); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_suboffset); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 581, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - if (unlikely(__Pyx_ListComp_Append(__pyx_t_3, (PyObject*)__pyx_t_2))) __PYX_ERR(1, 581, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - } - __pyx_t_2 = PyList_AsTuple(((PyObject*)__pyx_t_3)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 581, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":577 - * - * @property - * def suboffsets(self): # <<<<<<<<<<<<<< - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.suboffsets.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":584 - * - * @property - * def ndim(self): # <<<<<<<<<<<<<< - * return self.view.ndim - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf_15View_dot_MemoryView_10memoryview_4ndim___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":585 - * @property - * def ndim(self): - * return self.view.ndim # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_self->view.ndim); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 585, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":584 - * - * @property - * def ndim(self): # <<<<<<<<<<<<<< - * return self.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.ndim.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":588 - * - * @property - * def itemsize(self): # <<<<<<<<<<<<<< - * return self.view.itemsize - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_8itemsize___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":589 - * @property - * def itemsize(self): - * return self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 589, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":588 - * - * @property - * def itemsize(self): # <<<<<<<<<<<<<< - * return self.view.itemsize - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.itemsize.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":592 - * - * @property - * def nbytes(self): # <<<<<<<<<<<<<< - * return self.size * self.view.itemsize - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit 
code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_6nbytes___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":593 - * @property - * def nbytes(self): - * return self.size * self.view.itemsize # <<<<<<<<<<<<<< - * - * @property - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_size); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 593, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_self->view.itemsize); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 593, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_Multiply(__pyx_t_1, __pyx_t_2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 593, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_3; - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":592 - * - * @property - * def nbytes(self): # <<<<<<<<<<<<<< - * return self.size * self.view.itemsize - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.nbytes.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":596 - * - * @property - * def size(self): # <<<<<<<<<<<<<< - * if self._size is None: - * result = 1 - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_10memoryview_4size___get__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_v_result = NULL; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - Py_ssize_t *__pyx_t_5; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":597 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - __pyx_t_1 = (__pyx_v_self->_size == Py_None); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":598 - * def size(self): - * if self._size is None: - * result = 1 # <<<<<<<<<<<<<< - * - * for length in self.view.shape[:self.view.ndim]: - */ - __Pyx_INCREF(__pyx_int_1); - __pyx_v_result = __pyx_int_1; - - /* "View.MemoryView":600 - * result = 1 - * - * for length in self.view.shape[:self.view.ndim]: # <<<<<<<<<<<<<< - * result *= length - * - */ - __pyx_t_4 = 
(__pyx_v_self->view.shape + __pyx_v_self->view.ndim); - for (__pyx_t_5 = __pyx_v_self->view.shape; __pyx_t_5 < __pyx_t_4; __pyx_t_5++) { - __pyx_t_3 = __pyx_t_5; - __pyx_t_6 = PyInt_FromSsize_t((__pyx_t_3[0])); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 600, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_6); - __pyx_t_6 = 0; - - /* "View.MemoryView":601 - * - * for length in self.view.shape[:self.view.ndim]: - * result *= length # <<<<<<<<<<<<<< - * - * self._size = result - */ - __pyx_t_6 = PyNumber_InPlaceMultiply(__pyx_v_result, __pyx_v_length); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 601, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __Pyx_DECREF_SET(__pyx_v_result, __pyx_t_6); - __pyx_t_6 = 0; - } - - /* "View.MemoryView":603 - * result *= length - * - * self._size = result # <<<<<<<<<<<<<< - * - * return self._size - */ - __Pyx_INCREF(__pyx_v_result); - __Pyx_GIVEREF(__pyx_v_result); - __Pyx_GOTREF(__pyx_v_self->_size); - __Pyx_DECREF(__pyx_v_self->_size); - __pyx_v_self->_size = __pyx_v_result; - - /* "View.MemoryView":597 - * @property - * def size(self): - * if self._size is None: # <<<<<<<<<<<<<< - * result = 1 - * - */ - } - - /* "View.MemoryView":605 - * self._size = result - * - * return self._size # <<<<<<<<<<<<<< - * - * def __len__(self): - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->_size); - __pyx_r = __pyx_v_self->_size; - goto __pyx_L0; - - /* "View.MemoryView":596 - * - * @property - * def size(self): # <<<<<<<<<<<<<< - * if self._size is None: - * result = 1 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.memoryview.size.__get__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":607 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - -/* Python wrapper */ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self); /*proto*/ -static Py_ssize_t __pyx_memoryview___len__(PyObject *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__len__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static Py_ssize_t __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_10__len__(struct __pyx_memoryview_obj *__pyx_v_self) { - Py_ssize_t __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("__len__", 0); - - /* "View.MemoryView":608 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - __pyx_t_1 = ((__pyx_v_self->view.ndim >= 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":609 - * def __len__(self): - * if self.view.ndim >= 1: - * return self.view.shape[0] # <<<<<<<<<<<<<< - * - * return 0 - */ - __pyx_r = (__pyx_v_self->view.shape[0]); - goto __pyx_L0; - - /* "View.MemoryView":608 - * - * def __len__(self): - * if self.view.ndim >= 1: # <<<<<<<<<<<<<< - * return self.view.shape[0] - * - */ - } - - /* "View.MemoryView":611 - * return self.view.shape[0] - * - * return 0 # <<<<<<<<<<<<<< - * - * def __repr__(self): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* 
"View.MemoryView":607 - * return self._size - * - * def __len__(self): # <<<<<<<<<<<<<< - * if self.view.ndim >= 1: - * return self.view.shape[0] - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":613 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__, - * id(self)) - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___repr__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__repr__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_12__repr__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__repr__", 0); - - /* "View.MemoryView":614 - * - * def __repr__(self): - * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 614, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 614, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 614, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - - /* "View.MemoryView":615 - * def __repr__(self): - * return "" % (self.base.__class__.__name__, - * id(self)) # <<<<<<<<<<<<<< - * - * def __str__(self): - */ - __pyx_t_2 = __Pyx_PyObject_CallOneArg(__pyx_builtin_id, ((PyObject *)__pyx_v_self)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 615, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - - /* "View.MemoryView":614 - * - * def __repr__(self): - * return "" % (self.base.__class__.__name__, # <<<<<<<<<<<<<< - * id(self)) - * - */ - __pyx_t_3 = PyTuple_New(2); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 614, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_t_3); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 614, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":613 - * return 0 - * - * def __repr__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__, - * id(self)) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview.__repr__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - 
__Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":617 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__,) - * - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_memoryview___str__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__str__ (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_14__str__(struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__str__", 0); - - /* "View.MemoryView":618 - * - * def __str__(self): - * return "" % (self.base.__class__.__name__,) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_self), __pyx_n_s_base); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 618, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(__pyx_t_1, __pyx_n_s_class); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 618, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(__pyx_t_2, __pyx_n_s_name_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 618, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_2 = PyTuple_New(1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 618, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_2, 0, __pyx_t_1); - __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_PyString_Format(__pyx_kp_s_MemoryView_of_r_object, __pyx_t_2); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 618, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":617 - * id(self)) - * - * def __str__(self): # <<<<<<<<<<<<<< - * return "" % (self.base.__class__.__name__,) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.__str__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":621 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_is_c_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_c_contig (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_16is_c_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_c_contig", 0); - - /* "View.MemoryView":624 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 624, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":625 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'C', self.view.ndim) # <<<<<<<<<<<<<< - * - * def is_f_contig(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'C', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 625, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":621 - * - * - * def is_c_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_c_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":627 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_is_f_contig(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("is_f_contig (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_18is_f_contig(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice *__pyx_v_mslice; - __Pyx_memviewslice __pyx_v_tmp; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice *__pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("is_f_contig", 0); - - /* "View.MemoryView":630 - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) # <<<<<<<<<<<<<< - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - */ - __pyx_t_1 = __pyx_memoryview_get_slice_from_memoryview(__pyx_v_self, (&__pyx_v_tmp)); if (unlikely(__pyx_t_1 == ((__Pyx_memviewslice *)NULL))) __PYX_ERR(1, 630, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* 
"View.MemoryView":631 - * cdef __Pyx_memviewslice tmp - * mslice = get_slice_from_memview(self, &tmp) - * return slice_is_contig(mslice[0], 'F', self.view.ndim) # <<<<<<<<<<<<<< - * - * def copy(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_memviewslice_is_contig((__pyx_v_mslice[0]), 'F', __pyx_v_self->view.ndim)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 631, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":627 - * return slice_is_contig(mslice[0], 'C', self.view.ndim) - * - * def is_f_contig(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice *mslice - * cdef __Pyx_memviewslice tmp - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.is_f_contig", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":633 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_copy(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_20copy(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_mslice; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy", 0); - - /* "View.MemoryView":635 - * def copy(self): - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &mslice) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_F_CONTIGUOUS)); - - /* "View.MemoryView":637 - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - * - * slice_copy(self, &mslice) # <<<<<<<<<<<<<< - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_mslice)); - - /* "View.MemoryView":638 - * - * slice_copy(self, &mslice) - * mslice = slice_copy_contig(&mslice, "c", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_C_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_mslice), ((char *)"c"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_C_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 638, __pyx_L1_error) - __pyx_v_mslice = __pyx_t_1; - - /* "View.MemoryView":643 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &mslice) # <<<<<<<<<<<<<< - * - * def copy_fortran(self): - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_mslice)); if 
(unlikely(!__pyx_t_2)) __PYX_ERR(1, 643, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":633 - * return slice_is_contig(mslice[0], 'F', self.view.ndim) - * - * def copy(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice mslice - * cdef int flags = self.flags & ~PyBUF_F_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":645 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - -/* Python wrapper */ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_memoryview_copy_fortran(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("copy_fortran (wrapper)", 0); - __pyx_r = __pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_memoryview___pyx_pf_15View_dot_MemoryView_10memoryview_22copy_fortran(struct __pyx_memoryview_obj *__pyx_v_self) { - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - int __pyx_v_flags; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_memviewslice __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("copy_fortran", 0); - - /* "View.MemoryView":647 - * def copy_fortran(self): - * cdef __Pyx_memviewslice src, dst - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS # <<<<<<<<<<<<<< - * - * slice_copy(self, &src) - */ - __pyx_v_flags = (__pyx_v_self->flags & (~PyBUF_C_CONTIGUOUS)); - - /* "View.MemoryView":649 - * cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - * - * slice_copy(self, &src) # <<<<<<<<<<<<<< - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, - * self.view.itemsize, - */ - __pyx_memoryview_slice_copy(__pyx_v_self, (&__pyx_v_src)); - - /* "View.MemoryView":650 - * - * slice_copy(self, &src) - * dst = slice_copy_contig(&src, "fortran", self.view.ndim, # <<<<<<<<<<<<<< - * self.view.itemsize, - * flags|PyBUF_F_CONTIGUOUS, - */ - __pyx_t_1 = __pyx_memoryview_copy_new_contig((&__pyx_v_src), ((char *)"fortran"), __pyx_v_self->view.ndim, __pyx_v_self->view.itemsize, (__pyx_v_flags | PyBUF_F_CONTIGUOUS), __pyx_v_self->dtype_is_object); if (unlikely(PyErr_Occurred())) __PYX_ERR(1, 650, __pyx_L1_error) - __pyx_v_dst = __pyx_t_1; - - /* "View.MemoryView":655 - * self.dtype_is_object) - * - * return memoryview_copy_from_slice(self, &dst) # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_copy_object_from_slice(__pyx_v_self, (&__pyx_v_dst)); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 655, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":645 - * return memoryview_copy_from_slice(self, &mslice) - * - * def copy_fortran(self): # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice src, dst - * 
cdef int flags = self.flags & ~PyBUF_C_CONTIGUOUS - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView.memoryview.copy_fortran", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryview___reduce_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__14, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryview_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryview_2__setstate_cython__(((struct __pyx_memoryview_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryview_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryview_obj 
*__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__15, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":659 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - -static PyObject *__pyx_memoryview_new(PyObject *__pyx_v_o, int __pyx_v_flags, int __pyx_v_dtype_is_object, __Pyx_TypeInfo *__pyx_v_typeinfo) { - struct __pyx_memoryview_obj *__pyx_v_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_cwrapper", 0); - - /* "View.MemoryView":660 - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) # <<<<<<<<<<<<<< - * result.typeinfo = typeinfo - * return result - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_flags); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 660, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 660, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 660, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_o); - __Pyx_GIVEREF(__pyx_v_o); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_o); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryview_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 660, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryview_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":661 - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, 
__Pyx_TypeInfo *typeinfo): - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo # <<<<<<<<<<<<<< - * return result - * - */ - __pyx_v_result->typeinfo = __pyx_v_typeinfo; - - /* "View.MemoryView":662 - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_check') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":659 - * - * @cname('__pyx_memoryview_new') - * cdef memoryview_cwrapper(object o, int flags, bint dtype_is_object, __Pyx_TypeInfo *typeinfo): # <<<<<<<<<<<<<< - * cdef memoryview result = memoryview(o, flags, dtype_is_object) - * result.typeinfo = typeinfo - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_cwrapper", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":665 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - -static CYTHON_INLINE int __pyx_memoryview_check(PyObject *__pyx_v_o) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - __Pyx_RefNannySetupContext("memoryview_check", 0); - - /* "View.MemoryView":666 - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): - * return isinstance(o, memoryview) # <<<<<<<<<<<<<< - * - * cdef tuple _unellipsify(object index, int ndim): - */ - __pyx_t_1 = __Pyx_TypeCheck(__pyx_v_o, __pyx_memoryview_type); - __pyx_r = __pyx_t_1; - goto __pyx_L0; - - /* "View.MemoryView":665 - * - * @cname('__pyx_memoryview_check') - * cdef inline bint memoryview_check(object o): # <<<<<<<<<<<<<< - * return isinstance(o, memoryview) - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":668 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - -static PyObject *_unellipsify(PyObject *__pyx_v_index, int __pyx_v_ndim) { - PyObject *__pyx_v_tup = NULL; - PyObject *__pyx_v_result = NULL; - int __pyx_v_have_slices; - int __pyx_v_seen_ellipsis; - CYTHON_UNUSED PyObject *__pyx_v_idx = NULL; - PyObject *__pyx_v_item = NULL; - Py_ssize_t __pyx_v_nslices; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - Py_ssize_t __pyx_t_5; - PyObject *(*__pyx_t_6)(PyObject *); - PyObject *__pyx_t_7 = NULL; - Py_ssize_t __pyx_t_8; - int __pyx_t_9; - int __pyx_t_10; - PyObject *__pyx_t_11 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("_unellipsify", 0); - - /* "View.MemoryView":673 - * full slices. 
- * """ - * if not isinstance(index, tuple): # <<<<<<<<<<<<<< - * tup = (index,) - * else: - */ - __pyx_t_1 = PyTuple_Check(__pyx_v_index); - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":674 - * """ - * if not isinstance(index, tuple): - * tup = (index,) # <<<<<<<<<<<<<< - * else: - * tup = index - */ - __pyx_t_3 = PyTuple_New(1); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 674, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_index); - __Pyx_GIVEREF(__pyx_v_index); - PyTuple_SET_ITEM(__pyx_t_3, 0, __pyx_v_index); - __pyx_v_tup = __pyx_t_3; - __pyx_t_3 = 0; - - /* "View.MemoryView":673 - * full slices. - * """ - * if not isinstance(index, tuple): # <<<<<<<<<<<<<< - * tup = (index,) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":676 - * tup = (index,) - * else: - * tup = index # <<<<<<<<<<<<<< - * - * result = [] - */ - /*else*/ { - __Pyx_INCREF(__pyx_v_index); - __pyx_v_tup = __pyx_v_index; - } - __pyx_L3:; - - /* "View.MemoryView":678 - * tup = index - * - * result = [] # <<<<<<<<<<<<<< - * have_slices = False - * seen_ellipsis = False - */ - __pyx_t_3 = PyList_New(0); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 678, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_v_result = ((PyObject*)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":679 - * - * result = [] - * have_slices = False # <<<<<<<<<<<<<< - * seen_ellipsis = False - * for idx, item in enumerate(tup): - */ - __pyx_v_have_slices = 0; - - /* "View.MemoryView":680 - * result = [] - * have_slices = False - * seen_ellipsis = False # <<<<<<<<<<<<<< - * for idx, item in enumerate(tup): - * if item is Ellipsis: - */ - __pyx_v_seen_ellipsis = 0; - - /* "View.MemoryView":681 - * have_slices = False - * seen_ellipsis = False - * for idx, item in enumerate(tup): # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - __Pyx_INCREF(__pyx_int_0); - __pyx_t_3 = __pyx_int_0; - if (likely(PyList_CheckExact(__pyx_v_tup)) || PyTuple_CheckExact(__pyx_v_tup)) { - __pyx_t_4 = __pyx_v_tup; __Pyx_INCREF(__pyx_t_4); __pyx_t_5 = 0; - __pyx_t_6 = NULL; - } else { - __pyx_t_5 = -1; __pyx_t_4 = PyObject_GetIter(__pyx_v_tup); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 681, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_6 = Py_TYPE(__pyx_t_4)->tp_iternext; if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 681, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_6)) { - if (likely(PyList_CheckExact(__pyx_t_4))) { - if (__pyx_t_5 >= PyList_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyList_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 681, __pyx_L1_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 681, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } else { - if (__pyx_t_5 >= PyTuple_GET_SIZE(__pyx_t_4)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_7 = PyTuple_GET_ITEM(__pyx_t_4, __pyx_t_5); __Pyx_INCREF(__pyx_t_7); __pyx_t_5++; if (unlikely(0 < 0)) __PYX_ERR(1, 681, __pyx_L1_error) - #else - __pyx_t_7 = PySequence_ITEM(__pyx_t_4, __pyx_t_5); __pyx_t_5++; if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 681, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - #endif - } - } else { - __pyx_t_7 = __pyx_t_6(__pyx_t_4); - if (unlikely(!__pyx_t_7)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, 
PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 681, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_7); - } - __Pyx_XDECREF_SET(__pyx_v_item, __pyx_t_7); - __pyx_t_7 = 0; - __Pyx_INCREF(__pyx_t_3); - __Pyx_XDECREF_SET(__pyx_v_idx, __pyx_t_3); - __pyx_t_7 = __Pyx_PyInt_AddObjC(__pyx_t_3, __pyx_int_1, 1, 0, 0); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 681, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_3); - __pyx_t_3 = __pyx_t_7; - __pyx_t_7 = 0; - - /* "View.MemoryView":682 - * seen_ellipsis = False - * for idx, item in enumerate(tup): - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - */ - __pyx_t_2 = (__pyx_v_item == __pyx_builtin_Ellipsis); - __pyx_t_1 = (__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":683 - * for idx, item in enumerate(tup): - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True - */ - __pyx_t_1 = ((!(__pyx_v_seen_ellipsis != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":684 - * if item is Ellipsis: - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * else: - */ - __pyx_t_8 = PyObject_Length(__pyx_v_tup); if (unlikely(__pyx_t_8 == ((Py_ssize_t)-1))) __PYX_ERR(1, 684, __pyx_L1_error) - __pyx_t_7 = PyList_New(1 * ((((__pyx_v_ndim - __pyx_t_8) + 1)<0) ? 0:((__pyx_v_ndim - __pyx_t_8) + 1))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 684, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < ((__pyx_v_ndim - __pyx_t_8) + 1); __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - PyList_SET_ITEM(__pyx_t_7, __pyx_temp, __pyx_slice__16); - } - } - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_7); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 684, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - - /* "View.MemoryView":685 - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True # <<<<<<<<<<<<<< - * else: - * result.append(slice(None)) - */ - __pyx_v_seen_ellipsis = 1; - - /* "View.MemoryView":683 - * for idx, item in enumerate(tup): - * if item is Ellipsis: - * if not seen_ellipsis: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - * seen_ellipsis = True - */ - goto __pyx_L7; - } - - /* "View.MemoryView":687 - * seen_ellipsis = True - * else: - * result.append(slice(None)) # <<<<<<<<<<<<<< - * have_slices = True - * else: - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_slice__16); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 687, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":688 - * else: - * result.append(slice(None)) - * have_slices = True # <<<<<<<<<<<<<< - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): - */ - __pyx_v_have_slices = 1; - - /* "View.MemoryView":682 - * seen_ellipsis = False - * for idx, item in enumerate(tup): - * if item is Ellipsis: # <<<<<<<<<<<<<< - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) - */ - goto __pyx_L6; - } - - /* "View.MemoryView":690 - * have_slices = True - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - */ - /*else*/ { - __pyx_t_2 = 
PySlice_Check(__pyx_v_item); - __pyx_t_10 = ((!(__pyx_t_2 != 0)) != 0); - if (__pyx_t_10) { - } else { - __pyx_t_1 = __pyx_t_10; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_10 = ((!(PyIndex_Check(__pyx_v_item) != 0)) != 0); - __pyx_t_1 = __pyx_t_10; - __pyx_L9_bool_binop_done:; - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":691 - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): - * raise TypeError("Cannot index with type '%s'" % type(item)) # <<<<<<<<<<<<<< - * - * have_slices = have_slices or isinstance(item, slice) - */ - __pyx_t_7 = __Pyx_PyString_FormatSafe(__pyx_kp_s_Cannot_index_with_type_s, ((PyObject *)Py_TYPE(__pyx_v_item))); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 691, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __pyx_t_11 = __Pyx_PyObject_CallOneArg(__pyx_builtin_TypeError, __pyx_t_7); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 691, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_Raise(__pyx_t_11, 0, 0, 0); - __Pyx_DECREF(__pyx_t_11); __pyx_t_11 = 0; - __PYX_ERR(1, 691, __pyx_L1_error) - - /* "View.MemoryView":690 - * have_slices = True - * else: - * if not isinstance(item, slice) and not PyIndex_Check(item): # <<<<<<<<<<<<<< - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - */ - } - - /* "View.MemoryView":693 - * raise TypeError("Cannot index with type '%s'" % type(item)) - * - * have_slices = have_slices or isinstance(item, slice) # <<<<<<<<<<<<<< - * result.append(item) - * - */ - __pyx_t_10 = (__pyx_v_have_slices != 0); - if (!__pyx_t_10) { - } else { - __pyx_t_1 = __pyx_t_10; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_10 = PySlice_Check(__pyx_v_item); - __pyx_t_2 = (__pyx_t_10 != 0); - __pyx_t_1 = __pyx_t_2; - __pyx_L11_bool_binop_done:; - __pyx_v_have_slices = __pyx_t_1; - - /* "View.MemoryView":694 - * - * have_slices = have_slices or isinstance(item, slice) - * result.append(item) # <<<<<<<<<<<<<< - * - * nslices = ndim - len(result) - */ - __pyx_t_9 = __Pyx_PyList_Append(__pyx_v_result, __pyx_v_item); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 694, __pyx_L1_error) - } - __pyx_L6:; - - /* "View.MemoryView":681 - * have_slices = False - * seen_ellipsis = False - * for idx, item in enumerate(tup): # <<<<<<<<<<<<<< - * if item is Ellipsis: - * if not seen_ellipsis: - */ - } - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":696 - * result.append(item) - * - * nslices = ndim - len(result) # <<<<<<<<<<<<<< - * if nslices: - * result.extend([slice(None)] * nslices) - */ - __pyx_t_5 = PyList_GET_SIZE(__pyx_v_result); if (unlikely(__pyx_t_5 == ((Py_ssize_t)-1))) __PYX_ERR(1, 696, __pyx_L1_error) - __pyx_v_nslices = (__pyx_v_ndim - __pyx_t_5); - - /* "View.MemoryView":697 - * - * nslices = ndim - len(result) - * if nslices: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * nslices) - * - */ - __pyx_t_1 = (__pyx_v_nslices != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":698 - * nslices = ndim - len(result) - * if nslices: - * result.extend([slice(None)] * nslices) # <<<<<<<<<<<<<< - * - * return have_slices or nslices, tuple(result) - */ - __pyx_t_3 = PyList_New(1 * ((__pyx_v_nslices<0) ? 
0:__pyx_v_nslices)); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - { Py_ssize_t __pyx_temp; - for (__pyx_temp=0; __pyx_temp < __pyx_v_nslices; __pyx_temp++) { - __Pyx_INCREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - PyList_SET_ITEM(__pyx_t_3, __pyx_temp, __pyx_slice__16); - } - } - __pyx_t_9 = __Pyx_PyList_Extend(__pyx_v_result, __pyx_t_3); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 698, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":697 - * - * nslices = ndim - len(result) - * if nslices: # <<<<<<<<<<<<<< - * result.extend([slice(None)] * nslices) - * - */ - } - - /* "View.MemoryView":700 - * result.extend([slice(None)] * nslices) - * - * return have_slices or nslices, tuple(result) # <<<<<<<<<<<<<< - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - */ - __Pyx_XDECREF(__pyx_r); - if (!__pyx_v_have_slices) { - } else { - __pyx_t_4 = __Pyx_PyBool_FromLong(__pyx_v_have_slices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 700, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_t_4; - __pyx_t_4 = 0; - goto __pyx_L14_bool_binop_done; - } - __pyx_t_4 = PyInt_FromSsize_t(__pyx_v_nslices); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 700, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_3 = __pyx_t_4; - __pyx_t_4 = 0; - __pyx_L14_bool_binop_done:; - __pyx_t_4 = PyList_AsTuple(__pyx_v_result); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 700, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __pyx_t_11 = PyTuple_New(2); if (unlikely(!__pyx_t_11)) __PYX_ERR(1, 700, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_11); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_11, 0, __pyx_t_3); - __Pyx_GIVEREF(__pyx_t_4); - PyTuple_SET_ITEM(__pyx_t_11, 1, __pyx_t_4); - __pyx_t_3 = 0; - __pyx_t_4 = 0; - __pyx_r = ((PyObject*)__pyx_t_11); - __pyx_t_11 = 0; - goto __pyx_L0; - - /* "View.MemoryView":668 - * return isinstance(o, memoryview) - * - * cdef tuple _unellipsify(object index, int ndim): # <<<<<<<<<<<<<< - * """ - * Replace all ellipses with full slices and fill incomplete indices with - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_11); - __Pyx_AddTraceback("View.MemoryView._unellipsify", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v_tup); - __Pyx_XDECREF(__pyx_v_result); - __Pyx_XDECREF(__pyx_v_idx); - __Pyx_XDECREF(__pyx_v_item); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":702 - * return have_slices or nslices, tuple(result) - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - -static PyObject *assert_direct_dimensions(Py_ssize_t *__pyx_v_suboffsets, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - int __pyx_t_4; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assert_direct_dimensions", 0); - - /* "View.MemoryView":703 - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") - */ - 
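The generated C above and the loop body that follows implement the Cython helpers `_unellipsify` and `assert_direct_dimensions`, whose source is quoted in the embedded comments. As an illustration only (this sketch is not part of the deleted file), the `_unellipsify` logic reads in plain Python as:

    def unellipsify(index, ndim):
        # Wrap a bare index in a tuple, expand a single Ellipsis into enough
        # full slices to cover ndim dimensions, and pad the tail with slice(None).
        tup = index if isinstance(index, tuple) else (index,)
        result, have_slices, seen_ellipsis = [], False, False
        for item in tup:
            if item is Ellipsis:
                if not seen_ellipsis:
                    result.extend([slice(None)] * (ndim - len(tup) + 1))
                    seen_ellipsis = True
                else:
                    result.append(slice(None))
                have_slices = True
            else:
                # hasattr(__index__) stands in for the C-level PyIndex_Check
                if not isinstance(item, slice) and not hasattr(item, "__index__"):
                    raise TypeError("Cannot index with type '%s'" % type(item))
                have_slices = have_slices or isinstance(item, slice)
                result.append(item)
        nslices = ndim - len(result)
        if nslices:
            result.extend([slice(None)] * nslices)
        return have_slices or nslices, tuple(result)

`assert_direct_dimensions`, generated next, simply raises ValueError("Indirect dimensions not supported") as soon as any of the first ndim suboffsets is non-negative.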
__pyx_t_2 = (__pyx_v_suboffsets + __pyx_v_ndim); - for (__pyx_t_3 = __pyx_v_suboffsets; __pyx_t_3 < __pyx_t_2; __pyx_t_3++) { - __pyx_t_1 = __pyx_t_3; - __pyx_v_suboffset = (__pyx_t_1[0]); - - /* "View.MemoryView":704 - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError("Indirect dimensions not supported") - * - */ - __pyx_t_4 = ((__pyx_v_suboffset >= 0) != 0); - if (unlikely(__pyx_t_4)) { - - /* "View.MemoryView":705 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __Pyx_PyObject_Call(__pyx_builtin_ValueError, __pyx_tuple__17, NULL); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 705, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_Raise(__pyx_t_5, 0, 0, 0); - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - __PYX_ERR(1, 705, __pyx_L1_error) - - /* "View.MemoryView":704 - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * raise ValueError("Indirect dimensions not supported") - * - */ - } - } - - /* "View.MemoryView":702 - * return have_slices or nslices, tuple(result) - * - * cdef assert_direct_dimensions(Py_ssize_t *suboffsets, int ndim): # <<<<<<<<<<<<<< - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.assert_direct_dimensions", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":712 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - -static struct __pyx_memoryview_obj *__pyx_memview_slice(struct __pyx_memoryview_obj *__pyx_v_memview, PyObject *__pyx_v_indices) { - int __pyx_v_new_ndim; - int __pyx_v_suboffset_dim; - int __pyx_v_dim; - __Pyx_memviewslice __pyx_v_src; - __Pyx_memviewslice __pyx_v_dst; - __Pyx_memviewslice *__pyx_v_p_src; - struct __pyx_memoryviewslice_obj *__pyx_v_memviewsliceobj = 0; - __Pyx_memviewslice *__pyx_v_p_dst; - int *__pyx_v_p_suboffset_dim; - Py_ssize_t __pyx_v_start; - Py_ssize_t __pyx_v_stop; - Py_ssize_t __pyx_v_step; - int __pyx_v_have_start; - int __pyx_v_have_stop; - int __pyx_v_have_step; - PyObject *__pyx_v_index = NULL; - struct __pyx_memoryview_obj *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - struct __pyx_memoryview_obj *__pyx_t_4; - char *__pyx_t_5; - int __pyx_t_6; - Py_ssize_t __pyx_t_7; - PyObject *(*__pyx_t_8)(PyObject *); - PyObject *__pyx_t_9 = NULL; - Py_ssize_t __pyx_t_10; - int __pyx_t_11; - Py_ssize_t __pyx_t_12; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memview_slice", 0); - - /* "View.MemoryView":713 - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): - * cdef int new_ndim = 0, suboffset_dim = -1, dim # <<<<<<<<<<<<<< - * cdef bint negative_step - * cdef __Pyx_memviewslice src, dst - */ - __pyx_v_new_ndim = 0; - __pyx_v_suboffset_dim = -1; - - /* 
"View.MemoryView":720 - * - * - * memset(&dst, 0, sizeof(dst)) # <<<<<<<<<<<<<< - * - * cdef _memoryviewslice memviewsliceobj - */ - (void)(memset((&__pyx_v_dst), 0, (sizeof(__pyx_v_dst)))); - - /* "View.MemoryView":724 - * cdef _memoryviewslice memviewsliceobj - * - * assert memview.view.ndim > 0 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - #ifndef CYTHON_WITHOUT_ASSERTIONS - if (unlikely(!Py_OptimizeFlag)) { - if (unlikely(!((__pyx_v_memview->view.ndim > 0) != 0))) { - PyErr_SetNone(PyExc_AssertionError); - __PYX_ERR(1, 724, __pyx_L1_error) - } - } - #endif - - /* "View.MemoryView":726 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":727 - * - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview # <<<<<<<<<<<<<< - * p_src = &memviewsliceobj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 727, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_3); - __pyx_v_memviewsliceobj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":728 - * if isinstance(memview, _memoryviewslice): - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, &src) - */ - __pyx_v_p_src = (&__pyx_v_memviewsliceobj->from_slice); - - /* "View.MemoryView":726 - * assert memview.view.ndim > 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * memviewsliceobj = memview - * p_src = &memviewsliceobj.from_slice - */ - goto __pyx_L3; - } - - /* "View.MemoryView":730 - * p_src = &memviewsliceobj.from_slice - * else: - * slice_copy(memview, &src) # <<<<<<<<<<<<<< - * p_src = &src - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_src)); - - /* "View.MemoryView":731 - * else: - * slice_copy(memview, &src) - * p_src = &src # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_p_src = (&__pyx_v_src); - } - __pyx_L3:; - - /* "View.MemoryView":737 - * - * - * dst.memview = p_src.memview # <<<<<<<<<<<<<< - * dst.data = p_src.data - * - */ - __pyx_t_4 = __pyx_v_p_src->memview; - __pyx_v_dst.memview = __pyx_t_4; - - /* "View.MemoryView":738 - * - * dst.memview = p_src.memview - * dst.data = p_src.data # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __pyx_v_p_src->data; - __pyx_v_dst.data = __pyx_t_5; - - /* "View.MemoryView":743 - * - * - * cdef __Pyx_memviewslice *p_dst = &dst # <<<<<<<<<<<<<< - * cdef int *p_suboffset_dim = &suboffset_dim - * cdef Py_ssize_t start, stop, step - */ - __pyx_v_p_dst = (&__pyx_v_dst); - - /* "View.MemoryView":744 - * - * cdef __Pyx_memviewslice *p_dst = &dst - * cdef int *p_suboffset_dim = &suboffset_dim # <<<<<<<<<<<<<< - * cdef Py_ssize_t start, stop, step - * cdef bint have_start, have_stop, have_step - */ - __pyx_v_p_suboffset_dim = (&__pyx_v_suboffset_dim); - - /* "View.MemoryView":748 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * slice_memviewslice( - */ - __pyx_t_6 = 0; - if (likely(PyList_CheckExact(__pyx_v_indices)) || PyTuple_CheckExact(__pyx_v_indices)) { - 
__pyx_t_3 = __pyx_v_indices; __Pyx_INCREF(__pyx_t_3); __pyx_t_7 = 0; - __pyx_t_8 = NULL; - } else { - __pyx_t_7 = -1; __pyx_t_3 = PyObject_GetIter(__pyx_v_indices); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 748, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_8 = Py_TYPE(__pyx_t_3)->tp_iternext; if (unlikely(!__pyx_t_8)) __PYX_ERR(1, 748, __pyx_L1_error) - } - for (;;) { - if (likely(!__pyx_t_8)) { - if (likely(PyList_CheckExact(__pyx_t_3))) { - if (__pyx_t_7 >= PyList_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_9 = PyList_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 748, __pyx_L1_error) - #else - __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 748, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - } else { - if (__pyx_t_7 >= PyTuple_GET_SIZE(__pyx_t_3)) break; - #if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - __pyx_t_9 = PyTuple_GET_ITEM(__pyx_t_3, __pyx_t_7); __Pyx_INCREF(__pyx_t_9); __pyx_t_7++; if (unlikely(0 < 0)) __PYX_ERR(1, 748, __pyx_L1_error) - #else - __pyx_t_9 = PySequence_ITEM(__pyx_t_3, __pyx_t_7); __pyx_t_7++; if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 748, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - #endif - } - } else { - __pyx_t_9 = __pyx_t_8(__pyx_t_3); - if (unlikely(!__pyx_t_9)) { - PyObject* exc_type = PyErr_Occurred(); - if (exc_type) { - if (likely(__Pyx_PyErr_GivenExceptionMatches(exc_type, PyExc_StopIteration))) PyErr_Clear(); - else __PYX_ERR(1, 748, __pyx_L1_error) - } - break; - } - __Pyx_GOTREF(__pyx_t_9); - } - __Pyx_XDECREF_SET(__pyx_v_index, __pyx_t_9); - __pyx_t_9 = 0; - __pyx_v_dim = __pyx_t_6; - __pyx_t_6 = (__pyx_t_6 + 1); - - /* "View.MemoryView":749 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * slice_memviewslice( - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - __pyx_t_2 = (PyIndex_Check(__pyx_v_index) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":753 - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - * index, 0, 0, # start, stop, step # <<<<<<<<<<<<<< - * 0, 0, 0, # have_{start,stop,step} - * False) - */ - __pyx_t_10 = __Pyx_PyIndex_AsSsize_t(__pyx_v_index); if (unlikely((__pyx_t_10 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 753, __pyx_L1_error) - - /* "View.MemoryView":750 - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_t_10, 0, 0, 0, 0, 0, 0); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 750, __pyx_L1_error) - - /* "View.MemoryView":749 - * - * for dim, index in enumerate(indices): - * if PyIndex_Check(index): # <<<<<<<<<<<<<< - * slice_memviewslice( - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - */ - goto __pyx_L6; - } - - /* "View.MemoryView":756 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - __pyx_t_2 = (__pyx_v_index == Py_None); - __pyx_t_1 = 
(__pyx_t_2 != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":757 - * False) - * elif index is None: - * p_dst.shape[new_ndim] = 1 # <<<<<<<<<<<<<< - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - */ - (__pyx_v_p_dst->shape[__pyx_v_new_ndim]) = 1; - - /* "View.MemoryView":758 - * elif index is None: - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 # <<<<<<<<<<<<<< - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 - */ - (__pyx_v_p_dst->strides[__pyx_v_new_ndim]) = 0; - - /* "View.MemoryView":759 - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 # <<<<<<<<<<<<<< - * new_ndim += 1 - * else: - */ - (__pyx_v_p_dst->suboffsets[__pyx_v_new_ndim]) = -1L; - - /* "View.MemoryView":760 - * p_dst.strides[new_ndim] = 0 - * p_dst.suboffsets[new_ndim] = -1 - * new_ndim += 1 # <<<<<<<<<<<<<< - * else: - * start = index.start or 0 - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - - /* "View.MemoryView":756 - * 0, 0, 0, # have_{start,stop,step} - * False) - * elif index is None: # <<<<<<<<<<<<<< - * p_dst.shape[new_ndim] = 1 - * p_dst.strides[new_ndim] = 0 - */ - goto __pyx_L6; - } - - /* "View.MemoryView":762 - * new_ndim += 1 - * else: - * start = index.start or 0 # <<<<<<<<<<<<<< - * stop = index.stop or 0 - * step = index.step or 0 - */ - /*else*/ { - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 762, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 762, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 762, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L7_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L7_bool_binop_done:; - __pyx_v_start = __pyx_t_10; - - /* "View.MemoryView":763 - * else: - * start = index.start or 0 - * stop = index.stop or 0 # <<<<<<<<<<<<<< - * step = index.step or 0 - * - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 763, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 763, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 763, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L9_bool_binop_done:; - __pyx_v_stop = __pyx_t_10; - - /* "View.MemoryView":764 - * start = index.start or 0 - * stop = index.stop or 0 - * step = index.step or 0 # <<<<<<<<<<<<<< - * - * have_start = index.start is not None - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 764, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = __Pyx_PyObject_IsTrue(__pyx_t_9); if (unlikely(__pyx_t_1 < 0)) __PYX_ERR(1, 764, __pyx_L1_error) - if (!__pyx_t_1) { - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - } else { - __pyx_t_12 = __Pyx_PyIndex_AsSsize_t(__pyx_t_9); if (unlikely((__pyx_t_12 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 764, __pyx_L1_error) - __pyx_t_10 = __pyx_t_12; - 
__Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - goto __pyx_L11_bool_binop_done; - } - __pyx_t_10 = 0; - __pyx_L11_bool_binop_done:; - __pyx_v_step = __pyx_t_10; - - /* "View.MemoryView":766 - * step = index.step or 0 - * - * have_start = index.start is not None # <<<<<<<<<<<<<< - * have_stop = index.stop is not None - * have_step = index.step is not None - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_start); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 766, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_start = __pyx_t_1; - - /* "View.MemoryView":767 - * - * have_start = index.start is not None - * have_stop = index.stop is not None # <<<<<<<<<<<<<< - * have_step = index.step is not None - * - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_stop); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 767, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_stop = __pyx_t_1; - - /* "View.MemoryView":768 - * have_start = index.start is not None - * have_stop = index.stop is not None - * have_step = index.step is not None # <<<<<<<<<<<<<< - * - * slice_memviewslice( - */ - __pyx_t_9 = __Pyx_PyObject_GetAttrStr(__pyx_v_index, __pyx_n_s_step); if (unlikely(!__pyx_t_9)) __PYX_ERR(1, 768, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_9); - __pyx_t_1 = (__pyx_t_9 != Py_None); - __Pyx_DECREF(__pyx_t_9); __pyx_t_9 = 0; - __pyx_v_have_step = __pyx_t_1; - - /* "View.MemoryView":770 - * have_step = index.step is not None - * - * slice_memviewslice( # <<<<<<<<<<<<<< - * p_dst, p_src.shape[dim], p_src.strides[dim], p_src.suboffsets[dim], - * dim, new_ndim, p_suboffset_dim, - */ - __pyx_t_11 = __pyx_memoryview_slice_memviewslice(__pyx_v_p_dst, (__pyx_v_p_src->shape[__pyx_v_dim]), (__pyx_v_p_src->strides[__pyx_v_dim]), (__pyx_v_p_src->suboffsets[__pyx_v_dim]), __pyx_v_dim, __pyx_v_new_ndim, __pyx_v_p_suboffset_dim, __pyx_v_start, __pyx_v_stop, __pyx_v_step, __pyx_v_have_start, __pyx_v_have_stop, __pyx_v_have_step, 1); if (unlikely(__pyx_t_11 == ((int)-1))) __PYX_ERR(1, 770, __pyx_L1_error) - - /* "View.MemoryView":776 - * have_start, have_stop, have_step, - * True) - * new_ndim += 1 # <<<<<<<<<<<<<< - * - * if isinstance(memview, _memoryviewslice): - */ - __pyx_v_new_ndim = (__pyx_v_new_ndim + 1); - } - __pyx_L6:; - - /* "View.MemoryView":748 - * cdef bint have_start, have_stop, have_step - * - * for dim, index in enumerate(indices): # <<<<<<<<<<<<<< - * if PyIndex_Check(index): - * slice_memviewslice( - */ - } - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - - /* "View.MemoryView":778 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":779 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __Pyx_XDECREF(((PyObject *)__pyx_r)); - - /* "View.MemoryView":780 - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, # <<<<<<<<<<<<<< - * memviewsliceobj.to_dtype_func, - * memview.dtype_is_object) - */ - if (unlikely(!__pyx_v_memviewsliceobj)) 
{ __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 780, __pyx_L1_error) } - - /* "View.MemoryView":781 - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * else: - */ - if (unlikely(!__pyx_v_memviewsliceobj)) { __Pyx_RaiseUnboundLocalError("memviewsliceobj"); __PYX_ERR(1, 781, __pyx_L1_error) } - - /* "View.MemoryView":779 - * - * if isinstance(memview, _memoryviewslice): - * return memoryview_fromslice(dst, new_ndim, # <<<<<<<<<<<<<< - * memviewsliceobj.to_object_func, - * memviewsliceobj.to_dtype_func, - */ - __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, __pyx_v_memviewsliceobj->to_object_func, __pyx_v_memviewsliceobj->to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 779, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 779, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - - /* "View.MemoryView":778 - * new_ndim += 1 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * return memoryview_fromslice(dst, new_ndim, - * memviewsliceobj.to_object_func, - */ - } - - /* "View.MemoryView":784 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - /*else*/ { - __Pyx_XDECREF(((PyObject *)__pyx_r)); - - /* "View.MemoryView":785 - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_fromslice(__pyx_v_dst, __pyx_v_new_ndim, NULL, NULL, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 784, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - - /* "View.MemoryView":784 - * memview.dtype_is_object) - * else: - * return memoryview_fromslice(dst, new_ndim, NULL, NULL, # <<<<<<<<<<<<<< - * memview.dtype_is_object) - * - */ - if (!(likely(((__pyx_t_3) == Py_None) || likely(__Pyx_TypeTest(__pyx_t_3, __pyx_memoryview_type))))) __PYX_ERR(1, 784, __pyx_L1_error) - __pyx_r = ((struct __pyx_memoryview_obj *)__pyx_t_3); - __pyx_t_3 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":712 - * - * @cname('__pyx_memview_slice') - * cdef memoryview memview_slice(memoryview memview, object indices): # <<<<<<<<<<<<<< - * cdef int new_ndim = 0, suboffset_dim = -1, dim - * cdef bint negative_step - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_9); - __Pyx_AddTraceback("View.MemoryView.memview_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_memviewsliceobj); - __Pyx_XDECREF(__pyx_v_index); - __Pyx_XGIVEREF((PyObject *)__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":809 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - -static int __pyx_memoryview_slice_memviewslice(__Pyx_memviewslice *__pyx_v_dst, Py_ssize_t __pyx_v_shape, Py_ssize_t __pyx_v_stride, Py_ssize_t __pyx_v_suboffset, int __pyx_v_dim, int __pyx_v_new_ndim, int *__pyx_v_suboffset_dim, Py_ssize_t __pyx_v_start, Py_ssize_t __pyx_v_stop, Py_ssize_t __pyx_v_step, int 
__pyx_v_have_start, int __pyx_v_have_stop, int __pyx_v_have_step, int __pyx_v_is_slice) { - Py_ssize_t __pyx_v_new_shape; - int __pyx_v_negative_step; - int __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":829 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - __pyx_t_1 = ((!(__pyx_v_is_slice != 0)) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":831 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - __pyx_t_1 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":832 - * - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if not 0 <= start < shape: - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":831 - * if not is_slice: - * - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if not 0 <= start < shape: - */ - } - - /* "View.MemoryView":833 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - __pyx_t_1 = (0 <= __pyx_v_start); - if (__pyx_t_1) { - __pyx_t_1 = (__pyx_v_start < __pyx_v_shape); - } - __pyx_t_2 = ((!(__pyx_t_1 != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":834 - * start += shape - * if not 0 <= start < shape: - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) # <<<<<<<<<<<<<< - * else: - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"Index out of bounds (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 834, __pyx_L1_error) - - /* "View.MemoryView":833 - * if start < 0: - * start += shape - * if not 0 <= start < shape: # <<<<<<<<<<<<<< - * _err_dim(IndexError, "Index out of bounds (axis %d)", dim) - * else: - */ - } - - /* "View.MemoryView":829 - * cdef bint negative_step - * - * if not is_slice: # <<<<<<<<<<<<<< - * - * if start < 0: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":837 - * else: - * - * negative_step = have_step != 0 and step < 0 # <<<<<<<<<<<<<< - * - * if have_step and step == 0: - */ - /*else*/ { - __pyx_t_1 = ((__pyx_v_have_step != 0) != 0); - if (__pyx_t_1) { - } else { - __pyx_t_2 = __pyx_t_1; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_1 = ((__pyx_v_step < 0) != 0); - __pyx_t_2 = __pyx_t_1; - __pyx_L6_bool_binop_done:; - __pyx_v_negative_step = __pyx_t_2; - - /* "View.MemoryView":839 - * negative_step = have_step != 0 and step < 0 - * - * if have_step and step == 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) - * - */ - __pyx_t_1 = (__pyx_v_have_step != 0); - if (__pyx_t_1) { - } else { - __pyx_t_2 = __pyx_t_1; - goto __pyx_L9_bool_binop_done; - } - __pyx_t_1 = ((__pyx_v_step == 0) != 0); - __pyx_t_2 = __pyx_t_1; - __pyx_L9_bool_binop_done:; - if (__pyx_t_2) { - - /* "View.MemoryView":840 - * - * if have_step and step == 0: - * _err_dim(ValueError, "Step may not be zero (axis %d)", dim) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Step may not be zero (axis %d)"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 840, __pyx_L1_error) - - /* "View.MemoryView":839 - * negative_step = have_step != 0 and step < 0 - * - * if have_step and step == 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, 
"Step may not be zero (axis %d)", dim) - * - */ - } - - /* "View.MemoryView":843 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - __pyx_t_2 = (__pyx_v_have_start != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":844 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - __pyx_t_2 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":845 - * if have_start: - * if start < 0: - * start += shape # <<<<<<<<<<<<<< - * if start < 0: - * start = 0 - */ - __pyx_v_start = (__pyx_v_start + __pyx_v_shape); - - /* "View.MemoryView":846 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - __pyx_t_2 = ((__pyx_v_start < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":847 - * start += shape - * if start < 0: - * start = 0 # <<<<<<<<<<<<<< - * elif start >= shape: - * if negative_step: - */ - __pyx_v_start = 0; - - /* "View.MemoryView":846 - * if start < 0: - * start += shape - * if start < 0: # <<<<<<<<<<<<<< - * start = 0 - * elif start >= shape: - */ - } - - /* "View.MemoryView":844 - * - * if have_start: - * if start < 0: # <<<<<<<<<<<<<< - * start += shape - * if start < 0: - */ - goto __pyx_L12; - } - - /* "View.MemoryView":848 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - __pyx_t_2 = ((__pyx_v_start >= __pyx_v_shape) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":849 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":850 - * elif start >= shape: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = shape - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":849 - * start = 0 - * elif start >= shape: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L14; - } - - /* "View.MemoryView":852 - * start = shape - 1 - * else: - * start = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - /*else*/ { - __pyx_v_start = __pyx_v_shape; - } - __pyx_L14:; - - /* "View.MemoryView":848 - * if start < 0: - * start = 0 - * elif start >= shape: # <<<<<<<<<<<<<< - * if negative_step: - * start = shape - 1 - */ - } - __pyx_L12:; - - /* "View.MemoryView":843 - * - * - * if have_start: # <<<<<<<<<<<<<< - * if start < 0: - * start += shape - */ - goto __pyx_L11; - } - - /* "View.MemoryView":854 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":855 - * else: - * if negative_step: - * start = shape - 1 # <<<<<<<<<<<<<< - * else: - * start = 0 - */ - __pyx_v_start = (__pyx_v_shape - 1); - - /* "View.MemoryView":854 - * start = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * start = shape - 1 - * else: - */ - goto __pyx_L15; - } - - /* "View.MemoryView":857 - * start = shape - 1 - * else: - * start = 0 # <<<<<<<<<<<<<< - * - * if have_stop: - */ - /*else*/ { - __pyx_v_start = 0; - } - __pyx_L15:; - } - __pyx_L11:; - - /* "View.MemoryView":859 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - __pyx_t_2 = (__pyx_v_have_stop != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":860 - * - * if have_stop: - * if stop < 0: # 
<<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - __pyx_t_2 = ((__pyx_v_stop < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":861 - * if have_stop: - * if stop < 0: - * stop += shape # <<<<<<<<<<<<<< - * if stop < 0: - * stop = 0 - */ - __pyx_v_stop = (__pyx_v_stop + __pyx_v_shape); - - /* "View.MemoryView":862 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - __pyx_t_2 = ((__pyx_v_stop < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":863 - * stop += shape - * if stop < 0: - * stop = 0 # <<<<<<<<<<<<<< - * elif stop > shape: - * stop = shape - */ - __pyx_v_stop = 0; - - /* "View.MemoryView":862 - * if stop < 0: - * stop += shape - * if stop < 0: # <<<<<<<<<<<<<< - * stop = 0 - * elif stop > shape: - */ - } - - /* "View.MemoryView":860 - * - * if have_stop: - * if stop < 0: # <<<<<<<<<<<<<< - * stop += shape - * if stop < 0: - */ - goto __pyx_L17; - } - - /* "View.MemoryView":864 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - __pyx_t_2 = ((__pyx_v_stop > __pyx_v_shape) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":865 - * stop = 0 - * elif stop > shape: - * stop = shape # <<<<<<<<<<<<<< - * else: - * if negative_step: - */ - __pyx_v_stop = __pyx_v_shape; - - /* "View.MemoryView":864 - * if stop < 0: - * stop = 0 - * elif stop > shape: # <<<<<<<<<<<<<< - * stop = shape - * else: - */ - } - __pyx_L17:; - - /* "View.MemoryView":859 - * start = 0 - * - * if have_stop: # <<<<<<<<<<<<<< - * if stop < 0: - * stop += shape - */ - goto __pyx_L16; - } - - /* "View.MemoryView":867 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - /*else*/ { - __pyx_t_2 = (__pyx_v_negative_step != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":868 - * else: - * if negative_step: - * stop = -1 # <<<<<<<<<<<<<< - * else: - * stop = shape - */ - __pyx_v_stop = -1L; - - /* "View.MemoryView":867 - * stop = shape - * else: - * if negative_step: # <<<<<<<<<<<<<< - * stop = -1 - * else: - */ - goto __pyx_L19; - } - - /* "View.MemoryView":870 - * stop = -1 - * else: - * stop = shape # <<<<<<<<<<<<<< - * - * if not have_step: - */ - /*else*/ { - __pyx_v_stop = __pyx_v_shape; - } - __pyx_L19:; - } - __pyx_L16:; - - /* "View.MemoryView":872 - * stop = shape - * - * if not have_step: # <<<<<<<<<<<<<< - * step = 1 - * - */ - __pyx_t_2 = ((!(__pyx_v_have_step != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":873 - * - * if not have_step: - * step = 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_step = 1; - - /* "View.MemoryView":872 - * stop = shape - * - * if not have_step: # <<<<<<<<<<<<<< - * step = 1 - * - */ - } - - /* "View.MemoryView":877 - * - * with cython.cdivision(True): - * new_shape = (stop - start) // step # <<<<<<<<<<<<<< - * - * if (stop - start) - step * new_shape: - */ - __pyx_v_new_shape = ((__pyx_v_stop - __pyx_v_start) / __pyx_v_step); - - /* "View.MemoryView":879 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * new_shape += 1 - * - */ - __pyx_t_2 = (((__pyx_v_stop - __pyx_v_start) - (__pyx_v_step * __pyx_v_new_shape)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":880 - * - * if (stop - start) - step * new_shape: - * new_shape += 1 # <<<<<<<<<<<<<< - * - * if new_shape < 0: - */ - __pyx_v_new_shape = (__pyx_v_new_shape + 1); - - /* "View.MemoryView":879 - * new_shape = (stop - start) // step - * - * if (stop - start) - step * new_shape: # <<<<<<<<<<<<<< - * 
new_shape += 1 - * - */ - } - - /* "View.MemoryView":882 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - __pyx_t_2 = ((__pyx_v_new_shape < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":883 - * - * if new_shape < 0: - * new_shape = 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_new_shape = 0; - - /* "View.MemoryView":882 - * new_shape += 1 - * - * if new_shape < 0: # <<<<<<<<<<<<<< - * new_shape = 0 - * - */ - } - - /* "View.MemoryView":886 - * - * - * dst.strides[new_ndim] = stride * step # <<<<<<<<<<<<<< - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset - */ - (__pyx_v_dst->strides[__pyx_v_new_ndim]) = (__pyx_v_stride * __pyx_v_step); - - /* "View.MemoryView":887 - * - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape # <<<<<<<<<<<<<< - * dst.suboffsets[new_ndim] = suboffset - * - */ - (__pyx_v_dst->shape[__pyx_v_new_ndim]) = __pyx_v_new_shape; - - /* "View.MemoryView":888 - * dst.strides[new_ndim] = stride * step - * dst.shape[new_ndim] = new_shape - * dst.suboffsets[new_ndim] = suboffset # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_dst->suboffsets[__pyx_v_new_ndim]) = __pyx_v_suboffset; - } - __pyx_L3:; - - /* "View.MemoryView":891 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - __pyx_t_2 = (((__pyx_v_suboffset_dim[0]) < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":892 - * - * if suboffset_dim[0] < 0: - * dst.data += start * stride # <<<<<<<<<<<<<< - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride - */ - __pyx_v_dst->data = (__pyx_v_dst->data + (__pyx_v_start * __pyx_v_stride)); - - /* "View.MemoryView":891 - * - * - * if suboffset_dim[0] < 0: # <<<<<<<<<<<<<< - * dst.data += start * stride - * else: - */ - goto __pyx_L23; - } - - /* "View.MemoryView":894 - * dst.data += start * stride - * else: - * dst.suboffsets[suboffset_dim[0]] += start * stride # <<<<<<<<<<<<<< - * - * if suboffset >= 0: - */ - /*else*/ { - __pyx_t_3 = (__pyx_v_suboffset_dim[0]); - (__pyx_v_dst->suboffsets[__pyx_t_3]) = ((__pyx_v_dst->suboffsets[__pyx_t_3]) + (__pyx_v_start * __pyx_v_stride)); - } - __pyx_L23:; - - /* "View.MemoryView":896 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":897 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset - */ - __pyx_t_2 = ((!(__pyx_v_is_slice != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":898 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = ( dst.data)[0] + suboffset - * else: - */ - __pyx_t_2 = ((__pyx_v_new_ndim == 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":899 - * if not is_slice: - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset # <<<<<<<<<<<<<< - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " - */ - __pyx_v_dst->data = ((((char **)__pyx_v_dst->data)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":898 - * if suboffset >= 0: - * if not is_slice: - * if new_ndim == 0: # <<<<<<<<<<<<<< - * dst.data = ( dst.data)[0] + suboffset - * else: - */ - goto __pyx_L26; - } - - /* "View.MemoryView":901 - * dst.data = ( dst.data)[0] + suboffset - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " # <<<<<<<<<<<<<< - * 
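The branches above normalize one slice per dimension, mirroring the Cython source quoted in the comments: negative start/stop are wrapped and clamped against the extent, a missing step defaults to 1, and the new extent is the ceiling of (stop - start) / step, clamped at zero. A hedged plain-Python restatement of that bounds handling (the suboffset and data-pointer bookkeeping that follows is omitted):

    def normalize_slice(start, stop, step, have_start, have_stop, have_step, shape):
        # Mirrors the have_start / have_stop / have_step branches in the C above.
        negative_step = have_step and step < 0
        if have_step and step == 0:
            raise ValueError("Step may not be zero")
        if have_start:
            if start < 0:
                start += shape
                if start < 0:
                    start = 0
            elif start >= shape:
                start = shape - 1 if negative_step else shape
        else:
            start = shape - 1 if negative_step else 0
        if have_stop:
            if stop < 0:
                stop += shape
                if stop < 0:
                    stop = 0
            elif stop > shape:
                stop = shape
        else:
            stop = -1 if negative_step else shape
        if not have_step:
            step = 1
        new_shape = (stop - start) // step      # the C source uses cdivision(True)
        if (stop - start) - step * new_shape:   # round the element count up
            new_shape += 1
        if new_shape < 0:
            new_shape = 0
        return start, stop, step, new_shape

The destination stride then becomes stride * step, and the data pointer (or the parent suboffset) is advanced by start * stride, as the assignments above show.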
"must be indexed and not sliced", dim) - * else: - */ - /*else*/ { - - /* "View.MemoryView":902 - * else: - * _err_dim(IndexError, "All dimensions preceding dimension %d " - * "must be indexed and not sliced", dim) # <<<<<<<<<<<<<< - * else: - * suboffset_dim[0] = new_ndim - */ - __pyx_t_3 = __pyx_memoryview_err_dim(__pyx_builtin_IndexError, ((char *)"All dimensions preceding dimension %d must be indexed and not sliced"), __pyx_v_dim); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 901, __pyx_L1_error) - } - __pyx_L26:; - - /* "View.MemoryView":897 - * - * if suboffset >= 0: - * if not is_slice: # <<<<<<<<<<<<<< - * if new_ndim == 0: - * dst.data = ( dst.data)[0] + suboffset - */ - goto __pyx_L25; - } - - /* "View.MemoryView":904 - * "must be indexed and not sliced", dim) - * else: - * suboffset_dim[0] = new_ndim # <<<<<<<<<<<<<< - * - * return 0 - */ - /*else*/ { - (__pyx_v_suboffset_dim[0]) = __pyx_v_new_ndim; - } - __pyx_L25:; - - /* "View.MemoryView":896 - * dst.suboffsets[suboffset_dim[0]] += start * stride - * - * if suboffset >= 0: # <<<<<<<<<<<<<< - * if not is_slice: - * if new_ndim == 0: - */ - } - - /* "View.MemoryView":906 - * suboffset_dim[0] = new_ndim - * - * return 0 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":809 - * - * @cname('__pyx_memoryview_slice_memviewslice') - * cdef int slice_memviewslice( # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * Py_ssize_t shape, Py_ssize_t stride, Py_ssize_t suboffset, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.slice_memviewslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = -1; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":912 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - -static char *__pyx_pybuffer_index(Py_buffer *__pyx_v_view, char *__pyx_v_bufp, Py_ssize_t __pyx_v_index, Py_ssize_t __pyx_v_dim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_suboffset; - Py_ssize_t __pyx_v_itemsize; - char *__pyx_v_resultp; - char *__pyx_r; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("pybuffer_index", 0); - - /* "View.MemoryView":914 - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 # <<<<<<<<<<<<<< - * cdef Py_ssize_t itemsize = view.itemsize - * cdef char *resultp - */ - __pyx_v_suboffset = -1L; - - /* "View.MemoryView":915 - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - * cdef Py_ssize_t itemsize = view.itemsize # <<<<<<<<<<<<<< - * cdef char *resultp - * - */ - __pyx_t_1 = __pyx_v_view->itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":918 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len / itemsize - * stride = itemsize - */ - __pyx_t_2 = ((__pyx_v_view->ndim == 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":919 - * - * if view.ndim == 
0: - * shape = view.len / itemsize # <<<<<<<<<<<<<< - * stride = itemsize - * else: - */ - if (unlikely(__pyx_v_itemsize == 0)) { - PyErr_SetString(PyExc_ZeroDivisionError, "integer division or modulo by zero"); - __PYX_ERR(1, 919, __pyx_L1_error) - } - else if (sizeof(Py_ssize_t) == sizeof(long) && (!(((Py_ssize_t)-1) > 0)) && unlikely(__pyx_v_itemsize == (Py_ssize_t)-1) && unlikely(UNARY_NEG_WOULD_OVERFLOW(__pyx_v_view->len))) { - PyErr_SetString(PyExc_OverflowError, "value too large to perform division"); - __PYX_ERR(1, 919, __pyx_L1_error) - } - __pyx_v_shape = __Pyx_div_Py_ssize_t(__pyx_v_view->len, __pyx_v_itemsize); - - /* "View.MemoryView":920 - * if view.ndim == 0: - * shape = view.len / itemsize - * stride = itemsize # <<<<<<<<<<<<<< - * else: - * shape = view.shape[dim] - */ - __pyx_v_stride = __pyx_v_itemsize; - - /* "View.MemoryView":918 - * cdef char *resultp - * - * if view.ndim == 0: # <<<<<<<<<<<<<< - * shape = view.len / itemsize - * stride = itemsize - */ - goto __pyx_L3; - } - - /* "View.MemoryView":922 - * stride = itemsize - * else: - * shape = view.shape[dim] # <<<<<<<<<<<<<< - * stride = view.strides[dim] - * if view.suboffsets != NULL: - */ - /*else*/ { - __pyx_v_shape = (__pyx_v_view->shape[__pyx_v_dim]); - - /* "View.MemoryView":923 - * else: - * shape = view.shape[dim] - * stride = view.strides[dim] # <<<<<<<<<<<<<< - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] - */ - __pyx_v_stride = (__pyx_v_view->strides[__pyx_v_dim]); - - /* "View.MemoryView":924 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - __pyx_t_2 = ((__pyx_v_view->suboffsets != NULL) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":925 - * stride = view.strides[dim] - * if view.suboffsets != NULL: - * suboffset = view.suboffsets[dim] # <<<<<<<<<<<<<< - * - * if index < 0: - */ - __pyx_v_suboffset = (__pyx_v_view->suboffsets[__pyx_v_dim]); - - /* "View.MemoryView":924 - * shape = view.shape[dim] - * stride = view.strides[dim] - * if view.suboffsets != NULL: # <<<<<<<<<<<<<< - * suboffset = view.suboffsets[dim] - * - */ - } - } - __pyx_L3:; - - /* "View.MemoryView":927 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - __pyx_t_2 = ((__pyx_v_index < 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":928 - * - * if index < 0: - * index += view.shape[dim] # <<<<<<<<<<<<<< - * if index < 0: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - */ - __pyx_v_index = (__pyx_v_index + (__pyx_v_view->shape[__pyx_v_dim])); - - /* "View.MemoryView":929 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - __pyx_t_2 = ((__pyx_v_index < 0) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":930 - * index += view.shape[dim] - * if index < 0: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<< - * - * if index >= shape: - */ - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 930, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 930, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = 
__Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 930, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 930, __pyx_L1_error) - - /* "View.MemoryView":929 - * if index < 0: - * index += view.shape[dim] - * if index < 0: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - } - - /* "View.MemoryView":927 - * suboffset = view.suboffsets[dim] - * - * if index < 0: # <<<<<<<<<<<<<< - * index += view.shape[dim] - * if index < 0: - */ - } - - /* "View.MemoryView":932 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - __pyx_t_2 = ((__pyx_v_index >= __pyx_v_shape) != 0); - if (unlikely(__pyx_t_2)) { - - /* "View.MemoryView":933 - * - * if index >= shape: - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) # <<<<<<<<<<<<<< - * - * resultp = bufp + index * stride - */ - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 933, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = __Pyx_PyString_Format(__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 933, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_t_3 = __Pyx_PyObject_CallOneArg(__pyx_builtin_IndexError, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 933, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_3, 0, 0, 0); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __PYX_ERR(1, 933, __pyx_L1_error) - - /* "View.MemoryView":932 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * if index >= shape: # <<<<<<<<<<<<<< - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - */ - } - - /* "View.MemoryView":935 - * raise IndexError("Out of bounds on buffer access (axis %d)" % dim) - * - * resultp = bufp + index * stride # <<<<<<<<<<<<<< - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset - */ - __pyx_v_resultp = (__pyx_v_bufp + (__pyx_v_index * __pyx_v_stride)); - - /* "View.MemoryView":936 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - __pyx_t_2 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":937 - * resultp = bufp + index * stride - * if suboffset >= 0: - * resultp = ( resultp)[0] + suboffset # <<<<<<<<<<<<<< - * - * return resultp - */ - __pyx_v_resultp = ((((char **)__pyx_v_resultp)[0]) + __pyx_v_suboffset); - - /* "View.MemoryView":936 - * - * resultp = bufp + index * stride - * if suboffset >= 0: # <<<<<<<<<<<<<< - * resultp = ( resultp)[0] + suboffset - * - */ - } - - /* "View.MemoryView":939 - * resultp = ( resultp)[0] + suboffset - * - * return resultp # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_resultp; - goto __pyx_L0; - - /* "View.MemoryView":912 - * - * @cname('__pyx_pybuffer_index') - * cdef char *pybuffer_index(Py_buffer *view, char *bufp, Py_ssize_t index, # <<<<<<<<<<<<<< - * Py_ssize_t dim) except NULL: - * cdef Py_ssize_t shape, stride, suboffset = -1 - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - 
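pybuffer_index, whose generated body surrounds this point, resolves a single integer index into a byte offset: wrap a negative index by the extent, bounds-check it, then step by the stride; for indirect dimensions the C code additionally dereferences the resulting pointer and adds the suboffset. An illustrative sketch on integer offsets rather than char pointers (not part of the file):

    def pybuffer_index_offset(shape, stride, index, dim=0):
        # Bounds handling from pybuffer_index; the suboffset dereference is
        # only indicated, since it needs a real pointer to follow.
        if index < 0:
            index += shape
            if index < 0:
                raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
        if index >= shape:
            raise IndexError("Out of bounds on buffer access (axis %d)" % dim)
        return index * stride   # byte offset from bufp; follow suboffset if >= 0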
__Pyx_AddTraceback("View.MemoryView.pybuffer_index", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":945 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - -static int __pyx_memslice_transpose(__Pyx_memviewslice *__pyx_v_memslice) { - int __pyx_v_ndim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - int __pyx_v_i; - int __pyx_v_j; - int __pyx_r; - int __pyx_t_1; - Py_ssize_t *__pyx_t_2; - long __pyx_t_3; - long __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - int __pyx_t_7; - int __pyx_t_8; - int __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":946 - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: - * cdef int ndim = memslice.memview.view.ndim # <<<<<<<<<<<<<< - * - * cdef Py_ssize_t *shape = memslice.shape - */ - __pyx_t_1 = __pyx_v_memslice->memview->view.ndim; - __pyx_v_ndim = __pyx_t_1; - - /* "View.MemoryView":948 - * cdef int ndim = memslice.memview.view.ndim - * - * cdef Py_ssize_t *shape = memslice.shape # <<<<<<<<<<<<<< - * cdef Py_ssize_t *strides = memslice.strides - * - */ - __pyx_t_2 = __pyx_v_memslice->shape; - __pyx_v_shape = __pyx_t_2; - - /* "View.MemoryView":949 - * - * cdef Py_ssize_t *shape = memslice.shape - * cdef Py_ssize_t *strides = memslice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = __pyx_v_memslice->strides; - __pyx_v_strides = __pyx_t_2; - - /* "View.MemoryView":953 - * - * cdef int i, j - * for i in range(ndim / 2): # <<<<<<<<<<<<<< - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - */ - __pyx_t_3 = __Pyx_div_long(__pyx_v_ndim, 2); - __pyx_t_4 = __pyx_t_3; - for (__pyx_t_1 = 0; __pyx_t_1 < __pyx_t_4; __pyx_t_1+=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":954 - * cdef int i, j - * for i in range(ndim / 2): - * j = ndim - 1 - i # <<<<<<<<<<<<<< - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] - */ - __pyx_v_j = ((__pyx_v_ndim - 1) - __pyx_v_i); - - /* "View.MemoryView":955 - * for i in range(ndim / 2): - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] # <<<<<<<<<<<<<< - * shape[i], shape[j] = shape[j], shape[i] - * - */ - __pyx_t_5 = (__pyx_v_strides[__pyx_v_j]); - __pyx_t_6 = (__pyx_v_strides[__pyx_v_i]); - (__pyx_v_strides[__pyx_v_i]) = __pyx_t_5; - (__pyx_v_strides[__pyx_v_j]) = __pyx_t_6; - - /* "View.MemoryView":956 - * j = ndim - 1 - i - * strides[i], strides[j] = strides[j], strides[i] - * shape[i], shape[j] = shape[j], shape[i] # <<<<<<<<<<<<<< - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - */ - __pyx_t_6 = (__pyx_v_shape[__pyx_v_j]); - __pyx_t_5 = (__pyx_v_shape[__pyx_v_i]); - (__pyx_v_shape[__pyx_v_i]) = __pyx_t_6; - (__pyx_v_shape[__pyx_v_j]) = __pyx_t_5; - - /* "View.MemoryView":958 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - __pyx_t_8 = (((__pyx_v_memslice->suboffsets[__pyx_v_i]) >= 0) != 0); - if (!__pyx_t_8) { - } else { - __pyx_t_7 = __pyx_t_8; - goto __pyx_L6_bool_binop_done; - } - __pyx_t_8 = 
(((__pyx_v_memslice->suboffsets[__pyx_v_j]) >= 0) != 0); - __pyx_t_7 = __pyx_t_8; - __pyx_L6_bool_binop_done:; - if (__pyx_t_7) { - - /* "View.MemoryView":959 - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") # <<<<<<<<<<<<<< - * - * return 1 - */ - __pyx_t_9 = __pyx_memoryview_err(__pyx_builtin_ValueError, ((char *)"Cannot transpose memoryview with indirect dimensions")); if (unlikely(__pyx_t_9 == ((int)-1))) __PYX_ERR(1, 959, __pyx_L1_error) - - /* "View.MemoryView":958 - * shape[i], shape[j] = shape[j], shape[i] - * - * if memslice.suboffsets[i] >= 0 or memslice.suboffsets[j] >= 0: # <<<<<<<<<<<<<< - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - */ - } - } - - /* "View.MemoryView":961 - * _err(ValueError, "Cannot transpose memoryview with indirect dimensions") - * - * return 1 # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = 1; - goto __pyx_L0; - - /* "View.MemoryView":945 - * - * @cname('__pyx_memslice_transpose') - * cdef int transpose_memslice(__Pyx_memviewslice *memslice) nogil except 0: # <<<<<<<<<<<<<< - * cdef int ndim = memslice.memview.view.ndim - * - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.transpose_memslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = 0; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":978 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - */ - -/* Python wrapper */ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self); /*proto*/ -static void __pyx_memoryviewslice___dealloc__(PyObject *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__ (wrapper)", 0); - __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -static void __pyx_memoryviewslice___pyx_pf_15View_dot_MemoryView_16_memoryviewslice___dealloc__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__dealloc__", 0); - - /* "View.MemoryView":979 - * - * def __dealloc__(self): - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) # <<<<<<<<<<<<<< - * - * cdef convert_item_to_object(self, char *itemp): - */ - __PYX_XDEC_MEMVIEW((&__pyx_v_self->from_slice), 1); - - /* "View.MemoryView":978 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * def __dealloc__(self): # <<<<<<<<<<<<<< - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":981 - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - -static PyObject *__pyx_memoryviewslice_convert_item_to_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - 
__Pyx_RefNannySetupContext("convert_item_to_object", 0); - - /* "View.MemoryView":982 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - __pyx_t_1 = ((__pyx_v_self->to_object_func != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":983 - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) # <<<<<<<<<<<<<< - * else: - * return memoryview.convert_item_to_object(self, itemp) - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_v_self->to_object_func(__pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 983, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - - /* "View.MemoryView":982 - * - * cdef convert_item_to_object(self, char *itemp): - * if self.to_object_func != NULL: # <<<<<<<<<<<<<< - * return self.to_object_func(itemp) - * else: - */ - } - - /* "View.MemoryView":985 - * return self.to_object_func(itemp) - * else: - * return memoryview.convert_item_to_object(self, itemp) # <<<<<<<<<<<<<< - * - * cdef assign_item_from_object(self, char *itemp, object value): - */ - /*else*/ { - __Pyx_XDECREF(__pyx_r); - __pyx_t_2 = __pyx_memoryview_convert_item_to_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 985, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_r = __pyx_t_2; - __pyx_t_2 = 0; - goto __pyx_L0; - } - - /* "View.MemoryView":981 - * __PYX_XDEC_MEMVIEW(&self.from_slice, 1) - * - * cdef convert_item_to_object(self, char *itemp): # <<<<<<<<<<<<<< - * if self.to_object_func != NULL: - * return self.to_object_func(itemp) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.convert_item_to_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":987 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - -static PyObject *__pyx_memoryviewslice_assign_item_from_object(struct __pyx_memoryviewslice_obj *__pyx_v_self, char *__pyx_v_itemp, PyObject *__pyx_v_value) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("assign_item_from_object", 0); - - /* "View.MemoryView":988 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - __pyx_t_1 = ((__pyx_v_self->to_dtype_func != NULL) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":989 - * cdef assign_item_from_object(self, char *itemp, object value): - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) # <<<<<<<<<<<<<< - * else: - * memoryview.assign_item_from_object(self, itemp, value) - */ - __pyx_t_2 = __pyx_v_self->to_dtype_func(__pyx_v_itemp, __pyx_v_value); if (unlikely(__pyx_t_2 == ((int)0))) __PYX_ERR(1, 989, __pyx_L1_error) - - /* "View.MemoryView":988 - * - * cdef assign_item_from_object(self, char *itemp, object value): - * if 
self.to_dtype_func != NULL: # <<<<<<<<<<<<<< - * self.to_dtype_func(itemp, value) - * else: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":991 - * self.to_dtype_func(itemp, value) - * else: - * memoryview.assign_item_from_object(self, itemp, value) # <<<<<<<<<<<<<< - * - * @property - */ - /*else*/ { - __pyx_t_3 = __pyx_memoryview_assign_item_from_object(((struct __pyx_memoryview_obj *)__pyx_v_self), __pyx_v_itemp, __pyx_v_value); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 991, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - } - __pyx_L3:; - - /* "View.MemoryView":987 - * return memoryview.convert_item_to_object(self, itemp) - * - * cdef assign_item_from_object(self, char *itemp, object value): # <<<<<<<<<<<<<< - * if self.to_dtype_func != NULL: - * self.to_dtype_func(itemp, value) - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.assign_item_from_object", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":994 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self); /*proto*/ -static PyObject *__pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(PyObject *__pyx_v_self) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__ (wrapper)", 0); - __pyx_r = __pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView_16_memoryviewslice_4base___get__(struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__get__", 0); - - /* "View.MemoryView":995 - * @property - * def base(self): - * return self.from_object # <<<<<<<<<<<<<< - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v_self->from_object); - __pyx_r = __pyx_v_self->from_object; - goto __pyx_L0; - - /* "View.MemoryView":994 - * - * @property - * def base(self): # <<<<<<<<<<<<<< - * return self.from_object - * - */ - - /* function exit code */ - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_1__reduce_cython__(PyObject *__pyx_v_self, CYTHON_UNUSED PyObject *unused) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__reduce_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryviewslice___reduce_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject 
*__pyx_pf___pyx_memoryviewslice___reduce_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__reduce_cython__", 0); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__18, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 2, __pyx_L1_error) - - /* "(tree fragment)":1 - * def __reduce_cython__(self): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__reduce_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - -/* Python wrapper */ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state); /*proto*/ -static PyObject *__pyx_pw___pyx_memoryviewslice_3__setstate_cython__(PyObject *__pyx_v_self, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__setstate_cython__ (wrapper)", 0); - __pyx_r = __pyx_pf___pyx_memoryviewslice_2__setstate_cython__(((struct __pyx_memoryviewslice_obj *)__pyx_v_self), ((PyObject *)__pyx_v___pyx_state)); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf___pyx_memoryviewslice_2__setstate_cython__(CYTHON_UNUSED struct __pyx_memoryviewslice_obj *__pyx_v_self, CYTHON_UNUSED PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__setstate_cython__", 0); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_t_1 = __Pyx_PyObject_Call(__pyx_builtin_TypeError, __pyx_tuple__19, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 4, __pyx_L1_error) - - /* "(tree fragment)":3 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): # <<<<<<<<<<<<<< - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - - /* function exit code 
*/ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView._memoryviewslice.__setstate_cython__", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1001 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - -static PyObject *__pyx_memoryview_fromslice(__Pyx_memviewslice __pyx_v_memviewslice, int __pyx_v_ndim, PyObject *(*__pyx_v_to_object_func)(char *), int (*__pyx_v_to_dtype_func)(char *, PyObject *), int __pyx_v_dtype_is_object) { - struct __pyx_memoryviewslice_obj *__pyx_v_result = 0; - Py_ssize_t __pyx_v_suboffset; - PyObject *__pyx_v_length = NULL; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - __Pyx_TypeInfo *__pyx_t_4; - Py_buffer __pyx_t_5; - Py_ssize_t *__pyx_t_6; - Py_ssize_t *__pyx_t_7; - Py_ssize_t *__pyx_t_8; - Py_ssize_t __pyx_t_9; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_fromslice", 0); - - /* "View.MemoryView":1009 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - __pyx_t_1 = ((((PyObject *)__pyx_v_memviewslice.memview) == Py_None) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1010 - * - * if memviewslice.memview == Py_None: - * return None # <<<<<<<<<<<<<< - * - * - */ - __Pyx_XDECREF(__pyx_r); - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - - /* "View.MemoryView":1009 - * cdef _memoryviewslice result - * - * if memviewslice.memview == Py_None: # <<<<<<<<<<<<<< - * return None - * - */ - } - - /* "View.MemoryView":1015 - * - * - * result = _memoryviewslice(None, 0, dtype_is_object) # <<<<<<<<<<<<<< - * - * result.from_slice = memviewslice - */ - __pyx_t_2 = __Pyx_PyBool_FromLong(__pyx_v_dtype_is_object); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1015, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyTuple_New(3); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1015, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(Py_None); - __Pyx_GIVEREF(Py_None); - PyTuple_SET_ITEM(__pyx_t_3, 0, Py_None); - __Pyx_INCREF(__pyx_int_0); - __Pyx_GIVEREF(__pyx_int_0); - PyTuple_SET_ITEM(__pyx_t_3, 1, __pyx_int_0); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_3, 2, __pyx_t_2); - __pyx_t_2 = 0; - __pyx_t_2 = __Pyx_PyObject_Call(((PyObject *)__pyx_memoryviewslice_type), __pyx_t_3, NULL); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1015, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __pyx_v_result = ((struct __pyx_memoryviewslice_obj *)__pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1017 - * result = _memoryviewslice(None, 0, dtype_is_object) - * - * result.from_slice = memviewslice # <<<<<<<<<<<<<< - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - */ - __pyx_v_result->from_slice = __pyx_v_memviewslice; - - /* "View.MemoryView":1018 - * - * result.from_slice = memviewslice - * __PYX_INC_MEMVIEW(&memviewslice, 1) # <<<<<<<<<<<<<< - * - * result.from_object = ( memviewslice.memview).base - */ - __PYX_INC_MEMVIEW((&__pyx_v_memviewslice), 1); - - /* "View.MemoryView":1020 - * __PYX_INC_MEMVIEW(&memviewslice, 1) - * - * result.from_object = ( memviewslice.memview).base # <<<<<<<<<<<<<< - * result.typeinfo = 
memviewslice.memview.typeinfo - * - */ - __pyx_t_2 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v_memviewslice.memview), __pyx_n_s_base); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1020, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_GIVEREF(__pyx_t_2); - __Pyx_GOTREF(__pyx_v_result->from_object); - __Pyx_DECREF(__pyx_v_result->from_object); - __pyx_v_result->from_object = __pyx_t_2; - __pyx_t_2 = 0; - - /* "View.MemoryView":1021 - * - * result.from_object = ( memviewslice.memview).base - * result.typeinfo = memviewslice.memview.typeinfo # <<<<<<<<<<<<<< - * - * result.view = memviewslice.memview.view - */ - __pyx_t_4 = __pyx_v_memviewslice.memview->typeinfo; - __pyx_v_result->__pyx_base.typeinfo = __pyx_t_4; - - /* "View.MemoryView":1023 - * result.typeinfo = memviewslice.memview.typeinfo - * - * result.view = memviewslice.memview.view # <<<<<<<<<<<<<< - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - */ - __pyx_t_5 = __pyx_v_memviewslice.memview->view; - __pyx_v_result->__pyx_base.view = __pyx_t_5; - - /* "View.MemoryView":1024 - * - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data # <<<<<<<<<<<<<< - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - */ - __pyx_v_result->__pyx_base.view.buf = ((void *)__pyx_v_memviewslice.data); - - /* "View.MemoryView":1025 - * result.view = memviewslice.memview.view - * result.view.buf = memviewslice.data - * result.view.ndim = ndim # <<<<<<<<<<<<<< - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) - */ - __pyx_v_result->__pyx_base.view.ndim = __pyx_v_ndim; - - /* "View.MemoryView":1026 - * result.view.buf = memviewslice.data - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None # <<<<<<<<<<<<<< - * Py_INCREF(Py_None) - * - */ - ((Py_buffer *)(&__pyx_v_result->__pyx_base.view))->obj = Py_None; - - /* "View.MemoryView":1027 - * result.view.ndim = ndim - * (<__pyx_buffer *> &result.view).obj = Py_None - * Py_INCREF(Py_None) # <<<<<<<<<<<<<< - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - */ - Py_INCREF(Py_None); - - /* "View.MemoryView":1029 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - __pyx_t_1 = ((((struct __pyx_memoryview_obj *)__pyx_v_memviewslice.memview)->flags & PyBUF_WRITABLE) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1030 - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: - * result.flags = PyBUF_RECORDS # <<<<<<<<<<<<<< - * else: - * result.flags = PyBUF_RECORDS_RO - */ - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS; - - /* "View.MemoryView":1029 - * Py_INCREF(Py_None) - * - * if (memviewslice.memview).flags & PyBUF_WRITABLE: # <<<<<<<<<<<<<< - * result.flags = PyBUF_RECORDS - * else: - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1032 - * result.flags = PyBUF_RECORDS - * else: - * result.flags = PyBUF_RECORDS_RO # <<<<<<<<<<<<<< - * - * result.view.shape = result.from_slice.shape - */ - /*else*/ { - __pyx_v_result->__pyx_base.flags = PyBUF_RECORDS_RO; - } - __pyx_L4:; - - /* "View.MemoryView":1034 - * result.flags = PyBUF_RECORDS_RO - * - * result.view.shape = result.from_slice.shape # <<<<<<<<<<<<<< - * result.view.strides = result.from_slice.strides - * - */ - __pyx_v_result->__pyx_base.view.shape = ((Py_ssize_t *)__pyx_v_result->from_slice.shape); - - /* "View.MemoryView":1035 - * - * result.view.shape = result.from_slice.shape - * result.view.strides = 
result.from_slice.strides # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_result->__pyx_base.view.strides = ((Py_ssize_t *)__pyx_v_result->from_slice.strides); - - /* "View.MemoryView":1038 - * - * - * result.view.suboffsets = NULL # <<<<<<<<<<<<<< - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - */ - __pyx_v_result->__pyx_base.view.suboffsets = NULL; - - /* "View.MemoryView":1039 - * - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: # <<<<<<<<<<<<<< - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - */ - __pyx_t_7 = (__pyx_v_result->from_slice.suboffsets + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->from_slice.suboffsets; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_v_suboffset = (__pyx_t_6[0]); - - /* "View.MemoryView":1040 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - __pyx_t_1 = ((__pyx_v_suboffset >= 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1041 - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_result->__pyx_base.view.suboffsets = ((Py_ssize_t *)__pyx_v_result->from_slice.suboffsets); - - /* "View.MemoryView":1042 - * if suboffset >= 0: - * result.view.suboffsets = result.from_slice.suboffsets - * break # <<<<<<<<<<<<<< - * - * result.view.len = result.view.itemsize - */ - goto __pyx_L6_break; - - /* "View.MemoryView":1040 - * result.view.suboffsets = NULL - * for suboffset in result.from_slice.suboffsets[:ndim]: - * if suboffset >= 0: # <<<<<<<<<<<<<< - * result.view.suboffsets = result.from_slice.suboffsets - * break - */ - } - } - __pyx_L6_break:; - - /* "View.MemoryView":1044 - * break - * - * result.view.len = result.view.itemsize # <<<<<<<<<<<<<< - * for length in result.view.shape[:ndim]: - * result.view.len *= length - */ - __pyx_t_9 = __pyx_v_result->__pyx_base.view.itemsize; - __pyx_v_result->__pyx_base.view.len = __pyx_t_9; - - /* "View.MemoryView":1045 - * - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: # <<<<<<<<<<<<<< - * result.view.len *= length - * - */ - __pyx_t_7 = (__pyx_v_result->__pyx_base.view.shape + __pyx_v_ndim); - for (__pyx_t_8 = __pyx_v_result->__pyx_base.view.shape; __pyx_t_8 < __pyx_t_7; __pyx_t_8++) { - __pyx_t_6 = __pyx_t_8; - __pyx_t_2 = PyInt_FromSsize_t((__pyx_t_6[0])); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1045, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_XDECREF_SET(__pyx_v_length, __pyx_t_2); - __pyx_t_2 = 0; - - /* "View.MemoryView":1046 - * result.view.len = result.view.itemsize - * for length in result.view.shape[:ndim]: - * result.view.len *= length # <<<<<<<<<<<<<< - * - * result.to_object_func = to_object_func - */ - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_result->__pyx_base.view.len); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1046, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyNumber_InPlaceMultiply(__pyx_t_2, __pyx_v_length); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1046, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __pyx_t_9 = __Pyx_PyIndex_AsSsize_t(__pyx_t_3); if (unlikely((__pyx_t_9 == (Py_ssize_t)-1) && PyErr_Occurred())) __PYX_ERR(1, 1046, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - 
__pyx_v_result->__pyx_base.view.len = __pyx_t_9; - } - - /* "View.MemoryView":1048 - * result.view.len *= length - * - * result.to_object_func = to_object_func # <<<<<<<<<<<<<< - * result.to_dtype_func = to_dtype_func - * - */ - __pyx_v_result->to_object_func = __pyx_v_to_object_func; - - /* "View.MemoryView":1049 - * - * result.to_object_func = to_object_func - * result.to_dtype_func = to_dtype_func # <<<<<<<<<<<<<< - * - * return result - */ - __pyx_v_result->to_dtype_func = __pyx_v_to_dtype_func; - - /* "View.MemoryView":1051 - * result.to_dtype_func = to_dtype_func - * - * return result # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(((PyObject *)__pyx_v_result)); - __pyx_r = ((PyObject *)__pyx_v_result); - goto __pyx_L0; - - /* "View.MemoryView":1001 - * - * @cname('__pyx_memoryview_fromslice') - * cdef memoryview_fromslice(__Pyx_memviewslice memviewslice, # <<<<<<<<<<<<<< - * int ndim, - * object (*to_object_func)(char *), - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.memoryview_fromslice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_result); - __Pyx_XDECREF(__pyx_v_length); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1054 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - -static __Pyx_memviewslice *__pyx_memoryview_get_slice_from_memoryview(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_mslice) { - struct __pyx_memoryviewslice_obj *__pyx_v_obj = 0; - __Pyx_memviewslice *__pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *__pyx_t_3 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("get_slice_from_memview", 0); - - /* "View.MemoryView":1057 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1058 - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): - * obj = memview # <<<<<<<<<<<<<< - * return &obj.from_slice - * else: - */ - if (!(likely(((((PyObject *)__pyx_v_memview)) == Py_None) || likely(__Pyx_TypeTest(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type))))) __PYX_ERR(1, 1058, __pyx_L1_error) - __pyx_t_3 = ((PyObject *)__pyx_v_memview); - __Pyx_INCREF(__pyx_t_3); - __pyx_v_obj = ((struct __pyx_memoryviewslice_obj *)__pyx_t_3); - __pyx_t_3 = 0; - - /* "View.MemoryView":1059 - * if isinstance(memview, _memoryviewslice): - * obj = memview - * return &obj.from_slice # <<<<<<<<<<<<<< - * else: - * slice_copy(memview, mslice) - */ - __pyx_r = (&__pyx_v_obj->from_slice); - goto __pyx_L0; - - /* "View.MemoryView":1057 - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * obj = memview - * return &obj.from_slice - */ - } - - /* "View.MemoryView":1061 - * return 
&obj.from_slice - * else: - * slice_copy(memview, mslice) # <<<<<<<<<<<<<< - * return mslice - * - */ - /*else*/ { - __pyx_memoryview_slice_copy(__pyx_v_memview, __pyx_v_mslice); - - /* "View.MemoryView":1062 - * else: - * slice_copy(memview, mslice) - * return mslice # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_slice_copy') - */ - __pyx_r = __pyx_v_mslice; - goto __pyx_L0; - } - - /* "View.MemoryView":1054 - * - * @cname('__pyx_memoryview_get_slice_from_memoryview') - * cdef __Pyx_memviewslice *get_slice_from_memview(memoryview memview, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *mslice) except NULL: - * cdef _memoryviewslice obj - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_3); - __Pyx_AddTraceback("View.MemoryView.get_slice_from_memview", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF((PyObject *)__pyx_v_obj); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1065 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - -static void __pyx_memoryview_slice_copy(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_dst) { - int __pyx_v_dim; - Py_ssize_t *__pyx_v_shape; - Py_ssize_t *__pyx_v_strides; - Py_ssize_t *__pyx_v_suboffsets; - __Pyx_RefNannyDeclarations - Py_ssize_t *__pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - Py_ssize_t __pyx_t_5; - __Pyx_RefNannySetupContext("slice_copy", 0); - - /* "View.MemoryView":1069 - * cdef (Py_ssize_t*) shape, strides, suboffsets - * - * shape = memview.view.shape # <<<<<<<<<<<<<< - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets - */ - __pyx_t_1 = __pyx_v_memview->view.shape; - __pyx_v_shape = __pyx_t_1; - - /* "View.MemoryView":1070 - * - * shape = memview.view.shape - * strides = memview.view.strides # <<<<<<<<<<<<<< - * suboffsets = memview.view.suboffsets - * - */ - __pyx_t_1 = __pyx_v_memview->view.strides; - __pyx_v_strides = __pyx_t_1; - - /* "View.MemoryView":1071 - * shape = memview.view.shape - * strides = memview.view.strides - * suboffsets = memview.view.suboffsets # <<<<<<<<<<<<<< - * - * dst.memview = <__pyx_memoryview *> memview - */ - __pyx_t_1 = __pyx_v_memview->view.suboffsets; - __pyx_v_suboffsets = __pyx_t_1; - - /* "View.MemoryView":1073 - * suboffsets = memview.view.suboffsets - * - * dst.memview = <__pyx_memoryview *> memview # <<<<<<<<<<<<<< - * dst.data = memview.view.buf - * - */ - __pyx_v_dst->memview = ((struct __pyx_memoryview_obj *)__pyx_v_memview); - - /* "View.MemoryView":1074 - * - * dst.memview = <__pyx_memoryview *> memview - * dst.data = memview.view.buf # <<<<<<<<<<<<<< - * - * for dim in range(memview.view.ndim): - */ - __pyx_v_dst->data = ((char *)__pyx_v_memview->view.buf); - - /* "View.MemoryView":1076 - * dst.data = memview.view.buf - * - * for dim in range(memview.view.ndim): # <<<<<<<<<<<<<< - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - */ - __pyx_t_2 = __pyx_v_memview->view.ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_dim = __pyx_t_4; - - /* "View.MemoryView":1077 - * - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] # <<<<<<<<<<<<<< - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - */ - (__pyx_v_dst->shape[__pyx_v_dim]) = 
(__pyx_v_shape[__pyx_v_dim]); - - /* "View.MemoryView":1078 - * for dim in range(memview.view.ndim): - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] # <<<<<<<<<<<<<< - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 - * - */ - (__pyx_v_dst->strides[__pyx_v_dim]) = (__pyx_v_strides[__pyx_v_dim]); - - /* "View.MemoryView":1079 - * dst.shape[dim] = shape[dim] - * dst.strides[dim] = strides[dim] - * dst.suboffsets[dim] = suboffsets[dim] if suboffsets else -1 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object') - */ - if ((__pyx_v_suboffsets != 0)) { - __pyx_t_5 = (__pyx_v_suboffsets[__pyx_v_dim]); - } else { - __pyx_t_5 = -1L; - } - (__pyx_v_dst->suboffsets[__pyx_v_dim]) = __pyx_t_5; - } - - /* "View.MemoryView":1065 - * - * @cname('__pyx_memoryview_slice_copy') - * cdef void slice_copy(memoryview memview, __Pyx_memviewslice *dst): # <<<<<<<<<<<<<< - * cdef int dim - * cdef (Py_ssize_t*) shape, strides, suboffsets - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1082 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - -static PyObject *__pyx_memoryview_copy_object(struct __pyx_memoryview_obj *__pyx_v_memview) { - __Pyx_memviewslice __pyx_v_memviewslice; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy", 0); - - /* "View.MemoryView":1085 - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) # <<<<<<<<<<<<<< - * return memoryview_copy_from_slice(memview, &memviewslice) - * - */ - __pyx_memoryview_slice_copy(__pyx_v_memview, (&__pyx_v_memviewslice)); - - /* "View.MemoryView":1086 - * cdef __Pyx_memviewslice memviewslice - * slice_copy(memview, &memviewslice) - * return memoryview_copy_from_slice(memview, &memviewslice) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_object_from_slice') - */ - __Pyx_XDECREF(__pyx_r); - __pyx_t_1 = __pyx_memoryview_copy_object_from_slice(__pyx_v_memview, (&__pyx_v_memviewslice)); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1086, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_r = __pyx_t_1; - __pyx_t_1 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1082 - * - * @cname('__pyx_memoryview_copy_object') - * cdef memoryview_copy(memoryview memview): # <<<<<<<<<<<<<< - * "Create a new memoryview object" - * cdef __Pyx_memviewslice memviewslice - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1089 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview object and slice. 
- */ - -static PyObject *__pyx_memoryview_copy_object_from_slice(struct __pyx_memoryview_obj *__pyx_v_memview, __Pyx_memviewslice *__pyx_v_memviewslice) { - PyObject *(*__pyx_v_to_object_func)(char *); - int (*__pyx_v_to_dtype_func)(char *, PyObject *); - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - int __pyx_t_2; - PyObject *(*__pyx_t_3)(char *); - int (*__pyx_t_4)(char *, PyObject *); - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("memoryview_copy_from_slice", 0); - - /* "View.MemoryView":1096 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - __pyx_t_1 = __Pyx_TypeCheck(((PyObject *)__pyx_v_memview), __pyx_memoryviewslice_type); - __pyx_t_2 = (__pyx_t_1 != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1097 - * - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func # <<<<<<<<<<<<<< - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - */ - __pyx_t_3 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_object_func; - __pyx_v_to_object_func = __pyx_t_3; - - /* "View.MemoryView":1098 - * if isinstance(memview, _memoryviewslice): - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func # <<<<<<<<<<<<<< - * else: - * to_object_func = NULL - */ - __pyx_t_4 = ((struct __pyx_memoryviewslice_obj *)__pyx_v_memview)->to_dtype_func; - __pyx_v_to_dtype_func = __pyx_t_4; - - /* "View.MemoryView":1096 - * cdef int (*to_dtype_func)(char *, object) except 0 - * - * if isinstance(memview, _memoryviewslice): # <<<<<<<<<<<<<< - * to_object_func = (<_memoryviewslice> memview).to_object_func - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1100 - * to_dtype_func = (<_memoryviewslice> memview).to_dtype_func - * else: - * to_object_func = NULL # <<<<<<<<<<<<<< - * to_dtype_func = NULL - * - */ - /*else*/ { - __pyx_v_to_object_func = NULL; - - /* "View.MemoryView":1101 - * else: - * to_object_func = NULL - * to_dtype_func = NULL # <<<<<<<<<<<<<< - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - */ - __pyx_v_to_dtype_func = NULL; - } - __pyx_L3:; - - /* "View.MemoryView":1103 - * to_dtype_func = NULL - * - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, # <<<<<<<<<<<<<< - * to_object_func, to_dtype_func, - * memview.dtype_is_object) - */ - __Pyx_XDECREF(__pyx_r); - - /* "View.MemoryView":1105 - * return memoryview_fromslice(memviewslice[0], memview.view.ndim, - * to_object_func, to_dtype_func, - * memview.dtype_is_object) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_5 = __pyx_memoryview_fromslice((__pyx_v_memviewslice[0]), __pyx_v_memview->view.ndim, __pyx_v_to_object_func, __pyx_v_to_dtype_func, __pyx_v_memview->dtype_is_object); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 1103, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __pyx_r = __pyx_t_5; - __pyx_t_5 = 0; - goto __pyx_L0; - - /* "View.MemoryView":1089 - * - * @cname('__pyx_memoryview_copy_object_from_slice') - * cdef memoryview_copy_from_slice(memoryview memview, __Pyx_memviewslice *memviewslice): # <<<<<<<<<<<<<< - * """ - * Create a new memoryview object from a given memoryview 
object and slice. - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_from_slice", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "View.MemoryView":1111 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<< - * if arg < 0: - * return -arg - */ - -static Py_ssize_t abs_py_ssize_t(Py_ssize_t __pyx_v_arg) { - Py_ssize_t __pyx_r; - int __pyx_t_1; - - /* "View.MemoryView":1112 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: # <<<<<<<<<<<<<< - * return -arg - * else: - */ - __pyx_t_1 = ((__pyx_v_arg < 0) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1113 - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: - * return -arg # <<<<<<<<<<<<<< - * else: - * return arg - */ - __pyx_r = (-__pyx_v_arg); - goto __pyx_L0; - - /* "View.MemoryView":1112 - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: - * if arg < 0: # <<<<<<<<<<<<<< - * return -arg - * else: - */ - } - - /* "View.MemoryView":1115 - * return -arg - * else: - * return arg # <<<<<<<<<<<<<< - * - * @cname('__pyx_get_best_slice_order') - */ - /*else*/ { - __pyx_r = __pyx_v_arg; - goto __pyx_L0; - } - - /* "View.MemoryView":1111 - * - * - * cdef Py_ssize_t abs_py_ssize_t(Py_ssize_t arg) nogil: # <<<<<<<<<<<<<< - * if arg < 0: - * return -arg - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1118 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. 
- */ - -static char __pyx_get_best_slice_order(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim) { - int __pyx_v_i; - Py_ssize_t __pyx_v_c_stride; - Py_ssize_t __pyx_v_f_stride; - char __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1123 - * """ - * cdef int i - * cdef Py_ssize_t c_stride = 0 # <<<<<<<<<<<<<< - * cdef Py_ssize_t f_stride = 0 - * - */ - __pyx_v_c_stride = 0; - - /* "View.MemoryView":1124 - * cdef int i - * cdef Py_ssize_t c_stride = 0 - * cdef Py_ssize_t f_stride = 0 # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_f_stride = 0; - - /* "View.MemoryView":1126 - * cdef Py_ssize_t f_stride = 0 - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1127 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1128 - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_c_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1129 - * if mslice.shape[i] > 1: - * c_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - goto __pyx_L4_break; - - /* "View.MemoryView":1127 - * - * for i in range(ndim - 1, -1, -1): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * c_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L4_break:; - - /* "View.MemoryView":1131 - * break - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - */ - __pyx_t_1 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_1; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1132 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - __pyx_t_2 = (((__pyx_v_mslice->shape[__pyx_v_i]) > 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1133 - * for i in range(ndim): - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] # <<<<<<<<<<<<<< - * break - * - */ - __pyx_v_f_stride = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1134 - * if mslice.shape[i] > 1: - * f_stride = mslice.strides[i] - * break # <<<<<<<<<<<<<< - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - */ - goto __pyx_L7_break; - - /* "View.MemoryView":1132 - * - * for i in range(ndim): - * if mslice.shape[i] > 1: # <<<<<<<<<<<<<< - * f_stride = mslice.strides[i] - * break - */ - } - } - __pyx_L7_break:; - - /* "View.MemoryView":1136 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - __pyx_t_2 = ((abs_py_ssize_t(__pyx_v_c_stride) <= abs_py_ssize_t(__pyx_v_f_stride)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1137 - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): - * return 'C' # <<<<<<<<<<<<<< - * else: - * return 'F' - */ - __pyx_r = 'C'; - goto __pyx_L0; - - /* "View.MemoryView":1136 - * break - * - * if abs_py_ssize_t(c_stride) <= abs_py_ssize_t(f_stride): # <<<<<<<<<<<<<< - * return 'C' - * else: - */ - } - - /* "View.MemoryView":1139 - * return 'C' - * else: - 
* return 'F' # <<<<<<<<<<<<<< - * - * @cython.cdivision(True) - */ - /*else*/ { - __pyx_r = 'F'; - goto __pyx_L0; - } - - /* "View.MemoryView":1118 - * - * @cname('__pyx_get_best_slice_order') - * cdef char get_best_order(__Pyx_memviewslice *mslice, int ndim) nogil: # <<<<<<<<<<<<<< - * """ - * Figure out the best memory access order for a given slice. - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1142 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - -static void _copy_strided_to_strided(char *__pyx_v_src_data, Py_ssize_t *__pyx_v_src_strides, char *__pyx_v_dst_data, Py_ssize_t *__pyx_v_dst_strides, Py_ssize_t *__pyx_v_src_shape, Py_ssize_t *__pyx_v_dst_shape, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - CYTHON_UNUSED Py_ssize_t __pyx_v_src_extent; - Py_ssize_t __pyx_v_dst_extent; - Py_ssize_t __pyx_v_src_stride; - Py_ssize_t __pyx_v_dst_stride; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - Py_ssize_t __pyx_t_4; - Py_ssize_t __pyx_t_5; - Py_ssize_t __pyx_t_6; - - /* "View.MemoryView":1149 - * - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - */ - __pyx_v_src_extent = (__pyx_v_src_shape[0]); - - /* "View.MemoryView":1150 - * cdef Py_ssize_t i - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] - */ - __pyx_v_dst_extent = (__pyx_v_dst_shape[0]); - - /* "View.MemoryView":1151 - * cdef Py_ssize_t src_extent = src_shape[0] - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - */ - __pyx_v_src_stride = (__pyx_v_src_strides[0]); - - /* "View.MemoryView":1152 - * cdef Py_ssize_t dst_extent = dst_shape[0] - * cdef Py_ssize_t src_stride = src_strides[0] - * cdef Py_ssize_t dst_stride = dst_strides[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_dst_stride = (__pyx_v_dst_strides[0]); - - /* "View.MemoryView":1154 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - __pyx_t_1 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1155 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - __pyx_t_2 = ((__pyx_v_src_stride > 0) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - __pyx_t_2 = ((__pyx_v_dst_stride > 0) != 0); - if (__pyx_t_2) { - } else { - __pyx_t_1 = __pyx_t_2; - goto __pyx_L5_bool_binop_done; - } - - /* "View.MemoryView":1156 - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - */ - __pyx_t_2 = (((size_t)__pyx_v_src_stride) == __pyx_v_itemsize); - if (__pyx_t_2) { - __pyx_t_2 = (__pyx_v_itemsize == ((size_t)__pyx_v_dst_stride)); - } - __pyx_t_3 = (__pyx_t_2 != 
0); - __pyx_t_1 = __pyx_t_3; - __pyx_L5_bool_binop_done:; - - /* "View.MemoryView":1155 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - if (__pyx_t_1) { - - /* "View.MemoryView":1157 - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, (__pyx_v_itemsize * __pyx_v_dst_extent))); - - /* "View.MemoryView":1155 - * - * if ndim == 1: - * if (src_stride > 0 and dst_stride > 0 and # <<<<<<<<<<<<<< - * src_stride == itemsize == dst_stride): - * memcpy(dst_data, src_data, itemsize * dst_extent) - */ - goto __pyx_L4; - } - - /* "View.MemoryView":1159 - * memcpy(dst_data, src_data, itemsize * dst_extent) - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - */ - /*else*/ { - __pyx_t_4 = __pyx_v_dst_extent; - __pyx_t_5 = __pyx_t_4; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1160 - * else: - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) # <<<<<<<<<<<<<< - * src_data += src_stride - * dst_data += dst_stride - */ - (void)(memcpy(__pyx_v_dst_data, __pyx_v_src_data, __pyx_v_itemsize)); - - /* "View.MemoryView":1161 - * for i in range(dst_extent): - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * else: - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1162 - * memcpy(dst_data, src_data, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * else: - * for i in range(dst_extent): - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L4:; - - /* "View.MemoryView":1154 - * cdef Py_ssize_t dst_stride = dst_strides[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * if (src_stride > 0 and dst_stride > 0 and - * src_stride == itemsize == dst_stride): - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1164 - * dst_data += dst_stride - * else: - * for i in range(dst_extent): # <<<<<<<<<<<<<< - * _copy_strided_to_strided(src_data, src_strides + 1, - * dst_data, dst_strides + 1, - */ - /*else*/ { - __pyx_t_4 = __pyx_v_dst_extent; - __pyx_t_5 = __pyx_t_4; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1165 - * else: - * for i in range(dst_extent): - * _copy_strided_to_strided(src_data, src_strides + 1, # <<<<<<<<<<<<<< - * dst_data, dst_strides + 1, - * src_shape + 1, dst_shape + 1, - */ - _copy_strided_to_strided(__pyx_v_src_data, (__pyx_v_src_strides + 1), __pyx_v_dst_data, (__pyx_v_dst_strides + 1), (__pyx_v_src_shape + 1), (__pyx_v_dst_shape + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize); - - /* "View.MemoryView":1169 - * src_shape + 1, dst_shape + 1, - * ndim - 1, itemsize) - * src_data += src_stride # <<<<<<<<<<<<<< - * dst_data += dst_stride - * - */ - __pyx_v_src_data = (__pyx_v_src_data + __pyx_v_src_stride); - - /* "View.MemoryView":1170 - * ndim - 1, itemsize) - * src_data += src_stride - * dst_data += dst_stride # <<<<<<<<<<<<<< - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, - */ - __pyx_v_dst_data = (__pyx_v_dst_data + __pyx_v_dst_stride); - } - } - __pyx_L3:; - - 
/* "View.MemoryView":1142 - * - * @cython.cdivision(True) - * cdef void _copy_strided_to_strided(char *src_data, Py_ssize_t *src_strides, # <<<<<<<<<<<<<< - * char *dst_data, Py_ssize_t *dst_strides, - * Py_ssize_t *src_shape, Py_ssize_t *dst_shape, - */ - - /* function exit code */ -} - -/* "View.MemoryView":1172 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - */ - -static void copy_strided_to_strided(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize) { - - /* "View.MemoryView":1175 - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - * _copy_strided_to_strided(src.data, src.strides, dst.data, dst.strides, # <<<<<<<<<<<<<< - * src.shape, dst.shape, ndim, itemsize) - * - */ - _copy_strided_to_strided(__pyx_v_src->data, __pyx_v_src->strides, __pyx_v_dst->data, __pyx_v_dst->strides, __pyx_v_src->shape, __pyx_v_dst->shape, __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1172 - * dst_data += dst_stride - * - * cdef void copy_strided_to_strided(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *dst, - * int ndim, size_t itemsize) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1179 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - -static Py_ssize_t __pyx_memoryview_slice_get_size(__Pyx_memviewslice *__pyx_v_src, int __pyx_v_ndim) { - Py_ssize_t __pyx_v_shape; - Py_ssize_t __pyx_v_size; - Py_ssize_t __pyx_r; - Py_ssize_t __pyx_t_1; - Py_ssize_t *__pyx_t_2; - Py_ssize_t *__pyx_t_3; - Py_ssize_t *__pyx_t_4; - - /* "View.MemoryView":1181 - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize # <<<<<<<<<<<<<< - * - * for shape in src.shape[:ndim]: - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_size = __pyx_t_1; - - /* "View.MemoryView":1183 - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - * - * for shape in src.shape[:ndim]: # <<<<<<<<<<<<<< - * size *= shape - * - */ - __pyx_t_3 = (__pyx_v_src->shape + __pyx_v_ndim); - for (__pyx_t_4 = __pyx_v_src->shape; __pyx_t_4 < __pyx_t_3; __pyx_t_4++) { - __pyx_t_2 = __pyx_t_4; - __pyx_v_shape = (__pyx_t_2[0]); - - /* "View.MemoryView":1184 - * - * for shape in src.shape[:ndim]: - * size *= shape # <<<<<<<<<<<<<< - * - * return size - */ - __pyx_v_size = (__pyx_v_size * __pyx_v_shape); - } - - /* "View.MemoryView":1186 - * size *= shape - * - * return size # <<<<<<<<<<<<<< - * - * @cname('__pyx_fill_contig_strides_array') - */ - __pyx_r = __pyx_v_size; - goto __pyx_L0; - - /* "View.MemoryView":1179 - * - * @cname('__pyx_memoryview_slice_get_size') - * cdef Py_ssize_t slice_get_size(__Pyx_memviewslice *src, int ndim) nogil: # <<<<<<<<<<<<<< - * "Return the size of the memory occupied by the slice in number of bytes" - * cdef Py_ssize_t shape, size = src.memview.view.itemsize - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1189 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * 
Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) nogil: - */ - -static Py_ssize_t __pyx_fill_contig_strides_array(Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, Py_ssize_t __pyx_v_stride, int __pyx_v_ndim, char __pyx_v_order) { - int __pyx_v_idx; - Py_ssize_t __pyx_r; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - - /* "View.MemoryView":1198 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - __pyx_t_1 = ((__pyx_v_order == 'F') != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1199 - * - * if order == 'F': - * for idx in range(ndim): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - __pyx_t_2 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_idx = __pyx_t_4; - - /* "View.MemoryView":1200 - * if order == 'F': - * for idx in range(ndim): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * else: - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1201 - * for idx in range(ndim): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * else: - * for idx in range(ndim - 1, -1, -1): - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - - /* "View.MemoryView":1198 - * cdef int idx - * - * if order == 'F': # <<<<<<<<<<<<<< - * for idx in range(ndim): - * strides[idx] = stride - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1203 - * stride *= shape[idx] - * else: - * for idx in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * strides[idx] = stride - * stride *= shape[idx] - */ - /*else*/ { - for (__pyx_t_2 = (__pyx_v_ndim - 1); __pyx_t_2 > -1; __pyx_t_2-=1) { - __pyx_v_idx = __pyx_t_2; - - /* "View.MemoryView":1204 - * else: - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride # <<<<<<<<<<<<<< - * stride *= shape[idx] - * - */ - (__pyx_v_strides[__pyx_v_idx]) = __pyx_v_stride; - - /* "View.MemoryView":1205 - * for idx in range(ndim - 1, -1, -1): - * strides[idx] = stride - * stride *= shape[idx] # <<<<<<<<<<<<<< - * - * return stride - */ - __pyx_v_stride = (__pyx_v_stride * (__pyx_v_shape[__pyx_v_idx])); - } - } - __pyx_L3:; - - /* "View.MemoryView":1207 - * stride *= shape[idx] - * - * return stride # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_data_to_temp') - */ - __pyx_r = __pyx_v_stride; - goto __pyx_L0; - - /* "View.MemoryView":1189 - * - * @cname('__pyx_fill_contig_strides_array') - * cdef Py_ssize_t fill_contig_strides_array( # <<<<<<<<<<<<<< - * Py_ssize_t *shape, Py_ssize_t *strides, Py_ssize_t stride, - * int ndim, char order) nogil: - */ - - /* function exit code */ - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1210 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - -static void *__pyx_memoryview_copy_data_to_temp(__Pyx_memviewslice *__pyx_v_src, __Pyx_memviewslice *__pyx_v_tmpslice, char __pyx_v_order, int __pyx_v_ndim) { - int __pyx_v_i; - void *__pyx_v_result; - size_t __pyx_v_itemsize; - size_t __pyx_v_size; - void *__pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - struct __pyx_memoryview_obj *__pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":1221 - * cdef void *result - * - * cdef 
size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef size_t size = slice_get_size(src, ndim) - * - */ - __pyx_t_1 = __pyx_v_src->memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1222 - * - * cdef size_t itemsize = src.memview.view.itemsize - * cdef size_t size = slice_get_size(src, ndim) # <<<<<<<<<<<<<< - * - * result = malloc(size) - */ - __pyx_v_size = __pyx_memoryview_slice_get_size(__pyx_v_src, __pyx_v_ndim); - - /* "View.MemoryView":1224 - * cdef size_t size = slice_get_size(src, ndim) - * - * result = malloc(size) # <<<<<<<<<<<<<< - * if not result: - * _err(MemoryError, NULL) - */ - __pyx_v_result = malloc(__pyx_v_size); - - /* "View.MemoryView":1225 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err(MemoryError, NULL) - * - */ - __pyx_t_2 = ((!(__pyx_v_result != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1226 - * result = malloc(size) - * if not result: - * _err(MemoryError, NULL) # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_3 = __pyx_memoryview_err(__pyx_builtin_MemoryError, NULL); if (unlikely(__pyx_t_3 == ((int)-1))) __PYX_ERR(1, 1226, __pyx_L1_error) - - /* "View.MemoryView":1225 - * - * result = malloc(size) - * if not result: # <<<<<<<<<<<<<< - * _err(MemoryError, NULL) - * - */ - } - - /* "View.MemoryView":1229 - * - * - * tmpslice.data = result # <<<<<<<<<<<<<< - * tmpslice.memview = src.memview - * for i in range(ndim): - */ - __pyx_v_tmpslice->data = ((char *)__pyx_v_result); - - /* "View.MemoryView":1230 - * - * tmpslice.data = result - * tmpslice.memview = src.memview # <<<<<<<<<<<<<< - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - */ - __pyx_t_4 = __pyx_v_src->memview; - __pyx_v_tmpslice->memview = __pyx_t_4; - - /* "View.MemoryView":1231 - * tmpslice.data = result - * tmpslice.memview = src.memview - * for i in range(ndim): # <<<<<<<<<<<<<< - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1232 - * tmpslice.memview = src.memview - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] # <<<<<<<<<<<<<< - * tmpslice.suboffsets[i] = -1 - * - */ - (__pyx_v_tmpslice->shape[__pyx_v_i]) = (__pyx_v_src->shape[__pyx_v_i]); - - /* "View.MemoryView":1233 - * for i in range(ndim): - * tmpslice.shape[i] = src.shape[i] - * tmpslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, - */ - (__pyx_v_tmpslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1235 - * tmpslice.suboffsets[i] = -1 - * - * fill_contig_strides_array(&tmpslice.shape[0], &tmpslice.strides[0], itemsize, # <<<<<<<<<<<<<< - * ndim, order) - * - */ - (void)(__pyx_fill_contig_strides_array((&(__pyx_v_tmpslice->shape[0])), (&(__pyx_v_tmpslice->strides[0])), __pyx_v_itemsize, __pyx_v_ndim, __pyx_v_order)); - - /* "View.MemoryView":1239 - * - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 - */ - __pyx_t_3 = __pyx_v_ndim; - __pyx_t_5 = __pyx_t_3; - for (__pyx_t_6 = 0; __pyx_t_6 < __pyx_t_5; __pyx_t_6+=1) { - __pyx_v_i = __pyx_t_6; - - /* "View.MemoryView":1240 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - __pyx_t_2 = (((__pyx_v_tmpslice->shape[__pyx_v_i]) == 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1241 - * for i 
in range(ndim): - * if tmpslice.shape[i] == 1: - * tmpslice.strides[i] = 0 # <<<<<<<<<<<<<< - * - * if slice_is_contig(src[0], order, ndim): - */ - (__pyx_v_tmpslice->strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1240 - * - * for i in range(ndim): - * if tmpslice.shape[i] == 1: # <<<<<<<<<<<<<< - * tmpslice.strides[i] = 0 - * - */ - } - } - - /* "View.MemoryView":1243 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig((__pyx_v_src[0]), __pyx_v_order, __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1244 - * - * if slice_is_contig(src[0], order, ndim): - * memcpy(result, src.data, size) # <<<<<<<<<<<<<< - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - */ - (void)(memcpy(__pyx_v_result, __pyx_v_src->data, __pyx_v_size)); - - /* "View.MemoryView":1243 - * tmpslice.strides[i] = 0 - * - * if slice_is_contig(src[0], order, ndim): # <<<<<<<<<<<<<< - * memcpy(result, src.data, size) - * else: - */ - goto __pyx_L9; - } - - /* "View.MemoryView":1246 - * memcpy(result, src.data, size) - * else: - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) # <<<<<<<<<<<<<< - * - * return result - */ - /*else*/ { - copy_strided_to_strided(__pyx_v_src, __pyx_v_tmpslice, __pyx_v_ndim, __pyx_v_itemsize); - } - __pyx_L9:; - - /* "View.MemoryView":1248 - * copy_strided_to_strided(src, tmpslice, ndim, itemsize) - * - * return result # <<<<<<<<<<<<<< - * - * - */ - __pyx_r = __pyx_v_result; - goto __pyx_L0; - - /* "View.MemoryView":1210 - * - * @cname('__pyx_memoryview_copy_data_to_temp') - * cdef void *copy_data_to_temp(__Pyx_memviewslice *src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice *tmpslice, - * char order, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.copy_data_to_temp", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = NULL; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1253 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - */ - -static int __pyx_memoryview_err_extents(int __pyx_v_i, Py_ssize_t __pyx_v_extent1, Py_ssize_t __pyx_v_extent2) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_extents", 0); - - /* "View.MemoryView":1256 - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - * (i, extent1, extent2)) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err_dim') - */ - __pyx_t_1 = __Pyx_PyInt_From_int(__pyx_v_i); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1256, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = PyInt_FromSsize_t(__pyx_v_extent1); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1256, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = PyInt_FromSsize_t(__pyx_v_extent2); if 
(unlikely(!__pyx_t_3)) __PYX_ERR(1, 1256, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyTuple_New(3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1256, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_GIVEREF(__pyx_t_1); - PyTuple_SET_ITEM(__pyx_t_4, 0, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_2); - PyTuple_SET_ITEM(__pyx_t_4, 1, __pyx_t_2); - __Pyx_GIVEREF(__pyx_t_3); - PyTuple_SET_ITEM(__pyx_t_4, 2, __pyx_t_3); - __pyx_t_1 = 0; - __pyx_t_2 = 0; - __pyx_t_3 = 0; - - /* "View.MemoryView":1255 - * cdef int _err_extents(int i, Py_ssize_t extent1, - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % # <<<<<<<<<<<<<< - * (i, extent1, extent2)) - * - */ - __pyx_t_3 = __Pyx_PyString_Format(__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_t_4); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __pyx_t_4 = __Pyx_PyObject_CallOneArg(__pyx_builtin_ValueError, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1255, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(1, 1255, __pyx_L1_error) - - /* "View.MemoryView":1253 - * - * @cname('__pyx_memoryview_err_extents') - * cdef int _err_extents(int i, Py_ssize_t extent1, # <<<<<<<<<<<<<< - * Py_ssize_t extent2) except -1 with gil: - * raise ValueError("got differing extents in dimension %d (got %d and %d)" % - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView._err_extents", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1259 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii') % dim) - * - */ - -static int __pyx_memoryview_err_dim(PyObject *__pyx_v_error, char *__pyx_v_msg, int __pyx_v_dim) { - int __pyx_r; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err_dim", 0); - __Pyx_INCREF(__pyx_v_error); - - /* "View.MemoryView":1260 - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: - * raise error(msg.decode('ascii') % dim) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_err') - */ - __pyx_t_2 = __Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1260, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __pyx_t_3 = __Pyx_PyInt_From_int(__pyx_v_dim); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1260, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __pyx_t_4 = PyUnicode_Format(__pyx_t_2, __pyx_t_3); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 1260, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_INCREF(__pyx_v_error); - __pyx_t_3 
= __pyx_v_error; __pyx_t_2 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_3))) { - __pyx_t_2 = PyMethod_GET_SELF(__pyx_t_3); - if (likely(__pyx_t_2)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_3); - __Pyx_INCREF(__pyx_t_2); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_3, function); - } - } - __pyx_t_1 = (__pyx_t_2) ? __Pyx_PyObject_Call2Args(__pyx_t_3, __pyx_t_2, __pyx_t_4) : __Pyx_PyObject_CallOneArg(__pyx_t_3, __pyx_t_4); - __Pyx_XDECREF(__pyx_t_2); __pyx_t_2 = 0; - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1260, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - __Pyx_Raise(__pyx_t_1, 0, 0, 0); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __PYX_ERR(1, 1260, __pyx_L1_error) - - /* "View.MemoryView":1259 - * - * @cname('__pyx_memoryview_err_dim') - * cdef int _err_dim(object error, char *msg, int dim) except -1 with gil: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii') % dim) - * - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_AddTraceback("View.MemoryView._err_dim", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_error); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1263 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<< - * if msg != NULL: - * raise error(msg.decode('ascii')) - */ - -static int __pyx_memoryview_err(PyObject *__pyx_v_error, char *__pyx_v_msg) { - int __pyx_r; - __Pyx_RefNannyDeclarations - int __pyx_t_1; - PyObject *__pyx_t_2 = NULL; - PyObject *__pyx_t_3 = NULL; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("_err", 0); - __Pyx_INCREF(__pyx_v_error); - - /* "View.MemoryView":1264 - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii')) - * else: - */ - __pyx_t_1 = ((__pyx_v_msg != NULL) != 0); - if (unlikely(__pyx_t_1)) { - - /* "View.MemoryView":1265 - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: - * raise error(msg.decode('ascii')) # <<<<<<<<<<<<<< - * else: - * raise error - */ - __pyx_t_3 = __Pyx_decode_c_string(__pyx_v_msg, 0, strlen(__pyx_v_msg), NULL, NULL, PyUnicode_DecodeASCII); if (unlikely(!__pyx_t_3)) __PYX_ERR(1, 1265, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_3); - __Pyx_INCREF(__pyx_v_error); - __pyx_t_4 = __pyx_v_error; __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_4))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_4); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_4); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_4, function); - } - } - __pyx_t_2 = (__pyx_t_5) ? 
__Pyx_PyObject_Call2Args(__pyx_t_4, __pyx_t_5, __pyx_t_3) : __Pyx_PyObject_CallOneArg(__pyx_t_4, __pyx_t_3); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - __Pyx_DECREF(__pyx_t_3); __pyx_t_3 = 0; - if (unlikely(!__pyx_t_2)) __PYX_ERR(1, 1265, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_2); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __Pyx_Raise(__pyx_t_2, 0, 0, 0); - __Pyx_DECREF(__pyx_t_2); __pyx_t_2 = 0; - __PYX_ERR(1, 1265, __pyx_L1_error) - - /* "View.MemoryView":1264 - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: - * if msg != NULL: # <<<<<<<<<<<<<< - * raise error(msg.decode('ascii')) - * else: - */ - } - - /* "View.MemoryView":1267 - * raise error(msg.decode('ascii')) - * else: - * raise error # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_copy_contents') - */ - /*else*/ { - __Pyx_Raise(__pyx_v_error, 0, 0, 0); - __PYX_ERR(1, 1267, __pyx_L1_error) - } - - /* "View.MemoryView":1263 - * - * @cname('__pyx_memoryview_err') - * cdef int _err(object error, char *msg) except -1 with gil: # <<<<<<<<<<<<<< - * if msg != NULL: - * raise error(msg.decode('ascii')) - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_2); - __Pyx_XDECREF(__pyx_t_3); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_AddTraceback("View.MemoryView._err", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = -1; - __Pyx_XDECREF(__pyx_v_error); - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - return __pyx_r; -} - -/* "View.MemoryView":1270 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - -static int __pyx_memoryview_copy_contents(__Pyx_memviewslice __pyx_v_src, __Pyx_memviewslice __pyx_v_dst, int __pyx_v_src_ndim, int __pyx_v_dst_ndim, int __pyx_v_dtype_is_object) { - void *__pyx_v_tmpdata; - size_t __pyx_v_itemsize; - int __pyx_v_i; - char __pyx_v_order; - int __pyx_v_broadcasting; - int __pyx_v_direct_copy; - __Pyx_memviewslice __pyx_v_tmp; - int __pyx_v_ndim; - int __pyx_r; - Py_ssize_t __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - int __pyx_t_6; - void *__pyx_t_7; - int __pyx_t_8; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - - /* "View.MemoryView":1278 - * Check for overlapping memory and verify the shapes. 
- * """ - * cdef void *tmpdata = NULL # <<<<<<<<<<<<<< - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - */ - __pyx_v_tmpdata = NULL; - - /* "View.MemoryView":1279 - * """ - * cdef void *tmpdata = NULL - * cdef size_t itemsize = src.memview.view.itemsize # <<<<<<<<<<<<<< - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - */ - __pyx_t_1 = __pyx_v_src.memview->view.itemsize; - __pyx_v_itemsize = __pyx_t_1; - - /* "View.MemoryView":1281 - * cdef size_t itemsize = src.memview.view.itemsize - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) # <<<<<<<<<<<<<< - * cdef bint broadcasting = False - * cdef bint direct_copy = False - */ - __pyx_v_order = __pyx_get_best_slice_order((&__pyx_v_src), __pyx_v_src_ndim); - - /* "View.MemoryView":1282 - * cdef int i - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False # <<<<<<<<<<<<<< - * cdef bint direct_copy = False - * cdef __Pyx_memviewslice tmp - */ - __pyx_v_broadcasting = 0; - - /* "View.MemoryView":1283 - * cdef char order = get_best_order(&src, src_ndim) - * cdef bint broadcasting = False - * cdef bint direct_copy = False # <<<<<<<<<<<<<< - * cdef __Pyx_memviewslice tmp - * - */ - __pyx_v_direct_copy = 0; - - /* "View.MemoryView":1286 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - __pyx_t_2 = ((__pyx_v_src_ndim < __pyx_v_dst_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1287 - * - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_src), __pyx_v_src_ndim, __pyx_v_dst_ndim); - - /* "View.MemoryView":1286 - * cdef __Pyx_memviewslice tmp - * - * if src_ndim < dst_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1288 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - __pyx_t_2 = ((__pyx_v_dst_ndim < __pyx_v_src_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1289 - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: - * broadcast_leading(&dst, dst_ndim, src_ndim) # <<<<<<<<<<<<<< - * - * cdef int ndim = max(src_ndim, dst_ndim) - */ - __pyx_memoryview_broadcast_leading((&__pyx_v_dst), __pyx_v_dst_ndim, __pyx_v_src_ndim); - - /* "View.MemoryView":1288 - * if src_ndim < dst_ndim: - * broadcast_leading(&src, src_ndim, dst_ndim) - * elif dst_ndim < src_ndim: # <<<<<<<<<<<<<< - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - */ - } - __pyx_L3:; - - /* "View.MemoryView":1291 - * broadcast_leading(&dst, dst_ndim, src_ndim) - * - * cdef int ndim = max(src_ndim, dst_ndim) # <<<<<<<<<<<<<< - * - * for i in range(ndim): - */ - __pyx_t_3 = __pyx_v_dst_ndim; - __pyx_t_4 = __pyx_v_src_ndim; - if (((__pyx_t_3 > __pyx_t_4) != 0)) { - __pyx_t_5 = __pyx_t_3; - } else { - __pyx_t_5 = __pyx_t_4; - } - __pyx_v_ndim = __pyx_t_5; - - /* "View.MemoryView":1293 - * cdef int ndim = max(src_ndim, dst_ndim) - * - * for i in range(ndim): # <<<<<<<<<<<<<< - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - */ - __pyx_t_5 = __pyx_v_ndim; - __pyx_t_3 = __pyx_t_5; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - 
__pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1294 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) != (__pyx_v_dst.shape[__pyx_v_i])) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1295 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - __pyx_t_2 = (((__pyx_v_src.shape[__pyx_v_i]) == 1) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1296 - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: - * broadcasting = True # <<<<<<<<<<<<<< - * src.strides[i] = 0 - * else: - */ - __pyx_v_broadcasting = 1; - - /* "View.MemoryView":1297 - * if src.shape[i] == 1: - * broadcasting = True - * src.strides[i] = 0 # <<<<<<<<<<<<<< - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) - */ - (__pyx_v_src.strides[__pyx_v_i]) = 0; - - /* "View.MemoryView":1295 - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: - * if src.shape[i] == 1: # <<<<<<<<<<<<<< - * broadcasting = True - * src.strides[i] = 0 - */ - goto __pyx_L7; - } - - /* "View.MemoryView":1299 - * src.strides[i] = 0 - * else: - * _err_extents(i, dst.shape[i], src.shape[i]) # <<<<<<<<<<<<<< - * - * if src.suboffsets[i] >= 0: - */ - /*else*/ { - __pyx_t_6 = __pyx_memoryview_err_extents(__pyx_v_i, (__pyx_v_dst.shape[__pyx_v_i]), (__pyx_v_src.shape[__pyx_v_i])); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1299, __pyx_L1_error) - } - __pyx_L7:; - - /* "View.MemoryView":1294 - * - * for i in range(ndim): - * if src.shape[i] != dst.shape[i]: # <<<<<<<<<<<<<< - * if src.shape[i] == 1: - * broadcasting = True - */ - } - - /* "View.MemoryView":1301 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - */ - __pyx_t_2 = (((__pyx_v_src.suboffsets[__pyx_v_i]) >= 0) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1302 - * - * if src.suboffsets[i] >= 0: - * _err_dim(ValueError, "Dimension %d is not direct", i) # <<<<<<<<<<<<<< - * - * if slices_overlap(&src, &dst, ndim, itemsize): - */ - __pyx_t_6 = __pyx_memoryview_err_dim(__pyx_builtin_ValueError, ((char *)"Dimension %d is not direct"), __pyx_v_i); if (unlikely(__pyx_t_6 == ((int)-1))) __PYX_ERR(1, 1302, __pyx_L1_error) - - /* "View.MemoryView":1301 - * _err_extents(i, dst.shape[i], src.shape[i]) - * - * if src.suboffsets[i] >= 0: # <<<<<<<<<<<<<< - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - */ - } - } - - /* "View.MemoryView":1304 - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - __pyx_t_2 = (__pyx_slices_overlap((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1306 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - __pyx_t_2 = ((!(__pyx_memviewslice_is_contig(__pyx_v_src, __pyx_v_order, __pyx_v_ndim) != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1307 - * - * if not slice_is_contig(src, order, ndim): - * order = get_best_order(&dst, ndim) # <<<<<<<<<<<<<< - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - */ - __pyx_v_order = 
__pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim); - - /* "View.MemoryView":1306 - * if slices_overlap(&src, &dst, ndim, itemsize): - * - * if not slice_is_contig(src, order, ndim): # <<<<<<<<<<<<<< - * order = get_best_order(&dst, ndim) - * - */ - } - - /* "View.MemoryView":1309 - * order = get_best_order(&dst, ndim) - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) # <<<<<<<<<<<<<< - * src = tmp - * - */ - __pyx_t_7 = __pyx_memoryview_copy_data_to_temp((&__pyx_v_src), (&__pyx_v_tmp), __pyx_v_order, __pyx_v_ndim); if (unlikely(__pyx_t_7 == ((void *)NULL))) __PYX_ERR(1, 1309, __pyx_L1_error) - __pyx_v_tmpdata = __pyx_t_7; - - /* "View.MemoryView":1310 - * - * tmpdata = copy_data_to_temp(&src, &tmp, order, ndim) - * src = tmp # <<<<<<<<<<<<<< - * - * if not broadcasting: - */ - __pyx_v_src = __pyx_v_tmp; - - /* "View.MemoryView":1304 - * _err_dim(ValueError, "Dimension %d is not direct", i) - * - * if slices_overlap(&src, &dst, ndim, itemsize): # <<<<<<<<<<<<<< - * - * if not slice_is_contig(src, order, ndim): - */ - } - - /* "View.MemoryView":1312 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = ((!(__pyx_v_broadcasting != 0)) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1315 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'C', __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1316 - * - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) # <<<<<<<<<<<<<< - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'C', __pyx_v_ndim); - - /* "View.MemoryView":1315 - * - * - * if slice_is_contig(src, 'C', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - */ - goto __pyx_L12; - } - - /* "View.MemoryView":1317 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - __pyx_t_2 = (__pyx_memviewslice_is_contig(__pyx_v_src, 'F', __pyx_v_ndim) != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1318 - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): - * direct_copy = slice_is_contig(dst, 'F', ndim) # <<<<<<<<<<<<<< - * - * if direct_copy: - */ - __pyx_v_direct_copy = __pyx_memviewslice_is_contig(__pyx_v_dst, 'F', __pyx_v_ndim); - - /* "View.MemoryView":1317 - * if slice_is_contig(src, 'C', ndim): - * direct_copy = slice_is_contig(dst, 'C', ndim) - * elif slice_is_contig(src, 'F', ndim): # <<<<<<<<<<<<<< - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - */ - } - __pyx_L12:; - - /* "View.MemoryView":1320 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - __pyx_t_2 = (__pyx_v_direct_copy != 0); - if (__pyx_t_2) { - - /* "View.MemoryView":1322 - * if direct_copy: - * - * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - 
/* "View.MemoryView":1323 - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) - */ - (void)(memcpy(__pyx_v_dst.data, __pyx_v_src.data, __pyx_memoryview_slice_get_size((&__pyx_v_src), __pyx_v_ndim))); - - /* "View.MemoryView":1324 - * refcount_copying(&dst, dtype_is_object, ndim, False) - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * free(tmpdata) - * return 0 - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1325 - * memcpy(dst.data, src.data, slice_get_size(&src, ndim)) - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1326 - * refcount_copying(&dst, dtype_is_object, ndim, True) - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * if order == 'F' == get_best_order(&dst, ndim): - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1320 - * direct_copy = slice_is_contig(dst, 'F', ndim) - * - * if direct_copy: # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - } - - /* "View.MemoryView":1312 - * src = tmp - * - * if not broadcasting: # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1328 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_2 = (__pyx_v_order == 'F'); - if (__pyx_t_2) { - __pyx_t_2 = ('F' == __pyx_get_best_slice_order((&__pyx_v_dst), __pyx_v_ndim)); - } - __pyx_t_8 = (__pyx_t_2 != 0); - if (__pyx_t_8) { - - /* "View.MemoryView":1331 - * - * - * transpose_memslice(&src) # <<<<<<<<<<<<<< - * transpose_memslice(&dst) - * - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_src)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1331, __pyx_L1_error) - - /* "View.MemoryView":1332 - * - * transpose_memslice(&src) - * transpose_memslice(&dst) # <<<<<<<<<<<<<< - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - */ - __pyx_t_5 = __pyx_memslice_transpose((&__pyx_v_dst)); if (unlikely(__pyx_t_5 == ((int)0))) __PYX_ERR(1, 1332, __pyx_L1_error) - - /* "View.MemoryView":1328 - * return 0 - * - * if order == 'F' == get_best_order(&dst, ndim): # <<<<<<<<<<<<<< - * - * - */ - } - - /* "View.MemoryView":1334 - * transpose_memslice(&dst) - * - * refcount_copying(&dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1335 - * - * refcount_copying(&dst, dtype_is_object, ndim, False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) # <<<<<<<<<<<<<< - * refcount_copying(&dst, dtype_is_object, ndim, True) - * - */ - copy_strided_to_strided((&__pyx_v_src), (&__pyx_v_dst), __pyx_v_ndim, __pyx_v_itemsize); - - /* "View.MemoryView":1336 - * refcount_copying(&dst, dtype_is_object, ndim, False) - * copy_strided_to_strided(&src, &dst, ndim, itemsize) - * refcount_copying(&dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * - * free(tmpdata) - */ - __pyx_memoryview_refcount_copying((&__pyx_v_dst), __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1338 - * refcount_copying(&dst, dtype_is_object, ndim, 
True) - * - * free(tmpdata) # <<<<<<<<<<<<<< - * return 0 - * - */ - free(__pyx_v_tmpdata); - - /* "View.MemoryView":1339 - * - * free(tmpdata) - * return 0 # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_broadcast_leading') - */ - __pyx_r = 0; - goto __pyx_L0; - - /* "View.MemoryView":1270 - * - * @cname('__pyx_memoryview_copy_contents') - * cdef int memoryview_copy_contents(__Pyx_memviewslice src, # <<<<<<<<<<<<<< - * __Pyx_memviewslice dst, - * int src_ndim, int dst_ndim, - */ - - /* function exit code */ - __pyx_L1_error:; - { - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_AddTraceback("View.MemoryView.memoryview_copy_contents", __pyx_clineno, __pyx_lineno, __pyx_filename); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif - } - __pyx_r = -1; - __pyx_L0:; - return __pyx_r; -} - -/* "View.MemoryView":1342 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) nogil: - */ - -static void __pyx_memoryview_broadcast_leading(__Pyx_memviewslice *__pyx_v_mslice, int __pyx_v_ndim, int __pyx_v_ndim_other) { - int __pyx_v_i; - int __pyx_v_offset; - int __pyx_t_1; - int __pyx_t_2; - int __pyx_t_3; - - /* "View.MemoryView":1346 - * int ndim_other) nogil: - * cdef int i - * cdef int offset = ndim_other - ndim # <<<<<<<<<<<<<< - * - * for i in range(ndim - 1, -1, -1): - */ - __pyx_v_offset = (__pyx_v_ndim_other - __pyx_v_ndim); - - /* "View.MemoryView":1348 - * cdef int offset = ndim_other - ndim - * - * for i in range(ndim - 1, -1, -1): # <<<<<<<<<<<<<< - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - */ - for (__pyx_t_1 = (__pyx_v_ndim - 1); __pyx_t_1 > -1; __pyx_t_1-=1) { - __pyx_v_i = __pyx_t_1; - - /* "View.MemoryView":1349 - * - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] # <<<<<<<<<<<<<< - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - */ - (__pyx_v_mslice->shape[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->shape[__pyx_v_i]); - - /* "View.MemoryView":1350 - * for i in range(ndim - 1, -1, -1): - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] # <<<<<<<<<<<<<< - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - */ - (__pyx_v_mslice->strides[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->strides[__pyx_v_i]); - - /* "View.MemoryView":1351 - * mslice.shape[i + offset] = mslice.shape[i] - * mslice.strides[i + offset] = mslice.strides[i] - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] # <<<<<<<<<<<<<< - * - * for i in range(offset): - */ - (__pyx_v_mslice->suboffsets[(__pyx_v_i + __pyx_v_offset)]) = (__pyx_v_mslice->suboffsets[__pyx_v_i]); - } - - /* "View.MemoryView":1353 - * mslice.suboffsets[i + offset] = mslice.suboffsets[i] - * - * for i in range(offset): # <<<<<<<<<<<<<< - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - */ - __pyx_t_1 = __pyx_v_offset; - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1354 - * - * for i in range(offset): - * mslice.shape[i] = 1 # <<<<<<<<<<<<<< - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 - */ - (__pyx_v_mslice->shape[__pyx_v_i]) = 1; - - /* "View.MemoryView":1355 - * for i in range(offset): - * mslice.shape[i] = 1 
- * mslice.strides[i] = mslice.strides[0] # <<<<<<<<<<<<<< - * mslice.suboffsets[i] = -1 - * - */ - (__pyx_v_mslice->strides[__pyx_v_i]) = (__pyx_v_mslice->strides[0]); - - /* "View.MemoryView":1356 - * mslice.shape[i] = 1 - * mslice.strides[i] = mslice.strides[0] - * mslice.suboffsets[i] = -1 # <<<<<<<<<<<<<< - * - * - */ - (__pyx_v_mslice->suboffsets[__pyx_v_i]) = -1L; - } - - /* "View.MemoryView":1342 - * - * @cname('__pyx_memoryview_broadcast_leading') - * cdef void broadcast_leading(__Pyx_memviewslice *mslice, # <<<<<<<<<<<<<< - * int ndim, - * int ndim_other) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1364 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<< - * int ndim, bint inc) nogil: - * - */ - -static void __pyx_memoryview_refcount_copying(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_dtype_is_object, int __pyx_v_ndim, int __pyx_v_inc) { - int __pyx_t_1; - - /* "View.MemoryView":1368 - * - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, - * dst.strides, ndim, inc) - */ - __pyx_t_1 = (__pyx_v_dtype_is_object != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1369 - * - * if dtype_is_object: - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, # <<<<<<<<<<<<<< - * dst.strides, ndim, inc) - * - */ - __pyx_memoryview_refcount_objects_in_slice_with_gil(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1368 - * - * - * if dtype_is_object: # <<<<<<<<<<<<<< - * refcount_objects_in_slice_with_gil(dst.data, dst.shape, - * dst.strides, ndim, inc) - */ - } - - /* "View.MemoryView":1364 - * - * @cname('__pyx_memoryview_refcount_copying') - * cdef void refcount_copying(__Pyx_memviewslice *dst, bint dtype_is_object, # <<<<<<<<<<<<<< - * int ndim, bint inc) nogil: - * - */ - - /* function exit code */ -} - -/* "View.MemoryView":1373 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - */ - -static void __pyx_memoryview_refcount_objects_in_slice_with_gil(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - __Pyx_RefNannyDeclarations - #ifdef WITH_THREAD - PyGILState_STATE __pyx_gilstate_save = __Pyx_PyGILState_Ensure(); - #endif - __Pyx_RefNannySetupContext("refcount_objects_in_slice_with_gil", 0); - - /* "View.MemoryView":1376 - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - * refcount_objects_in_slice(data, shape, strides, ndim, inc) # <<<<<<<<<<<<<< - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, __pyx_v_shape, __pyx_v_strides, __pyx_v_ndim, __pyx_v_inc); - - /* "View.MemoryView":1373 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice_with_gil') - * cdef void refcount_objects_in_slice_with_gil(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * bint inc) with gil: - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - #ifdef WITH_THREAD - __Pyx_PyGILState_Release(__pyx_gilstate_save); - #endif -} - -/* "View.MemoryView":1379 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * 
Py_ssize_t *strides, int ndim, bint inc): - * cdef Py_ssize_t i - */ - -static void __pyx_memoryview_refcount_objects_in_slice(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, int __pyx_v_inc) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - __Pyx_RefNannyDeclarations - Py_ssize_t __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - int __pyx_t_4; - __Pyx_RefNannySetupContext("refcount_objects_in_slice", 0); - - /* "View.MemoryView":1383 - * cdef Py_ssize_t i - * - * for i in range(shape[0]): # <<<<<<<<<<<<<< - * if ndim == 1: - * if inc: - */ - __pyx_t_1 = (__pyx_v_shape[0]); - __pyx_t_2 = __pyx_t_1; - for (__pyx_t_3 = 0; __pyx_t_3 < __pyx_t_2; __pyx_t_3+=1) { - __pyx_v_i = __pyx_t_3; - - /* "View.MemoryView":1384 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF(( data)[0]) - */ - __pyx_t_4 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":1385 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF(( data)[0]) - * else: - */ - __pyx_t_4 = (__pyx_v_inc != 0); - if (__pyx_t_4) { - - /* "View.MemoryView":1386 - * if ndim == 1: - * if inc: - * Py_INCREF(( data)[0]) # <<<<<<<<<<<<<< - * else: - * Py_DECREF(( data)[0]) - */ - Py_INCREF((((PyObject **)__pyx_v_data)[0])); - - /* "View.MemoryView":1385 - * for i in range(shape[0]): - * if ndim == 1: - * if inc: # <<<<<<<<<<<<<< - * Py_INCREF(( data)[0]) - * else: - */ - goto __pyx_L6; - } - - /* "View.MemoryView":1388 - * Py_INCREF(( data)[0]) - * else: - * Py_DECREF(( data)[0]) # <<<<<<<<<<<<<< - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, - */ - /*else*/ { - Py_DECREF((((PyObject **)__pyx_v_data)[0])); - } - __pyx_L6:; - - /* "View.MemoryView":1384 - * - * for i in range(shape[0]): - * if ndim == 1: # <<<<<<<<<<<<<< - * if inc: - * Py_INCREF(( data)[0]) - */ - goto __pyx_L5; - } - - /* "View.MemoryView":1390 - * Py_DECREF(( data)[0]) - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, # <<<<<<<<<<<<<< - * ndim - 1, inc) - * - */ - /*else*/ { - - /* "View.MemoryView":1391 - * else: - * refcount_objects_in_slice(data, shape + 1, strides + 1, - * ndim - 1, inc) # <<<<<<<<<<<<<< - * - * data += strides[0] - */ - __pyx_memoryview_refcount_objects_in_slice(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_inc); - } - __pyx_L5:; - - /* "View.MemoryView":1393 - * ndim - 1, inc) - * - * data += strides[0] # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + (__pyx_v_strides[0])); - } - - /* "View.MemoryView":1379 - * - * @cname('__pyx_memoryview_refcount_objects_in_slice') - * cdef void refcount_objects_in_slice(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, bint inc): - * cdef Py_ssize_t i - */ - - /* function exit code */ - __Pyx_RefNannyFinishContext(); -} - -/* "View.MemoryView":1399 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - */ - -static void __pyx_memoryview_slice_assign_scalar(__Pyx_memviewslice *__pyx_v_dst, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item, int __pyx_v_dtype_is_object) { - - /* "View.MemoryView":1402 - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - * refcount_copying(dst, dtype_is_object, ndim, False) # <<<<<<<<<<<<<< - * _slice_assign_scalar(dst.data, dst.shape, 
dst.strides, ndim, - * itemsize, item) - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 0); - - /* "View.MemoryView":1403 - * bint dtype_is_object) nogil: - * refcount_copying(dst, dtype_is_object, ndim, False) - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, # <<<<<<<<<<<<<< - * itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, True) - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_dst->data, __pyx_v_dst->shape, __pyx_v_dst->strides, __pyx_v_ndim, __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1405 - * _slice_assign_scalar(dst.data, dst.shape, dst.strides, ndim, - * itemsize, item) - * refcount_copying(dst, dtype_is_object, ndim, True) # <<<<<<<<<<<<<< - * - * - */ - __pyx_memoryview_refcount_copying(__pyx_v_dst, __pyx_v_dtype_is_object, __pyx_v_ndim, 1); - - /* "View.MemoryView":1399 - * - * @cname('__pyx_memoryview_slice_assign_scalar') - * cdef void slice_assign_scalar(__Pyx_memviewslice *dst, int ndim, # <<<<<<<<<<<<<< - * size_t itemsize, void *item, - * bint dtype_is_object) nogil: - */ - - /* function exit code */ -} - -/* "View.MemoryView":1409 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) nogil: - */ - -static void __pyx_memoryview__slice_assign_scalar(char *__pyx_v_data, Py_ssize_t *__pyx_v_shape, Py_ssize_t *__pyx_v_strides, int __pyx_v_ndim, size_t __pyx_v_itemsize, void *__pyx_v_item) { - CYTHON_UNUSED Py_ssize_t __pyx_v_i; - Py_ssize_t __pyx_v_stride; - Py_ssize_t __pyx_v_extent; - int __pyx_t_1; - Py_ssize_t __pyx_t_2; - Py_ssize_t __pyx_t_3; - Py_ssize_t __pyx_t_4; - - /* "View.MemoryView":1413 - * size_t itemsize, void *item) nogil: - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] # <<<<<<<<<<<<<< - * cdef Py_ssize_t extent = shape[0] - * - */ - __pyx_v_stride = (__pyx_v_strides[0]); - - /* "View.MemoryView":1414 - * cdef Py_ssize_t i - * cdef Py_ssize_t stride = strides[0] - * cdef Py_ssize_t extent = shape[0] # <<<<<<<<<<<<<< - * - * if ndim == 1: - */ - __pyx_v_extent = (__pyx_v_shape[0]); - - /* "View.MemoryView":1416 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - __pyx_t_1 = ((__pyx_v_ndim == 1) != 0); - if (__pyx_t_1) { - - /* "View.MemoryView":1417 - * - * if ndim == 1: - * for i in range(extent): # <<<<<<<<<<<<<< - * memcpy(data, item, itemsize) - * data += stride - */ - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1418 - * if ndim == 1: - * for i in range(extent): - * memcpy(data, item, itemsize) # <<<<<<<<<<<<<< - * data += stride - * else: - */ - (void)(memcpy(__pyx_v_data, __pyx_v_item, __pyx_v_itemsize)); - - /* "View.MemoryView":1419 - * for i in range(extent): - * memcpy(data, item, itemsize) - * data += stride # <<<<<<<<<<<<<< - * else: - * for i in range(extent): - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - - /* "View.MemoryView":1416 - * cdef Py_ssize_t extent = shape[0] - * - * if ndim == 1: # <<<<<<<<<<<<<< - * for i in range(extent): - * memcpy(data, item, itemsize) - */ - goto __pyx_L3; - } - - /* "View.MemoryView":1421 - * data += stride - * else: - * for i in range(extent): # <<<<<<<<<<<<<< - * _slice_assign_scalar(data, shape + 1, strides + 1, - * ndim - 1, 
itemsize, item) - */ - /*else*/ { - __pyx_t_2 = __pyx_v_extent; - __pyx_t_3 = __pyx_t_2; - for (__pyx_t_4 = 0; __pyx_t_4 < __pyx_t_3; __pyx_t_4+=1) { - __pyx_v_i = __pyx_t_4; - - /* "View.MemoryView":1422 - * else: - * for i in range(extent): - * _slice_assign_scalar(data, shape + 1, strides + 1, # <<<<<<<<<<<<<< - * ndim - 1, itemsize, item) - * data += stride - */ - __pyx_memoryview__slice_assign_scalar(__pyx_v_data, (__pyx_v_shape + 1), (__pyx_v_strides + 1), (__pyx_v_ndim - 1), __pyx_v_itemsize, __pyx_v_item); - - /* "View.MemoryView":1424 - * _slice_assign_scalar(data, shape + 1, strides + 1, - * ndim - 1, itemsize, item) - * data += stride # <<<<<<<<<<<<<< - * - * - */ - __pyx_v_data = (__pyx_v_data + __pyx_v_stride); - } - } - __pyx_L3:; - - /* "View.MemoryView":1409 - * - * @cname('__pyx_memoryview__slice_assign_scalar') - * cdef void _slice_assign_scalar(char *data, Py_ssize_t *shape, # <<<<<<<<<<<<<< - * Py_ssize_t *strides, int ndim, - * size_t itemsize, void *item) nogil: - */ - - /* function exit code */ -} - -/* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - -/* Python wrapper */ -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds); /*proto*/ -static PyMethodDef __pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum = {"__pyx_unpickle_Enum", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum, METH_VARARGS|METH_KEYWORDS, 0}; -static PyObject *__pyx_pw_15View_dot_MemoryView_1__pyx_unpickle_Enum(PyObject *__pyx_self, PyObject *__pyx_args, PyObject *__pyx_kwds) { - PyObject *__pyx_v___pyx_type = 0; - long __pyx_v___pyx_checksum; - PyObject *__pyx_v___pyx_state = 0; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - PyObject *__pyx_r = 0; - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum (wrapper)", 0); - { - static PyObject **__pyx_pyargnames[] = {&__pyx_n_s_pyx_type,&__pyx_n_s_pyx_checksum,&__pyx_n_s_pyx_state,0}; - PyObject* values[3] = {0,0,0}; - if (unlikely(__pyx_kwds)) { - Py_ssize_t kw_args; - const Py_ssize_t pos_args = PyTuple_GET_SIZE(__pyx_args); - switch (pos_args) { - case 3: values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - CYTHON_FALLTHROUGH; - case 2: values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - CYTHON_FALLTHROUGH; - case 1: values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - CYTHON_FALLTHROUGH; - case 0: break; - default: goto __pyx_L5_argtuple_error; - } - kw_args = PyDict_Size(__pyx_kwds); - switch (pos_args) { - case 0: - if (likely((values[0] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_type)) != 0)) kw_args--; - else goto __pyx_L5_argtuple_error; - CYTHON_FALLTHROUGH; - case 1: - if (likely((values[1] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_checksum)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 1); __PYX_ERR(1, 1, __pyx_L3_error) - } - CYTHON_FALLTHROUGH; - case 2: - if (likely((values[2] = __Pyx_PyDict_GetItemStr(__pyx_kwds, __pyx_n_s_pyx_state)) != 0)) kw_args--; - else { - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, 2); __PYX_ERR(1, 1, __pyx_L3_error) - } - } - if (unlikely(kw_args > 0)) { - if (unlikely(__Pyx_ParseOptionalKeywords(__pyx_kwds, __pyx_pyargnames, 0, values, pos_args, "__pyx_unpickle_Enum") < 0)) __PYX_ERR(1, 1, __pyx_L3_error) - } - } else if 
(PyTuple_GET_SIZE(__pyx_args) != 3) { - goto __pyx_L5_argtuple_error; - } else { - values[0] = PyTuple_GET_ITEM(__pyx_args, 0); - values[1] = PyTuple_GET_ITEM(__pyx_args, 1); - values[2] = PyTuple_GET_ITEM(__pyx_args, 2); - } - __pyx_v___pyx_type = values[0]; - __pyx_v___pyx_checksum = __Pyx_PyInt_As_long(values[1]); if (unlikely((__pyx_v___pyx_checksum == (long)-1) && PyErr_Occurred())) __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_v___pyx_state = values[2]; - } - goto __pyx_L4_argument_unpacking_done; - __pyx_L5_argtuple_error:; - __Pyx_RaiseArgtupleInvalid("__pyx_unpickle_Enum", 1, 3, 3, PyTuple_GET_SIZE(__pyx_args)); __PYX_ERR(1, 1, __pyx_L3_error) - __pyx_L3_error:; - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __Pyx_RefNannyFinishContext(); - return NULL; - __pyx_L4_argument_unpacking_done:; - __pyx_r = __pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(__pyx_self, __pyx_v___pyx_type, __pyx_v___pyx_checksum, __pyx_v___pyx_state); - - /* function exit code */ - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -static PyObject *__pyx_pf_15View_dot_MemoryView___pyx_unpickle_Enum(CYTHON_UNUSED PyObject *__pyx_self, PyObject *__pyx_v___pyx_type, long __pyx_v___pyx_checksum, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_v___pyx_PickleError = 0; - PyObject *__pyx_v___pyx_result = 0; - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - int __pyx_t_3; - PyObject *__pyx_t_4 = NULL; - PyObject *__pyx_t_5 = NULL; - PyObject *__pyx_t_6 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum", 0); - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum not in (0xb068931, 0x82a3537, 0x6ae9995): # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum) - */ - __pyx_t_1 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_2 = (__Pyx_PySequence_ContainsTF(__pyx_t_1, __pyx_tuple__20, Py_NE)); if (unlikely(__pyx_t_2 < 0)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_3 = (__pyx_t_2 != 0); - if (__pyx_t_3) { - - /* "(tree fragment)":5 - * cdef object __pyx_result - * if __pyx_checksum not in (0xb068931, 0x82a3537, 0x6ae9995): - * from pickle import PickleError as __pyx_PickleError # <<<<<<<<<<<<<< - * raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - */ - __pyx_t_1 = PyList_New(1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_n_s_PickleError); - __Pyx_GIVEREF(__pyx_n_s_PickleError); - PyList_SET_ITEM(__pyx_t_1, 0, __pyx_n_s_PickleError); - __pyx_t_4 = __Pyx_Import(__pyx_n_s_pickle, __pyx_t_1, 0); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_t_1 = __Pyx_ImportFrom(__pyx_t_4, __pyx_n_s_PickleError); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 5, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_INCREF(__pyx_t_1); - __pyx_v___pyx_PickleError = __pyx_t_1; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_DECREF(__pyx_t_4); 
__pyx_t_4 = 0; - - /* "(tree fragment)":6 - * if __pyx_checksum not in (0xb068931, 0x82a3537, 0x6ae9995): - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum) # <<<<<<<<<<<<<< - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: - */ - __pyx_t_1 = __Pyx_PyInt_From_long(__pyx_v___pyx_checksum); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = __Pyx_PyString_Format(__pyx_kp_s_Incompatible_checksums_0x_x_vs_0, __pyx_t_1); if (unlikely(!__pyx_t_5)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_5); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_INCREF(__pyx_v___pyx_PickleError); - __pyx_t_1 = __pyx_v___pyx_PickleError; __pyx_t_6 = NULL; - if (CYTHON_UNPACK_METHODS && unlikely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_6 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_6)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_6); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - } - } - __pyx_t_4 = (__pyx_t_6) ? __Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_6, __pyx_t_5) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); __pyx_t_6 = 0; - __Pyx_DECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 6, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __Pyx_Raise(__pyx_t_4, 0, 0, 0); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - __PYX_ERR(1, 6, __pyx_L1_error) - - /* "(tree fragment)":4 - * cdef object __pyx_PickleError - * cdef object __pyx_result - * if __pyx_checksum not in (0xb068931, 0x82a3537, 0x6ae9995): # <<<<<<<<<<<<<< - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum) - */ - } - - /* "(tree fragment)":7 - * from pickle import PickleError as __pyx_PickleError - * raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) # <<<<<<<<<<<<<< - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - */ - __pyx_t_1 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_MemviewEnum_type), __pyx_n_s_new); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __pyx_t_5 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_1))) { - __pyx_t_5 = PyMethod_GET_SELF(__pyx_t_1); - if (likely(__pyx_t_5)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_1); - __Pyx_INCREF(__pyx_t_5); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_1, function); - } - } - __pyx_t_4 = (__pyx_t_5) ? 
__Pyx_PyObject_Call2Args(__pyx_t_1, __pyx_t_5, __pyx_v___pyx_type) : __Pyx_PyObject_CallOneArg(__pyx_t_1, __pyx_v___pyx_type); - __Pyx_XDECREF(__pyx_t_5); __pyx_t_5 = 0; - if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 7, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - __pyx_v___pyx_result = __pyx_t_4; - __pyx_t_4 = 0; - - /* "(tree fragment)":8 - * raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - */ - __pyx_t_3 = (__pyx_v___pyx_state != Py_None); - __pyx_t_2 = (__pyx_t_3 != 0); - if (__pyx_t_2) { - - /* "(tree fragment)":9 - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) # <<<<<<<<<<<<<< - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - */ - if (!(likely(PyTuple_CheckExact(__pyx_v___pyx_state))||((__pyx_v___pyx_state) == Py_None)||(PyErr_Format(PyExc_TypeError, "Expected %.16s, got %.200s", "tuple", Py_TYPE(__pyx_v___pyx_state)->tp_name), 0))) __PYX_ERR(1, 9, __pyx_L1_error) - __pyx_t_4 = __pyx_unpickle_Enum__set_state(((struct __pyx_MemviewEnum_obj *)__pyx_v___pyx_result), ((PyObject*)__pyx_v___pyx_state)); if (unlikely(!__pyx_t_4)) __PYX_ERR(1, 9, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_4); - __Pyx_DECREF(__pyx_t_4); __pyx_t_4 = 0; - - /* "(tree fragment)":8 - * raise __pyx_PickleError("Incompatible checksums (0x%x vs (0xb068931, 0x82a3537, 0x6ae9995) = (name))" % __pyx_checksum) - * __pyx_result = Enum.__new__(__pyx_type) - * if __pyx_state is not None: # <<<<<<<<<<<<<< - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - */ - } - - /* "(tree fragment)":10 - * if __pyx_state is not None: - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result # <<<<<<<<<<<<<< - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - */ - __Pyx_XDECREF(__pyx_r); - __Pyx_INCREF(__pyx_v___pyx_result); - __pyx_r = __pyx_v___pyx_result; - goto __pyx_L0; - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - - /* function exit code */ - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_4); - __Pyx_XDECREF(__pyx_t_5); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = NULL; - __pyx_L0:; - __Pyx_XDECREF(__pyx_v___pyx_PickleError); - __Pyx_XDECREF(__pyx_v___pyx_result); - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} - -/* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - -static PyObject *__pyx_unpickle_Enum__set_state(struct __pyx_MemviewEnum_obj *__pyx_v___pyx_result, PyObject *__pyx_v___pyx_state) { - PyObject *__pyx_r = NULL; - __Pyx_RefNannyDeclarations - PyObject *__pyx_t_1 = NULL; - int __pyx_t_2; - Py_ssize_t 
__pyx_t_3; - int __pyx_t_4; - int __pyx_t_5; - PyObject *__pyx_t_6 = NULL; - PyObject *__pyx_t_7 = NULL; - PyObject *__pyx_t_8 = NULL; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__pyx_unpickle_Enum__set_state", 0); - - /* "(tree fragment)":12 - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] # <<<<<<<<<<<<<< - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 12, __pyx_L1_error) - } - __pyx_t_1 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 0, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 12, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __Pyx_GOTREF(__pyx_v___pyx_result->name); - __Pyx_DECREF(__pyx_v___pyx_result->name); - __pyx_v___pyx_result->name = __pyx_t_1; - __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "object of type 'NoneType' has no len()"); - __PYX_ERR(1, 13, __pyx_L1_error) - } - __pyx_t_3 = PyTuple_GET_SIZE(__pyx_v___pyx_state); if (unlikely(__pyx_t_3 == ((Py_ssize_t)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_4 = ((__pyx_t_3 > 1) != 0); - if (__pyx_t_4) { - } else { - __pyx_t_2 = __pyx_t_4; - goto __pyx_L4_bool_binop_done; - } - __pyx_t_4 = __Pyx_HasAttr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(__pyx_t_4 == ((int)-1))) __PYX_ERR(1, 13, __pyx_L1_error) - __pyx_t_5 = (__pyx_t_4 != 0); - __pyx_t_2 = __pyx_t_5; - __pyx_L4_bool_binop_done:; - if (__pyx_t_2) { - - /* "(tree fragment)":14 - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - * __pyx_result.__dict__.update(__pyx_state[1]) # <<<<<<<<<<<<<< - */ - __pyx_t_6 = __Pyx_PyObject_GetAttrStr(((PyObject *)__pyx_v___pyx_result), __pyx_n_s_dict); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_7 = __Pyx_PyObject_GetAttrStr(__pyx_t_6, __pyx_n_s_update); if (unlikely(!__pyx_t_7)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_7); - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(__pyx_v___pyx_state == Py_None)) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not subscriptable"); - __PYX_ERR(1, 14, __pyx_L1_error) - } - __pyx_t_6 = __Pyx_GetItemInt_Tuple(__pyx_v___pyx_state, 1, long, 1, __Pyx_PyInt_From_long, 0, 0, 1); if (unlikely(!__pyx_t_6)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_6); - __pyx_t_8 = NULL; - if (CYTHON_UNPACK_METHODS && likely(PyMethod_Check(__pyx_t_7))) { - __pyx_t_8 = PyMethod_GET_SELF(__pyx_t_7); - if (likely(__pyx_t_8)) { - PyObject* function = PyMethod_GET_FUNCTION(__pyx_t_7); - __Pyx_INCREF(__pyx_t_8); - __Pyx_INCREF(function); - __Pyx_DECREF_SET(__pyx_t_7, function); - } - } - __pyx_t_1 = (__pyx_t_8) ? 
__Pyx_PyObject_Call2Args(__pyx_t_7, __pyx_t_8, __pyx_t_6) : __Pyx_PyObject_CallOneArg(__pyx_t_7, __pyx_t_6); - __Pyx_XDECREF(__pyx_t_8); __pyx_t_8 = 0; - __Pyx_DECREF(__pyx_t_6); __pyx_t_6 = 0; - if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 14, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_DECREF(__pyx_t_7); __pyx_t_7 = 0; - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":13 - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): # <<<<<<<<<<<<<< - * __pyx_result.__dict__.update(__pyx_state[1]) - */ - } - - /* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - - /* function exit code */ - __pyx_r = Py_None; __Pyx_INCREF(Py_None); - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - __Pyx_XDECREF(__pyx_t_6); - __Pyx_XDECREF(__pyx_t_7); - __Pyx_XDECREF(__pyx_t_8); - __Pyx_AddTraceback("View.MemoryView.__pyx_unpickle_Enum__set_state", __pyx_clineno, __pyx_lineno, __pyx_filename); - __pyx_r = 0; - __pyx_L0:; - __Pyx_XGIVEREF(__pyx_r); - __Pyx_RefNannyFinishContext(); - return __pyx_r; -} -static struct __pyx_vtabstruct_array __pyx_vtable_array; - -static PyObject *__pyx_tp_new_array(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_array_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_array_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_array; - p->mode = ((PyObject*)Py_None); Py_INCREF(Py_None); - p->_format = ((PyObject*)Py_None); Py_INCREF(Py_None); - if (unlikely(__pyx_array___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_array(PyObject *o) { - struct __pyx_array_obj *p = (struct __pyx_array_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && (!PyType_IS_GC(Py_TYPE(o)) || !_PyGC_FINALIZED(o))) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_array___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->mode); - Py_CLEAR(p->_format); - (*Py_TYPE(o)->tp_free)(o); -} -static PyObject *__pyx_sq_item_array(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_array(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_array___setitem__(o, i, v); - } - else { - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); - return -1; - } -} - -static PyObject *__pyx_tp_getattro_array(PyObject *o, PyObject *n) { - PyObject *v = __Pyx_PyObject_GenericGetAttr(o, n); - if (!v && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - v = __pyx_array___getattr__(o, n); - } - return v; -} - 
-static PyObject *__pyx_getprop___pyx_array_memview(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_5array_7memview_1__get__(o); -} - -static PyMethodDef __pyx_methods_array[] = { - {"__getattr__", (PyCFunction)__pyx_array___getattr__, METH_O|METH_COEXIST, 0}, - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_array_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_array_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_array[] = { - {(char *)"memview", __pyx_getprop___pyx_array_memview, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PySequenceMethods __pyx_tp_as_sequence_array = { - __pyx_array___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_array, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_array = { - __pyx_array___len__, /*mp_length*/ - __pyx_array___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_array, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_array = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_array_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_array = { - PyVarObject_HEAD_INIT(0, 0) - "vits.monotonic_align.core.array", /*tp_name*/ - sizeof(struct __pyx_array_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_array, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - 0, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_array, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_array, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - __pyx_tp_getattro_array, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_array, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE, /*tp_flags*/ - 0, /*tp_doc*/ - 0, /*tp_traverse*/ - 0, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_array, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_array, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_array, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; - -static PyObject *__pyx_tp_new_Enum(PyTypeObject *t, CYTHON_UNUSED PyObject *a, CYTHON_UNUSED 
PyObject *k) { - struct __pyx_MemviewEnum_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_MemviewEnum_obj *)o); - p->name = Py_None; Py_INCREF(Py_None); - return o; -} - -static void __pyx_tp_dealloc_Enum(PyObject *o) { - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - Py_CLEAR(p->name); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_Enum(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - if (p->name) { - e = (*v)(p->name, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_Enum(PyObject *o) { - PyObject* tmp; - struct __pyx_MemviewEnum_obj *p = (struct __pyx_MemviewEnum_obj *)o; - tmp = ((PyObject*)p->name); - p->name = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - return 0; -} - -static PyMethodDef __pyx_methods_Enum[] = { - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_MemviewEnum_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static PyTypeObject __pyx_type___pyx_MemviewEnum = { - PyVarObject_HEAD_INIT(0, 0) - "vits.monotonic_align.core.Enum", /*tp_name*/ - sizeof(struct __pyx_MemviewEnum_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_Enum, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_MemviewEnum___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - 0, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_Enum, /*tp_traverse*/ - __pyx_tp_clear_Enum, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_Enum, /*tp_methods*/ - 0, /*tp_members*/ - 0, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - __pyx_MemviewEnum___init__, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_Enum, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -static struct __pyx_vtabstruct_memoryview __pyx_vtable_memoryview; - -static PyObject 
*__pyx_tp_new_memoryview(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryview_obj *p; - PyObject *o; - if (likely((t->tp_flags & Py_TPFLAGS_IS_ABSTRACT) == 0)) { - o = (*t->tp_alloc)(t, 0); - } else { - o = (PyObject *) PyBaseObject_Type.tp_new(t, __pyx_empty_tuple, 0); - } - if (unlikely(!o)) return 0; - p = ((struct __pyx_memoryview_obj *)o); - p->__pyx_vtab = __pyx_vtabptr_memoryview; - p->obj = Py_None; Py_INCREF(Py_None); - p->_size = Py_None; Py_INCREF(Py_None); - p->_array_interface = Py_None; Py_INCREF(Py_None); - p->view.obj = NULL; - if (unlikely(__pyx_memoryview___cinit__(o, a, k) < 0)) goto bad; - return o; - bad: - Py_DECREF(o); o = 0; - return NULL; -} - -static void __pyx_tp_dealloc_memoryview(PyObject *o) { - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryview___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->obj); - Py_CLEAR(p->_size); - Py_CLEAR(p->_array_interface); - (*Py_TYPE(o)->tp_free)(o); -} - -static int __pyx_tp_traverse_memoryview(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - if (p->obj) { - e = (*v)(p->obj, a); if (e) return e; - } - if (p->_size) { - e = (*v)(p->_size, a); if (e) return e; - } - if (p->_array_interface) { - e = (*v)(p->_array_interface, a); if (e) return e; - } - if (p->view.obj) { - e = (*v)(p->view.obj, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear_memoryview(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryview_obj *p = (struct __pyx_memoryview_obj *)o; - tmp = ((PyObject*)p->obj); - p->obj = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_size); - p->_size = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - tmp = ((PyObject*)p->_array_interface); - p->_array_interface = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - Py_CLEAR(p->view.obj); - return 0; -} -static PyObject *__pyx_sq_item_memoryview(PyObject *o, Py_ssize_t i) { - PyObject *r; - PyObject *x = PyInt_FromSsize_t(i); if(!x) return 0; - r = Py_TYPE(o)->tp_as_mapping->mp_subscript(o, x); - Py_DECREF(x); - return r; -} - -static int __pyx_mp_ass_subscript_memoryview(PyObject *o, PyObject *i, PyObject *v) { - if (v) { - return __pyx_memoryview___setitem__(o, i, v); - } - else { - PyErr_Format(PyExc_NotImplementedError, - "Subscript deletion not supported by %.200s", Py_TYPE(o)->tp_name); - return -1; - } -} - -static PyObject *__pyx_getprop___pyx_memoryview_T(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_1T_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4base_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_shape(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_5shape_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_strides(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_7strides_1__get__(o); -} - -static PyObject 
*__pyx_getprop___pyx_memoryview_suboffsets(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_10suboffsets_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_ndim(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4ndim_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_itemsize(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_8itemsize_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_nbytes(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_6nbytes_1__get__(o); -} - -static PyObject *__pyx_getprop___pyx_memoryview_size(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_10memoryview_4size_1__get__(o); -} - -static PyMethodDef __pyx_methods_memoryview[] = { - {"is_c_contig", (PyCFunction)__pyx_memoryview_is_c_contig, METH_NOARGS, 0}, - {"is_f_contig", (PyCFunction)__pyx_memoryview_is_f_contig, METH_NOARGS, 0}, - {"copy", (PyCFunction)__pyx_memoryview_copy, METH_NOARGS, 0}, - {"copy_fortran", (PyCFunction)__pyx_memoryview_copy_fortran, METH_NOARGS, 0}, - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryview_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets_memoryview[] = { - {(char *)"T", __pyx_getprop___pyx_memoryview_T, 0, (char *)0, 0}, - {(char *)"base", __pyx_getprop___pyx_memoryview_base, 0, (char *)0, 0}, - {(char *)"shape", __pyx_getprop___pyx_memoryview_shape, 0, (char *)0, 0}, - {(char *)"strides", __pyx_getprop___pyx_memoryview_strides, 0, (char *)0, 0}, - {(char *)"suboffsets", __pyx_getprop___pyx_memoryview_suboffsets, 0, (char *)0, 0}, - {(char *)"ndim", __pyx_getprop___pyx_memoryview_ndim, 0, (char *)0, 0}, - {(char *)"itemsize", __pyx_getprop___pyx_memoryview_itemsize, 0, (char *)0, 0}, - {(char *)"nbytes", __pyx_getprop___pyx_memoryview_nbytes, 0, (char *)0, 0}, - {(char *)"size", __pyx_getprop___pyx_memoryview_size, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PySequenceMethods __pyx_tp_as_sequence_memoryview = { - __pyx_memoryview___len__, /*sq_length*/ - 0, /*sq_concat*/ - 0, /*sq_repeat*/ - __pyx_sq_item_memoryview, /*sq_item*/ - 0, /*sq_slice*/ - 0, /*sq_ass_item*/ - 0, /*sq_ass_slice*/ - 0, /*sq_contains*/ - 0, /*sq_inplace_concat*/ - 0, /*sq_inplace_repeat*/ -}; - -static PyMappingMethods __pyx_tp_as_mapping_memoryview = { - __pyx_memoryview___len__, /*mp_length*/ - __pyx_memoryview___getitem__, /*mp_subscript*/ - __pyx_mp_ass_subscript_memoryview, /*mp_ass_subscript*/ -}; - -static PyBufferProcs __pyx_tp_as_buffer_memoryview = { - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getreadbuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getwritebuffer*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getsegcount*/ - #endif - #if PY_MAJOR_VERSION < 3 - 0, /*bf_getcharbuffer*/ - #endif - __pyx_memoryview_getbuffer, /*bf_getbuffer*/ - 0, /*bf_releasebuffer*/ -}; - -static PyTypeObject __pyx_type___pyx_memoryview = { - PyVarObject_HEAD_INIT(0, 0) - "vits.monotonic_align.core.memoryview", /*tp_name*/ - sizeof(struct __pyx_memoryview_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc_memoryview, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ 
- #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - __pyx_memoryview___repr__, /*tp_repr*/ - 0, /*tp_as_number*/ - &__pyx_tp_as_sequence_memoryview, /*tp_as_sequence*/ - &__pyx_tp_as_mapping_memoryview, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - __pyx_memoryview___str__, /*tp_str*/ - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - &__pyx_tp_as_buffer_memoryview, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - 0, /*tp_doc*/ - __pyx_tp_traverse_memoryview, /*tp_traverse*/ - __pyx_tp_clear_memoryview, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods_memoryview, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets_memoryview, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new_memoryview, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; -static struct __pyx_vtabstruct__memoryviewslice __pyx_vtable__memoryviewslice; - -static PyObject *__pyx_tp_new__memoryviewslice(PyTypeObject *t, PyObject *a, PyObject *k) { - struct __pyx_memoryviewslice_obj *p; - PyObject *o = __pyx_tp_new_memoryview(t, a, k); - if (unlikely(!o)) return 0; - p = ((struct __pyx_memoryviewslice_obj *)o); - p->__pyx_base.__pyx_vtab = (struct __pyx_vtabstruct_memoryview*)__pyx_vtabptr__memoryviewslice; - p->from_object = Py_None; Py_INCREF(Py_None); - p->from_slice.memview = NULL; - return o; -} - -static void __pyx_tp_dealloc__memoryviewslice(PyObject *o) { - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - #if CYTHON_USE_TP_FINALIZE - if (unlikely(PyType_HasFeature(Py_TYPE(o), Py_TPFLAGS_HAVE_FINALIZE) && Py_TYPE(o)->tp_finalize) && !_PyGC_FINALIZED(o)) { - if (PyObject_CallFinalizerFromDealloc(o)) return; - } - #endif - PyObject_GC_UnTrack(o); - { - PyObject *etype, *eval, *etb; - PyErr_Fetch(&etype, &eval, &etb); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) + 1); - __pyx_memoryviewslice___dealloc__(o); - __Pyx_SET_REFCNT(o, Py_REFCNT(o) - 1); - PyErr_Restore(etype, eval, etb); - } - Py_CLEAR(p->from_object); - PyObject_GC_Track(o); - __pyx_tp_dealloc_memoryview(o); -} - -static int __pyx_tp_traverse__memoryviewslice(PyObject *o, visitproc v, void *a) { - int e; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - e = __pyx_tp_traverse_memoryview(o, v, a); if (e) return e; - if (p->from_object) { - e = (*v)(p->from_object, a); if (e) return e; - } - return 0; -} - -static int __pyx_tp_clear__memoryviewslice(PyObject *o) { - PyObject* tmp; - struct __pyx_memoryviewslice_obj *p = (struct __pyx_memoryviewslice_obj *)o; - __pyx_tp_clear_memoryview(o); - tmp = ((PyObject*)p->from_object); - p->from_object = Py_None; Py_INCREF(Py_None); - Py_XDECREF(tmp); - __PYX_XDEC_MEMVIEW(&p->from_slice, 1); - return 0; -} 
- -static PyObject *__pyx_getprop___pyx_memoryviewslice_base(PyObject *o, CYTHON_UNUSED void *x) { - return __pyx_pw_15View_dot_MemoryView_16_memoryviewslice_4base_1__get__(o); -} - -static PyMethodDef __pyx_methods__memoryviewslice[] = { - {"__reduce_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_1__reduce_cython__, METH_NOARGS, 0}, - {"__setstate_cython__", (PyCFunction)__pyx_pw___pyx_memoryviewslice_3__setstate_cython__, METH_O, 0}, - {0, 0, 0, 0} -}; - -static struct PyGetSetDef __pyx_getsets__memoryviewslice[] = { - {(char *)"base", __pyx_getprop___pyx_memoryviewslice_base, 0, (char *)0, 0}, - {0, 0, 0, 0, 0} -}; - -static PyTypeObject __pyx_type___pyx_memoryviewslice = { - PyVarObject_HEAD_INIT(0, 0) - "vits.monotonic_align.core._memoryviewslice", /*tp_name*/ - sizeof(struct __pyx_memoryviewslice_obj), /*tp_basicsize*/ - 0, /*tp_itemsize*/ - __pyx_tp_dealloc__memoryviewslice, /*tp_dealloc*/ - #if PY_VERSION_HEX < 0x030800b4 - 0, /*tp_print*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 - 0, /*tp_vectorcall_offset*/ - #endif - 0, /*tp_getattr*/ - 0, /*tp_setattr*/ - #if PY_MAJOR_VERSION < 3 - 0, /*tp_compare*/ - #endif - #if PY_MAJOR_VERSION >= 3 - 0, /*tp_as_async*/ - #endif - #if CYTHON_COMPILING_IN_PYPY - __pyx_memoryview___repr__, /*tp_repr*/ - #else - 0, /*tp_repr*/ - #endif - 0, /*tp_as_number*/ - 0, /*tp_as_sequence*/ - 0, /*tp_as_mapping*/ - 0, /*tp_hash*/ - 0, /*tp_call*/ - #if CYTHON_COMPILING_IN_PYPY - __pyx_memoryview___str__, /*tp_str*/ - #else - 0, /*tp_str*/ - #endif - 0, /*tp_getattro*/ - 0, /*tp_setattro*/ - 0, /*tp_as_buffer*/ - Py_TPFLAGS_DEFAULT|Py_TPFLAGS_HAVE_VERSION_TAG|Py_TPFLAGS_CHECKTYPES|Py_TPFLAGS_HAVE_NEWBUFFER|Py_TPFLAGS_BASETYPE|Py_TPFLAGS_HAVE_GC, /*tp_flags*/ - "Internal class for passing memoryview slices to Python", /*tp_doc*/ - __pyx_tp_traverse__memoryviewslice, /*tp_traverse*/ - __pyx_tp_clear__memoryviewslice, /*tp_clear*/ - 0, /*tp_richcompare*/ - 0, /*tp_weaklistoffset*/ - 0, /*tp_iter*/ - 0, /*tp_iternext*/ - __pyx_methods__memoryviewslice, /*tp_methods*/ - 0, /*tp_members*/ - __pyx_getsets__memoryviewslice, /*tp_getset*/ - 0, /*tp_base*/ - 0, /*tp_dict*/ - 0, /*tp_descr_get*/ - 0, /*tp_descr_set*/ - 0, /*tp_dictoffset*/ - 0, /*tp_init*/ - 0, /*tp_alloc*/ - __pyx_tp_new__memoryviewslice, /*tp_new*/ - 0, /*tp_free*/ - 0, /*tp_is_gc*/ - 0, /*tp_bases*/ - 0, /*tp_mro*/ - 0, /*tp_cache*/ - 0, /*tp_subclasses*/ - 0, /*tp_weaklist*/ - 0, /*tp_del*/ - 0, /*tp_version_tag*/ - #if PY_VERSION_HEX >= 0x030400a1 - 0, /*tp_finalize*/ - #endif - #if PY_VERSION_HEX >= 0x030800b1 && (!CYTHON_COMPILING_IN_PYPY || PYPY_VERSION_NUM >= 0x07030800) - 0, /*tp_vectorcall*/ - #endif - #if PY_VERSION_HEX >= 0x030800b4 && PY_VERSION_HEX < 0x03090000 - 0, /*tp_print*/ - #endif - #if CYTHON_COMPILING_IN_PYPY && PY_VERSION_HEX >= 0x03090000 - 0, /*tp_pypy_flags*/ - #endif -}; - -static PyMethodDef __pyx_methods[] = { - {"maximum_path_c", (PyCFunction)(void*)(PyCFunctionWithKeywords)__pyx_pw_4vits_15monotonic_align_4core_1maximum_path_c, METH_VARARGS|METH_KEYWORDS, 0}, - {0, 0, 0, 0} -}; - -#if PY_MAJOR_VERSION >= 3 -#if CYTHON_PEP489_MULTI_PHASE_INIT -static PyObject* __pyx_pymod_create(PyObject *spec, PyModuleDef *def); /*proto*/ -static int __pyx_pymod_exec_core(PyObject* module); /*proto*/ -static PyModuleDef_Slot __pyx_moduledef_slots[] = { - {Py_mod_create, (void*)__pyx_pymod_create}, - {Py_mod_exec, (void*)__pyx_pymod_exec_core}, - {0, NULL} -}; -#endif - -static struct PyModuleDef __pyx_moduledef = { - PyModuleDef_HEAD_INIT, - "core", - 0, /* m_doc */ - #if 
CYTHON_PEP489_MULTI_PHASE_INIT - 0, /* m_size */ - #else - -1, /* m_size */ - #endif - __pyx_methods /* m_methods */, - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_moduledef_slots, /* m_slots */ - #else - NULL, /* m_reload */ - #endif - NULL, /* m_traverse */ - NULL, /* m_clear */ - NULL /* m_free */ -}; -#endif -#ifndef CYTHON_SMALL_CODE -#if defined(__clang__) - #define CYTHON_SMALL_CODE -#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3)) - #define CYTHON_SMALL_CODE __attribute__((cold)) -#else - #define CYTHON_SMALL_CODE -#endif -#endif - -static __Pyx_StringTabEntry __pyx_string_tab[] = { - {&__pyx_n_s_ASCII, __pyx_k_ASCII, sizeof(__pyx_k_ASCII), 0, 0, 1, 1}, - {&__pyx_kp_s_Buffer_view_does_not_expose_stri, __pyx_k_Buffer_view_does_not_expose_stri, sizeof(__pyx_k_Buffer_view_does_not_expose_stri), 0, 0, 1, 0}, - {&__pyx_kp_s_Can_only_create_a_buffer_that_is, __pyx_k_Can_only_create_a_buffer_that_is, sizeof(__pyx_k_Can_only_create_a_buffer_that_is), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_assign_to_read_only_memor, __pyx_k_Cannot_assign_to_read_only_memor, sizeof(__pyx_k_Cannot_assign_to_read_only_memor), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_create_writable_memory_vi, __pyx_k_Cannot_create_writable_memory_vi, sizeof(__pyx_k_Cannot_create_writable_memory_vi), 0, 0, 1, 0}, - {&__pyx_kp_s_Cannot_index_with_type_s, __pyx_k_Cannot_index_with_type_s, sizeof(__pyx_k_Cannot_index_with_type_s), 0, 0, 1, 0}, - {&__pyx_n_s_Ellipsis, __pyx_k_Ellipsis, sizeof(__pyx_k_Ellipsis), 0, 0, 1, 1}, - {&__pyx_kp_s_Empty_shape_tuple_for_cython_arr, __pyx_k_Empty_shape_tuple_for_cython_arr, sizeof(__pyx_k_Empty_shape_tuple_for_cython_arr), 0, 0, 1, 0}, - {&__pyx_kp_s_Incompatible_checksums_0x_x_vs_0, __pyx_k_Incompatible_checksums_0x_x_vs_0, sizeof(__pyx_k_Incompatible_checksums_0x_x_vs_0), 0, 0, 1, 0}, - {&__pyx_n_s_IndexError, __pyx_k_IndexError, sizeof(__pyx_k_IndexError), 0, 0, 1, 1}, - {&__pyx_kp_s_Indirect_dimensions_not_supporte, __pyx_k_Indirect_dimensions_not_supporte, sizeof(__pyx_k_Indirect_dimensions_not_supporte), 0, 0, 1, 0}, - {&__pyx_kp_s_Invalid_mode_expected_c_or_fortr, __pyx_k_Invalid_mode_expected_c_or_fortr, sizeof(__pyx_k_Invalid_mode_expected_c_or_fortr), 0, 0, 1, 0}, - {&__pyx_kp_s_Invalid_shape_in_axis_d_d, __pyx_k_Invalid_shape_in_axis_d_d, sizeof(__pyx_k_Invalid_shape_in_axis_d_d), 0, 0, 1, 0}, - {&__pyx_n_s_MemoryError, __pyx_k_MemoryError, sizeof(__pyx_k_MemoryError), 0, 0, 1, 1}, - {&__pyx_kp_s_MemoryView_of_r_at_0x_x, __pyx_k_MemoryView_of_r_at_0x_x, sizeof(__pyx_k_MemoryView_of_r_at_0x_x), 0, 0, 1, 0}, - {&__pyx_kp_s_MemoryView_of_r_object, __pyx_k_MemoryView_of_r_object, sizeof(__pyx_k_MemoryView_of_r_object), 0, 0, 1, 0}, - {&__pyx_n_b_O, __pyx_k_O, sizeof(__pyx_k_O), 0, 0, 0, 1}, - {&__pyx_kp_s_Out_of_bounds_on_buffer_access_a, __pyx_k_Out_of_bounds_on_buffer_access_a, sizeof(__pyx_k_Out_of_bounds_on_buffer_access_a), 0, 0, 1, 0}, - {&__pyx_n_s_PickleError, __pyx_k_PickleError, sizeof(__pyx_k_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_TypeError, __pyx_k_TypeError, sizeof(__pyx_k_TypeError), 0, 0, 1, 1}, - {&__pyx_kp_s_Unable_to_convert_item_to_object, __pyx_k_Unable_to_convert_item_to_object, sizeof(__pyx_k_Unable_to_convert_item_to_object), 0, 0, 1, 0}, - {&__pyx_n_s_ValueError, __pyx_k_ValueError, sizeof(__pyx_k_ValueError), 0, 0, 1, 1}, - {&__pyx_n_s_View_MemoryView, __pyx_k_View_MemoryView, sizeof(__pyx_k_View_MemoryView), 0, 0, 1, 1}, - {&__pyx_n_s_allocate_buffer, __pyx_k_allocate_buffer, sizeof(__pyx_k_allocate_buffer), 0, 0, 1, 1}, - 
{&__pyx_n_s_base, __pyx_k_base, sizeof(__pyx_k_base), 0, 0, 1, 1}, - {&__pyx_n_s_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 0, 1, 1}, - {&__pyx_n_u_c, __pyx_k_c, sizeof(__pyx_k_c), 0, 1, 0, 1}, - {&__pyx_n_s_class, __pyx_k_class, sizeof(__pyx_k_class), 0, 0, 1, 1}, - {&__pyx_n_s_cline_in_traceback, __pyx_k_cline_in_traceback, sizeof(__pyx_k_cline_in_traceback), 0, 0, 1, 1}, - {&__pyx_kp_s_contiguous_and_direct, __pyx_k_contiguous_and_direct, sizeof(__pyx_k_contiguous_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_contiguous_and_indirect, __pyx_k_contiguous_and_indirect, sizeof(__pyx_k_contiguous_and_indirect), 0, 0, 1, 0}, - {&__pyx_n_s_dict, __pyx_k_dict, sizeof(__pyx_k_dict), 0, 0, 1, 1}, - {&__pyx_n_s_dtype_is_object, __pyx_k_dtype_is_object, sizeof(__pyx_k_dtype_is_object), 0, 0, 1, 1}, - {&__pyx_n_s_encode, __pyx_k_encode, sizeof(__pyx_k_encode), 0, 0, 1, 1}, - {&__pyx_n_s_enumerate, __pyx_k_enumerate, sizeof(__pyx_k_enumerate), 0, 0, 1, 1}, - {&__pyx_n_s_error, __pyx_k_error, sizeof(__pyx_k_error), 0, 0, 1, 1}, - {&__pyx_n_s_flags, __pyx_k_flags, sizeof(__pyx_k_flags), 0, 0, 1, 1}, - {&__pyx_n_s_format, __pyx_k_format, sizeof(__pyx_k_format), 0, 0, 1, 1}, - {&__pyx_n_s_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 0, 1, 1}, - {&__pyx_n_u_fortran, __pyx_k_fortran, sizeof(__pyx_k_fortran), 0, 1, 0, 1}, - {&__pyx_n_s_getstate, __pyx_k_getstate, sizeof(__pyx_k_getstate), 0, 0, 1, 1}, - {&__pyx_kp_s_got_differing_extents_in_dimensi, __pyx_k_got_differing_extents_in_dimensi, sizeof(__pyx_k_got_differing_extents_in_dimensi), 0, 0, 1, 0}, - {&__pyx_n_s_id, __pyx_k_id, sizeof(__pyx_k_id), 0, 0, 1, 1}, - {&__pyx_n_s_import, __pyx_k_import, sizeof(__pyx_k_import), 0, 0, 1, 1}, - {&__pyx_n_s_itemsize, __pyx_k_itemsize, sizeof(__pyx_k_itemsize), 0, 0, 1, 1}, - {&__pyx_kp_s_itemsize_0_for_cython_array, __pyx_k_itemsize_0_for_cython_array, sizeof(__pyx_k_itemsize_0_for_cython_array), 0, 0, 1, 0}, - {&__pyx_n_s_main, __pyx_k_main, sizeof(__pyx_k_main), 0, 0, 1, 1}, - {&__pyx_n_s_memview, __pyx_k_memview, sizeof(__pyx_k_memview), 0, 0, 1, 1}, - {&__pyx_n_s_mode, __pyx_k_mode, sizeof(__pyx_k_mode), 0, 0, 1, 1}, - {&__pyx_n_s_name, __pyx_k_name, sizeof(__pyx_k_name), 0, 0, 1, 1}, - {&__pyx_n_s_name_2, __pyx_k_name_2, sizeof(__pyx_k_name_2), 0, 0, 1, 1}, - {&__pyx_n_s_ndim, __pyx_k_ndim, sizeof(__pyx_k_ndim), 0, 0, 1, 1}, - {&__pyx_n_s_new, __pyx_k_new, sizeof(__pyx_k_new), 0, 0, 1, 1}, - {&__pyx_kp_s_no_default___reduce___due_to_non, __pyx_k_no_default___reduce___due_to_non, sizeof(__pyx_k_no_default___reduce___due_to_non), 0, 0, 1, 0}, - {&__pyx_n_s_obj, __pyx_k_obj, sizeof(__pyx_k_obj), 0, 0, 1, 1}, - {&__pyx_n_s_pack, __pyx_k_pack, sizeof(__pyx_k_pack), 0, 0, 1, 1}, - {&__pyx_n_s_paths, __pyx_k_paths, sizeof(__pyx_k_paths), 0, 0, 1, 1}, - {&__pyx_n_s_pickle, __pyx_k_pickle, sizeof(__pyx_k_pickle), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_PickleError, __pyx_k_pyx_PickleError, sizeof(__pyx_k_pyx_PickleError), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_checksum, __pyx_k_pyx_checksum, sizeof(__pyx_k_pyx_checksum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_getbuffer, __pyx_k_pyx_getbuffer, sizeof(__pyx_k_pyx_getbuffer), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_result, __pyx_k_pyx_result, sizeof(__pyx_k_pyx_result), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_state, __pyx_k_pyx_state, sizeof(__pyx_k_pyx_state), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_type, __pyx_k_pyx_type, sizeof(__pyx_k_pyx_type), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_unpickle_Enum, __pyx_k_pyx_unpickle_Enum, sizeof(__pyx_k_pyx_unpickle_Enum), 0, 0, 1, 1}, - {&__pyx_n_s_pyx_vtable, __pyx_k_pyx_vtable, 
sizeof(__pyx_k_pyx_vtable), 0, 0, 1, 1}, - {&__pyx_n_s_range, __pyx_k_range, sizeof(__pyx_k_range), 0, 0, 1, 1}, - {&__pyx_n_s_reduce, __pyx_k_reduce, sizeof(__pyx_k_reduce), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_cython, __pyx_k_reduce_cython, sizeof(__pyx_k_reduce_cython), 0, 0, 1, 1}, - {&__pyx_n_s_reduce_ex, __pyx_k_reduce_ex, sizeof(__pyx_k_reduce_ex), 0, 0, 1, 1}, - {&__pyx_n_s_setstate, __pyx_k_setstate, sizeof(__pyx_k_setstate), 0, 0, 1, 1}, - {&__pyx_n_s_setstate_cython, __pyx_k_setstate_cython, sizeof(__pyx_k_setstate_cython), 0, 0, 1, 1}, - {&__pyx_n_s_shape, __pyx_k_shape, sizeof(__pyx_k_shape), 0, 0, 1, 1}, - {&__pyx_n_s_size, __pyx_k_size, sizeof(__pyx_k_size), 0, 0, 1, 1}, - {&__pyx_n_s_start, __pyx_k_start, sizeof(__pyx_k_start), 0, 0, 1, 1}, - {&__pyx_n_s_step, __pyx_k_step, sizeof(__pyx_k_step), 0, 0, 1, 1}, - {&__pyx_n_s_stop, __pyx_k_stop, sizeof(__pyx_k_stop), 0, 0, 1, 1}, - {&__pyx_kp_s_strided_and_direct, __pyx_k_strided_and_direct, sizeof(__pyx_k_strided_and_direct), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_direct_or_indirect, __pyx_k_strided_and_direct_or_indirect, sizeof(__pyx_k_strided_and_direct_or_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_strided_and_indirect, __pyx_k_strided_and_indirect, sizeof(__pyx_k_strided_and_indirect), 0, 0, 1, 0}, - {&__pyx_kp_s_stringsource, __pyx_k_stringsource, sizeof(__pyx_k_stringsource), 0, 0, 1, 0}, - {&__pyx_n_s_struct, __pyx_k_struct, sizeof(__pyx_k_struct), 0, 0, 1, 1}, - {&__pyx_n_s_t_xs, __pyx_k_t_xs, sizeof(__pyx_k_t_xs), 0, 0, 1, 1}, - {&__pyx_n_s_t_ys, __pyx_k_t_ys, sizeof(__pyx_k_t_ys), 0, 0, 1, 1}, - {&__pyx_n_s_test, __pyx_k_test, sizeof(__pyx_k_test), 0, 0, 1, 1}, - {&__pyx_kp_s_unable_to_allocate_array_data, __pyx_k_unable_to_allocate_array_data, sizeof(__pyx_k_unable_to_allocate_array_data), 0, 0, 1, 0}, - {&__pyx_kp_s_unable_to_allocate_shape_and_str, __pyx_k_unable_to_allocate_shape_and_str, sizeof(__pyx_k_unable_to_allocate_shape_and_str), 0, 0, 1, 0}, - {&__pyx_n_s_unpack, __pyx_k_unpack, sizeof(__pyx_k_unpack), 0, 0, 1, 1}, - {&__pyx_n_s_update, __pyx_k_update, sizeof(__pyx_k_update), 0, 0, 1, 1}, - {&__pyx_n_s_values, __pyx_k_values, sizeof(__pyx_k_values), 0, 0, 1, 1}, - {0, 0, 0, 0, 0, 0, 0} -}; -static CYTHON_SMALL_CODE int __Pyx_InitCachedBuiltins(void) { - __pyx_builtin_range = __Pyx_GetBuiltinName(__pyx_n_s_range); if (!__pyx_builtin_range) __PYX_ERR(0, 15, __pyx_L1_error) - __pyx_builtin_ValueError = __Pyx_GetBuiltinName(__pyx_n_s_ValueError); if (!__pyx_builtin_ValueError) __PYX_ERR(1, 134, __pyx_L1_error) - __pyx_builtin_MemoryError = __Pyx_GetBuiltinName(__pyx_n_s_MemoryError); if (!__pyx_builtin_MemoryError) __PYX_ERR(1, 149, __pyx_L1_error) - __pyx_builtin_enumerate = __Pyx_GetBuiltinName(__pyx_n_s_enumerate); if (!__pyx_builtin_enumerate) __PYX_ERR(1, 152, __pyx_L1_error) - __pyx_builtin_TypeError = __Pyx_GetBuiltinName(__pyx_n_s_TypeError); if (!__pyx_builtin_TypeError) __PYX_ERR(1, 2, __pyx_L1_error) - __pyx_builtin_Ellipsis = __Pyx_GetBuiltinName(__pyx_n_s_Ellipsis); if (!__pyx_builtin_Ellipsis) __PYX_ERR(1, 406, __pyx_L1_error) - __pyx_builtin_id = __Pyx_GetBuiltinName(__pyx_n_s_id); if (!__pyx_builtin_id) __PYX_ERR(1, 615, __pyx_L1_error) - __pyx_builtin_IndexError = __Pyx_GetBuiltinName(__pyx_n_s_IndexError); if (!__pyx_builtin_IndexError) __PYX_ERR(1, 834, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitCachedConstants(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_InitCachedConstants", 0); - - /* 
"View.MemoryView":134 - * - * if not self.ndim: - * raise ValueError("Empty shape tuple for cython.array") # <<<<<<<<<<<<<< - * - * if itemsize <= 0: - */ - __pyx_tuple__2 = PyTuple_Pack(1, __pyx_kp_s_Empty_shape_tuple_for_cython_arr); if (unlikely(!__pyx_tuple__2)) __PYX_ERR(1, 134, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__2); - __Pyx_GIVEREF(__pyx_tuple__2); - - /* "View.MemoryView":137 - * - * if itemsize <= 0: - * raise ValueError("itemsize <= 0 for cython.array") # <<<<<<<<<<<<<< - * - * if not isinstance(format, bytes): - */ - __pyx_tuple__3 = PyTuple_Pack(1, __pyx_kp_s_itemsize_0_for_cython_array); if (unlikely(!__pyx_tuple__3)) __PYX_ERR(1, 137, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__3); - __Pyx_GIVEREF(__pyx_tuple__3); - - /* "View.MemoryView":149 - * - * if not self._shape: - * raise MemoryError("unable to allocate shape and strides.") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__4 = PyTuple_Pack(1, __pyx_kp_s_unable_to_allocate_shape_and_str); if (unlikely(!__pyx_tuple__4)) __PYX_ERR(1, 149, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__4); - __Pyx_GIVEREF(__pyx_tuple__4); - - /* "View.MemoryView":177 - * self.data = malloc(self.len) - * if not self.data: - * raise MemoryError("unable to allocate array data.") # <<<<<<<<<<<<<< - * - * if self.dtype_is_object: - */ - __pyx_tuple__5 = PyTuple_Pack(1, __pyx_kp_s_unable_to_allocate_array_data); if (unlikely(!__pyx_tuple__5)) __PYX_ERR(1, 177, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__5); - __Pyx_GIVEREF(__pyx_tuple__5); - - /* "View.MemoryView":193 - * bufmode = PyBUF_F_CONTIGUOUS | PyBUF_ANY_CONTIGUOUS - * if not (flags & bufmode): - * raise ValueError("Can only create a buffer that is contiguous in memory.") # <<<<<<<<<<<<<< - * info.buf = self.data - * info.len = self.len - */ - __pyx_tuple__6 = PyTuple_Pack(1, __pyx_kp_s_Can_only_create_a_buffer_that_is); if (unlikely(!__pyx_tuple__6)) __PYX_ERR(1, 193, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__6); - __Pyx_GIVEREF(__pyx_tuple__6); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__7 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__7)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__7); - __Pyx_GIVEREF(__pyx_tuple__7); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__8 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__8)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__8); - __Pyx_GIVEREF(__pyx_tuple__8); - - /* "View.MemoryView":420 - * def __setitem__(memoryview self, object index, object value): - * if self.view.readonly: - * raise TypeError("Cannot assign to read-only memoryview") # <<<<<<<<<<<<<< - * - * have_slices, index = _unellipsify(index, self.view.ndim) - */ - __pyx_tuple__9 = PyTuple_Pack(1, __pyx_kp_s_Cannot_assign_to_read_only_memor); if (unlikely(!__pyx_tuple__9)) __PYX_ERR(1, 420, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__9); - __Pyx_GIVEREF(__pyx_tuple__9); - - /* "View.MemoryView":497 - * result = struct.unpack(self.view.format, bytesitem) - * except struct.error: - * raise ValueError("Unable to 
convert item to object") # <<<<<<<<<<<<<< - * else: - * if len(self.view.format) == 1: - */ - __pyx_tuple__10 = PyTuple_Pack(1, __pyx_kp_s_Unable_to_convert_item_to_object); if (unlikely(!__pyx_tuple__10)) __PYX_ERR(1, 497, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__10); - __Pyx_GIVEREF(__pyx_tuple__10); - - /* "View.MemoryView":522 - * def __getbuffer__(self, Py_buffer *info, int flags): - * if flags & PyBUF_WRITABLE and self.view.readonly: - * raise ValueError("Cannot create writable memory view from read-only memoryview") # <<<<<<<<<<<<<< - * - * if flags & PyBUF_ND: - */ - __pyx_tuple__11 = PyTuple_Pack(1, __pyx_kp_s_Cannot_create_writable_memory_vi); if (unlikely(!__pyx_tuple__11)) __PYX_ERR(1, 522, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__11); - __Pyx_GIVEREF(__pyx_tuple__11); - - /* "View.MemoryView":572 - * if self.view.strides == NULL: - * - * raise ValueError("Buffer view does not expose strides") # <<<<<<<<<<<<<< - * - * return tuple([stride for stride in self.view.strides[:self.view.ndim]]) - */ - __pyx_tuple__12 = PyTuple_Pack(1, __pyx_kp_s_Buffer_view_does_not_expose_stri); if (unlikely(!__pyx_tuple__12)) __PYX_ERR(1, 572, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__12); - __Pyx_GIVEREF(__pyx_tuple__12); - - /* "View.MemoryView":579 - * def suboffsets(self): - * if self.view.suboffsets == NULL: - * return (-1,) * self.view.ndim # <<<<<<<<<<<<<< - * - * return tuple([suboffset for suboffset in self.view.suboffsets[:self.view.ndim]]) - */ - __pyx_tuple__13 = PyTuple_New(1); if (unlikely(!__pyx_tuple__13)) __PYX_ERR(1, 579, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__13); - __Pyx_INCREF(__pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_int_neg_1); - PyTuple_SET_ITEM(__pyx_tuple__13, 0, __pyx_int_neg_1); - __Pyx_GIVEREF(__pyx_tuple__13); - - /* "(tree fragment)":2 - * def __reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__14 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__14)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__14); - __Pyx_GIVEREF(__pyx_tuple__14); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__15 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__15)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__15); - __Pyx_GIVEREF(__pyx_tuple__15); - - /* "View.MemoryView":684 - * if item is Ellipsis: - * if not seen_ellipsis: - * result.extend([slice(None)] * (ndim - len(tup) + 1)) # <<<<<<<<<<<<<< - * seen_ellipsis = True - * else: - */ - __pyx_slice__16 = PySlice_New(Py_None, Py_None, Py_None); if (unlikely(!__pyx_slice__16)) __PYX_ERR(1, 684, __pyx_L1_error) - __Pyx_GOTREF(__pyx_slice__16); - __Pyx_GIVEREF(__pyx_slice__16); - - /* "View.MemoryView":705 - * for suboffset in suboffsets[:ndim]: - * if suboffset >= 0: - * raise ValueError("Indirect dimensions not supported") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__17 = PyTuple_Pack(1, __pyx_kp_s_Indirect_dimensions_not_supporte); if (unlikely(!__pyx_tuple__17)) __PYX_ERR(1, 705, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__17); - __Pyx_GIVEREF(__pyx_tuple__17); - - /* "(tree fragment)":2 - * def 
__reduce_cython__(self): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - */ - __pyx_tuple__18 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__18)) __PYX_ERR(1, 2, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__18); - __Pyx_GIVEREF(__pyx_tuple__18); - - /* "(tree fragment)":4 - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") - * def __setstate_cython__(self, __pyx_state): - * raise TypeError("no default __reduce__ due to non-trivial __cinit__") # <<<<<<<<<<<<<< - */ - __pyx_tuple__19 = PyTuple_Pack(1, __pyx_kp_s_no_default___reduce___due_to_non); if (unlikely(!__pyx_tuple__19)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__19); - __Pyx_GIVEREF(__pyx_tuple__19); - __pyx_tuple__20 = PyTuple_Pack(3, __pyx_int_184977713, __pyx_int_136983863, __pyx_int_112105877); if (unlikely(!__pyx_tuple__20)) __PYX_ERR(1, 4, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__20); - __Pyx_GIVEREF(__pyx_tuple__20); - - /* "View.MemoryView":287 - * return self.name - * - * cdef generic = Enum("") # <<<<<<<<<<<<<< - * cdef strided = Enum("") # default - * cdef indirect = Enum("") - */ - __pyx_tuple__21 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct_or_indirect); if (unlikely(!__pyx_tuple__21)) __PYX_ERR(1, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__21); - __Pyx_GIVEREF(__pyx_tuple__21); - - /* "View.MemoryView":288 - * - * cdef generic = Enum("") - * cdef strided = Enum("") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("") - * - */ - __pyx_tuple__22 = PyTuple_Pack(1, __pyx_kp_s_strided_and_direct); if (unlikely(!__pyx_tuple__22)) __PYX_ERR(1, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__22); - __Pyx_GIVEREF(__pyx_tuple__22); - - /* "View.MemoryView":289 - * cdef generic = Enum("") - * cdef strided = Enum("") # default - * cdef indirect = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__23 = PyTuple_Pack(1, __pyx_kp_s_strided_and_indirect); if (unlikely(!__pyx_tuple__23)) __PYX_ERR(1, 289, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__23); - __Pyx_GIVEREF(__pyx_tuple__23); - - /* "View.MemoryView":292 - * - * - * cdef contiguous = Enum("") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("") - * - */ - __pyx_tuple__24 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_direct); if (unlikely(!__pyx_tuple__24)) __PYX_ERR(1, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__24); - __Pyx_GIVEREF(__pyx_tuple__24); - - /* "View.MemoryView":293 - * - * cdef contiguous = Enum("") - * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_tuple__25 = PyTuple_Pack(1, __pyx_kp_s_contiguous_and_indirect); if (unlikely(!__pyx_tuple__25)) __PYX_ERR(1, 293, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__25); - __Pyx_GIVEREF(__pyx_tuple__25); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_tuple__26 = PyTuple_Pack(5, __pyx_n_s_pyx_type, __pyx_n_s_pyx_checksum, __pyx_n_s_pyx_state, __pyx_n_s_pyx_PickleError, __pyx_n_s_pyx_result); if (unlikely(!__pyx_tuple__26)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_tuple__26); - __Pyx_GIVEREF(__pyx_tuple__26); - __pyx_codeobj__27 = (PyObject*)__Pyx_PyCode_New(3, 0, 5, 0, CO_OPTIMIZED|CO_NEWLOCALS, __pyx_empty_bytes, __pyx_empty_tuple, __pyx_empty_tuple, __pyx_tuple__26, 
__pyx_empty_tuple, __pyx_empty_tuple, __pyx_kp_s_stringsource, __pyx_n_s_pyx_unpickle_Enum, 1, __pyx_empty_bytes); if (unlikely(!__pyx_codeobj__27)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_InitGlobals(void) { - /* InitThreads.init */ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 -PyEval_InitThreads(); -#endif - -if (unlikely(PyErr_Occurred())) __PYX_ERR(0, 1, __pyx_L1_error) - - if (__Pyx_InitStrings(__pyx_string_tab) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - __pyx_int_0 = PyInt_FromLong(0); if (unlikely(!__pyx_int_0)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_1 = PyInt_FromLong(1); if (unlikely(!__pyx_int_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_112105877 = PyInt_FromLong(112105877L); if (unlikely(!__pyx_int_112105877)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_136983863 = PyInt_FromLong(136983863L); if (unlikely(!__pyx_int_136983863)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_184977713 = PyInt_FromLong(184977713L); if (unlikely(!__pyx_int_184977713)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_int_neg_1 = PyInt_FromLong(-1); if (unlikely(!__pyx_int_neg_1)) __PYX_ERR(0, 1, __pyx_L1_error) - return 0; - __pyx_L1_error:; - return -1; -} - -static CYTHON_SMALL_CODE int __Pyx_modinit_global_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_export_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_init_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_type_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_variable_import_code(void); /*proto*/ -static CYTHON_SMALL_CODE int __Pyx_modinit_function_import_code(void); /*proto*/ - -static int __Pyx_modinit_global_init_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_global_init_code", 0); - /*--- Global init code ---*/ - generic = Py_None; Py_INCREF(Py_None); - strided = Py_None; Py_INCREF(Py_None); - indirect = Py_None; Py_INCREF(Py_None); - contiguous = Py_None; Py_INCREF(Py_None); - indirect_contiguous = Py_None; Py_INCREF(Py_None); - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_export_code", 0); - /*--- Variable export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_export_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_export_code", 0); - /*--- Function export code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_type_init_code(void) { - __Pyx_RefNannyDeclarations - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannySetupContext("__Pyx_modinit_type_init_code", 0); - /*--- Type init code ---*/ - __pyx_vtabptr_array = &__pyx_vtable_array; - __pyx_vtable_array.get_memview = (PyObject *(*)(struct __pyx_array_obj *))__pyx_array_get_memview; - if (PyType_Ready(&__pyx_type___pyx_array) < 0) __PYX_ERR(1, 106, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_array.tp_print = 0; - #endif - if (__Pyx_SetVtable(__pyx_type___pyx_array.tp_dict, __pyx_vtabptr_array) < 0) __PYX_ERR(1, 106, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_array) < 
0) __PYX_ERR(1, 106, __pyx_L1_error) - __pyx_array_type = &__pyx_type___pyx_array; - if (PyType_Ready(&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 280, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_MemviewEnum.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_MemviewEnum.tp_dictoffset && __pyx_type___pyx_MemviewEnum.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_MemviewEnum.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_MemviewEnum) < 0) __PYX_ERR(1, 280, __pyx_L1_error) - __pyx_MemviewEnum_type = &__pyx_type___pyx_MemviewEnum; - __pyx_vtabptr_memoryview = &__pyx_vtable_memoryview; - __pyx_vtable_memoryview.get_item_pointer = (char *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_get_item_pointer; - __pyx_vtable_memoryview.is_slice = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_is_slice; - __pyx_vtable_memoryview.setitem_slice_assignment = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_slice_assignment; - __pyx_vtable_memoryview.setitem_slice_assign_scalar = (PyObject *(*)(struct __pyx_memoryview_obj *, struct __pyx_memoryview_obj *, PyObject *))__pyx_memoryview_setitem_slice_assign_scalar; - __pyx_vtable_memoryview.setitem_indexed = (PyObject *(*)(struct __pyx_memoryview_obj *, PyObject *, PyObject *))__pyx_memoryview_setitem_indexed; - __pyx_vtable_memoryview.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryview_convert_item_to_object; - __pyx_vtable_memoryview.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryview_assign_item_from_object; - if (PyType_Ready(&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 331, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_memoryview.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryview.tp_dictoffset && __pyx_type___pyx_memoryview.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_memoryview.tp_getattro = __Pyx_PyObject_GenericGetAttr; - } - if (__Pyx_SetVtable(__pyx_type___pyx_memoryview.tp_dict, __pyx_vtabptr_memoryview) < 0) __PYX_ERR(1, 331, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryview) < 0) __PYX_ERR(1, 331, __pyx_L1_error) - __pyx_memoryview_type = &__pyx_type___pyx_memoryview; - __pyx_vtabptr__memoryviewslice = &__pyx_vtable__memoryviewslice; - __pyx_vtable__memoryviewslice.__pyx_base = *__pyx_vtabptr_memoryview; - __pyx_vtable__memoryviewslice.__pyx_base.convert_item_to_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *))__pyx_memoryviewslice_convert_item_to_object; - __pyx_vtable__memoryviewslice.__pyx_base.assign_item_from_object = (PyObject *(*)(struct __pyx_memoryview_obj *, char *, PyObject *))__pyx_memoryviewslice_assign_item_from_object; - __pyx_type___pyx_memoryviewslice.tp_base = __pyx_memoryview_type; - if (PyType_Ready(&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 967, __pyx_L1_error) - #if PY_VERSION_HEX < 0x030800B1 - __pyx_type___pyx_memoryviewslice.tp_print = 0; - #endif - if ((CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP) && likely(!__pyx_type___pyx_memoryviewslice.tp_dictoffset && __pyx_type___pyx_memoryviewslice.tp_getattro == PyObject_GenericGetAttr)) { - __pyx_type___pyx_memoryviewslice.tp_getattro = __Pyx_PyObject_GenericGetAttr; 
- } - if (__Pyx_SetVtable(__pyx_type___pyx_memoryviewslice.tp_dict, __pyx_vtabptr__memoryviewslice) < 0) __PYX_ERR(1, 967, __pyx_L1_error) - if (__Pyx_setup_reduce((PyObject*)&__pyx_type___pyx_memoryviewslice) < 0) __PYX_ERR(1, 967, __pyx_L1_error) - __pyx_memoryviewslice_type = &__pyx_type___pyx_memoryviewslice; - __Pyx_RefNannyFinishContext(); - return 0; - __pyx_L1_error:; - __Pyx_RefNannyFinishContext(); - return -1; -} - -static int __Pyx_modinit_type_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_type_import_code", 0); - /*--- Type import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_variable_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_variable_import_code", 0); - /*--- Variable import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - -static int __Pyx_modinit_function_import_code(void) { - __Pyx_RefNannyDeclarations - __Pyx_RefNannySetupContext("__Pyx_modinit_function_import_code", 0); - /*--- Function import code ---*/ - __Pyx_RefNannyFinishContext(); - return 0; -} - - -#ifndef CYTHON_NO_PYINIT_EXPORT -#define __Pyx_PyMODINIT_FUNC PyMODINIT_FUNC -#elif PY_MAJOR_VERSION < 3 -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" void -#else -#define __Pyx_PyMODINIT_FUNC void -#endif -#else -#ifdef __cplusplus -#define __Pyx_PyMODINIT_FUNC extern "C" PyObject * -#else -#define __Pyx_PyMODINIT_FUNC PyObject * -#endif -#endif - - -#if PY_MAJOR_VERSION < 3 -__Pyx_PyMODINIT_FUNC initcore(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC initcore(void) -#else -__Pyx_PyMODINIT_FUNC PyInit_core(void) CYTHON_SMALL_CODE; /*proto*/ -__Pyx_PyMODINIT_FUNC PyInit_core(void) -#if CYTHON_PEP489_MULTI_PHASE_INIT -{ - return PyModuleDef_Init(&__pyx_moduledef); -} -static CYTHON_SMALL_CODE int __Pyx_check_single_interpreter(void) { - #if PY_VERSION_HEX >= 0x030700A1 - static PY_INT64_T main_interpreter_id = -1; - PY_INT64_T current_id = PyInterpreterState_GetID(PyThreadState_Get()->interp); - if (main_interpreter_id == -1) { - main_interpreter_id = current_id; - return (unlikely(current_id == -1)) ? 
-1 : 0; - } else if (unlikely(main_interpreter_id != current_id)) - #else - static PyInterpreterState *main_interpreter = NULL; - PyInterpreterState *current_interpreter = PyThreadState_Get()->interp; - if (!main_interpreter) { - main_interpreter = current_interpreter; - } else if (unlikely(main_interpreter != current_interpreter)) - #endif - { - PyErr_SetString( - PyExc_ImportError, - "Interpreter change detected - this module can only be loaded into one interpreter per process."); - return -1; - } - return 0; -} -static CYTHON_SMALL_CODE int __Pyx_copy_spec_to_module(PyObject *spec, PyObject *moddict, const char* from_name, const char* to_name, int allow_none) { - PyObject *value = PyObject_GetAttrString(spec, from_name); - int result = 0; - if (likely(value)) { - if (allow_none || value != Py_None) { - result = PyDict_SetItemString(moddict, to_name, value); - } - Py_DECREF(value); - } else if (PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Clear(); - } else { - result = -1; - } - return result; -} -static CYTHON_SMALL_CODE PyObject* __pyx_pymod_create(PyObject *spec, CYTHON_UNUSED PyModuleDef *def) { - PyObject *module = NULL, *moddict, *modname; - if (__Pyx_check_single_interpreter()) - return NULL; - if (__pyx_m) - return __Pyx_NewRef(__pyx_m); - modname = PyObject_GetAttrString(spec, "name"); - if (unlikely(!modname)) goto bad; - module = PyModule_NewObject(modname); - Py_DECREF(modname); - if (unlikely(!module)) goto bad; - moddict = PyModule_GetDict(module); - if (unlikely(!moddict)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "loader", "__loader__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "origin", "__file__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "parent", "__package__", 1) < 0)) goto bad; - if (unlikely(__Pyx_copy_spec_to_module(spec, moddict, "submodule_search_locations", "__path__", 0) < 0)) goto bad; - return module; -bad: - Py_XDECREF(module); - return NULL; -} - - -static CYTHON_SMALL_CODE int __pyx_pymod_exec_core(PyObject *__pyx_pyinit_module) -#endif -#endif -{ - PyObject *__pyx_t_1 = NULL; - static PyThread_type_lock __pyx_t_2[8]; - int __pyx_lineno = 0; - const char *__pyx_filename = NULL; - int __pyx_clineno = 0; - __Pyx_RefNannyDeclarations - #if CYTHON_PEP489_MULTI_PHASE_INIT - if (__pyx_m) { - if (__pyx_m == __pyx_pyinit_module) return 0; - PyErr_SetString(PyExc_RuntimeError, "Module 'core' has already been imported. 
Re-initialisation is not supported."); - return -1; - } - #elif PY_MAJOR_VERSION >= 3 - if (__pyx_m) return __Pyx_NewRef(__pyx_m); - #endif - #if CYTHON_REFNANNY -__Pyx_RefNanny = __Pyx_RefNannyImportAPI("refnanny"); -if (!__Pyx_RefNanny) { - PyErr_Clear(); - __Pyx_RefNanny = __Pyx_RefNannyImportAPI("Cython.Runtime.refnanny"); - if (!__Pyx_RefNanny) - Py_FatalError("failed to import 'refnanny' module"); -} -#endif - __Pyx_RefNannySetupContext("__Pyx_PyMODINIT_FUNC PyInit_core(void)", 0); - if (__Pyx_check_binary_version() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pxy_PyFrame_Initialize_Offsets - __Pxy_PyFrame_Initialize_Offsets(); - #endif - __pyx_empty_tuple = PyTuple_New(0); if (unlikely(!__pyx_empty_tuple)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_bytes = PyBytes_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_bytes)) __PYX_ERR(0, 1, __pyx_L1_error) - __pyx_empty_unicode = PyUnicode_FromStringAndSize("", 0); if (unlikely(!__pyx_empty_unicode)) __PYX_ERR(0, 1, __pyx_L1_error) - #ifdef __Pyx_CyFunction_USED - if (__pyx_CyFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_FusedFunction_USED - if (__pyx_FusedFunction_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Coroutine_USED - if (__pyx_Coroutine_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_Generator_USED - if (__pyx_Generator_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_AsyncGen_USED - if (__pyx_AsyncGen_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - #ifdef __Pyx_StopAsyncIteration_USED - if (__pyx_StopAsyncIteration_init() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - /*--- Library function declarations ---*/ - /*--- Threads initialization code ---*/ - #if defined(WITH_THREAD) && PY_VERSION_HEX < 0x030700F0 && defined(__PYX_FORCE_INIT_THREADS) && __PYX_FORCE_INIT_THREADS - PyEval_InitThreads(); - #endif - /*--- Module creation code ---*/ - #if CYTHON_PEP489_MULTI_PHASE_INIT - __pyx_m = __pyx_pyinit_module; - Py_INCREF(__pyx_m); - #else - #if PY_MAJOR_VERSION < 3 - __pyx_m = Py_InitModule4("core", __pyx_methods, 0, 0, PYTHON_API_VERSION); Py_XINCREF(__pyx_m); - #else - __pyx_m = PyModule_Create(&__pyx_moduledef); - #endif - if (unlikely(!__pyx_m)) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - __pyx_d = PyModule_GetDict(__pyx_m); if (unlikely(!__pyx_d)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_d); - __pyx_b = PyImport_AddModule(__Pyx_BUILTIN_MODULE_NAME); if (unlikely(!__pyx_b)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_b); - __pyx_cython_runtime = PyImport_AddModule((char *) "cython_runtime"); if (unlikely(!__pyx_cython_runtime)) __PYX_ERR(0, 1, __pyx_L1_error) - Py_INCREF(__pyx_cython_runtime); - if (PyObject_SetAttrString(__pyx_m, "__builtins__", __pyx_b) < 0) __PYX_ERR(0, 1, __pyx_L1_error); - /*--- Initialize various global constants etc. 
---*/ - if (__Pyx_InitGlobals() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #if PY_MAJOR_VERSION < 3 && (__PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT) - if (__Pyx_init_sys_getdefaultencoding_params() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - if (__pyx_module_is_main_vits__monotonic_align__core) { - if (PyObject_SetAttr(__pyx_m, __pyx_n_s_name_2, __pyx_n_s_main) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - } - #if PY_MAJOR_VERSION >= 3 - { - PyObject *modules = PyImport_GetModuleDict(); if (unlikely(!modules)) __PYX_ERR(0, 1, __pyx_L1_error) - if (!PyDict_GetItemString(modules, "vits.monotonic_align.core")) { - if (unlikely(PyDict_SetItemString(modules, "vits.monotonic_align.core", __pyx_m) < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - } - } - #endif - /*--- Builtin init code ---*/ - if (__Pyx_InitCachedBuiltins() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Constants init code ---*/ - if (__Pyx_InitCachedConstants() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - /*--- Global type/function init code ---*/ - (void)__Pyx_modinit_global_init_code(); - (void)__Pyx_modinit_variable_export_code(); - (void)__Pyx_modinit_function_export_code(); - if (unlikely(__Pyx_modinit_type_init_code() < 0)) __PYX_ERR(0, 1, __pyx_L1_error) - (void)__Pyx_modinit_type_import_code(); - (void)__Pyx_modinit_variable_import_code(); - (void)__Pyx_modinit_function_import_code(); - /*--- Execution code ---*/ - #if defined(__Pyx_Generator_USED) || defined(__Pyx_Coroutine_USED) - if (__Pyx_patch_abc() < 0) __PYX_ERR(0, 1, __pyx_L1_error) - #endif - - /* "vits/monotonic_align/core.pyx":7 - * @cython.boundscheck(False) - * @cython.wraparound(False) - * cdef void maximum_path_each(int[:,::1] path, float[:,::1] value, int t_y, int t_x, float max_neg_val=-1e9) nogil: # <<<<<<<<<<<<<< - * cdef int x - * cdef int y - */ - __pyx_k_ = (-1e9); - - /* "vits/monotonic_align/core.pyx":1 - * cimport cython # <<<<<<<<<<<<<< - * from cython.parallel import prange - * - */ - __pyx_t_1 = __Pyx_PyDict_NewPresized(0); if (unlikely(!__pyx_t_1)) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_test, __pyx_t_1) < 0) __PYX_ERR(0, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "View.MemoryView":210 - * info.obj = self - * - * __pyx_getbuffer = capsule( &__pyx_array_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * def __dealloc__(array self): - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_array_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 210, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_array_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 210, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_array_type); - - /* "View.MemoryView":287 - * return self.name - * - * cdef generic = Enum("") # <<<<<<<<<<<<<< - * cdef strided = Enum("") # default - * cdef indirect = Enum("") - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__21, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 287, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(generic); - __Pyx_DECREF_SET(generic, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":288 - * - * cdef generic = Enum("") - * cdef strided = Enum("") # default # <<<<<<<<<<<<<< - * cdef indirect = Enum("") - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), 
__pyx_tuple__22, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 288, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(strided); - __Pyx_DECREF_SET(strided, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":289 - * cdef generic = Enum("") - * cdef strided = Enum("") # default - * cdef indirect = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__23, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 289, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(indirect); - __Pyx_DECREF_SET(indirect, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":292 - * - * - * cdef contiguous = Enum("") # <<<<<<<<<<<<<< - * cdef indirect_contiguous = Enum("") - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__24, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 292, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(contiguous); - __Pyx_DECREF_SET(contiguous, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":293 - * - * cdef contiguous = Enum("") - * cdef indirect_contiguous = Enum("") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __Pyx_PyObject_Call(((PyObject *)__pyx_MemviewEnum_type), __pyx_tuple__25, NULL); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 293, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - __Pyx_XGOTREF(indirect_contiguous); - __Pyx_DECREF_SET(indirect_contiguous, __pyx_t_1); - __Pyx_GIVEREF(__pyx_t_1); - __pyx_t_1 = 0; - - /* "View.MemoryView":317 - * - * DEF THREAD_LOCKS_PREALLOCATED = 8 - * cdef int __pyx_memoryview_thread_locks_used = 0 # <<<<<<<<<<<<<< - * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [ - * PyThread_allocate_lock(), - */ - __pyx_memoryview_thread_locks_used = 0; - - /* "View.MemoryView":318 - * DEF THREAD_LOCKS_PREALLOCATED = 8 - * cdef int __pyx_memoryview_thread_locks_used = 0 - * cdef PyThread_type_lock[THREAD_LOCKS_PREALLOCATED] __pyx_memoryview_thread_locks = [ # <<<<<<<<<<<<<< - * PyThread_allocate_lock(), - * PyThread_allocate_lock(), - */ - __pyx_t_2[0] = PyThread_allocate_lock(); - __pyx_t_2[1] = PyThread_allocate_lock(); - __pyx_t_2[2] = PyThread_allocate_lock(); - __pyx_t_2[3] = PyThread_allocate_lock(); - __pyx_t_2[4] = PyThread_allocate_lock(); - __pyx_t_2[5] = PyThread_allocate_lock(); - __pyx_t_2[6] = PyThread_allocate_lock(); - __pyx_t_2[7] = PyThread_allocate_lock(); - memcpy(&(__pyx_memoryview_thread_locks[0]), __pyx_t_2, sizeof(__pyx_memoryview_thread_locks[0]) * (8)); - - /* "View.MemoryView":551 - * info.obj = self - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 551, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_memoryview_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 551, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_memoryview_type); - - /* "View.MemoryView":997 - * return self.from_object - * - * __pyx_getbuffer = capsule( &__pyx_memoryview_getbuffer, "getbuffer(obj, view, flags)") # <<<<<<<<<<<<<< - * - * - */ - __pyx_t_1 = __pyx_capsule_create(((void *)(&__pyx_memoryview_getbuffer)), ((char *)"getbuffer(obj, view, flags)")); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 997, 
__pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem((PyObject *)__pyx_memoryviewslice_type->tp_dict, __pyx_n_s_pyx_getbuffer, __pyx_t_1) < 0) __PYX_ERR(1, 997, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - PyType_Modified(__pyx_memoryviewslice_type); - - /* "(tree fragment)":1 - * def __pyx_unpickle_Enum(__pyx_type, long __pyx_checksum, __pyx_state): # <<<<<<<<<<<<<< - * cdef object __pyx_PickleError - * cdef object __pyx_result - */ - __pyx_t_1 = PyCFunction_NewEx(&__pyx_mdef_15View_dot_MemoryView_1__pyx_unpickle_Enum, NULL, __pyx_n_s_View_MemoryView); if (unlikely(!__pyx_t_1)) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_GOTREF(__pyx_t_1); - if (PyDict_SetItem(__pyx_d, __pyx_n_s_pyx_unpickle_Enum, __pyx_t_1) < 0) __PYX_ERR(1, 1, __pyx_L1_error) - __Pyx_DECREF(__pyx_t_1); __pyx_t_1 = 0; - - /* "(tree fragment)":11 - * __pyx_unpickle_Enum__set_state( __pyx_result, __pyx_state) - * return __pyx_result - * cdef __pyx_unpickle_Enum__set_state(Enum __pyx_result, tuple __pyx_state): # <<<<<<<<<<<<<< - * __pyx_result.name = __pyx_state[0] - * if len(__pyx_state) > 1 and hasattr(__pyx_result, '__dict__'): - */ - - /*--- Wrapped vars code ---*/ - - goto __pyx_L0; - __pyx_L1_error:; - __Pyx_XDECREF(__pyx_t_1); - if (__pyx_m) { - if (__pyx_d) { - __Pyx_AddTraceback("init vits.monotonic_align.core", __pyx_clineno, __pyx_lineno, __pyx_filename); - } - Py_CLEAR(__pyx_m); - } else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_ImportError, "init vits.monotonic_align.core"); - } - __pyx_L0:; - __Pyx_RefNannyFinishContext(); - #if CYTHON_PEP489_MULTI_PHASE_INIT - return (__pyx_m != NULL) ? 0 : -1; - #elif PY_MAJOR_VERSION >= 3 - return __pyx_m; - #else - return; - #endif -} - -/* --- Runtime support code --- */ -/* Refnanny */ -#if CYTHON_REFNANNY -static __Pyx_RefNannyAPIStruct *__Pyx_RefNannyImportAPI(const char *modname) { - PyObject *m = NULL, *p = NULL; - void *r = NULL; - m = PyImport_ImportModule(modname); - if (!m) goto end; - p = PyObject_GetAttrString(m, "RefNannyAPI"); - if (!p) goto end; - r = PyLong_AsVoidPtr(p); -end: - Py_XDECREF(p); - Py_XDECREF(m); - return (__Pyx_RefNannyAPIStruct *)r; -} -#endif - -/* PyObjectGetAttrStr */ -#if CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStr(PyObject* obj, PyObject* attr_name) { - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro)) - return tp->tp_getattro(obj, attr_name); -#if PY_MAJOR_VERSION < 3 - if (likely(tp->tp_getattr)) - return tp->tp_getattr(obj, PyString_AS_STRING(attr_name)); -#endif - return PyObject_GetAttr(obj, attr_name); -} -#endif - -/* GetBuiltinName */ -static PyObject *__Pyx_GetBuiltinName(PyObject *name) { - PyObject* result = __Pyx_PyObject_GetAttrStr(__pyx_b, name); - if (unlikely(!result)) { - PyErr_Format(PyExc_NameError, -#if PY_MAJOR_VERSION >= 3 - "name '%U' is not defined", name); -#else - "name '%.200s' is not defined", PyString_AS_STRING(name)); -#endif - } - return result; -} - -/* MemviewSliceInit */ -static int -__Pyx_init_memviewslice(struct __pyx_memoryview_obj *memview, - int ndim, - __Pyx_memviewslice *memviewslice, - int memview_is_new_reference) -{ - __Pyx_RefNannyDeclarations - int i, retval=-1; - Py_buffer *buf = &memview->view; - __Pyx_RefNannySetupContext("init_memviewslice", 0); - if (unlikely(memviewslice->memview || memviewslice->data)) { - PyErr_SetString(PyExc_ValueError, - "memviewslice is already initialized!"); - goto fail; - } - if (buf->strides) { - for (i = 0; i < ndim; i++) { - memviewslice->strides[i] = buf->strides[i]; - } - } 
else { - Py_ssize_t stride = buf->itemsize; - for (i = ndim - 1; i >= 0; i--) { - memviewslice->strides[i] = stride; - stride *= buf->shape[i]; - } - } - for (i = 0; i < ndim; i++) { - memviewslice->shape[i] = buf->shape[i]; - if (buf->suboffsets) { - memviewslice->suboffsets[i] = buf->suboffsets[i]; - } else { - memviewslice->suboffsets[i] = -1; - } - } - memviewslice->memview = memview; - memviewslice->data = (char *)buf->buf; - if (__pyx_add_acquisition_count(memview) == 0 && !memview_is_new_reference) { - Py_INCREF(memview); - } - retval = 0; - goto no_fail; -fail: - memviewslice->memview = 0; - memviewslice->data = 0; - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} -#ifndef Py_NO_RETURN -#define Py_NO_RETURN -#endif -static void __pyx_fatalerror(const char *fmt, ...) Py_NO_RETURN { - va_list vargs; - char msg[200]; -#if PY_VERSION_HEX >= 0x030A0000 || defined(HAVE_STDARG_PROTOTYPES) - va_start(vargs, fmt); -#else - va_start(vargs); -#endif - vsnprintf(msg, 200, fmt, vargs); - va_end(vargs); - Py_FatalError(msg); -} -static CYTHON_INLINE int -__pyx_add_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)++; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE int -__pyx_sub_acquisition_count_locked(__pyx_atomic_int *acquisition_count, - PyThread_type_lock lock) -{ - int result; - PyThread_acquire_lock(lock, 1); - result = (*acquisition_count)--; - PyThread_release_lock(lock); - return result; -} -static CYTHON_INLINE void -__Pyx_INC_MEMVIEW(__Pyx_memviewslice *memslice, int have_gil, int lineno) -{ - int first_time; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) - return; - if (unlikely(__pyx_get_slice_count(memview) < 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - first_time = __pyx_add_acquisition_count(memview) == 0; - if (unlikely(first_time)) { - if (have_gil) { - Py_INCREF((PyObject *) memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_INCREF((PyObject *) memview); - PyGILState_Release(_gilstate); - } - } -} -static CYTHON_INLINE void __Pyx_XDEC_MEMVIEW(__Pyx_memviewslice *memslice, - int have_gil, int lineno) { - int last_time; - struct __pyx_memoryview_obj *memview = memslice->memview; - if (unlikely(!memview || (PyObject *) memview == Py_None)) { - memslice->memview = NULL; - return; - } - if (unlikely(__pyx_get_slice_count(memview) <= 0)) - __pyx_fatalerror("Acquisition count is %d (line %d)", - __pyx_get_slice_count(memview), lineno); - last_time = __pyx_sub_acquisition_count(memview) == 1; - memslice->data = NULL; - if (unlikely(last_time)) { - if (have_gil) { - Py_CLEAR(memslice->memview); - } else { - PyGILState_STATE _gilstate = PyGILState_Ensure(); - Py_CLEAR(memslice->memview); - PyGILState_Release(_gilstate); - } - } else { - memslice->memview = NULL; - } -} - -/* RaiseArgTupleInvalid */ -static void __Pyx_RaiseArgtupleInvalid( - const char* func_name, - int exact, - Py_ssize_t num_min, - Py_ssize_t num_max, - Py_ssize_t num_found) -{ - Py_ssize_t num_expected; - const char *more_or_less; - if (num_found < num_min) { - num_expected = num_min; - more_or_less = "at least"; - } else { - num_expected = num_max; - more_or_less = "at most"; - } - if (exact) { - more_or_less = "exactly"; - } - PyErr_Format(PyExc_TypeError, - "%.200s() takes %.8s %" 
CYTHON_FORMAT_SSIZE_T "d positional argument%.1s (%" CYTHON_FORMAT_SSIZE_T "d given)", - func_name, more_or_less, num_expected, - (num_expected == 1) ? "" : "s", num_found); -} - -/* RaiseDoubleKeywords */ -static void __Pyx_RaiseDoubleKeywordsError( - const char* func_name, - PyObject* kw_name) -{ - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION >= 3 - "%s() got multiple values for keyword argument '%U'", func_name, kw_name); - #else - "%s() got multiple values for keyword argument '%s'", func_name, - PyString_AsString(kw_name)); - #endif -} - -/* ParseKeywords */ -static int __Pyx_ParseOptionalKeywords( - PyObject *kwds, - PyObject **argnames[], - PyObject *kwds2, - PyObject *values[], - Py_ssize_t num_pos_args, - const char* function_name) -{ - PyObject *key = 0, *value = 0; - Py_ssize_t pos = 0; - PyObject*** name; - PyObject*** first_kw_arg = argnames + num_pos_args; - while (PyDict_Next(kwds, &pos, &key, &value)) { - name = first_kw_arg; - while (*name && (**name != key)) name++; - if (*name) { - values[name-argnames] = value; - continue; - } - name = first_kw_arg; - #if PY_MAJOR_VERSION < 3 - if (likely(PyString_Check(key))) { - while (*name) { - if ((CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**name) == PyString_GET_SIZE(key)) - && _PyString_Eq(**name, key)) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - if ((**argname == key) || ( - (CYTHON_COMPILING_IN_PYPY || PyString_GET_SIZE(**argname) == PyString_GET_SIZE(key)) - && _PyString_Eq(**argname, key))) { - goto arg_passed_twice; - } - argname++; - } - } - } else - #endif - if (likely(PyUnicode_Check(key))) { - while (*name) { - int cmp = (**name == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**name) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 1 : - #endif - PyUnicode_Compare(**name, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) { - values[name-argnames] = value; - break; - } - name++; - } - if (*name) continue; - else { - PyObject*** argname = argnames; - while (argname != first_kw_arg) { - int cmp = (**argname == key) ? 0 : - #if !CYTHON_COMPILING_IN_PYPY && PY_MAJOR_VERSION >= 3 - (__Pyx_PyUnicode_GET_LENGTH(**argname) != __Pyx_PyUnicode_GET_LENGTH(key)) ? 
1 : - #endif - PyUnicode_Compare(**argname, key); - if (cmp < 0 && unlikely(PyErr_Occurred())) goto bad; - if (cmp == 0) goto arg_passed_twice; - argname++; - } - } - } else - goto invalid_keyword_type; - if (kwds2) { - if (unlikely(PyDict_SetItem(kwds2, key, value))) goto bad; - } else { - goto invalid_keyword; - } - } - return 0; -arg_passed_twice: - __Pyx_RaiseDoubleKeywordsError(function_name, key); - goto bad; -invalid_keyword_type: - PyErr_Format(PyExc_TypeError, - "%.200s() keywords must be strings", function_name); - goto bad; -invalid_keyword: - PyErr_Format(PyExc_TypeError, - #if PY_MAJOR_VERSION < 3 - "%.200s() got an unexpected keyword argument '%.200s'", - function_name, PyString_AsString(key)); - #else - "%s() got an unexpected keyword argument '%U'", - function_name, key); - #endif -bad: - return -1; -} - -/* None */ -static CYTHON_INLINE void __Pyx_RaiseUnboundLocalError(const char *varname) { - PyErr_Format(PyExc_UnboundLocalError, "local variable '%s' referenced before assignment", varname); -} - -/* ArgTypeTest */ -static int __Pyx__ArgTypeTest(PyObject *obj, PyTypeObject *type, const char *name, int exact) -{ - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - else if (exact) { - #if PY_MAJOR_VERSION == 2 - if ((type == &PyBaseString_Type) && likely(__Pyx_PyBaseString_CheckExact(obj))) return 1; - #endif - } - else { - if (likely(__Pyx_TypeCheck(obj, type))) return 1; - } - PyErr_Format(PyExc_TypeError, - "Argument '%.200s' has incorrect type (expected %.200s, got %.200s)", - name, type->tp_name, Py_TYPE(obj)->tp_name); - return 0; -} - -/* PyObjectCall */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_Call(PyObject *func, PyObject *arg, PyObject *kw) { - PyObject *result; - ternaryfunc call = Py_TYPE(func)->tp_call; - if (unlikely(!call)) - return PyObject_Call(func, arg, kw); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = (*call)(func, arg, kw); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyErrFetchRestore */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx_ErrRestoreInState(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - tmp_type = tstate->curexc_type; - tmp_value = tstate->curexc_value; - tmp_tb = tstate->curexc_traceback; - tstate->curexc_type = type; - tstate->curexc_value = value; - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -static CYTHON_INLINE void __Pyx_ErrFetchInState(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - *type = tstate->curexc_type; - *value = tstate->curexc_value; - *tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -} -#endif - -/* RaiseException */ -#if PY_MAJOR_VERSION < 3 -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, - CYTHON_UNUSED PyObject *cause) { - __Pyx_PyThreadState_declare - Py_XINCREF(type); - if (!value || value == Py_None) - value = NULL; - else - Py_INCREF(value); - if (!tb || tb == Py_None) - tb = NULL; - else { - Py_INCREF(tb); - if (!PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto raise_error; 
- } - } - if (PyType_Check(type)) { -#if CYTHON_COMPILING_IN_PYPY - if (!value) { - Py_INCREF(Py_None); - value = Py_None; - } -#endif - PyErr_NormalizeException(&type, &value, &tb); - } else { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto raise_error; - } - value = type; - type = (PyObject*) Py_TYPE(type); - Py_INCREF(type); - if (!PyType_IsSubtype((PyTypeObject *)type, (PyTypeObject *)PyExc_BaseException)) { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto raise_error; - } - } - __Pyx_PyThreadState_assign - __Pyx_ErrRestore(type, value, tb); - return; -raise_error: - Py_XDECREF(value); - Py_XDECREF(type); - Py_XDECREF(tb); - return; -} -#else -static void __Pyx_Raise(PyObject *type, PyObject *value, PyObject *tb, PyObject *cause) { - PyObject* owned_instance = NULL; - if (tb == Py_None) { - tb = 0; - } else if (tb && !PyTraceBack_Check(tb)) { - PyErr_SetString(PyExc_TypeError, - "raise: arg 3 must be a traceback or None"); - goto bad; - } - if (value == Py_None) - value = 0; - if (PyExceptionInstance_Check(type)) { - if (value) { - PyErr_SetString(PyExc_TypeError, - "instance exception may not have a separate value"); - goto bad; - } - value = type; - type = (PyObject*) Py_TYPE(value); - } else if (PyExceptionClass_Check(type)) { - PyObject *instance_class = NULL; - if (value && PyExceptionInstance_Check(value)) { - instance_class = (PyObject*) Py_TYPE(value); - if (instance_class != type) { - int is_subclass = PyObject_IsSubclass(instance_class, type); - if (!is_subclass) { - instance_class = NULL; - } else if (unlikely(is_subclass == -1)) { - goto bad; - } else { - type = instance_class; - } - } - } - if (!instance_class) { - PyObject *args; - if (!value) - args = PyTuple_New(0); - else if (PyTuple_Check(value)) { - Py_INCREF(value); - args = value; - } else - args = PyTuple_Pack(1, value); - if (!args) - goto bad; - owned_instance = PyObject_Call(type, args, NULL); - Py_DECREF(args); - if (!owned_instance) - goto bad; - value = owned_instance; - if (!PyExceptionInstance_Check(value)) { - PyErr_Format(PyExc_TypeError, - "calling %R should have returned an instance of " - "BaseException, not %R", - type, Py_TYPE(value)); - goto bad; - } - } - } else { - PyErr_SetString(PyExc_TypeError, - "raise: exception class must be a subclass of BaseException"); - goto bad; - } - if (cause) { - PyObject *fixed_cause; - if (cause == Py_None) { - fixed_cause = NULL; - } else if (PyExceptionClass_Check(cause)) { - fixed_cause = PyObject_CallObject(cause, NULL); - if (fixed_cause == NULL) - goto bad; - } else if (PyExceptionInstance_Check(cause)) { - fixed_cause = cause; - Py_INCREF(fixed_cause); - } else { - PyErr_SetString(PyExc_TypeError, - "exception causes must derive from " - "BaseException"); - goto bad; - } - PyException_SetCause(value, fixed_cause); - } - PyErr_SetObject(type, value); - if (tb) { -#if CYTHON_COMPILING_IN_PYPY - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_Fetch(&tmp_type, &tmp_value, &tmp_tb); - Py_INCREF(tb); - PyErr_Restore(tmp_type, tmp_value, tb); - Py_XDECREF(tmp_tb); -#else - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject* tmp_tb = tstate->curexc_traceback; - if (tb != tmp_tb) { - Py_INCREF(tb); - tstate->curexc_traceback = tb; - Py_XDECREF(tmp_tb); - } -#endif - } -bad: - Py_XDECREF(owned_instance); - return; -} -#endif - -/* PyCFunctionFastCall */ -#if CYTHON_FAST_PYCCALL -static CYTHON_INLINE PyObject * 
__Pyx_PyCFunction_FastCall(PyObject *func_obj, PyObject **args, Py_ssize_t nargs) { - PyCFunctionObject *func = (PyCFunctionObject*)func_obj; - PyCFunction meth = PyCFunction_GET_FUNCTION(func); - PyObject *self = PyCFunction_GET_SELF(func); - int flags = PyCFunction_GET_FLAGS(func); - assert(PyCFunction_Check(func)); - assert(METH_FASTCALL == (flags & ~(METH_CLASS | METH_STATIC | METH_COEXIST | METH_KEYWORDS | METH_STACKLESS))); - assert(nargs >= 0); - assert(nargs == 0 || args != NULL); - /* _PyCFunction_FastCallDict() must not be called with an exception set, - because it may clear it (directly or indirectly) and so the - caller loses its exception */ - assert(!PyErr_Occurred()); - if ((PY_VERSION_HEX < 0x030700A0) || unlikely(flags & METH_KEYWORDS)) { - return (*((__Pyx_PyCFunctionFastWithKeywords)(void*)meth)) (self, args, nargs, NULL); - } else { - return (*((__Pyx_PyCFunctionFast)(void*)meth)) (self, args, nargs); - } -} -#endif - -/* PyFunctionFastCall */ -#if CYTHON_FAST_PYCALL -static PyObject* __Pyx_PyFunction_FastCallNoKw(PyCodeObject *co, PyObject **args, Py_ssize_t na, - PyObject *globals) { - PyFrameObject *f; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject **fastlocals; - Py_ssize_t i; - PyObject *result; - assert(globals != NULL); - /* XXX Perhaps we should create a specialized - PyFrame_New() that doesn't take locals, but does - take builtins without sanity checking them. - */ - assert(tstate != NULL); - f = PyFrame_New(tstate, co, globals, NULL); - if (f == NULL) { - return NULL; - } - fastlocals = __Pyx_PyFrame_GetLocalsplus(f); - for (i = 0; i < na; i++) { - Py_INCREF(*args); - fastlocals[i] = *args++; - } - result = PyEval_EvalFrameEx(f,0); - ++tstate->recursion_depth; - Py_DECREF(f); - --tstate->recursion_depth; - return result; -} -#if 1 || PY_VERSION_HEX < 0x030600B1 -static PyObject *__Pyx_PyFunction_FastCallDict(PyObject *func, PyObject **args, Py_ssize_t nargs, PyObject *kwargs) { - PyCodeObject *co = (PyCodeObject *)PyFunction_GET_CODE(func); - PyObject *globals = PyFunction_GET_GLOBALS(func); - PyObject *argdefs = PyFunction_GET_DEFAULTS(func); - PyObject *closure; -#if PY_MAJOR_VERSION >= 3 - PyObject *kwdefs; -#endif - PyObject *kwtuple, **k; - PyObject **d; - Py_ssize_t nd; - Py_ssize_t nk; - PyObject *result; - assert(kwargs == NULL || PyDict_Check(kwargs)); - nk = kwargs ? 
PyDict_Size(kwargs) : 0; - if (Py_EnterRecursiveCall((char*)" while calling a Python object")) { - return NULL; - } - if ( -#if PY_MAJOR_VERSION >= 3 - co->co_kwonlyargcount == 0 && -#endif - likely(kwargs == NULL || nk == 0) && - co->co_flags == (CO_OPTIMIZED | CO_NEWLOCALS | CO_NOFREE)) { - if (argdefs == NULL && co->co_argcount == nargs) { - result = __Pyx_PyFunction_FastCallNoKw(co, args, nargs, globals); - goto done; - } - else if (nargs == 0 && argdefs != NULL - && co->co_argcount == Py_SIZE(argdefs)) { - /* function called with no arguments, but all parameters have - a default value: use default values as arguments .*/ - args = &PyTuple_GET_ITEM(argdefs, 0); - result =__Pyx_PyFunction_FastCallNoKw(co, args, Py_SIZE(argdefs), globals); - goto done; - } - } - if (kwargs != NULL) { - Py_ssize_t pos, i; - kwtuple = PyTuple_New(2 * nk); - if (kwtuple == NULL) { - result = NULL; - goto done; - } - k = &PyTuple_GET_ITEM(kwtuple, 0); - pos = i = 0; - while (PyDict_Next(kwargs, &pos, &k[i], &k[i+1])) { - Py_INCREF(k[i]); - Py_INCREF(k[i+1]); - i += 2; - } - nk = i / 2; - } - else { - kwtuple = NULL; - k = NULL; - } - closure = PyFunction_GET_CLOSURE(func); -#if PY_MAJOR_VERSION >= 3 - kwdefs = PyFunction_GET_KW_DEFAULTS(func); -#endif - if (argdefs != NULL) { - d = &PyTuple_GET_ITEM(argdefs, 0); - nd = Py_SIZE(argdefs); - } - else { - d = NULL; - nd = 0; - } -#if PY_MAJOR_VERSION >= 3 - result = PyEval_EvalCodeEx((PyObject*)co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, kwdefs, closure); -#else - result = PyEval_EvalCodeEx(co, globals, (PyObject *)NULL, - args, (int)nargs, - k, (int)nk, - d, (int)nd, closure); -#endif - Py_XDECREF(kwtuple); -done: - Py_LeaveRecursiveCall(); - return result; -} -#endif -#endif - -/* PyObjectCall2Args */ -static CYTHON_UNUSED PyObject* __Pyx_PyObject_Call2Args(PyObject* function, PyObject* arg1, PyObject* arg2) { - PyObject *args, *result = NULL; - #if CYTHON_FAST_PYCALL - if (PyFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyFunction_FastCall(function, args, 2); - } - #endif - #if CYTHON_FAST_PYCCALL - if (__Pyx_PyFastCFunction_Check(function)) { - PyObject *args[2] = {arg1, arg2}; - return __Pyx_PyCFunction_FastCall(function, args, 2); - } - #endif - args = PyTuple_New(2); - if (unlikely(!args)) goto done; - Py_INCREF(arg1); - PyTuple_SET_ITEM(args, 0, arg1); - Py_INCREF(arg2); - PyTuple_SET_ITEM(args, 1, arg2); - Py_INCREF(function); - result = __Pyx_PyObject_Call(function, args, NULL); - Py_DECREF(args); - Py_DECREF(function); -done: - return result; -} - -/* PyObjectCallMethO */ -#if CYTHON_COMPILING_IN_CPYTHON -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallMethO(PyObject *func, PyObject *arg) { - PyObject *self, *result; - PyCFunction cfunc; - cfunc = PyCFunction_GET_FUNCTION(func); - self = PyCFunction_GET_SELF(func); - if (unlikely(Py_EnterRecursiveCall((char*)" while calling a Python object"))) - return NULL; - result = cfunc(self, arg); - Py_LeaveRecursiveCall(); - if (unlikely(!result) && unlikely(!PyErr_Occurred())) { - PyErr_SetString( - PyExc_SystemError, - "NULL result without error in PyObject_Call"); - } - return result; -} -#endif - -/* PyObjectCallOneArg */ -#if CYTHON_COMPILING_IN_CPYTHON -static PyObject* __Pyx__PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_New(1); - if (unlikely(!args)) return NULL; - Py_INCREF(arg); - PyTuple_SET_ITEM(args, 0, arg); - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); 
- return result; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { -#if CYTHON_FAST_PYCALL - if (PyFunction_Check(func)) { - return __Pyx_PyFunction_FastCall(func, &arg, 1); - } -#endif - if (likely(PyCFunction_Check(func))) { - if (likely(PyCFunction_GET_FLAGS(func) & METH_O)) { - return __Pyx_PyObject_CallMethO(func, arg); -#if CYTHON_FAST_PYCCALL - } else if (__Pyx_PyFastCFunction_Check(func)) { - return __Pyx_PyCFunction_FastCall(func, &arg, 1); -#endif - } - } - return __Pyx__PyObject_CallOneArg(func, arg); -} -#else -static CYTHON_INLINE PyObject* __Pyx_PyObject_CallOneArg(PyObject *func, PyObject *arg) { - PyObject *result; - PyObject *args = PyTuple_Pack(1, arg); - if (unlikely(!args)) return NULL; - result = __Pyx_PyObject_Call(func, args, NULL); - Py_DECREF(args); - return result; -} -#endif - -/* BytesEquals */ -static CYTHON_INLINE int __Pyx_PyBytes_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else - if (s1 == s2) { - return (equals == Py_EQ); - } else if (PyBytes_CheckExact(s1) & PyBytes_CheckExact(s2)) { - const char *ps1, *ps2; - Py_ssize_t length = PyBytes_GET_SIZE(s1); - if (length != PyBytes_GET_SIZE(s2)) - return (equals == Py_NE); - ps1 = PyBytes_AS_STRING(s1); - ps2 = PyBytes_AS_STRING(s2); - if (ps1[0] != ps2[0]) { - return (equals == Py_NE); - } else if (length == 1) { - return (equals == Py_EQ); - } else { - int result; -#if CYTHON_USE_UNICODE_INTERNALS && (PY_VERSION_HEX < 0x030B0000) - Py_hash_t hash1, hash2; - hash1 = ((PyBytesObject*)s1)->ob_shash; - hash2 = ((PyBytesObject*)s2)->ob_shash; - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - return (equals == Py_NE); - } -#endif - result = memcmp(ps1, ps2, (size_t)length); - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & PyBytes_CheckExact(s2)) { - return (equals == Py_NE); - } else if ((s2 == Py_None) & PyBytes_CheckExact(s1)) { - return (equals == Py_NE); - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -#endif -} - -/* UnicodeEquals */ -static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int equals) { -#if CYTHON_COMPILING_IN_PYPY - return PyObject_RichCompareBool(s1, s2, equals); -#else -#if PY_MAJOR_VERSION < 3 - PyObject* owned_ref = NULL; -#endif - int s1_is_unicode, s2_is_unicode; - if (s1 == s2) { - goto return_eq; - } - s1_is_unicode = PyUnicode_CheckExact(s1); - s2_is_unicode = PyUnicode_CheckExact(s2); -#if PY_MAJOR_VERSION < 3 - if ((s1_is_unicode & (!s2_is_unicode)) && PyString_CheckExact(s2)) { - owned_ref = PyUnicode_FromObject(s2); - if (unlikely(!owned_ref)) - return -1; - s2 = owned_ref; - s2_is_unicode = 1; - } else if ((s2_is_unicode & (!s1_is_unicode)) && PyString_CheckExact(s1)) { - owned_ref = PyUnicode_FromObject(s1); - if (unlikely(!owned_ref)) - return -1; - s1 = owned_ref; - s1_is_unicode = 1; - } else if (((!s2_is_unicode) & (!s1_is_unicode))) { - return __Pyx_PyBytes_Equals(s1, s2, equals); - } -#endif - if (s1_is_unicode & s2_is_unicode) { - Py_ssize_t length; - int kind; - void *data1, *data2; - if (unlikely(__Pyx_PyUnicode_READY(s1) < 0) || unlikely(__Pyx_PyUnicode_READY(s2) < 0)) - return -1; - length = __Pyx_PyUnicode_GET_LENGTH(s1); - if (length != __Pyx_PyUnicode_GET_LENGTH(s2)) { - goto return_ne; - } -#if CYTHON_USE_UNICODE_INTERNALS - { - Py_hash_t hash1, hash2; - #if CYTHON_PEP393_ENABLED - hash1 = ((PyASCIIObject*)s1)->hash; - hash2 = ((PyASCIIObject*)s2)->hash; - #else - hash1 = ((PyUnicodeObject*)s1)->hash; - hash2 = ((PyUnicodeObject*)s2)->hash; - #endif - if (hash1 != hash2 && hash1 != -1 && hash2 != -1) { - goto return_ne; - } - } -#endif - kind = __Pyx_PyUnicode_KIND(s1); - if (kind != __Pyx_PyUnicode_KIND(s2)) { - goto return_ne; - } - data1 = __Pyx_PyUnicode_DATA(s1); - data2 = __Pyx_PyUnicode_DATA(s2); - if (__Pyx_PyUnicode_READ(kind, data1, 0) != __Pyx_PyUnicode_READ(kind, data2, 0)) { - goto return_ne; - } else if (length == 1) { - goto return_eq; - } else { - int result = memcmp(data1, data2, (size_t)(length * kind)); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ) ? 
(result == 0) : (result != 0); - } - } else if ((s1 == Py_None) & s2_is_unicode) { - goto return_ne; - } else if ((s2 == Py_None) & s1_is_unicode) { - goto return_ne; - } else { - int result; - PyObject* py_result = PyObject_RichCompare(s1, s2, equals); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - if (!py_result) - return -1; - result = __Pyx_PyObject_IsTrue(py_result); - Py_DECREF(py_result); - return result; - } -return_eq: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_EQ); -return_ne: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(owned_ref); - #endif - return (equals == Py_NE); -#endif -} - -/* DivInt[Py_ssize_t] */ -static CYTHON_INLINE Py_ssize_t __Pyx_div_Py_ssize_t(Py_ssize_t a, Py_ssize_t b) { - Py_ssize_t q = a / b; - Py_ssize_t r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* GetAttr */ -static CYTHON_INLINE PyObject *__Pyx_GetAttr(PyObject *o, PyObject *n) { -#if CYTHON_USE_TYPE_SLOTS -#if PY_MAJOR_VERSION >= 3 - if (likely(PyUnicode_Check(n))) -#else - if (likely(PyString_Check(n))) -#endif - return __Pyx_PyObject_GetAttrStr(o, n); -#endif - return PyObject_GetAttr(o, n); -} - -/* GetItemInt */ -static PyObject *__Pyx_GetItemInt_Generic(PyObject *o, PyObject* j) { - PyObject *r; - if (!j) return NULL; - r = PyObject_GetItem(o, j); - Py_DECREF(j); - return r; -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_List_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyList_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyList_GET_SIZE(o)))) { - PyObject *r = PyList_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Tuple_Fast(PyObject *o, Py_ssize_t i, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS - Py_ssize_t wrapped_i = i; - if (wraparound & unlikely(i < 0)) { - wrapped_i += PyTuple_GET_SIZE(o); - } - if ((!boundscheck) || likely(__Pyx_is_valid_index(wrapped_i, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, wrapped_i); - Py_INCREF(r); - return r; - } - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -#else - return PySequence_GetItem(o, i); -#endif -} -static CYTHON_INLINE PyObject *__Pyx_GetItemInt_Fast(PyObject *o, Py_ssize_t i, int is_list, - CYTHON_NCP_UNUSED int wraparound, - CYTHON_NCP_UNUSED int boundscheck) { -#if CYTHON_ASSUME_SAFE_MACROS && !CYTHON_AVOID_BORROWED_REFS && CYTHON_USE_TYPE_SLOTS - if (is_list || PyList_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? i : i + PyList_GET_SIZE(o); - if ((!boundscheck) || (likely(__Pyx_is_valid_index(n, PyList_GET_SIZE(o))))) { - PyObject *r = PyList_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } - else if (PyTuple_CheckExact(o)) { - Py_ssize_t n = ((!wraparound) | likely(i >= 0)) ? 
i : i + PyTuple_GET_SIZE(o); - if ((!boundscheck) || likely(__Pyx_is_valid_index(n, PyTuple_GET_SIZE(o)))) { - PyObject *r = PyTuple_GET_ITEM(o, n); - Py_INCREF(r); - return r; - } - } else { - PySequenceMethods *m = Py_TYPE(o)->tp_as_sequence; - if (likely(m && m->sq_item)) { - if (wraparound && unlikely(i < 0) && likely(m->sq_length)) { - Py_ssize_t l = m->sq_length(o); - if (likely(l >= 0)) { - i += l; - } else { - if (!PyErr_ExceptionMatches(PyExc_OverflowError)) - return NULL; - PyErr_Clear(); - } - } - return m->sq_item(o, i); - } - } -#else - if (is_list || PySequence_Check(o)) { - return PySequence_GetItem(o, i); - } -#endif - return __Pyx_GetItemInt_Generic(o, PyInt_FromSsize_t(i)); -} - -/* ObjectGetItem */ -#if CYTHON_USE_TYPE_SLOTS -static PyObject *__Pyx_PyObject_GetIndex(PyObject *obj, PyObject* index) { - PyObject *runerr; - Py_ssize_t key_value; - PySequenceMethods *m = Py_TYPE(obj)->tp_as_sequence; - if (unlikely(!(m && m->sq_item))) { - PyErr_Format(PyExc_TypeError, "'%.200s' object is not subscriptable", Py_TYPE(obj)->tp_name); - return NULL; - } - key_value = __Pyx_PyIndex_AsSsize_t(index); - if (likely(key_value != -1 || !(runerr = PyErr_Occurred()))) { - return __Pyx_GetItemInt_Fast(obj, key_value, 0, 1, 1); - } - if (PyErr_GivenExceptionMatches(runerr, PyExc_OverflowError)) { - PyErr_Clear(); - PyErr_Format(PyExc_IndexError, "cannot fit '%.200s' into an index-sized integer", Py_TYPE(index)->tp_name); - } - return NULL; -} -static PyObject *__Pyx_PyObject_GetItem(PyObject *obj, PyObject* key) { - PyMappingMethods *m = Py_TYPE(obj)->tp_as_mapping; - if (likely(m && m->mp_subscript)) { - return m->mp_subscript(obj, key); - } - return __Pyx_PyObject_GetIndex(obj, key); -} -#endif - -/* decode_c_string */ -static CYTHON_INLINE PyObject* __Pyx_decode_c_string( - const char* cstring, Py_ssize_t start, Py_ssize_t stop, - const char* encoding, const char* errors, - PyObject* (*decode_func)(const char *s, Py_ssize_t size, const char *errors)) { - Py_ssize_t length; - if (unlikely((start < 0) | (stop < 0))) { - size_t slen = strlen(cstring); - if (unlikely(slen > (size_t) PY_SSIZE_T_MAX)) { - PyErr_SetString(PyExc_OverflowError, - "c-string too long to convert to Python"); - return NULL; - } - length = (Py_ssize_t) slen; - if (start < 0) { - start += length; - if (start < 0) - start = 0; - } - if (stop < 0) - stop += length; - } - if (unlikely(stop <= start)) - return __Pyx_NewRef(__pyx_empty_unicode); - length = stop - start; - cstring += start; - if (decode_func) { - return decode_func(cstring, length, errors); - } else { - return PyUnicode_Decode(cstring, length, encoding, errors); - } -} - -/* PyErrExceptionMatches */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx_PyErr_ExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; icurexc_type; - if (exc_type == err) return 1; - if (unlikely(!exc_type)) return 0; - if (unlikely(PyTuple_Check(err))) - return __Pyx_PyErr_ExceptionMatchesTuple(exc_type, err); - return __Pyx_PyErr_GivenExceptionMatches(exc_type, err); -} -#endif - -/* GetAttr3 */ -static PyObject *__Pyx_GetAttr3Default(PyObject *d) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (unlikely(!__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - return NULL; - __Pyx_PyErr_Clear(); - Py_INCREF(d); - return d; -} -static CYTHON_INLINE PyObject *__Pyx_GetAttr3(PyObject *o, PyObject *n, PyObject *d) { - PyObject *r = __Pyx_GetAttr(o, n); - return (likely(r)) ? 
r : __Pyx_GetAttr3Default(d); -} - -/* PyDictVersioning */ -#if CYTHON_USE_DICT_VERSIONS && CYTHON_USE_TYPE_SLOTS -static CYTHON_INLINE PY_UINT64_T __Pyx_get_tp_dict_version(PyObject *obj) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - return likely(dict) ? __PYX_GET_DICT_VERSION(dict) : 0; -} -static CYTHON_INLINE PY_UINT64_T __Pyx_get_object_dict_version(PyObject *obj) { - PyObject **dictptr = NULL; - Py_ssize_t offset = Py_TYPE(obj)->tp_dictoffset; - if (offset) { -#if CYTHON_COMPILING_IN_CPYTHON - dictptr = (likely(offset > 0)) ? (PyObject **) ((char *)obj + offset) : _PyObject_GetDictPtr(obj); -#else - dictptr = _PyObject_GetDictPtr(obj); -#endif - } - return (dictptr && *dictptr) ? __PYX_GET_DICT_VERSION(*dictptr) : 0; -} -static CYTHON_INLINE int __Pyx_object_dict_version_matches(PyObject* obj, PY_UINT64_T tp_dict_version, PY_UINT64_T obj_dict_version) { - PyObject *dict = Py_TYPE(obj)->tp_dict; - if (unlikely(!dict) || unlikely(tp_dict_version != __PYX_GET_DICT_VERSION(dict))) - return 0; - return obj_dict_version == __Pyx_get_object_dict_version(obj); -} -#endif - -/* GetModuleGlobalName */ -#if CYTHON_USE_DICT_VERSIONS -static PyObject *__Pyx__GetModuleGlobalName(PyObject *name, PY_UINT64_T *dict_version, PyObject **dict_cached_value) -#else -static CYTHON_INLINE PyObject *__Pyx__GetModuleGlobalName(PyObject *name) -#endif -{ - PyObject *result; -#if !CYTHON_AVOID_BORROWED_REFS -#if CYTHON_COMPILING_IN_CPYTHON && PY_VERSION_HEX >= 0x030500A1 - result = _PyDict_GetItem_KnownHash(__pyx_d, name, ((PyASCIIObject *) name)->hash); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } else if (unlikely(PyErr_Occurred())) { - return NULL; - } -#else - result = PyDict_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } -#endif -#else - result = PyObject_GetItem(__pyx_d, name); - __PYX_UPDATE_DICT_CACHE(__pyx_d, result, *dict_cached_value, *dict_version) - if (likely(result)) { - return __Pyx_NewRef(result); - } - PyErr_Clear(); -#endif - return __Pyx_GetBuiltinName(name); -} - -/* RaiseTooManyValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseTooManyValuesError(Py_ssize_t expected) { - PyErr_Format(PyExc_ValueError, - "too many values to unpack (expected %" CYTHON_FORMAT_SSIZE_T "d)", expected); -} - -/* RaiseNeedMoreValuesToUnpack */ -static CYTHON_INLINE void __Pyx_RaiseNeedMoreValuesError(Py_ssize_t index) { - PyErr_Format(PyExc_ValueError, - "need more than %" CYTHON_FORMAT_SSIZE_T "d value%.1s to unpack", - index, (index == 1) ? 
"" : "s"); -} - -/* RaiseNoneIterError */ -static CYTHON_INLINE void __Pyx_RaiseNoneNotIterableError(void) { - PyErr_SetString(PyExc_TypeError, "'NoneType' object is not iterable"); -} - -/* ExtTypeTest */ -static CYTHON_INLINE int __Pyx_TypeTest(PyObject *obj, PyTypeObject *type) { - if (unlikely(!type)) { - PyErr_SetString(PyExc_SystemError, "Missing type object"); - return 0; - } - if (likely(__Pyx_TypeCheck(obj, type))) - return 1; - PyErr_Format(PyExc_TypeError, "Cannot convert %.200s to %.200s", - Py_TYPE(obj)->tp_name, type->tp_name); - return 0; -} - -/* GetTopmostException */ -#if CYTHON_USE_EXC_INFO_STACK -static _PyErr_StackItem * -__Pyx_PyErr_GetTopmostException(PyThreadState *tstate) -{ - _PyErr_StackItem *exc_info = tstate->exc_info; - while ((exc_info->exc_type == NULL || exc_info->exc_type == Py_None) && - exc_info->previous_item != NULL) - { - exc_info = exc_info->previous_item; - } - return exc_info; -} -#endif - -/* SaveResetException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = __Pyx_PyErr_GetTopmostException(tstate); - *type = exc_info->exc_type; - *value = exc_info->exc_value; - *tb = exc_info->exc_traceback; - #else - *type = tstate->exc_type; - *value = tstate->exc_value; - *tb = tstate->exc_traceback; - #endif - Py_XINCREF(*type); - Py_XINCREF(*value); - Py_XINCREF(*tb); -} -static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = type; - exc_info->exc_value = value; - exc_info->exc_traceback = tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = type; - tstate->exc_value = value; - tstate->exc_traceback = tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -} -#endif - -/* GetException */ -#if CYTHON_FAST_THREAD_STATE -static int __Pyx__GetException(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) -#else -static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb) -#endif -{ - PyObject *local_type, *local_value, *local_tb; -#if CYTHON_FAST_THREAD_STATE - PyObject *tmp_type, *tmp_value, *tmp_tb; - local_type = tstate->curexc_type; - local_value = tstate->curexc_value; - local_tb = tstate->curexc_traceback; - tstate->curexc_type = 0; - tstate->curexc_value = 0; - tstate->curexc_traceback = 0; -#else - PyErr_Fetch(&local_type, &local_value, &local_tb); -#endif - PyErr_NormalizeException(&local_type, &local_value, &local_tb); -#if CYTHON_FAST_THREAD_STATE - if (unlikely(tstate->curexc_type)) -#else - if (unlikely(PyErr_Occurred())) -#endif - goto bad; - #if PY_MAJOR_VERSION >= 3 - if (local_tb) { - if (unlikely(PyException_SetTraceback(local_value, local_tb) < 0)) - goto bad; - } - #endif - Py_XINCREF(local_tb); - Py_XINCREF(local_type); - Py_XINCREF(local_value); - *type = local_type; - *value = local_value; - *tb = local_tb; -#if CYTHON_FAST_THREAD_STATE - #if CYTHON_USE_EXC_INFO_STACK - { - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = 
exc_info->exc_traceback; - exc_info->exc_type = local_type; - exc_info->exc_value = local_value; - exc_info->exc_traceback = local_tb; - } - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = local_type; - tstate->exc_value = local_value; - tstate->exc_traceback = local_tb; - #endif - Py_XDECREF(tmp_type); - Py_XDECREF(tmp_value); - Py_XDECREF(tmp_tb); -#else - PyErr_SetExcInfo(local_type, local_value, local_tb); -#endif - return 0; -bad: - *type = 0; - *value = 0; - *tb = 0; - Py_XDECREF(local_type); - Py_XDECREF(local_value); - Py_XDECREF(local_tb); - return -1; -} - -/* SwapException */ -#if CYTHON_FAST_THREAD_STATE -static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - #if CYTHON_USE_EXC_INFO_STACK - _PyErr_StackItem *exc_info = tstate->exc_info; - tmp_type = exc_info->exc_type; - tmp_value = exc_info->exc_value; - tmp_tb = exc_info->exc_traceback; - exc_info->exc_type = *type; - exc_info->exc_value = *value; - exc_info->exc_traceback = *tb; - #else - tmp_type = tstate->exc_type; - tmp_value = tstate->exc_value; - tmp_tb = tstate->exc_traceback; - tstate->exc_type = *type; - tstate->exc_value = *value; - tstate->exc_traceback = *tb; - #endif - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#else -static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value, PyObject **tb) { - PyObject *tmp_type, *tmp_value, *tmp_tb; - PyErr_GetExcInfo(&tmp_type, &tmp_value, &tmp_tb); - PyErr_SetExcInfo(*type, *value, *tb); - *type = tmp_type; - *value = tmp_value; - *tb = tmp_tb; -} -#endif - -/* Import */ -static PyObject *__Pyx_Import(PyObject *name, PyObject *from_list, int level) { - PyObject *empty_list = 0; - PyObject *module = 0; - PyObject *global_dict = 0; - PyObject *empty_dict = 0; - PyObject *list; - #if PY_MAJOR_VERSION < 3 - PyObject *py_import; - py_import = __Pyx_PyObject_GetAttrStr(__pyx_b, __pyx_n_s_import); - if (!py_import) - goto bad; - #endif - if (from_list) - list = from_list; - else { - empty_list = PyList_New(0); - if (!empty_list) - goto bad; - list = empty_list; - } - global_dict = PyModule_GetDict(__pyx_m); - if (!global_dict) - goto bad; - empty_dict = PyDict_New(); - if (!empty_dict) - goto bad; - { - #if PY_MAJOR_VERSION >= 3 - if (level == -1) { - if ((1) && (strchr(__Pyx_MODULE_NAME, '.'))) { - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, 1); - if (!module) { - if (!PyErr_ExceptionMatches(PyExc_ImportError)) - goto bad; - PyErr_Clear(); - } - } - level = 0; - } - #endif - if (!module) { - #if PY_MAJOR_VERSION < 3 - PyObject *py_level = PyInt_FromLong(level); - if (!py_level) - goto bad; - module = PyObject_CallFunctionObjArgs(py_import, - name, global_dict, empty_dict, list, py_level, (PyObject *)NULL); - Py_DECREF(py_level); - #else - module = PyImport_ImportModuleLevelObject( - name, global_dict, empty_dict, list, level); - #endif - } - } -bad: - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_import); - #endif - Py_XDECREF(empty_list); - Py_XDECREF(empty_dict); - return module; -} - -/* FastTypeChecks */ -#if CYTHON_COMPILING_IN_CPYTHON -static int __Pyx_InBases(PyTypeObject *a, PyTypeObject *b) { - while (a) { - a = a->tp_base; - if (a == b) - return 1; - } - return b == &PyBaseObject_Type; -} -static CYTHON_INLINE int __Pyx_IsSubtype(PyTypeObject *a, PyTypeObject *b) { - PyObject *mro; - if (a == b) return 1; - mro = 
a->tp_mro; - if (likely(mro)) { - Py_ssize_t i, n; - n = PyTuple_GET_SIZE(mro); - for (i = 0; i < n; i++) { - if (PyTuple_GET_ITEM(mro, i) == (PyObject *)b) - return 1; - } - return 0; - } - return __Pyx_InBases(a, b); -} -#if PY_MAJOR_VERSION == 2 -static int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject* exc_type2) { - PyObject *exception, *value, *tb; - int res; - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - __Pyx_ErrFetch(&exception, &value, &tb); - res = exc_type1 ? PyObject_IsSubclass(err, exc_type1) : 0; - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - if (!res) { - res = PyObject_IsSubclass(err, exc_type2); - if (unlikely(res == -1)) { - PyErr_WriteUnraisable(err); - res = 0; - } - } - __Pyx_ErrRestore(exception, value, tb); - return res; -} -#else -static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err, PyObject* exc_type1, PyObject *exc_type2) { - int res = exc_type1 ? __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type1) : 0; - if (!res) { - res = __Pyx_IsSubtype((PyTypeObject*)err, (PyTypeObject*)exc_type2); - } - return res; -} -#endif -static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) { - Py_ssize_t i, n; - assert(PyExceptionClass_Check(exc_type)); - n = PyTuple_GET_SIZE(tuple); -#if PY_MAJOR_VERSION >= 3 - for (i=0; i= 0 || (x^b) >= 0)) - return PyInt_FromLong(x); - return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - #endif - #if CYTHON_USE_PYLONG_INTERNALS - if (likely(PyLong_CheckExact(op1))) { - const long b = intval; - long a, x; -#ifdef HAVE_LONG_LONG - const PY_LONG_LONG llb = intval; - PY_LONG_LONG lla, llx; -#endif - const digit* digits = ((PyLongObject*)op1)->ob_digit; - const Py_ssize_t size = Py_SIZE(op1); - if (likely(__Pyx_sst_abs(size) <= 1)) { - a = likely(size) ? 
digits[0] : 0; - if (size == -1) a = -a; - } else { - switch (size) { - case -2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 2: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - a = (long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 2 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 3: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - a = (long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 3 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case -4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = -(PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - case 4: - if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - a = (long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0])); - break; -#ifdef HAVE_LONG_LONG - } else if (8 * sizeof(PY_LONG_LONG) - 1 > 4 * PyLong_SHIFT) { - lla = (PY_LONG_LONG) (((((((((unsigned PY_LONG_LONG)digits[3]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[2]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[1]) << PyLong_SHIFT) | (unsigned PY_LONG_LONG)digits[0])); - goto long_long; -#endif - } - CYTHON_FALLTHROUGH; - default: return PyLong_Type.tp_as_number->nb_add(op1, op2); - } - } - x = a + b; - return PyLong_FromLong(x); -#ifdef HAVE_LONG_LONG - long_long: - llx = lla + llb; - return PyLong_FromLongLong(llx); -#endif - - - } - #endif - if (PyFloat_CheckExact(op1)) { - const long b = intval; - double a = PyFloat_AS_DOUBLE(op1); - double result; - PyFPE_START_PROTECT("add", return NULL) - 
result = ((double)a) + (double)b; - PyFPE_END_PROTECT(result) - return PyFloat_FromDouble(result); - } - return (inplace ? PyNumber_InPlaceAdd : PyNumber_Add)(op1, op2); -} -#endif - -/* DivInt[long] */ -static CYTHON_INLINE long __Pyx_div_long(long a, long b) { - long q = a / b; - long r = a - q*b; - q -= ((r != 0) & ((r ^ b) < 0)); - return q; -} - -/* ImportFrom */ -static PyObject* __Pyx_ImportFrom(PyObject* module, PyObject* name) { - PyObject* value = __Pyx_PyObject_GetAttrStr(module, name); - if (unlikely(!value) && PyErr_ExceptionMatches(PyExc_AttributeError)) { - PyErr_Format(PyExc_ImportError, - #if PY_MAJOR_VERSION < 3 - "cannot import name %.230s", PyString_AS_STRING(name)); - #else - "cannot import name %S", name); - #endif - } - return value; -} - -/* HasAttr */ -static CYTHON_INLINE int __Pyx_HasAttr(PyObject *o, PyObject *n) { - PyObject *r; - if (unlikely(!__Pyx_PyBaseString_Check(n))) { - PyErr_SetString(PyExc_TypeError, - "hasattr(): attribute name must be string"); - return -1; - } - r = __Pyx_GetAttr(o, n); - if (unlikely(!r)) { - PyErr_Clear(); - return 0; - } else { - Py_DECREF(r); - return 1; - } -} - -/* PyObject_GenericGetAttrNoDict */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject *__Pyx_RaiseGenericGetAttributeError(PyTypeObject *tp, PyObject *attr_name) { - PyErr_Format(PyExc_AttributeError, -#if PY_MAJOR_VERSION >= 3 - "'%.50s' object has no attribute '%U'", - tp->tp_name, attr_name); -#else - "'%.50s' object has no attribute '%.400s'", - tp->tp_name, PyString_AS_STRING(attr_name)); -#endif - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GenericGetAttrNoDict(PyObject* obj, PyObject* attr_name) { - PyObject *descr; - PyTypeObject *tp = Py_TYPE(obj); - if (unlikely(!PyString_Check(attr_name))) { - return PyObject_GenericGetAttr(obj, attr_name); - } - assert(!tp->tp_dictoffset); - descr = _PyType_Lookup(tp, attr_name); - if (unlikely(!descr)) { - return __Pyx_RaiseGenericGetAttributeError(tp, attr_name); - } - Py_INCREF(descr); - #if PY_MAJOR_VERSION < 3 - if (likely(PyType_HasFeature(Py_TYPE(descr), Py_TPFLAGS_HAVE_CLASS))) - #endif - { - descrgetfunc f = Py_TYPE(descr)->tp_descr_get; - if (unlikely(f)) { - PyObject *res = f(descr, obj, (PyObject *)tp); - Py_DECREF(descr); - return res; - } - } - return descr; -} -#endif - -/* PyObject_GenericGetAttr */ -#if CYTHON_USE_TYPE_SLOTS && CYTHON_USE_PYTYPE_LOOKUP && PY_VERSION_HEX < 0x03070000 -static PyObject* __Pyx_PyObject_GenericGetAttr(PyObject* obj, PyObject* attr_name) { - if (unlikely(Py_TYPE(obj)->tp_dictoffset)) { - return PyObject_GenericGetAttr(obj, attr_name); - } - return __Pyx_PyObject_GenericGetAttrNoDict(obj, attr_name); -} -#endif - -/* SetVTable */ -static int __Pyx_SetVtable(PyObject *dict, void *vtable) { -#if PY_VERSION_HEX >= 0x02070000 - PyObject *ob = PyCapsule_New(vtable, 0, 0); -#else - PyObject *ob = PyCObject_FromVoidPtr(vtable, 0); -#endif - if (!ob) - goto bad; - if (PyDict_SetItem(dict, __pyx_n_s_pyx_vtable, ob) < 0) - goto bad; - Py_DECREF(ob); - return 0; -bad: - Py_XDECREF(ob); - return -1; -} - -/* PyObjectGetAttrStrNoError */ -static void __Pyx_PyObject_GetAttrStr_ClearAttributeError(void) { - __Pyx_PyThreadState_declare - __Pyx_PyThreadState_assign - if (likely(__Pyx_PyErr_ExceptionMatches(PyExc_AttributeError))) - __Pyx_PyErr_Clear(); -} -static CYTHON_INLINE PyObject* __Pyx_PyObject_GetAttrStrNoError(PyObject* obj, PyObject* attr_name) { - PyObject *result; -#if CYTHON_COMPILING_IN_CPYTHON && 
CYTHON_USE_TYPE_SLOTS && PY_VERSION_HEX >= 0x030700B1 - PyTypeObject* tp = Py_TYPE(obj); - if (likely(tp->tp_getattro == PyObject_GenericGetAttr)) { - return _PyObject_GenericGetAttrWithDict(obj, attr_name, NULL, 1); - } -#endif - result = __Pyx_PyObject_GetAttrStr(obj, attr_name); - if (unlikely(!result)) { - __Pyx_PyObject_GetAttrStr_ClearAttributeError(); - } - return result; -} - -/* SetupReduce */ -static int __Pyx_setup_reduce_is_named(PyObject* meth, PyObject* name) { - int ret; - PyObject *name_attr; - name_attr = __Pyx_PyObject_GetAttrStr(meth, __pyx_n_s_name_2); - if (likely(name_attr)) { - ret = PyObject_RichCompareBool(name_attr, name, Py_EQ); - } else { - ret = -1; - } - if (unlikely(ret < 0)) { - PyErr_Clear(); - ret = 0; - } - Py_XDECREF(name_attr); - return ret; -} -static int __Pyx_setup_reduce(PyObject* type_obj) { - int ret = 0; - PyObject *object_reduce = NULL; - PyObject *object_getstate = NULL; - PyObject *object_reduce_ex = NULL; - PyObject *reduce = NULL; - PyObject *reduce_ex = NULL; - PyObject *reduce_cython = NULL; - PyObject *setstate = NULL; - PyObject *setstate_cython = NULL; - PyObject *getstate = NULL; -#if CYTHON_USE_PYTYPE_LOOKUP - getstate = _PyType_Lookup((PyTypeObject*)type_obj, __pyx_n_s_getstate); -#else - getstate = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_getstate); - if (!getstate && PyErr_Occurred()) { - goto __PYX_BAD; - } -#endif - if (getstate) { -#if CYTHON_USE_PYTYPE_LOOKUP - object_getstate = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_getstate); -#else - object_getstate = __Pyx_PyObject_GetAttrStrNoError((PyObject*)&PyBaseObject_Type, __pyx_n_s_getstate); - if (!object_getstate && PyErr_Occurred()) { - goto __PYX_BAD; - } -#endif - if (object_getstate != getstate) { - goto __PYX_GOOD; - } - } -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce_ex = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#else - object_reduce_ex = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce_ex); if (!object_reduce_ex) goto __PYX_BAD; -#endif - reduce_ex = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce_ex); if (unlikely(!reduce_ex)) goto __PYX_BAD; - if (reduce_ex == object_reduce_ex) { -#if CYTHON_USE_PYTYPE_LOOKUP - object_reduce = _PyType_Lookup(&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#else - object_reduce = __Pyx_PyObject_GetAttrStr((PyObject*)&PyBaseObject_Type, __pyx_n_s_reduce); if (!object_reduce) goto __PYX_BAD; -#endif - reduce = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_reduce); if (unlikely(!reduce)) goto __PYX_BAD; - if (reduce == object_reduce || __Pyx_setup_reduce_is_named(reduce, __pyx_n_s_reduce_cython)) { - reduce_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_reduce_cython); - if (likely(reduce_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce, reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_reduce_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (reduce == object_reduce || PyErr_Occurred()) { - goto __PYX_BAD; - } - setstate = __Pyx_PyObject_GetAttrStr(type_obj, __pyx_n_s_setstate); - if (!setstate) PyErr_Clear(); - if (!setstate || __Pyx_setup_reduce_is_named(setstate, __pyx_n_s_setstate_cython)) { - setstate_cython = __Pyx_PyObject_GetAttrStrNoError(type_obj, __pyx_n_s_setstate_cython); - if (likely(setstate_cython)) { - ret = PyDict_SetItem(((PyTypeObject*)type_obj)->tp_dict, 
__pyx_n_s_setstate, setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - ret = PyDict_DelItem(((PyTypeObject*)type_obj)->tp_dict, __pyx_n_s_setstate_cython); if (unlikely(ret < 0)) goto __PYX_BAD; - } else if (!setstate || PyErr_Occurred()) { - goto __PYX_BAD; - } - } - PyType_Modified((PyTypeObject*)type_obj); - } - } - goto __PYX_GOOD; -__PYX_BAD: - if (!PyErr_Occurred()) - PyErr_Format(PyExc_RuntimeError, "Unable to initialize pickling for %s", ((PyTypeObject*)type_obj)->tp_name); - ret = -1; -__PYX_GOOD: -#if !CYTHON_USE_PYTYPE_LOOKUP - Py_XDECREF(object_reduce); - Py_XDECREF(object_reduce_ex); - Py_XDECREF(object_getstate); - Py_XDECREF(getstate); -#endif - Py_XDECREF(reduce); - Py_XDECREF(reduce_ex); - Py_XDECREF(reduce_cython); - Py_XDECREF(setstate); - Py_XDECREF(setstate_cython); - return ret; -} - -/* CLineInTraceback */ -#ifndef CYTHON_CLINE_IN_TRACEBACK -static int __Pyx_CLineForTraceback(CYTHON_NCP_UNUSED PyThreadState *tstate, int c_line) { - PyObject *use_cline; - PyObject *ptype, *pvalue, *ptraceback; -#if CYTHON_COMPILING_IN_CPYTHON - PyObject **cython_runtime_dict; -#endif - if (unlikely(!__pyx_cython_runtime)) { - return c_line; - } - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); -#if CYTHON_COMPILING_IN_CPYTHON - cython_runtime_dict = _PyObject_GetDictPtr(__pyx_cython_runtime); - if (likely(cython_runtime_dict)) { - __PYX_PY_DICT_LOOKUP_IF_MODIFIED( - use_cline, *cython_runtime_dict, - __Pyx_PyDict_GetItemStr(*cython_runtime_dict, __pyx_n_s_cline_in_traceback)) - } else -#endif - { - PyObject *use_cline_obj = __Pyx_PyObject_GetAttrStr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback); - if (use_cline_obj) { - use_cline = PyObject_Not(use_cline_obj) ? Py_False : Py_True; - Py_DECREF(use_cline_obj); - } else { - PyErr_Clear(); - use_cline = NULL; - } - } - if (!use_cline) { - c_line = 0; - (void) PyObject_SetAttr(__pyx_cython_runtime, __pyx_n_s_cline_in_traceback, Py_False); - } - else if (use_cline == Py_False || (use_cline != Py_True && PyObject_Not(use_cline) != 0)) { - c_line = 0; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - return c_line; -} -#endif - -/* CodeObjectCache */ -static int __pyx_bisect_code_objects(__Pyx_CodeObjectCacheEntry* entries, int count, int code_line) { - int start = 0, mid = 0, end = count - 1; - if (end >= 0 && code_line > entries[end].code_line) { - return count; - } - while (start < end) { - mid = start + (end - start) / 2; - if (code_line < entries[mid].code_line) { - end = mid; - } else if (code_line > entries[mid].code_line) { - start = mid + 1; - } else { - return mid; - } - } - if (code_line <= entries[mid].code_line) { - return mid; - } else { - return mid + 1; - } -} -static PyCodeObject *__pyx_find_code_object(int code_line) { - PyCodeObject* code_object; - int pos; - if (unlikely(!code_line) || unlikely(!__pyx_code_cache.entries)) { - return NULL; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if (unlikely(pos >= __pyx_code_cache.count) || unlikely(__pyx_code_cache.entries[pos].code_line != code_line)) { - return NULL; - } - code_object = __pyx_code_cache.entries[pos].code_object; - Py_INCREF(code_object); - return code_object; -} -static void __pyx_insert_code_object(int code_line, PyCodeObject* code_object) { - int pos, i; - __Pyx_CodeObjectCacheEntry* entries = __pyx_code_cache.entries; - if (unlikely(!code_line)) { - return; - } - if (unlikely(!entries)) { - entries = 
(__Pyx_CodeObjectCacheEntry*)PyMem_Malloc(64*sizeof(__Pyx_CodeObjectCacheEntry)); - if (likely(entries)) { - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = 64; - __pyx_code_cache.count = 1; - entries[0].code_line = code_line; - entries[0].code_object = code_object; - Py_INCREF(code_object); - } - return; - } - pos = __pyx_bisect_code_objects(__pyx_code_cache.entries, __pyx_code_cache.count, code_line); - if ((pos < __pyx_code_cache.count) && unlikely(__pyx_code_cache.entries[pos].code_line == code_line)) { - PyCodeObject* tmp = entries[pos].code_object; - entries[pos].code_object = code_object; - Py_DECREF(tmp); - return; - } - if (__pyx_code_cache.count == __pyx_code_cache.max_count) { - int new_max = __pyx_code_cache.max_count + 64; - entries = (__Pyx_CodeObjectCacheEntry*)PyMem_Realloc( - __pyx_code_cache.entries, ((size_t)new_max) * sizeof(__Pyx_CodeObjectCacheEntry)); - if (unlikely(!entries)) { - return; - } - __pyx_code_cache.entries = entries; - __pyx_code_cache.max_count = new_max; - } - for (i=__pyx_code_cache.count; i>pos; i--) { - entries[i] = entries[i-1]; - } - entries[pos].code_line = code_line; - entries[pos].code_object = code_object; - __pyx_code_cache.count++; - Py_INCREF(code_object); -} - -/* AddTraceback */ -#include "compile.h" -#include "frameobject.h" -#include "traceback.h" -#if PY_VERSION_HEX >= 0x030b00a6 - #ifndef Py_BUILD_CORE - #define Py_BUILD_CORE 1 - #endif - #include "internal/pycore_frame.h" -#endif -static PyCodeObject* __Pyx_CreateCodeObjectForTraceback( - const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = NULL; - PyObject *py_funcname = NULL; - #if PY_MAJOR_VERSION < 3 - PyObject *py_srcfile = NULL; - py_srcfile = PyString_FromString(filename); - if (!py_srcfile) goto bad; - #endif - if (c_line) { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - #else - py_funcname = PyUnicode_FromFormat( "%s (%s:%d)", funcname, __pyx_cfilenm, c_line); - if (!py_funcname) goto bad; - funcname = PyUnicode_AsUTF8(py_funcname); - if (!funcname) goto bad; - #endif - } - else { - #if PY_MAJOR_VERSION < 3 - py_funcname = PyString_FromString(funcname); - if (!py_funcname) goto bad; - #endif - } - #if PY_MAJOR_VERSION < 3 - py_code = __Pyx_PyCode_New( - 0, - 0, - 0, - 0, - 0, - __pyx_empty_bytes, /*PyObject *code,*/ - __pyx_empty_tuple, /*PyObject *consts,*/ - __pyx_empty_tuple, /*PyObject *names,*/ - __pyx_empty_tuple, /*PyObject *varnames,*/ - __pyx_empty_tuple, /*PyObject *freevars,*/ - __pyx_empty_tuple, /*PyObject *cellvars,*/ - py_srcfile, /*PyObject *filename,*/ - py_funcname, /*PyObject *name,*/ - py_line, - __pyx_empty_bytes /*PyObject *lnotab*/ - ); - Py_DECREF(py_srcfile); - #else - py_code = PyCode_NewEmpty(filename, funcname, py_line); - #endif - Py_XDECREF(py_funcname); // XDECREF since it's only set on Py3 if cline - return py_code; -bad: - Py_XDECREF(py_funcname); - #if PY_MAJOR_VERSION < 3 - Py_XDECREF(py_srcfile); - #endif - return NULL; -} -static void __Pyx_AddTraceback(const char *funcname, int c_line, - int py_line, const char *filename) { - PyCodeObject *py_code = 0; - PyFrameObject *py_frame = 0; - PyThreadState *tstate = __Pyx_PyThreadState_Current; - PyObject *ptype, *pvalue, *ptraceback; - if (c_line) { - c_line = __Pyx_CLineForTraceback(tstate, c_line); - } - py_code = __pyx_find_code_object(c_line ? 
-c_line : py_line); - if (!py_code) { - __Pyx_ErrFetchInState(tstate, &ptype, &pvalue, &ptraceback); - py_code = __Pyx_CreateCodeObjectForTraceback( - funcname, c_line, py_line, filename); - if (!py_code) { - /* If the code object creation fails, then we should clear the - fetched exception references and propagate the new exception */ - Py_XDECREF(ptype); - Py_XDECREF(pvalue); - Py_XDECREF(ptraceback); - goto bad; - } - __Pyx_ErrRestoreInState(tstate, ptype, pvalue, ptraceback); - __pyx_insert_code_object(c_line ? -c_line : py_line, py_code); - } - py_frame = PyFrame_New( - tstate, /*PyThreadState *tstate,*/ - py_code, /*PyCodeObject *code,*/ - __pyx_d, /*PyObject *globals,*/ - 0 /*PyObject *locals*/ - ); - if (!py_frame) goto bad; - __Pyx_PyFrame_SetLineNumber(py_frame, py_line); - PyTraceBack_Here(py_frame); -bad: - Py_XDECREF(py_code); - Py_XDECREF(py_frame); -} - -#if PY_MAJOR_VERSION < 3 -static int __Pyx_GetBuffer(PyObject *obj, Py_buffer *view, int flags) { - if (PyObject_CheckBuffer(obj)) return PyObject_GetBuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_array_type)) return __pyx_array_getbuffer(obj, view, flags); - if (__Pyx_TypeCheck(obj, __pyx_memoryview_type)) return __pyx_memoryview_getbuffer(obj, view, flags); - PyErr_Format(PyExc_TypeError, "'%.200s' does not have the buffer interface", Py_TYPE(obj)->tp_name); - return -1; -} -static void __Pyx_ReleaseBuffer(Py_buffer *view) { - PyObject *obj = view->obj; - if (!obj) return; - if (PyObject_CheckBuffer(obj)) { - PyBuffer_Release(view); - return; - } - if ((0)) {} - view->obj = NULL; - Py_DECREF(obj); -} -#endif - - -/* MemviewSliceIsContig */ -static int -__pyx_memviewslice_is_contig(const __Pyx_memviewslice mvs, char order, int ndim) -{ - int i, index, step, start; - Py_ssize_t itemsize = mvs.memview->view.itemsize; - if (order == 'F') { - step = 1; - start = 0; - } else { - step = -1; - start = ndim - 1; - } - for (i = 0; i < ndim; i++) { - index = start + step * i; - if (mvs.suboffsets[index] >= 0 || mvs.strides[index] != itemsize) - return 0; - itemsize *= mvs.shape[index]; - } - return 1; -} - -/* OverlappingSlices */ -static void -__pyx_get_array_memory_extents(__Pyx_memviewslice *slice, - void **out_start, void **out_end, - int ndim, size_t itemsize) -{ - char *start, *end; - int i; - start = end = slice->data; - for (i = 0; i < ndim; i++) { - Py_ssize_t stride = slice->strides[i]; - Py_ssize_t extent = slice->shape[i]; - if (extent == 0) { - *out_start = *out_end = start; - return; - } else { - if (stride > 0) - end += stride * (extent - 1); - else - start += stride * (extent - 1); - } - } - *out_start = start; - *out_end = end + itemsize; -} -static int -__pyx_slices_overlap(__Pyx_memviewslice *slice1, - __Pyx_memviewslice *slice2, - int ndim, size_t itemsize) -{ - void *start1, *end1, *start2, *end2; - __pyx_get_array_memory_extents(slice1, &start1, &end1, ndim, itemsize); - __pyx_get_array_memory_extents(slice2, &start2, &end2, ndim, itemsize); - return (start1 < end2) && (start2 < end1); -} - -/* Capsule */ -static CYTHON_INLINE PyObject * -__pyx_capsule_create(void *p, CYTHON_UNUSED const char *sig) -{ - PyObject *cobj; -#if PY_VERSION_HEX >= 0x02070000 - cobj = PyCapsule_New(p, sig, NULL); -#else - cobj = PyCObject_FromVoidPtr(p, NULL); -#endif - return cobj; -} - -/* IsLittleEndian */ -static CYTHON_INLINE int __Pyx_Is_Little_Endian(void) -{ - union { - uint32_t u32; - uint8_t u8[4]; - } S; - S.u32 = 0x01020304; - return S.u8[0] == 4; -} - -/* BufferFormatCheck */ -static void 
__Pyx_BufFmt_Init(__Pyx_BufFmt_Context* ctx, - __Pyx_BufFmt_StackElem* stack, - __Pyx_TypeInfo* type) { - stack[0].field = &ctx->root; - stack[0].parent_offset = 0; - ctx->root.type = type; - ctx->root.name = "buffer dtype"; - ctx->root.offset = 0; - ctx->head = stack; - ctx->head->field = &ctx->root; - ctx->fmt_offset = 0; - ctx->head->parent_offset = 0; - ctx->new_packmode = '@'; - ctx->enc_packmode = '@'; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->is_complex = 0; - ctx->is_valid_array = 0; - ctx->struct_alignment = 0; - while (type->typegroup == 'S') { - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = 0; - type = type->fields->type; - } -} -static int __Pyx_BufFmt_ParseNumber(const char** ts) { - int count; - const char* t = *ts; - if (*t < '0' || *t > '9') { - return -1; - } else { - count = *t++ - '0'; - while (*t >= '0' && *t <= '9') { - count *= 10; - count += *t++ - '0'; - } - } - *ts = t; - return count; -} -static int __Pyx_BufFmt_ExpectNumber(const char **ts) { - int number = __Pyx_BufFmt_ParseNumber(ts); - if (number == -1) - PyErr_Format(PyExc_ValueError,\ - "Does not understand character buffer dtype format string ('%c')", **ts); - return number; -} -static void __Pyx_BufFmt_RaiseUnexpectedChar(char ch) { - PyErr_Format(PyExc_ValueError, - "Unexpected format string character: '%c'", ch); -} -static const char* __Pyx_BufFmt_DescribeTypeChar(char ch, int is_complex) { - switch (ch) { - case '?': return "'bool'"; - case 'c': return "'char'"; - case 'b': return "'signed char'"; - case 'B': return "'unsigned char'"; - case 'h': return "'short'"; - case 'H': return "'unsigned short'"; - case 'i': return "'int'"; - case 'I': return "'unsigned int'"; - case 'l': return "'long'"; - case 'L': return "'unsigned long'"; - case 'q': return "'long long'"; - case 'Q': return "'unsigned long long'"; - case 'f': return (is_complex ? "'complex float'" : "'float'"); - case 'd': return (is_complex ? "'complex double'" : "'double'"); - case 'g': return (is_complex ? "'complex long double'" : "'long double'"); - case 'T': return "a struct"; - case 'O': return "Python object"; - case 'P': return "a pointer"; - case 's': case 'p': return "a string"; - case 0: return "end"; - default: return "unparseable format string"; - } -} -static size_t __Pyx_BufFmt_TypeCharToStandardSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return 2; - case 'i': case 'I': case 'l': case 'L': return 4; - case 'q': case 'Q': return 8; - case 'f': return (is_complex ? 8 : 4); - case 'd': return (is_complex ? 16 : 8); - case 'g': { - PyErr_SetString(PyExc_ValueError, "Python does not define a standard format string size for long double ('g').."); - return 0; - } - case 'O': case 'P': return sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static size_t __Pyx_BufFmt_TypeCharToNativeSize(char ch, int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(short); - case 'i': case 'I': return sizeof(int); - case 'l': case 'L': return sizeof(long); - #ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(PY_LONG_LONG); - #endif - case 'f': return sizeof(float) * (is_complex ? 2 : 1); - case 'd': return sizeof(double) * (is_complex ? 2 : 1); - case 'g': return sizeof(long double) * (is_complex ? 
2 : 1); - case 'O': case 'P': return sizeof(void*); - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -typedef struct { char c; short x; } __Pyx_st_short; -typedef struct { char c; int x; } __Pyx_st_int; -typedef struct { char c; long x; } __Pyx_st_long; -typedef struct { char c; float x; } __Pyx_st_float; -typedef struct { char c; double x; } __Pyx_st_double; -typedef struct { char c; long double x; } __Pyx_st_longdouble; -typedef struct { char c; void *x; } __Pyx_st_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { char c; PY_LONG_LONG x; } __Pyx_st_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToAlignment(char ch, CYTHON_UNUSED int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_st_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_st_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_st_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_st_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_st_float) - sizeof(float); - case 'd': return sizeof(__Pyx_st_double) - sizeof(double); - case 'g': return sizeof(__Pyx_st_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_st_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -/* These are for computing the padding at the end of the struct to align - on the first member of the struct. This will probably the same as above, - but we don't have any guarantees. - */ -typedef struct { short x; char c; } __Pyx_pad_short; -typedef struct { int x; char c; } __Pyx_pad_int; -typedef struct { long x; char c; } __Pyx_pad_long; -typedef struct { float x; char c; } __Pyx_pad_float; -typedef struct { double x; char c; } __Pyx_pad_double; -typedef struct { long double x; char c; } __Pyx_pad_longdouble; -typedef struct { void *x; char c; } __Pyx_pad_void_p; -#ifdef HAVE_LONG_LONG -typedef struct { PY_LONG_LONG x; char c; } __Pyx_pad_longlong; -#endif -static size_t __Pyx_BufFmt_TypeCharToPadding(char ch, CYTHON_UNUSED int is_complex) { - switch (ch) { - case '?': case 'c': case 'b': case 'B': case 's': case 'p': return 1; - case 'h': case 'H': return sizeof(__Pyx_pad_short) - sizeof(short); - case 'i': case 'I': return sizeof(__Pyx_pad_int) - sizeof(int); - case 'l': case 'L': return sizeof(__Pyx_pad_long) - sizeof(long); -#ifdef HAVE_LONG_LONG - case 'q': case 'Q': return sizeof(__Pyx_pad_longlong) - sizeof(PY_LONG_LONG); -#endif - case 'f': return sizeof(__Pyx_pad_float) - sizeof(float); - case 'd': return sizeof(__Pyx_pad_double) - sizeof(double); - case 'g': return sizeof(__Pyx_pad_longdouble) - sizeof(long double); - case 'P': case 'O': return sizeof(__Pyx_pad_void_p) - sizeof(void*); - default: - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } -} -static char __Pyx_BufFmt_TypeCharToGroup(char ch, int is_complex) { - switch (ch) { - case 'c': - return 'H'; - case 'b': case 'h': case 'i': - case 'l': case 'q': case 's': case 'p': - return 'I'; - case '?': case 'B': case 'H': case 'I': case 'L': case 'Q': - return 'U'; - case 'f': case 'd': case 'g': - return (is_complex ? 
'C' : 'R'); - case 'O': - return 'O'; - case 'P': - return 'P'; - default: { - __Pyx_BufFmt_RaiseUnexpectedChar(ch); - return 0; - } - } -} -static void __Pyx_BufFmt_RaiseExpected(__Pyx_BufFmt_Context* ctx) { - if (ctx->head == NULL || ctx->head->field == &ctx->root) { - const char* expected; - const char* quote; - if (ctx->head == NULL) { - expected = "end"; - quote = ""; - } else { - expected = ctx->head->field->type->name; - quote = "'"; - } - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected %s%s%s but got %s", - quote, expected, quote, - __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex)); - } else { - __Pyx_StructField* field = ctx->head->field; - __Pyx_StructField* parent = (ctx->head - 1)->field; - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch, expected '%s' but got %s in '%s.%s'", - field->type->name, __Pyx_BufFmt_DescribeTypeChar(ctx->enc_type, ctx->is_complex), - parent->type->name, field->name); - } -} -static int __Pyx_BufFmt_ProcessTypeChunk(__Pyx_BufFmt_Context* ctx) { - char group; - size_t size, offset, arraysize = 1; - if (ctx->enc_type == 0) return 0; - if (ctx->head->field->type->arraysize[0]) { - int i, ndim = 0; - if (ctx->enc_type == 's' || ctx->enc_type == 'p') { - ctx->is_valid_array = ctx->head->field->type->ndim == 1; - ndim = 1; - if (ctx->enc_count != ctx->head->field->type->arraysize[0]) { - PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %zu", - ctx->head->field->type->arraysize[0], ctx->enc_count); - return -1; - } - } - if (!ctx->is_valid_array) { - PyErr_Format(PyExc_ValueError, "Expected %d dimensions, got %d", - ctx->head->field->type->ndim, ndim); - return -1; - } - for (i = 0; i < ctx->head->field->type->ndim; i++) { - arraysize *= ctx->head->field->type->arraysize[i]; - } - ctx->is_valid_array = 0; - ctx->enc_count = 1; - } - group = __Pyx_BufFmt_TypeCharToGroup(ctx->enc_type, ctx->is_complex); - do { - __Pyx_StructField* field = ctx->head->field; - __Pyx_TypeInfo* type = field->type; - if (ctx->enc_packmode == '@' || ctx->enc_packmode == '^') { - size = __Pyx_BufFmt_TypeCharToNativeSize(ctx->enc_type, ctx->is_complex); - } else { - size = __Pyx_BufFmt_TypeCharToStandardSize(ctx->enc_type, ctx->is_complex); - } - if (ctx->enc_packmode == '@') { - size_t align_at = __Pyx_BufFmt_TypeCharToAlignment(ctx->enc_type, ctx->is_complex); - size_t align_mod_offset; - if (align_at == 0) return -1; - align_mod_offset = ctx->fmt_offset % align_at; - if (align_mod_offset > 0) ctx->fmt_offset += align_at - align_mod_offset; - if (ctx->struct_alignment == 0) - ctx->struct_alignment = __Pyx_BufFmt_TypeCharToPadding(ctx->enc_type, - ctx->is_complex); - } - if (type->size != size || type->typegroup != group) { - if (type->typegroup == 'C' && type->fields != NULL) { - size_t parent_offset = ctx->head->parent_offset + field->offset; - ++ctx->head; - ctx->head->field = type->fields; - ctx->head->parent_offset = parent_offset; - continue; - } - if ((type->typegroup == 'H' || group == 'H') && type->size == size) { - } else { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - } - offset = ctx->head->parent_offset + field->offset; - if (ctx->fmt_offset != offset) { - PyErr_Format(PyExc_ValueError, - "Buffer dtype mismatch; next field is at offset %" CYTHON_FORMAT_SSIZE_T "d but %" CYTHON_FORMAT_SSIZE_T "d expected", - (Py_ssize_t)ctx->fmt_offset, (Py_ssize_t)offset); - return -1; - } - ctx->fmt_offset += size; - if (arraysize) - ctx->fmt_offset += (arraysize - 1) * size; - --ctx->enc_count; - while (1) { - if 
(field == &ctx->root) { - ctx->head = NULL; - if (ctx->enc_count != 0) { - __Pyx_BufFmt_RaiseExpected(ctx); - return -1; - } - break; - } - ctx->head->field = ++field; - if (field->type == NULL) { - --ctx->head; - field = ctx->head->field; - continue; - } else if (field->type->typegroup == 'S') { - size_t parent_offset = ctx->head->parent_offset + field->offset; - if (field->type->fields->type == NULL) continue; - field = field->type->fields; - ++ctx->head; - ctx->head->field = field; - ctx->head->parent_offset = parent_offset; - break; - } else { - break; - } - } - } while (ctx->enc_count); - ctx->enc_type = 0; - ctx->is_complex = 0; - return 0; -} -static PyObject * -__pyx_buffmt_parse_array(__Pyx_BufFmt_Context* ctx, const char** tsp) -{ - const char *ts = *tsp; - int i = 0, number, ndim; - ++ts; - if (ctx->new_count != 1) { - PyErr_SetString(PyExc_ValueError, - "Cannot handle repeated arrays in format string"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ndim = ctx->head->field->type->ndim; - while (*ts && *ts != ')') { - switch (*ts) { - case ' ': case '\f': case '\r': case '\n': case '\t': case '\v': continue; - default: break; - } - number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - if (i < ndim && (size_t) number != ctx->head->field->type->arraysize[i]) - return PyErr_Format(PyExc_ValueError, - "Expected a dimension of size %zu, got %d", - ctx->head->field->type->arraysize[i], number); - if (*ts != ',' && *ts != ')') - return PyErr_Format(PyExc_ValueError, - "Expected a comma in format string, got '%c'", *ts); - if (*ts == ',') ts++; - i++; - } - if (i != ndim) - return PyErr_Format(PyExc_ValueError, "Expected %d dimension(s), got %d", - ctx->head->field->type->ndim, i); - if (!*ts) { - PyErr_SetString(PyExc_ValueError, - "Unexpected end of format string, expected ')'"); - return NULL; - } - ctx->is_valid_array = 1; - ctx->new_count = 1; - *tsp = ++ts; - return Py_None; -} -static const char* __Pyx_BufFmt_CheckString(__Pyx_BufFmt_Context* ctx, const char* ts) { - int got_Z = 0; - while (1) { - switch(*ts) { - case 0: - if (ctx->enc_type != 0 && ctx->head == NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - if (ctx->head != NULL) { - __Pyx_BufFmt_RaiseExpected(ctx); - return NULL; - } - return ts; - case ' ': - case '\r': - case '\n': - ++ts; - break; - case '<': - if (!__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Little-endian buffer not supported on big-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '>': - case '!': - if (__Pyx_Is_Little_Endian()) { - PyErr_SetString(PyExc_ValueError, "Big-endian buffer not supported on little-endian compiler"); - return NULL; - } - ctx->new_packmode = '='; - ++ts; - break; - case '=': - case '@': - case '^': - ctx->new_packmode = *ts++; - break; - case 'T': - { - const char* ts_after_sub; - size_t i, struct_count = ctx->new_count; - size_t struct_alignment = ctx->struct_alignment; - ctx->new_count = 1; - ++ts; - if (*ts != '{') { - PyErr_SetString(PyExc_ValueError, "Buffer acquisition: Expected '{' after 'T'"); - return NULL; - } - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - ctx->enc_count = 0; - ctx->struct_alignment = 0; - ++ts; - ts_after_sub = ts; - for (i = 0; i != struct_count; ++i) { - ts_after_sub = __Pyx_BufFmt_CheckString(ctx, ts); - if (!ts_after_sub) return NULL; - } - ts = ts_after_sub; - if 
(struct_alignment) ctx->struct_alignment = struct_alignment; - } - break; - case '}': - { - size_t alignment = ctx->struct_alignment; - ++ts; - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_type = 0; - if (alignment && ctx->fmt_offset % alignment) { - ctx->fmt_offset += alignment - (ctx->fmt_offset % alignment); - } - } - return ts; - case 'x': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->fmt_offset += ctx->new_count; - ctx->new_count = 1; - ctx->enc_count = 0; - ctx->enc_type = 0; - ctx->enc_packmode = ctx->new_packmode; - ++ts; - break; - case 'Z': - got_Z = 1; - ++ts; - if (*ts != 'f' && *ts != 'd' && *ts != 'g') { - __Pyx_BufFmt_RaiseUnexpectedChar('Z'); - return NULL; - } - CYTHON_FALLTHROUGH; - case '?': case 'c': case 'b': case 'B': case 'h': case 'H': case 'i': case 'I': - case 'l': case 'L': case 'q': case 'Q': - case 'f': case 'd': case 'g': - case 'O': case 'p': - if ((ctx->enc_type == *ts) && (got_Z == ctx->is_complex) && - (ctx->enc_packmode == ctx->new_packmode) && (!ctx->is_valid_array)) { - ctx->enc_count += ctx->new_count; - ctx->new_count = 1; - got_Z = 0; - ++ts; - break; - } - CYTHON_FALLTHROUGH; - case 's': - if (__Pyx_BufFmt_ProcessTypeChunk(ctx) == -1) return NULL; - ctx->enc_count = ctx->new_count; - ctx->enc_packmode = ctx->new_packmode; - ctx->enc_type = *ts; - ctx->is_complex = got_Z; - ++ts; - ctx->new_count = 1; - got_Z = 0; - break; - case ':': - ++ts; - while(*ts != ':') ++ts; - ++ts; - break; - case '(': - if (!__pyx_buffmt_parse_array(ctx, &ts)) return NULL; - break; - default: - { - int number = __Pyx_BufFmt_ExpectNumber(&ts); - if (number == -1) return NULL; - ctx->new_count = (size_t)number; - } - } - } -} - -/* TypeInfoCompare */ - static int -__pyx_typeinfo_cmp(__Pyx_TypeInfo *a, __Pyx_TypeInfo *b) -{ - int i; - if (!a || !b) - return 0; - if (a == b) - return 1; - if (a->size != b->size || a->typegroup != b->typegroup || - a->is_unsigned != b->is_unsigned || a->ndim != b->ndim) { - if (a->typegroup == 'H' || b->typegroup == 'H') { - return a->size == b->size; - } else { - return 0; - } - } - if (a->ndim) { - for (i = 0; i < a->ndim; i++) - if (a->arraysize[i] != b->arraysize[i]) - return 0; - } - if (a->typegroup == 'S') { - if (a->flags != b->flags) - return 0; - if (a->fields || b->fields) { - if (!(a->fields && b->fields)) - return 0; - for (i = 0; a->fields[i].type && b->fields[i].type; i++) { - __Pyx_StructField *field_a = a->fields + i; - __Pyx_StructField *field_b = b->fields + i; - if (field_a->offset != field_b->offset || - !__pyx_typeinfo_cmp(field_a->type, field_b->type)) - return 0; - } - return !a->fields[i].type && !b->fields[i].type; - } - } - return 1; -} - -/* MemviewSliceValidateAndInit */ - static int -__pyx_check_strides(Py_buffer *buf, int dim, int ndim, int spec) -{ - if (buf->shape[dim] <= 1) - return 1; - if (buf->strides) { - if (spec & __Pyx_MEMVIEW_CONTIG) { - if (spec & (__Pyx_MEMVIEW_PTR|__Pyx_MEMVIEW_FULL)) { - if (unlikely(buf->strides[dim] != sizeof(void *))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly contiguous " - "in dimension %d.", dim); - goto fail; - } - } else if (unlikely(buf->strides[dim] != buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_FOLLOW) { - Py_ssize_t stride = buf->strides[dim]; - if (stride < 0) - stride = -stride; - if (unlikely(stride < buf->itemsize)) { - PyErr_SetString(PyExc_ValueError, - "Buffer and 
memoryview are not contiguous " - "in the same dimension."); - goto fail; - } - } - } else { - if (unlikely(spec & __Pyx_MEMVIEW_CONTIG && dim != ndim - 1)) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not contiguous in " - "dimension %d", dim); - goto fail; - } else if (unlikely(spec & (__Pyx_MEMVIEW_PTR))) { - PyErr_Format(PyExc_ValueError, - "C-contiguous buffer is not indirect in " - "dimension %d", dim); - goto fail; - } else if (unlikely(buf->suboffsets)) { - PyErr_SetString(PyExc_ValueError, - "Buffer exposes suboffsets but no strides"); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_check_suboffsets(Py_buffer *buf, int dim, CYTHON_UNUSED int ndim, int spec) -{ - if (spec & __Pyx_MEMVIEW_DIRECT) { - if (unlikely(buf->suboffsets && buf->suboffsets[dim] >= 0)) { - PyErr_Format(PyExc_ValueError, - "Buffer not compatible with direct access " - "in dimension %d.", dim); - goto fail; - } - } - if (spec & __Pyx_MEMVIEW_PTR) { - if (unlikely(!buf->suboffsets || (buf->suboffsets[dim] < 0))) { - PyErr_Format(PyExc_ValueError, - "Buffer is not indirectly accessible " - "in dimension %d.", dim); - goto fail; - } - } - return 1; -fail: - return 0; -} -static int -__pyx_verify_contig(Py_buffer *buf, int ndim, int c_or_f_flag) -{ - int i; - if (c_or_f_flag & __Pyx_IS_F_CONTIG) { - Py_ssize_t stride = 1; - for (i = 0; i < ndim; i++) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not fortran contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } else if (c_or_f_flag & __Pyx_IS_C_CONTIG) { - Py_ssize_t stride = 1; - for (i = ndim - 1; i >- 1; i--) { - if (unlikely(stride * buf->itemsize != buf->strides[i] && buf->shape[i] > 1)) { - PyErr_SetString(PyExc_ValueError, - "Buffer not C contiguous."); - goto fail; - } - stride = stride * buf->shape[i]; - } - } - return 1; -fail: - return 0; -} -static int __Pyx_ValidateAndInit_memviewslice( - int *axes_specs, - int c_or_f_flag, - int buf_flags, - int ndim, - __Pyx_TypeInfo *dtype, - __Pyx_BufFmt_StackElem stack[], - __Pyx_memviewslice *memviewslice, - PyObject *original_obj) -{ - struct __pyx_memoryview_obj *memview, *new_memview; - __Pyx_RefNannyDeclarations - Py_buffer *buf; - int i, spec = 0, retval = -1; - __Pyx_BufFmt_Context ctx; - int from_memoryview = __pyx_memoryview_check(original_obj); - __Pyx_RefNannySetupContext("ValidateAndInit_memviewslice", 0); - if (from_memoryview && __pyx_typeinfo_cmp(dtype, ((struct __pyx_memoryview_obj *) - original_obj)->typeinfo)) { - memview = (struct __pyx_memoryview_obj *) original_obj; - new_memview = NULL; - } else { - memview = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - original_obj, buf_flags, 0, dtype); - new_memview = memview; - if (unlikely(!memview)) - goto fail; - } - buf = &memview->view; - if (unlikely(buf->ndim != ndim)) { - PyErr_Format(PyExc_ValueError, - "Buffer has wrong number of dimensions (expected %d, got %d)", - ndim, buf->ndim); - goto fail; - } - if (new_memview) { - __Pyx_BufFmt_Init(&ctx, stack, dtype); - if (unlikely(!__Pyx_BufFmt_CheckString(&ctx, buf->format))) goto fail; - } - if (unlikely((unsigned) buf->itemsize != dtype->size)) { - PyErr_Format(PyExc_ValueError, - "Item size of buffer (%" CYTHON_FORMAT_SSIZE_T "u byte%s) " - "does not match size of '%s' (%" CYTHON_FORMAT_SSIZE_T "u byte%s)", - buf->itemsize, - (buf->itemsize > 1) ? "s" : "", - dtype->name, - dtype->size, - (dtype->size > 1) ? 
"s" : ""); - goto fail; - } - if (buf->len > 0) { - for (i = 0; i < ndim; i++) { - spec = axes_specs[i]; - if (unlikely(!__pyx_check_strides(buf, i, ndim, spec))) - goto fail; - if (unlikely(!__pyx_check_suboffsets(buf, i, ndim, spec))) - goto fail; - } - if (unlikely(buf->strides && !__pyx_verify_contig(buf, ndim, c_or_f_flag))) - goto fail; - } - if (unlikely(__Pyx_init_memviewslice(memview, ndim, memviewslice, - new_memview != NULL) == -1)) { - goto fail; - } - retval = 0; - goto no_fail; -fail: - Py_XDECREF(new_memview); - retval = -1; -no_fail: - __Pyx_RefNannyFinishContext(); - return retval; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_d_d_dc_float(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_FOLLOW), (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 3, - &__Pyx_TypeInfo_float, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* ObjectToMemviewSlice */ - static CYTHON_INLINE __Pyx_memviewslice __Pyx_PyObject_to_MemoryviewSlice_dc_int(PyObject *obj, int writable_flag) { - __Pyx_memviewslice result = { 0, 0, { 0 }, { 0 }, { 0 } }; - __Pyx_BufFmt_StackElem stack[1]; - int axes_specs[] = { (__Pyx_MEMVIEW_DIRECT | __Pyx_MEMVIEW_CONTIG) }; - int retcode; - if (obj == Py_None) { - result.memview = (struct __pyx_memoryview_obj *) Py_None; - return result; - } - retcode = __Pyx_ValidateAndInit_memviewslice(axes_specs, __Pyx_IS_C_CONTIG, - (PyBUF_C_CONTIGUOUS | PyBUF_FORMAT) | writable_flag, 1, - &__Pyx_TypeInfo_int, stack, - &result, obj); - if (unlikely(retcode == -1)) - goto __pyx_fail; - return result; -__pyx_fail: - result.memview = NULL; - result.data = NULL; - return result; -} - -/* CIntFromPyVerify */ - #define __PYX_VERIFY_RETURN_INT(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 0) -#define __PYX_VERIFY_RETURN_INT_EXC(target_type, func_type, func_value)\ - __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, 1) -#define __PYX__VERIFY_RETURN_INT(target_type, func_type, func_value, exc)\ - {\ - func_type value = func_value;\ - if (sizeof(target_type) < 
sizeof(func_type)) {\ - if (unlikely(value != (func_type) (target_type) value)) {\ - func_type zero = 0;\ - if (exc && unlikely(value == (func_type)-1 && PyErr_Occurred()))\ - return (target_type) -1;\ - if (is_unsigned && unlikely(value < zero))\ - goto raise_neg_overflow;\ - else\ - goto raise_overflow;\ - }\ - }\ - return (target_type) value;\ - } - -/* MemviewSliceCopyTemplate */ - static __Pyx_memviewslice -__pyx_memoryview_copy_new_contig(const __Pyx_memviewslice *from_mvs, - const char *mode, int ndim, - size_t sizeof_dtype, int contig_flag, - int dtype_is_object) -{ - __Pyx_RefNannyDeclarations - int i; - __Pyx_memviewslice new_mvs = { 0, 0, { 0 }, { 0 }, { 0 } }; - struct __pyx_memoryview_obj *from_memview = from_mvs->memview; - Py_buffer *buf = &from_memview->view; - PyObject *shape_tuple = NULL; - PyObject *temp_int = NULL; - struct __pyx_array_obj *array_obj = NULL; - struct __pyx_memoryview_obj *memview_obj = NULL; - __Pyx_RefNannySetupContext("__pyx_memoryview_copy_new_contig", 0); - for (i = 0; i < ndim; i++) { - if (unlikely(from_mvs->suboffsets[i] >= 0)) { - PyErr_Format(PyExc_ValueError, "Cannot copy memoryview slice with " - "indirect dimensions (axis %d)", i); - goto fail; - } - } - shape_tuple = PyTuple_New(ndim); - if (unlikely(!shape_tuple)) { - goto fail; - } - __Pyx_GOTREF(shape_tuple); - for(i = 0; i < ndim; i++) { - temp_int = PyInt_FromSsize_t(from_mvs->shape[i]); - if(unlikely(!temp_int)) { - goto fail; - } else { - PyTuple_SET_ITEM(shape_tuple, i, temp_int); - temp_int = NULL; - } - } - array_obj = __pyx_array_new(shape_tuple, sizeof_dtype, buf->format, (char *) mode, NULL); - if (unlikely(!array_obj)) { - goto fail; - } - __Pyx_GOTREF(array_obj); - memview_obj = (struct __pyx_memoryview_obj *) __pyx_memoryview_new( - (PyObject *) array_obj, contig_flag, - dtype_is_object, - from_mvs->memview->typeinfo); - if (unlikely(!memview_obj)) - goto fail; - if (unlikely(__Pyx_init_memviewslice(memview_obj, ndim, &new_mvs, 1) < 0)) - goto fail; - if (unlikely(__pyx_memoryview_copy_contents(*from_mvs, new_mvs, ndim, ndim, - dtype_is_object) < 0)) - goto fail; - goto no_fail; -fail: - __Pyx_XDECREF(new_mvs.memview); - new_mvs.memview = NULL; - new_mvs.data = NULL; -no_fail: - __Pyx_XDECREF(shape_tuple); - __Pyx_XDECREF(temp_int); - __Pyx_XDECREF(array_obj); - __Pyx_RefNannyFinishContext(); - return new_mvs; -} - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_int(int value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(int) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(int) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(int) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(int), - little, !is_unsigned); - } -} - -/* CIntFromPy 
*/ - static CYTHON_INLINE int __Pyx_PyInt_As_int(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const int neg_one = (int) -1, const_zero = (int) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(int) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(int, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (int) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case 1: __PYX_VERIFY_RETURN_INT(int, digit, digits[0]) - case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 2 * PyLong_SHIFT) { - return (int) (((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 3 * PyLong_SHIFT) { - return (int) (((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) >= 4 * PyLong_SHIFT) { - return (int) (((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (int) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(int) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (int) 0; - case -1: __PYX_VERIFY_RETURN_INT(int, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(int, digit, +digits[0]) - case -2: - if (8 * sizeof(int) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - 
case 2: - if (8 * sizeof(int) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - return (int) ((((((int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(int) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(int) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - return (int) ((((((((int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(int) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) (((int)-1)*(((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(int) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(int, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(int) - 1 > 4 * PyLong_SHIFT) { - return (int) ((((((((((int)digits[3]) << PyLong_SHIFT) | (int)digits[2]) << PyLong_SHIFT) | (int)digits[1]) << PyLong_SHIFT) | (int)digits[0]))); - } - } - break; - } -#endif - if (sizeof(int) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(int, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(int) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(int, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - int val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (int) -1; - } - } else { - int val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (int) -1; - val = __Pyx_PyInt_As_int(tmp); - Py_DECREF(tmp); - return val; - 
} -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to int"); - return (int) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to int"); - return (int) -1; -} - -/* CIntToPy */ - static CYTHON_INLINE PyObject* __Pyx_PyInt_From_long(long value) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; - if (is_unsigned) { - if (sizeof(long) < sizeof(long)) { - return PyInt_FromLong((long) value); - } else if (sizeof(long) <= sizeof(unsigned long)) { - return PyLong_FromUnsignedLong((unsigned long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - return PyLong_FromUnsignedLongLong((unsigned PY_LONG_LONG) value); -#endif - } - } else { - if (sizeof(long) <= sizeof(long)) { - return PyInt_FromLong((long) value); -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - return PyLong_FromLongLong((PY_LONG_LONG) value); -#endif - } - } - { - int one = 1; int little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&value; - return _PyLong_FromByteArray(bytes, sizeof(long), - little, !is_unsigned); - } -} - -/* CIntFromPy */ - static CYTHON_INLINE long __Pyx_PyInt_As_long(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const long neg_one = (long) -1, const_zero = (long) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(long) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(long, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (long) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case 1: __PYX_VERIFY_RETURN_INT(long, digit, digits[0]) - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 2 * PyLong_SHIFT) { - return (long) (((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 3 * PyLong_SHIFT) { - return (long) (((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) >= 4 
* PyLong_SHIFT) { - return (long) (((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (long) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(long) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (long) 0; - case -1: __PYX_VERIFY_RETURN_INT(long, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(long, digit, +digits[0]) - case -2: - if (8 * sizeof(long) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(long) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - return (long) ((((((long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(long) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(long) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - return (long) ((((((((long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(long) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) (((long)-1)*(((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(long) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(long, unsigned long, 
(((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(long) - 1 > 4 * PyLong_SHIFT) { - return (long) ((((((((((long)digits[3]) << PyLong_SHIFT) | (long)digits[2]) << PyLong_SHIFT) | (long)digits[1]) << PyLong_SHIFT) | (long)digits[0]))); - } - } - break; - } -#endif - if (sizeof(long) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(long, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(long) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(long, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - long val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (long) -1; - } - } else { - long val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (long) -1; - val = __Pyx_PyInt_As_long(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to long"); - return (long) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to long"); - return (long) -1; -} - -/* CIntFromPy */ - static CYTHON_INLINE char __Pyx_PyInt_As_char(PyObject *x) { -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic push -#pragma GCC diagnostic ignored "-Wconversion" -#endif - const char neg_one = (char) -1, const_zero = (char) 0; -#ifdef __Pyx_HAS_GCC_DIAGNOSTIC -#pragma GCC diagnostic pop -#endif - const int is_unsigned = neg_one > const_zero; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x))) { - if (sizeof(char) < sizeof(long)) { - __PYX_VERIFY_RETURN_INT(char, long, PyInt_AS_LONG(x)) - } else { - long val = PyInt_AS_LONG(x); - if (is_unsigned && unlikely(val < 0)) { - goto raise_neg_overflow; - } - return (char) val; - } - } else -#endif - if (likely(PyLong_Check(x))) { - if (is_unsigned) { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (char) 0; - case 1: __PYX_VERIFY_RETURN_INT(char, digit, digits[0]) - case 2: - if (8 * sizeof(char) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 2 * PyLong_SHIFT) { - return (char) (((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - case 3: - if (8 * sizeof(char) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 3 * PyLong_SHIFT) { - return (char) (((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } 
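/* Digit-reconstruction sketch (assumes CPython's internal PyLong layout): a PyLong
 * stores its magnitude as base-2^PyLong_SHIFT digits, least significant first, so a
 * two-digit positive value is rebuilt as
 *     ((unsigned long)digits[1] << PyLong_SHIFT) | (unsigned long)digits[0]
 * The sizeof() comparisons in each case pick a C type wide enough for the shifts,
 * and the negative cases negate the reconstructed magnitude. */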
- break; - case 4: - if (8 * sizeof(char) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) >= 4 * PyLong_SHIFT) { - return (char) (((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0])); - } - } - break; - } -#endif -#if CYTHON_COMPILING_IN_CPYTHON - if (unlikely(Py_SIZE(x) < 0)) { - goto raise_neg_overflow; - } -#else - { - int result = PyObject_RichCompareBool(x, Py_False, Py_LT); - if (unlikely(result < 0)) - return (char) -1; - if (unlikely(result == 1)) - goto raise_neg_overflow; - } -#endif - if (sizeof(char) <= sizeof(unsigned long)) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned long, PyLong_AsUnsignedLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(char) <= sizeof(unsigned PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(char, unsigned PY_LONG_LONG, PyLong_AsUnsignedLongLong(x)) -#endif - } - } else { -#if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)x)->ob_digit; - switch (Py_SIZE(x)) { - case 0: return (char) 0; - case -1: __PYX_VERIFY_RETURN_INT(char, sdigit, (sdigit) (-(sdigit)digits[0])) - case 1: __PYX_VERIFY_RETURN_INT(char, digit, +digits[0]) - case -2: - if (8 * sizeof(char) - 1 > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 2: - if (8 * sizeof(char) > 1 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 2 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - return (char) ((((((char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -3: - if (8 * sizeof(char) - 1 > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - return (char) (((char)-1)*(((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 3: - if (8 * sizeof(char) > 2 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 3 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - return (char) ((((((((char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case -4: - if (8 * sizeof(char) - 1 > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, long, -(long) (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 4 
* PyLong_SHIFT) { - return (char) (((char)-1)*(((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - case 4: - if (8 * sizeof(char) > 3 * PyLong_SHIFT) { - if (8 * sizeof(unsigned long) > 4 * PyLong_SHIFT) { - __PYX_VERIFY_RETURN_INT(char, unsigned long, (((((((((unsigned long)digits[3]) << PyLong_SHIFT) | (unsigned long)digits[2]) << PyLong_SHIFT) | (unsigned long)digits[1]) << PyLong_SHIFT) | (unsigned long)digits[0]))) - } else if (8 * sizeof(char) - 1 > 4 * PyLong_SHIFT) { - return (char) ((((((((((char)digits[3]) << PyLong_SHIFT) | (char)digits[2]) << PyLong_SHIFT) | (char)digits[1]) << PyLong_SHIFT) | (char)digits[0]))); - } - } - break; - } -#endif - if (sizeof(char) <= sizeof(long)) { - __PYX_VERIFY_RETURN_INT_EXC(char, long, PyLong_AsLong(x)) -#ifdef HAVE_LONG_LONG - } else if (sizeof(char) <= sizeof(PY_LONG_LONG)) { - __PYX_VERIFY_RETURN_INT_EXC(char, PY_LONG_LONG, PyLong_AsLongLong(x)) -#endif - } - } - { -#if CYTHON_COMPILING_IN_PYPY && !defined(_PyLong_AsByteArray) - PyErr_SetString(PyExc_RuntimeError, - "_PyLong_AsByteArray() not available in PyPy, cannot convert large numbers"); -#else - char val; - PyObject *v = __Pyx_PyNumber_IntOrLong(x); - #if PY_MAJOR_VERSION < 3 - if (likely(v) && !PyLong_Check(v)) { - PyObject *tmp = v; - v = PyNumber_Long(tmp); - Py_DECREF(tmp); - } - #endif - if (likely(v)) { - int one = 1; int is_little = (int)*(unsigned char *)&one; - unsigned char *bytes = (unsigned char *)&val; - int ret = _PyLong_AsByteArray((PyLongObject *)v, - bytes, sizeof(val), - is_little, !is_unsigned); - Py_DECREF(v); - if (likely(!ret)) - return val; - } -#endif - return (char) -1; - } - } else { - char val; - PyObject *tmp = __Pyx_PyNumber_IntOrLong(x); - if (!tmp) return (char) -1; - val = __Pyx_PyInt_As_char(tmp); - Py_DECREF(tmp); - return val; - } -raise_overflow: - PyErr_SetString(PyExc_OverflowError, - "value too large to convert to char"); - return (char) -1; -raise_neg_overflow: - PyErr_SetString(PyExc_OverflowError, - "can't convert negative value to char"); - return (char) -1; -} - -/* CheckBinaryVersion */ - static int __Pyx_check_binary_version(void) { - char ctversion[5]; - int same=1, i, found_dot; - const char* rt_from_call = Py_GetVersion(); - PyOS_snprintf(ctversion, 5, "%d.%d", PY_MAJOR_VERSION, PY_MINOR_VERSION); - found_dot = 0; - for (i = 0; i < 4; i++) { - if (!ctversion[i]) { - same = (rt_from_call[i] < '0' || rt_from_call[i] > '9'); - break; - } - if (rt_from_call[i] != ctversion[i]) { - same = 0; - break; - } - } - if (!same) { - char rtversion[5] = {'\0'}; - char message[200]; - for (i=0; i<4; ++i) { - if (rt_from_call[i] == '.') { - if (found_dot) break; - found_dot = 1; - } else if (rt_from_call[i] < '0' || rt_from_call[i] > '9') { - break; - } - rtversion[i] = rt_from_call[i]; - } - PyOS_snprintf(message, sizeof(message), - "compiletime version %s of module '%.100s' " - "does not match runtime version %s", - ctversion, __Pyx_MODULE_NAME, rtversion); - return PyErr_WarnEx(NULL, message, 1); - } - return 0; -} - -/* InitStrings */ - static int __Pyx_InitStrings(__Pyx_StringTabEntry *t) { - while (t->p) { - #if PY_MAJOR_VERSION < 3 - if (t->is_unicode) { - *t->p = PyUnicode_DecodeUTF8(t->s, t->n - 1, NULL); - } else if (t->intern) { - *t->p = PyString_InternFromString(t->s); - } else { - *t->p = PyString_FromStringAndSize(t->s, t->n - 1); - } - #else - if (t->is_unicode | t->is_str) { - if (t->intern) { - *t->p = 
PyUnicode_InternFromString(t->s); - } else if (t->encoding) { - *t->p = PyUnicode_Decode(t->s, t->n - 1, t->encoding, NULL); - } else { - *t->p = PyUnicode_FromStringAndSize(t->s, t->n - 1); - } - } else { - *t->p = PyBytes_FromStringAndSize(t->s, t->n - 1); - } - #endif - if (!*t->p) - return -1; - if (PyObject_Hash(*t->p) == -1) - return -1; - ++t; - } - return 0; -} - -static CYTHON_INLINE PyObject* __Pyx_PyUnicode_FromString(const char* c_str) { - return __Pyx_PyUnicode_FromStringAndSize(c_str, (Py_ssize_t)strlen(c_str)); -} -static CYTHON_INLINE const char* __Pyx_PyObject_AsString(PyObject* o) { - Py_ssize_t ignore; - return __Pyx_PyObject_AsStringAndSize(o, &ignore); -} -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT -#if !CYTHON_PEP393_ENABLED -static const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - char* defenc_c; - PyObject* defenc = _PyUnicode_AsDefaultEncodedString(o, NULL); - if (!defenc) return NULL; - defenc_c = PyBytes_AS_STRING(defenc); -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - { - char* end = defenc_c + PyBytes_GET_SIZE(defenc); - char* c; - for (c = defenc_c; c < end; c++) { - if ((unsigned char) (*c) >= 128) { - PyUnicode_AsASCIIString(o); - return NULL; - } - } - } -#endif - *length = PyBytes_GET_SIZE(defenc); - return defenc_c; -} -#else -static CYTHON_INLINE const char* __Pyx_PyUnicode_AsStringAndSize(PyObject* o, Py_ssize_t *length) { - if (unlikely(__Pyx_PyUnicode_READY(o) == -1)) return NULL; -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - if (likely(PyUnicode_IS_ASCII(o))) { - *length = PyUnicode_GET_LENGTH(o); - return PyUnicode_AsUTF8(o); - } else { - PyUnicode_AsASCIIString(o); - return NULL; - } -#else - return PyUnicode_AsUTF8AndSize(o, length); -#endif -} -#endif -#endif -static CYTHON_INLINE const char* __Pyx_PyObject_AsStringAndSize(PyObject* o, Py_ssize_t *length) { -#if __PYX_DEFAULT_STRING_ENCODING_IS_ASCII || __PYX_DEFAULT_STRING_ENCODING_IS_DEFAULT - if ( -#if PY_MAJOR_VERSION < 3 && __PYX_DEFAULT_STRING_ENCODING_IS_ASCII - __Pyx_sys_getdefaultencoding_not_ascii && -#endif - PyUnicode_Check(o)) { - return __Pyx_PyUnicode_AsStringAndSize(o, length); - } else -#endif -#if (!CYTHON_COMPILING_IN_PYPY) || (defined(PyByteArray_AS_STRING) && defined(PyByteArray_GET_SIZE)) - if (PyByteArray_Check(o)) { - *length = PyByteArray_GET_SIZE(o); - return PyByteArray_AS_STRING(o); - } else -#endif - { - char* result; - int r = PyBytes_AsStringAndSize(o, &result, length); - if (unlikely(r < 0)) { - return NULL; - } else { - return result; - } - } -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrue(PyObject* x) { - int is_true = x == Py_True; - if (is_true | (x == Py_False) | (x == Py_None)) return is_true; - else return PyObject_IsTrue(x); -} -static CYTHON_INLINE int __Pyx_PyObject_IsTrueAndDecref(PyObject* x) { - int retval; - if (unlikely(!x)) return -1; - retval = __Pyx_PyObject_IsTrue(x); - Py_DECREF(x); - return retval; -} -static PyObject* __Pyx_PyNumber_IntOrLongWrongResultType(PyObject* result, const char* type_name) { -#if PY_MAJOR_VERSION >= 3 - if (PyLong_Check(result)) { - if (PyErr_WarnFormat(PyExc_DeprecationWarning, 1, - "__int__ returned non-int (type %.200s). 
" - "The ability to return an instance of a strict subclass of int " - "is deprecated, and may be removed in a future version of Python.", - Py_TYPE(result)->tp_name)) { - Py_DECREF(result); - return NULL; - } - return result; - } -#endif - PyErr_Format(PyExc_TypeError, - "__%.4s__ returned non-%.4s (type %.200s)", - type_name, type_name, Py_TYPE(result)->tp_name); - Py_DECREF(result); - return NULL; -} -static CYTHON_INLINE PyObject* __Pyx_PyNumber_IntOrLong(PyObject* x) { -#if CYTHON_USE_TYPE_SLOTS - PyNumberMethods *m; -#endif - const char *name = NULL; - PyObject *res = NULL; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_Check(x) || PyLong_Check(x))) -#else - if (likely(PyLong_Check(x))) -#endif - return __Pyx_NewRef(x); -#if CYTHON_USE_TYPE_SLOTS - m = Py_TYPE(x)->tp_as_number; - #if PY_MAJOR_VERSION < 3 - if (m && m->nb_int) { - name = "int"; - res = m->nb_int(x); - } - else if (m && m->nb_long) { - name = "long"; - res = m->nb_long(x); - } - #else - if (likely(m && m->nb_int)) { - name = "int"; - res = m->nb_int(x); - } - #endif -#else - if (!PyBytes_CheckExact(x) && !PyUnicode_CheckExact(x)) { - res = PyNumber_Int(x); - } -#endif - if (likely(res)) { -#if PY_MAJOR_VERSION < 3 - if (unlikely(!PyInt_Check(res) && !PyLong_Check(res))) { -#else - if (unlikely(!PyLong_CheckExact(res))) { -#endif - return __Pyx_PyNumber_IntOrLongWrongResultType(res, name); - } - } - else if (!PyErr_Occurred()) { - PyErr_SetString(PyExc_TypeError, - "an integer is required"); - } - return res; -} -static CYTHON_INLINE Py_ssize_t __Pyx_PyIndex_AsSsize_t(PyObject* b) { - Py_ssize_t ival; - PyObject *x; -#if PY_MAJOR_VERSION < 3 - if (likely(PyInt_CheckExact(b))) { - if (sizeof(Py_ssize_t) >= sizeof(long)) - return PyInt_AS_LONG(b); - else - return PyInt_AsSsize_t(b); - } -#endif - if (likely(PyLong_CheckExact(b))) { - #if CYTHON_USE_PYLONG_INTERNALS - const digit* digits = ((PyLongObject*)b)->ob_digit; - const Py_ssize_t size = Py_SIZE(b); - if (likely(__Pyx_sst_abs(size) <= 1)) { - ival = likely(size) ? 
digits[0] : 0; - if (size == -1) ival = -ival; - return ival; - } else { - switch (size) { - case 2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return (Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -2: - if (8 * sizeof(Py_ssize_t) > 2 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -3: - if (8 * sizeof(Py_ssize_t) > 3 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case 4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return (Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - case -4: - if (8 * sizeof(Py_ssize_t) > 4 * PyLong_SHIFT) { - return -(Py_ssize_t) (((((((((size_t)digits[3]) << PyLong_SHIFT) | (size_t)digits[2]) << PyLong_SHIFT) | (size_t)digits[1]) << PyLong_SHIFT) | (size_t)digits[0])); - } - break; - } - } - #endif - return PyLong_AsSsize_t(b); - } - x = PyNumber_Index(b); - if (!x) return -1; - ival = PyInt_AsSsize_t(x); - Py_DECREF(x); - return ival; -} -static CYTHON_INLINE Py_hash_t __Pyx_PyIndex_AsHash_t(PyObject* o) { - if (sizeof(Py_hash_t) == sizeof(Py_ssize_t)) { - return (Py_hash_t) __Pyx_PyIndex_AsSsize_t(o); -#if PY_MAJOR_VERSION < 3 - } else if (likely(PyInt_CheckExact(o))) { - return PyInt_AS_LONG(o); -#endif - } else { - Py_ssize_t ival; - PyObject *x; - x = PyNumber_Index(o); - if (!x) return -1; - ival = PyInt_AsLong(x); - Py_DECREF(x); - return ival; - } -} -static CYTHON_INLINE PyObject * __Pyx_PyBool_FromLong(long b) { - return b ? 
__Pyx_NewRef(Py_True) : __Pyx_NewRef(Py_False); -} -static CYTHON_INLINE PyObject * __Pyx_PyInt_FromSize_t(size_t ival) { - return PyInt_FromSize_t(ival); -} - - -#endif /* Py_PYTHON_H */ diff --git a/spaces/lullNB/lullNew/README.md b/spaces/lullNB/lullNew/README.md deleted file mode 100644 index 6b2809ba4bbc526c30b359ac94e075f807dd752a..0000000000000000000000000000000000000000 --- a/spaces/lullNB/lullNew/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: LullNew -emoji: 🏆 -colorFrom: purple -colorTo: gray -sdk: docker -pinned: false -license: mit -app_port: 8080 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/lwchen/CodeFormer/CodeFormer/facelib/detection/matlab_cp2tform.py b/spaces/lwchen/CodeFormer/CodeFormer/facelib/detection/matlab_cp2tform.py deleted file mode 100644 index b2a8b54a91709c71437e15c68d3be9a9b0a20a34..0000000000000000000000000000000000000000 --- a/spaces/lwchen/CodeFormer/CodeFormer/facelib/detection/matlab_cp2tform.py +++ /dev/null @@ -1,317 +0,0 @@ -import numpy as np -from numpy.linalg import inv, lstsq -from numpy.linalg import matrix_rank as rank -from numpy.linalg import norm - - -class MatlabCp2tormException(Exception): - - def __str__(self): - return 'In File {}:{}'.format(__file__, super.__str__(self)) - - -def tformfwd(trans, uv): - """ - Function: - ---------- - apply affine transform 'trans' to uv - - Parameters: - ---------- - @trans: 3x3 np.array - transform matrix - @uv: Kx2 np.array - each row is a pair of coordinates (x, y) - - Returns: - ---------- - @xy: Kx2 np.array - each row is a pair of transformed coordinates (x, y) - """ - uv = np.hstack((uv, np.ones((uv.shape[0], 1)))) - xy = np.dot(uv, trans) - xy = xy[:, 0:-1] - return xy - - -def tforminv(trans, uv): - """ - Function: - ---------- - apply the inverse of affine transform 'trans' to uv - - Parameters: - ---------- - @trans: 3x3 np.array - transform matrix - @uv: Kx2 np.array - each row is a pair of coordinates (x, y) - - Returns: - ---------- - @xy: Kx2 np.array - each row is a pair of inverse-transformed coordinates (x, y) - """ - Tinv = inv(trans) - xy = tformfwd(Tinv, uv) - return xy - - -def findNonreflectiveSimilarity(uv, xy, options=None): - options = {'K': 2} - - K = options['K'] - M = xy.shape[0] - x = xy[:, 0].reshape((-1, 1)) # use reshape to keep a column vector - y = xy[:, 1].reshape((-1, 1)) # use reshape to keep a column vector - - tmp1 = np.hstack((x, y, np.ones((M, 1)), np.zeros((M, 1)))) - tmp2 = np.hstack((y, -x, np.zeros((M, 1)), np.ones((M, 1)))) - X = np.vstack((tmp1, tmp2)) - - u = uv[:, 0].reshape((-1, 1)) # use reshape to keep a column vector - v = uv[:, 1].reshape((-1, 1)) # use reshape to keep a column vector - U = np.vstack((u, v)) - - # We know that X * r = U - if rank(X) >= 2 * K: - r, _, _, _ = lstsq(X, U, rcond=-1) - r = np.squeeze(r) - else: - raise Exception('cp2tform:twoUniquePointsReq') - sc = r[0] - ss = r[1] - tx = r[2] - ty = r[3] - - Tinv = np.array([[sc, -ss, 0], [ss, sc, 0], [tx, ty, 1]]) - T = inv(Tinv) - T[:, 2] = np.array([0, 0, 1]) - - return T, Tinv - - -def findSimilarity(uv, xy, options=None): - options = {'K': 2} - - # uv = np.array(uv) - # xy = np.array(xy) - - # Solve for trans1 - trans1, trans1_inv = findNonreflectiveSimilarity(uv, xy, options) - - # Solve for trans2 - - # manually reflect the xy data across the Y-axis - xyR = xy - xyR[:, 0] = -1 * xyR[:, 0] - - trans2r, trans2r_inv = findNonreflectiveSimilarity(uv, xyR, options) - - # manually reflect the tform to 
undo the reflection done on xyR - TreflectY = np.array([[-1, 0, 0], [0, 1, 0], [0, 0, 1]]) - - trans2 = np.dot(trans2r, TreflectY) - - # Figure out if trans1 or trans2 is better - xy1 = tformfwd(trans1, uv) - norm1 = norm(xy1 - xy) - - xy2 = tformfwd(trans2, uv) - norm2 = norm(xy2 - xy) - - if norm1 <= norm2: - return trans1, trans1_inv - else: - trans2_inv = inv(trans2) - return trans2, trans2_inv - - -def get_similarity_transform(src_pts, dst_pts, reflective=True): - """ - Function: - ---------- - Find Similarity Transform Matrix 'trans': - u = src_pts[:, 0] - v = src_pts[:, 1] - x = dst_pts[:, 0] - y = dst_pts[:, 1] - [x, y, 1] = [u, v, 1] * trans - - Parameters: - ---------- - @src_pts: Kx2 np.array - source points, each row is a pair of coordinates (x, y) - @dst_pts: Kx2 np.array - destination points, each row is a pair of transformed - coordinates (x, y) - @reflective: True or False - if True: - use reflective similarity transform - else: - use non-reflective similarity transform - - Returns: - ---------- - @trans: 3x3 np.array - transform matrix from uv to xy - trans_inv: 3x3 np.array - inverse of trans, transform matrix from xy to uv - """ - - if reflective: - trans, trans_inv = findSimilarity(src_pts, dst_pts) - else: - trans, trans_inv = findNonreflectiveSimilarity(src_pts, dst_pts) - - return trans, trans_inv - - -def cvt_tform_mat_for_cv2(trans): - """ - Function: - ---------- - Convert Transform Matrix 'trans' into 'cv2_trans' which could be - directly used by cv2.warpAffine(): - u = src_pts[:, 0] - v = src_pts[:, 1] - x = dst_pts[:, 0] - y = dst_pts[:, 1] - [x, y].T = cv_trans * [u, v, 1].T - - Parameters: - ---------- - @trans: 3x3 np.array - transform matrix from uv to xy - - Returns: - ---------- - @cv2_trans: 2x3 np.array - transform matrix from src_pts to dst_pts, could be directly used - for cv2.warpAffine() - """ - cv2_trans = trans[:, 0:2].T - - return cv2_trans - - -def get_similarity_transform_for_cv2(src_pts, dst_pts, reflective=True): - """ - Function: - ---------- - Find Similarity Transform Matrix 'cv2_trans' which could be - directly used by cv2.warpAffine(): - u = src_pts[:, 0] - v = src_pts[:, 1] - x = dst_pts[:, 0] - y = dst_pts[:, 1] - [x, y].T = cv_trans * [u, v, 1].T - - Parameters: - ---------- - @src_pts: Kx2 np.array - source points, each row is a pair of coordinates (x, y) - @dst_pts: Kx2 np.array - destination points, each row is a pair of transformed - coordinates (x, y) - reflective: True or False - if True: - use reflective similarity transform - else: - use non-reflective similarity transform - - Returns: - ---------- - @cv2_trans: 2x3 np.array - transform matrix from src_pts to dst_pts, could be directly used - for cv2.warpAffine() - """ - trans, trans_inv = get_similarity_transform(src_pts, dst_pts, reflective) - cv2_trans = cvt_tform_mat_for_cv2(trans) - - return cv2_trans - - -if __name__ == '__main__': - """ - u = [0, 6, -2] - v = [0, 3, 5] - x = [-1, 0, 4] - y = [-1, -10, 4] - - # In Matlab, run: - # - # uv = [u'; v']; - # xy = [x'; y']; - # tform_sim=cp2tform(uv,xy,'similarity'); - # - # trans = tform_sim.tdata.T - # ans = - # -0.0764 -1.6190 0 - # 1.6190 -0.0764 0 - # -3.2156 0.0290 1.0000 - # trans_inv = tform_sim.tdata.Tinv - # ans = - # - # -0.0291 0.6163 0 - # -0.6163 -0.0291 0 - # -0.0756 1.9826 1.0000 - # xy_m=tformfwd(tform_sim, u,v) - # - # xy_m = - # - # -3.2156 0.0290 - # 1.1833 -9.9143 - # 5.0323 2.8853 - # uv_m=tforminv(tform_sim, x,y) - # - # uv_m = - # - # 0.5698 1.3953 - # 6.0872 2.2733 - # -2.6570 4.3314 - """ - u = [0, 6, 
-2] - v = [0, 3, 5] - x = [-1, 0, 4] - y = [-1, -10, 4] - - uv = np.array((u, v)).T - xy = np.array((x, y)).T - - print('\n--->uv:') - print(uv) - print('\n--->xy:') - print(xy) - - trans, trans_inv = get_similarity_transform(uv, xy) - - print('\n--->trans matrix:') - print(trans) - - print('\n--->trans_inv matrix:') - print(trans_inv) - - print('\n---> apply transform to uv') - print('\nxy_m = uv_augmented * trans') - uv_aug = np.hstack((uv, np.ones((uv.shape[0], 1)))) - xy_m = np.dot(uv_aug, trans) - print(xy_m) - - print('\nxy_m = tformfwd(trans, uv)') - xy_m = tformfwd(trans, uv) - print(xy_m) - - print('\n---> apply inverse transform to xy') - print('\nuv_m = xy_augmented * trans_inv') - xy_aug = np.hstack((xy, np.ones((xy.shape[0], 1)))) - uv_m = np.dot(xy_aug, trans_inv) - print(uv_m) - - print('\nuv_m = tformfwd(trans_inv, xy)') - uv_m = tformfwd(trans_inv, xy) - print(uv_m) - - uv_m = tforminv(trans, xy) - print('\nuv_m = tforminv(trans, xy)') - print(uv_m) diff --git a/spaces/matthoffner/chatbot-mini/utils/app/importExport.ts b/spaces/matthoffner/chatbot-mini/utils/app/importExport.ts deleted file mode 100644 index 0fe677d566cdc904a30b215a16095a26e8c6cb77..0000000000000000000000000000000000000000 --- a/spaces/matthoffner/chatbot-mini/utils/app/importExport.ts +++ /dev/null @@ -1,164 +0,0 @@ -import { Conversation } from '@/types/chat'; -import { - ExportFormatV1, - ExportFormatV2, - ExportFormatV3, - ExportFormatV4, - LatestExportFormat, - SupportedExportFormats, -} from '@/types/export'; -import { FolderInterface } from '@/types/folder'; -import { Prompt } from '@/types/prompt'; - -import { cleanConversationHistory } from './clean'; - -export function isExportFormatV1(obj: any): obj is ExportFormatV1 { - return Array.isArray(obj); -} - -export function isExportFormatV2(obj: any): obj is ExportFormatV2 { - return !('version' in obj) && 'folders' in obj && 'history' in obj; -} - -export function isExportFormatV3(obj: any): obj is ExportFormatV3 { - return obj.version === 3; -} - -export function isExportFormatV4(obj: any): obj is ExportFormatV4 { - return obj.version === 4; -} - -export const isLatestExportFormat = isExportFormatV4; - -export function cleanData(data: SupportedExportFormats): LatestExportFormat { - if (isExportFormatV1(data)) { - return { - version: 4, - history: cleanConversationHistory(data), - folders: [], - prompts: [], - }; - } - - if (isExportFormatV2(data)) { - return { - version: 4, - history: cleanConversationHistory(data.history || []), - folders: (data.folders || []).map((chatFolder) => ({ - id: chatFolder.id.toString(), - name: chatFolder.name, - type: 'chat', - })), - prompts: [], - }; - } - - if (isExportFormatV3(data)) { - return { ...data, version: 4, prompts: [] }; - } - - if (isExportFormatV4(data)) { - return data; - } - - throw new Error('Unsupported data format'); -} - -function currentDate() { - const date = new Date(); - const month = date.getMonth() + 1; - const day = date.getDate(); - return `${month}-${day}`; -} - -export const exportData = () => { - let history = localStorage.getItem('conversationHistory'); - let folders = localStorage.getItem('folders'); - let prompts = localStorage.getItem('prompts'); - - if (history) { - history = JSON.parse(history); - } - - if (folders) { - folders = JSON.parse(folders); - } - - if (prompts) { - prompts = JSON.parse(prompts); - } - - const data = { - version: 4, - history: history || [], - folders: folders || [], - prompts: prompts || [], - } as LatestExportFormat; - - const blob = new 
Blob([JSON.stringify(data, null, 2)], { - type: 'application/json', - }); - const url = URL.createObjectURL(blob); - const link = document.createElement('a'); - link.download = `chatbot_ui_history_${currentDate()}.json`; - link.href = url; - link.style.display = 'none'; - document.body.appendChild(link); - link.click(); - document.body.removeChild(link); - URL.revokeObjectURL(url); -}; - -export const importData = ( - data: SupportedExportFormats, -): LatestExportFormat => { - const { history, folders, prompts } = cleanData(data); - - const oldConversations = localStorage.getItem('conversationHistory'); - const oldConversationsParsed = oldConversations - ? JSON.parse(oldConversations) - : []; - - const newHistory: Conversation[] = [ - ...oldConversationsParsed, - ...history, - ].filter( - (conversation, index, self) => - index === self.findIndex((c) => c.id === conversation.id), - ); - localStorage.setItem('conversationHistory', JSON.stringify(newHistory)); - if (newHistory.length > 0) { - localStorage.setItem( - 'selectedConversation', - JSON.stringify(newHistory[newHistory.length - 1]), - ); - } else { - localStorage.removeItem('selectedConversation'); - } - - const oldFolders = localStorage.getItem('folders'); - const oldFoldersParsed = oldFolders ? JSON.parse(oldFolders) : []; - const newFolders: FolderInterface[] = [ - ...oldFoldersParsed, - ...folders, - ].filter( - (folder, index, self) => - index === self.findIndex((f) => f.id === folder.id), - ); - localStorage.setItem('folders', JSON.stringify(newFolders)); - - const oldPrompts = localStorage.getItem('prompts'); - const oldPromptsParsed = oldPrompts ? JSON.parse(oldPrompts) : []; - const newPrompts: Prompt[] = [...oldPromptsParsed, ...prompts].filter( - (prompt, index, self) => - index === self.findIndex((p) => p.id === prompt.id), - ); - localStorage.setItem('prompts', JSON.stringify(newPrompts)); - - return { - version: 4, - history: newHistory, - folders: newFolders, - prompts: newPrompts, - }; -}; diff --git a/spaces/mav735/mri-assistent/app.py b/spaces/mav735/mri-assistent/app.py deleted file mode 100644 index 8f8df32dce384b323e3292f7f48d76f28db47e28..0000000000000000000000000000000000000000 --- a/spaces/mav735/mri-assistent/app.py +++ /dev/null @@ -1,44 +0,0 @@ -import gradio as gr -from model import get_results_model -from model import model_ -import cv2 - -IMAGES = 0 - - -def predict_image(image): - global IMAGES - paths = f'images/image_{IMAGES}.jpg' - cv2.imwrite(paths, image) - IMAGES += 1 - result = get_results_model(paths, model_) - if result[2] < 0.001: - label_img = 'Unrecognised' - pred_acc = '' - else: - label_img = result[1] - pred_acc = f'Probability:   **{(result[2] * 100):.2f} %**' - return result[0], f' Class:   **{label_img}**      {pred_acc}' - - -with gr.Blocks() as demo: - gr.Markdown('**MRI Assistant**') - with gr.Row(): - with gr.Column(): - image_input = gr.Image(label='MRI') - label = gr.Markdown("") - image_output = gr.Image(label='AI results') - - image_button = gr.Button("Predict results") - - gr.Markdown(r""" - Social:\ -    *1.*   [*Developers*](https://t.me/HenSolaris) \ -    *2.*   [*Telegram bot*](https://t.me/Altsheimer_AI_bot) - """) - - image_button.click(predict_image, inputs=image_input, outputs=[image_output, label]) - -demo.launch() - -print('launched!') diff --git a/spaces/maxmax20160403/sovits5.0/whisper/decoding.py b/spaces/maxmax20160403/sovits5.0/whisper/decoding.py deleted file mode 100644 index 603546d4c9ff67514d2567576935b974fe373bef..0000000000000000000000000000000000000000 
--- a/spaces/maxmax20160403/sovits5.0/whisper/decoding.py +++ /dev/null @@ -1,712 +0,0 @@ -from dataclasses import dataclass, field -from typing import Dict, List, Tuple, Iterable, Optional, Sequence, Union, TYPE_CHECKING - -import numpy as np -import torch -import torch.nn.functional as F -from torch import Tensor -from torch.distributions import Categorical - -from .audio import CHUNK_LENGTH -from .tokenizer import Tokenizer, get_tokenizer -from .utils import compression_ratio - -if TYPE_CHECKING: - from .model import Whisper - - -@torch.no_grad() -def detect_language(model: "Whisper", mel: Tensor, tokenizer: Tokenizer = None) -> Tuple[Tensor, List[dict]]: - """ - Detect the spoken language in the audio, and return them as list of strings, along with the ids - of the most probable language tokens and the probability distribution over all language tokens. - This is performed outside the main decode loop in order to not interfere with kv-caching. - - Returns - ------- - language_tokens : Tensor, shape = (n_audio,) - ids of the most probable language tokens, which appears after the startoftranscript token. - language_probs : List[Dict[str, float]], length = n_audio - list of dictionaries containing the probability distribution over all languages. - """ - if tokenizer is None: - tokenizer = get_tokenizer(model.is_multilingual) - if tokenizer.language is None or tokenizer.language_token not in tokenizer.sot_sequence: - raise ValueError(f"This model doesn't have language tokens so it can't perform lang id") - - single = mel.ndim == 2 - if single: - mel = mel.unsqueeze(0) - - # skip encoder forward pass if already-encoded audio features were given - if mel.shape[-2:] != (model.dims.n_audio_ctx, model.dims.n_audio_state): - mel = model.encoder(mel) - - # forward pass using a single token, startoftranscript - n_audio = mel.shape[0] - x = torch.tensor([[tokenizer.sot]] * n_audio).to(mel.device) # [n_audio, 1] - logits = model.logits(x, mel)[:, 0] - - # collect detected languages; suppress all non-language tokens - mask = torch.ones(logits.shape[-1], dtype=torch.bool) - mask[list(tokenizer.all_language_tokens)] = False - logits[:, mask] = -np.inf - language_tokens = logits.argmax(dim=-1) - language_token_probs = logits.softmax(dim=-1).cpu() - language_probs = [ - { - c: language_token_probs[i, j].item() - for j, c in zip(tokenizer.all_language_tokens, tokenizer.all_language_codes) - } - for i in range(n_audio) - ] - - if single: - language_tokens = language_tokens[0] - language_probs = language_probs[0] - - return language_tokens, language_probs - - -@dataclass(frozen=True) -class DecodingOptions: - task: str = "transcribe" # whether to perform X->X "transcribe" or X->English "translate" - language: Optional[str] = None # language that the audio is in; uses detected language if None - - # sampling-related options - temperature: float = 0.0 - sample_len: Optional[int] = None # maximum number of tokens to sample - best_of: Optional[int] = None # number of independent samples to collect, when t > 0 - beam_size: Optional[int] = None # number of beams in beam search, when t == 0 - patience: Optional[float] = None # patience in beam search (https://arxiv.org/abs/2204.05424) - - # options for ranking generations (either beams or best-of-N samples) - length_penalty: Optional[float] = None # "alpha" in Google NMT, None defaults to length norm - - # prompt, prefix, and token suppression - prompt: Optional[Union[str, List[int]]] = None # text or tokens for the previous context - prefix: Optional[Union[str, 
List[int]]] = None # text or tokens to prefix the current context - suppress_blank: bool = True # this will suppress blank outputs - - # list of tokens ids (or comma-separated token ids) to suppress - # "-1" will suppress a set of symbols as defined in `tokenizer.non_speech_tokens()` - suppress_tokens: Optional[Union[str, Iterable[int]]] = "-1" - - # timestamp sampling options - without_timestamps: bool = False # use <|notimestamps|> to sample text tokens only - max_initial_timestamp: Optional[float] = 1.0 # the initial timestamp cannot be later than this - - # implementation details - fp16: bool = True # use fp16 for most of the calculation - - -@dataclass(frozen=True) -class DecodingResult: - audio_features: Tensor - language: str - language_probs: Optional[Dict[str, float]] = None - tokens: List[int] = field(default_factory=list) - text: str = "" - avg_logprob: float = np.nan - no_speech_prob: float = np.nan - temperature: float = np.nan - compression_ratio: float = np.nan - - -class Inference: - def logits(self, tokens: Tensor, audio_features: Tensor) -> Tensor: - """Perform a forward pass on the decoder and return per-token logits""" - raise NotImplementedError - - def rearrange_kv_cache(self, source_indices) -> None: - """Update the key-value cache according to the updated beams""" - raise NotImplementedError - - def cleanup_caching(self) -> None: - """Clean up any resources or hooks after decoding is finished""" - pass - - -class PyTorchInference(Inference): - def __init__(self, model: "Whisper", initial_token_length: int): - self.model: "Whisper" = model - self.initial_token_length = initial_token_length - self.kv_cache = {} - self.hooks = [] - - def logits(self, tokens: Tensor, audio_features: Tensor) -> Tensor: - if not self.kv_cache: - self.kv_cache, self.hooks = self.model.install_kv_cache_hooks() - - if tokens.shape[-1] > self.initial_token_length: - # only need to use the last token except in the first forward pass - tokens = tokens[:, -1:] - - return self.model.decoder(tokens, audio_features, kv_cache=self.kv_cache) - - def cleanup_caching(self): - for hook in self.hooks: - hook.remove() - - self.kv_cache = {} - self.hooks = [] - - def rearrange_kv_cache(self, source_indices): - for module, tensor in self.kv_cache.items(): - # update the key/value cache to contain the selected sequences - self.kv_cache[module] = tensor[source_indices].detach() - - -class SequenceRanker: - def rank(self, tokens: List[List[Tensor]], sum_logprobs: List[List[float]]) -> List[int]: - """ - Given a list of groups of samples and their cumulative log probabilities, - return the indices of the samples in each group to select as the final result - """ - raise NotImplementedError - - -class MaximumLikelihoodRanker(SequenceRanker): - """ - Select the sample with the highest log probabilities, penalized using either - a simple length normalization or Google NMT paper's length penalty - """ - - def __init__(self, length_penalty: Optional[float]): - self.length_penalty = length_penalty - - def rank(self, tokens: List[List[Tensor]], sum_logprobs: List[List[float]]): - def scores(logprobs, lengths): - result = [] - for logprob, length in zip(logprobs, lengths): - if self.length_penalty is None: - penalty = length - else: - # from the Google NMT paper - penalty = ((5 + length) / 6) ** self.length_penalty - result.append(logprob / penalty) - return result - - # get the sequence with the highest score - lengths = [[len(t) for t in s] for s in tokens] - return [np.argmax(scores(p, l)) for p, l in 
zip(sum_logprobs, lengths)] - - -class TokenDecoder: - def reset(self): - """Initialize any stateful variables for decoding a new sequence""" - - def update(self, tokens: Tensor, logits: Tensor, sum_logprobs: Tensor) -> Tuple[Tensor, bool]: - """Specify how to select the next token, based on the current trace and logits - - Parameters - ---------- - tokens : Tensor, shape = (n_batch, current_sequence_length) - all tokens in the context so far, including the prefix and sot_sequence tokens - - logits : Tensor, shape = (n_batch, vocab_size) - per-token logits of the probability distribution at the current step - - sum_logprobs : Tensor, shape = (n_batch) - cumulative log probabilities for each sequence - - Returns - ------- - tokens : Tensor, shape = (n_batch, current_sequence_length + 1) - the tokens, appended with the selected next token - - completed : bool - True if all sequences has reached the end of text - - """ - raise NotImplementedError - - def finalize( - self, tokens: Tensor, sum_logprobs: Tensor - ) -> Tuple[Sequence[Sequence[Tensor]], List[List[float]]]: - """Finalize search and return the final candidate sequences - - Parameters - ---------- - tokens : Tensor, shape = (n_audio, n_group, current_sequence_length) - all tokens in the context so far, including the prefix and sot_sequence - - sum_logprobs : Tensor, shape = (n_audio, n_group) - cumulative log probabilities for each sequence - - Returns - ------- - tokens : Sequence[Sequence[Tensor]], length = n_audio - sequence of Tensors containing candidate token sequences, for each audio input - - sum_logprobs : List[List[float]], length = n_audio - sequence of cumulative log probabilities corresponding to the above - - """ - raise NotImplementedError - - -class GreedyDecoder(TokenDecoder): - def __init__(self, temperature: float, eot: int): - self.temperature = temperature - self.eot = eot - - def update(self, tokens: Tensor, logits: Tensor, sum_logprobs: Tensor) -> Tuple[Tensor, bool]: - temperature = self.temperature - if temperature == 0: - next_tokens = logits.argmax(dim=-1) - else: - next_tokens = Categorical(logits=logits / temperature).sample() - - logprobs = F.log_softmax(logits.float(), dim=-1) - current_logprobs = logprobs[torch.arange(logprobs.shape[0]), next_tokens] - sum_logprobs += current_logprobs * (tokens[:, -1] != self.eot) - - next_tokens[tokens[:, -1] == self.eot] = self.eot - tokens = torch.cat([tokens, next_tokens[:, None]], dim=-1) - - completed = (tokens[:, -1] == self.eot).all() - return tokens, completed - - def finalize(self, tokens: Tensor, sum_logprobs: Tensor): - # make sure each sequence has at least one EOT token at the end - tokens = F.pad(tokens, (0, 1), value=self.eot) - return tokens, sum_logprobs.tolist() - - -class BeamSearchDecoder(TokenDecoder): - def __init__(self, beam_size: int, eot: int, inference: Inference, patience: Optional[float] = None): - self.beam_size = beam_size - self.eot = eot - self.inference = inference - self.patience = patience or 1.0 - self.max_candidates: int = round(beam_size * self.patience) - self.finished_sequences = None - - assert self.max_candidates > 0, f"Invalid beam size ({beam_size}) or patience ({patience})" - - def reset(self): - self.finished_sequences = None - - def update(self, tokens: Tensor, logits: Tensor, sum_logprobs: Tensor) -> Tuple[Tensor, bool]: - if tokens.shape[0] % self.beam_size != 0: - raise ValueError(f"{tokens.shape}[0] % {self.beam_size} != 0") - - n_audio = tokens.shape[0] // self.beam_size - if self.finished_sequences is None: # for 
the first update - self.finished_sequences = [{} for _ in range(n_audio)] - - logprobs = F.log_softmax(logits.float(), dim=-1) - next_tokens, source_indices, finished_sequences = [], [], [] - for i in range(n_audio): - scores, sources, finished = {}, {}, {} - - # STEP 1: calculate the cumulative log probabilities for possible candidates - for j in range(self.beam_size): - idx = i * self.beam_size + j - prefix = tokens[idx].tolist() - for logprob, token in zip(*logprobs[idx].topk(self.beam_size + 1)): - new_logprob = (sum_logprobs[idx] + logprob).item() - sequence = tuple(prefix + [token.item()]) - scores[sequence] = new_logprob - sources[sequence] = idx - - # STEP 2: rank the candidates and keep the top beam_size sequences for each audio - saved = 0 - for sequence in sorted(scores, key=scores.get, reverse=True): - if sequence[-1] == self.eot: - finished[sequence] = scores[sequence] - else: - sum_logprobs[len(next_tokens)] = scores[sequence] - next_tokens.append(sequence) - source_indices.append(sources[sequence]) - - saved += 1 - if saved == self.beam_size: - break - - finished_sequences.append(finished) - - tokens = torch.tensor(next_tokens, device=tokens.device) - self.inference.rearrange_kv_cache(source_indices) - - # add newly finished sequences to self.finished_sequences - assert len(self.finished_sequences) == len(finished_sequences) - for previously_finished, newly_finished in zip(self.finished_sequences, finished_sequences): - for seq in sorted(newly_finished, key=newly_finished.get, reverse=True): - if len(previously_finished) >= self.max_candidates: - break # the candidate list is full - previously_finished[seq] = newly_finished[seq] - - # mark as completed if all audio has enough number of samples - completed = all( - len(sequences) >= self.max_candidates for sequences in self.finished_sequences - ) - return tokens, completed - - def finalize(self, preceding_tokens: Tensor, sum_logprobs: Tensor): - # collect all finished sequences, including patience, and add unfinished ones if not enough - sum_logprobs = sum_logprobs.cpu() - for i, sequences in enumerate(self.finished_sequences): - if len(sequences) < self.beam_size: # when not enough sequences are finished - for j in list(np.argsort(sum_logprobs[i]))[::-1]: - sequence = preceding_tokens[i, j].tolist() + [self.eot] - sequences[tuple(sequence)] = sum_logprobs[i][j].item() - if len(sequences) >= self.beam_size: - break - - tokens: List[List[Tensor]] = [ - [torch.tensor(seq) for seq in sequences.keys()] for sequences in self.finished_sequences - ] - sum_logprobs: List[List[float]] = [ - list(sequences.values()) for sequences in self.finished_sequences - ] - return tokens, sum_logprobs - - -class LogitFilter: - def apply(self, logits: Tensor, tokens: Tensor) -> None: - """Apply any filtering or masking to logits in-place - - Parameters - ---------- - logits : Tensor, shape = (n_batch, vocab_size) - per-token logits of the probability distribution at the current step - - tokens : Tensor, shape = (n_batch, current_sequence_length) - all tokens in the context so far, including the prefix and sot_sequence tokens - - """ - raise NotImplementedError - - -class SuppressBlank(LogitFilter): - def __init__(self, tokenizer: Tokenizer, sample_begin: int): - self.tokenizer = tokenizer - self.sample_begin = sample_begin - - def apply(self, logits: Tensor, tokens: Tensor): - if tokens.shape[1] == self.sample_begin: - logits[:, self.tokenizer.encode(" ") + [self.tokenizer.eot]] = -np.inf - - -class SuppressTokens(LogitFilter): - def __init__(self, 
suppress_tokens: Sequence[int]): - self.suppress_tokens = list(suppress_tokens) - - def apply(self, logits: Tensor, tokens: Tensor): - logits[:, self.suppress_tokens] = -np.inf - - -class ApplyTimestampRules(LogitFilter): - def __init__( - self, tokenizer: Tokenizer, sample_begin: int, max_initial_timestamp_index: Optional[int] - ): - self.tokenizer = tokenizer - self.sample_begin = sample_begin - self.max_initial_timestamp_index = max_initial_timestamp_index - - def apply(self, logits: Tensor, tokens: Tensor): - # suppress <|notimestamps|> which is handled by without_timestamps - if self.tokenizer.no_timestamps is not None: - logits[:, self.tokenizer.no_timestamps] = -np.inf - - # timestamps have to appear in pairs, except directly before EOT; mask logits accordingly - for k in range(tokens.shape[0]): - seq = [t for t in tokens[k, self.sample_begin :].tolist()] - last_was_timestamp = len(seq) >= 1 and seq[-1] >= self.tokenizer.timestamp_begin - penultimate_was_timestamp = len(seq) < 2 or seq[-2] >= self.tokenizer.timestamp_begin - - if last_was_timestamp: - if penultimate_was_timestamp: # has to be non-timestamp - logits[k, self.tokenizer.timestamp_begin :] = -np.inf - else: # cannot be normal text tokens - logits[k, : self.tokenizer.eot] = -np.inf - - if tokens.shape[1] == self.sample_begin: - # suppress generating non-timestamp tokens at the beginning - logits[:, : self.tokenizer.timestamp_begin] = -np.inf - - # apply the `max_initial_timestamp` option - if self.max_initial_timestamp_index is not None: - last_allowed = self.tokenizer.timestamp_begin + self.max_initial_timestamp_index - logits[:, last_allowed + 1 :] = -np.inf - - # if sum of probability over timestamps is above any other token, sample timestamp - logprobs = F.log_softmax(logits.float(), dim=-1) - for k in range(tokens.shape[0]): - timestamp_logprob = logprobs[k, self.tokenizer.timestamp_begin :].logsumexp(dim=-1) - max_text_token_logprob = logprobs[k, : self.tokenizer.timestamp_begin].max() - if timestamp_logprob > max_text_token_logprob: - logits[k, : self.tokenizer.timestamp_begin] = -np.inf - - -class DecodingTask: - inference: Inference - sequence_ranker: SequenceRanker - decoder: TokenDecoder - logit_filters: List[LogitFilter] - - def __init__(self, model: "Whisper", options: DecodingOptions): - self.model = model - - language = options.language or "en" - tokenizer = get_tokenizer(model.is_multilingual, language=language, task=options.task) - self.tokenizer: Tokenizer = tokenizer - self.options: DecodingOptions = self._verify_options(options) - - self.n_group: int = options.beam_size or options.best_of or 1 - self.n_ctx: int = model.dims.n_text_ctx - self.sample_len: int = options.sample_len or model.dims.n_text_ctx // 2 - - self.sot_sequence: Tuple[int] = tokenizer.sot_sequence - if self.options.without_timestamps: - self.sot_sequence = tokenizer.sot_sequence_including_notimestamps - - self.initial_tokens: Tuple[int] = self._get_initial_tokens() - self.sample_begin: int = len(self.initial_tokens) - self.sot_index: int = self.initial_tokens.index(tokenizer.sot) - - # inference: implements the forward pass through the decoder, including kv caching - self.inference = PyTorchInference(model, len(self.initial_tokens)) - - # sequence ranker: implements how to rank a group of sampled sequences - self.sequence_ranker = MaximumLikelihoodRanker(options.length_penalty) - - # decoder: implements how to select the next tokens, given the autoregressive distribution - if options.beam_size is not None: - self.decoder = 
BeamSearchDecoder( - options.beam_size, tokenizer.eot, self.inference, options.patience - ) - else: - self.decoder = GreedyDecoder(options.temperature, tokenizer.eot) - - # logit filters: applies various rules to suppress or penalize certain tokens - self.logit_filters = [] - if self.options.suppress_blank: - self.logit_filters.append(SuppressBlank(self.tokenizer, self.sample_begin)) - if self.options.suppress_tokens: - self.logit_filters.append(SuppressTokens(self._get_suppress_tokens())) - if not options.without_timestamps: - precision = CHUNK_LENGTH / model.dims.n_audio_ctx # usually 0.02 seconds - max_initial_timestamp_index = None - if options.max_initial_timestamp: - max_initial_timestamp_index = round(self.options.max_initial_timestamp / precision) - self.logit_filters.append( - ApplyTimestampRules(tokenizer, self.sample_begin, max_initial_timestamp_index) - ) - - def _verify_options(self, options: DecodingOptions) -> DecodingOptions: - if options.beam_size is not None and options.best_of is not None: - raise ValueError("beam_size and best_of can't be given together") - if options.temperature == 0: - if options.best_of is not None: - raise ValueError("best_of with greedy sampling (T=0) is not compatible") - if options.patience is not None and options.beam_size is None: - raise ValueError("patience requires beam_size to be given") - if options.length_penalty is not None and not (0 <= options.length_penalty <= 1): - raise ValueError("length_penalty (alpha) should be a value between 0 and 1") - - return options - - def _get_initial_tokens(self) -> Tuple[int]: - tokens = list(self.sot_sequence) - prefix = self.options.prefix - prompt = self.options.prompt - - if prefix: - prefix_tokens = ( - self.tokenizer.encode(" " + prefix.strip()) if isinstance(prefix, str) else prefix - ) - if self.sample_len is not None: - max_prefix_len = self.n_ctx // 2 - self.sample_len - prefix_tokens = prefix_tokens[-max_prefix_len:] - tokens = tokens + prefix_tokens - - if prompt: - prompt_tokens = ( - self.tokenizer.encode(" " + prompt.strip()) if isinstance(prompt, str) else prompt - ) - tokens = [self.tokenizer.sot_prev] + prompt_tokens[-(self.n_ctx // 2 - 1) :] + tokens - - return tuple(tokens) - - def _get_suppress_tokens(self) -> Tuple[int]: - suppress_tokens = self.options.suppress_tokens - - if isinstance(suppress_tokens, str): - suppress_tokens = [int(t) for t in suppress_tokens.split(",")] - - if -1 in suppress_tokens: - suppress_tokens = [t for t in suppress_tokens if t >= 0] - suppress_tokens.extend(self.tokenizer.non_speech_tokens) - elif suppress_tokens is None or len(suppress_tokens) == 0: - suppress_tokens = [] # interpret empty string as an empty list - else: - assert isinstance(suppress_tokens, list), "suppress_tokens must be a list" - - suppress_tokens.extend( - [self.tokenizer.sot, self.tokenizer.sot_prev, self.tokenizer.sot_lm] - ) - if self.tokenizer.no_speech is not None: - # no-speech probability is collected separately - suppress_tokens.append(self.tokenizer.no_speech) - - return tuple(sorted(set(suppress_tokens))) - - def _get_audio_features(self, mel: Tensor): - if self.options.fp16: - mel = mel.half() - - if mel.shape[-2:] == (self.model.dims.n_audio_ctx, self.model.dims.n_audio_state): - # encoded audio features are given; skip audio encoding - print("encoded audio features are given; skip audio encoding") - audio_features = mel - else: - print(mel.shape) - print("===============================") - audio_features = self.model.encoder(mel) - - if audio_features.dtype != 
(torch.float16 if self.options.fp16 else torch.float32): - return TypeError(f"audio_features has an incorrect dtype: {audio_features.dtype}") - - return audio_features - - def _detect_language(self, audio_features: Tensor, tokens: Tensor): - languages = [self.options.language] * audio_features.shape[0] - lang_probs = None - - if self.options.language is None or self.options.task == "lang_id": - lang_tokens, lang_probs = self.model.detect_language(audio_features, self.tokenizer) - languages = [max(probs, key=probs.get) for probs in lang_probs] - if self.options.language is None: - tokens[:, self.sot_index + 1] = lang_tokens # write language tokens - - return languages, lang_probs - - def _main_loop(self, audio_features: Tensor, tokens: Tensor): - assert audio_features.shape[0] == tokens.shape[0] - n_batch = tokens.shape[0] - sum_logprobs: Tensor = torch.zeros(n_batch, device=audio_features.device) - no_speech_probs = [np.nan] * n_batch - - try: - for i in range(self.sample_len): - logits = self.inference.logits(tokens, audio_features) - - if i == 0 and self.tokenizer.no_speech is not None: # save no_speech_probs - probs_at_sot = logits[:, self.sot_index].float().softmax(dim=-1) - no_speech_probs = probs_at_sot[:, self.tokenizer.no_speech].tolist() - - # now we need to consider the logits at the last token only - logits = logits[:, -1] - - # apply the logit filters, e.g. for suppressing or applying penalty to - for logit_filter in self.logit_filters: - logit_filter.apply(logits, tokens) - - # expand the tokens tensor with the selected next tokens - tokens, completed = self.decoder.update(tokens, logits, sum_logprobs) - - if completed or tokens.shape[-1] > self.n_ctx: - break - finally: - self.inference.cleanup_caching() - - return tokens, sum_logprobs, no_speech_probs - - @torch.no_grad() - def run(self, mel: Tensor) -> List[DecodingResult]: - self.decoder.reset() - tokenizer: Tokenizer = self.tokenizer - n_audio: int = mel.shape[0] - - audio_features: Tensor = self._get_audio_features(mel) # encoder forward pass - tokens: Tensor = torch.tensor([self.initial_tokens]).repeat(n_audio, 1) - - # detect language if requested, overwriting the language token - languages, language_probs = self._detect_language(audio_features, tokens) - if self.options.task == "lang_id": - return [ - DecodingResult(audio_features=features, language=language, language_probs=probs) - for features, language, probs in zip(audio_features, languages, language_probs) - ] - - # repeat the audio & text tensors by the group size, for beam search or best-of-n sampling - audio_features = audio_features.repeat_interleave(self.n_group, dim=0) - tokens = tokens.repeat_interleave(self.n_group, dim=0).to(audio_features.device) - - # call the main sampling loop - tokens, sum_logprobs, no_speech_probs = self._main_loop(audio_features, tokens) - - # reshape the tensors to have (n_audio, n_group) as the first two dimensions - audio_features = audio_features[:: self.n_group] - no_speech_probs = no_speech_probs[:: self.n_group] - assert audio_features.shape[0] == len(no_speech_probs) == n_audio - - tokens = tokens.reshape(n_audio, self.n_group, -1) - sum_logprobs = sum_logprobs.reshape(n_audio, self.n_group) - - # get the final candidates for each group, and slice between the first sampled token and EOT - tokens, sum_logprobs = self.decoder.finalize(tokens, sum_logprobs) - tokens: List[List[Tensor]] = [ - [t[self.sample_begin : (t == tokenizer.eot).nonzero()[0, 0]] for t in s] for s in tokens - ] - - # select the top-ranked sample in each 
group - selected = self.sequence_ranker.rank(tokens, sum_logprobs) - tokens: List[List[int]] = [t[i].tolist() for i, t in zip(selected, tokens)] - texts: List[str] = [tokenizer.decode(t).strip() for t in tokens] - - sum_logprobs: List[float] = [lp[i] for i, lp in zip(selected, sum_logprobs)] - avg_logprobs: List[float] = [lp / (len(t) + 1) for t, lp in zip(tokens, sum_logprobs)] - - fields = (texts, languages, tokens, audio_features, avg_logprobs, no_speech_probs) - if len(set(map(len, fields))) != 1: - raise RuntimeError(f"inconsistent result lengths: {list(map(len, fields))}") - - return [ - DecodingResult( - audio_features=features, - language=language, - tokens=tokens, - text=text, - avg_logprob=avg_logprob, - no_speech_prob=no_speech_prob, - temperature=self.options.temperature, - compression_ratio=compression_ratio(text), - ) - for text, language, tokens, features, avg_logprob, no_speech_prob in zip(*fields) - ] - - -@torch.no_grad() -def decode(model: "Whisper", mel: Tensor, options: DecodingOptions = DecodingOptions()) -> Union[DecodingResult, List[DecodingResult]]: - """ - Performs decoding of 30-second audio segment(s), provided as Mel spectrogram(s). - - Parameters - ---------- - model: Whisper - the Whisper model instance - - mel: torch.Tensor, shape = (80, 3000) or (*, 80, 3000) - A tensor containing the Mel spectrogram(s) - - options: DecodingOptions - A dataclass that contains all necessary options for decoding 30-second segments - - Returns - ------- - result: Union[DecodingResult, List[DecodingResult]] - The result(s) of decoding contained in `DecodingResult` dataclass instance(s) - """ - single = mel.ndim == 2 - if single: - mel = mel.unsqueeze(0) - result = DecodingTask(model, options).run(mel) - - if single: - result = result[0] - - return result diff --git a/spaces/merve/anonymization/public/uncertainty-calibration/footnote.js b/spaces/merve/anonymization/public/uncertainty-calibration/footnote.js deleted file mode 100644 index 05eac09cc1b8466bb2c440b6fd23060cd91f5017..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/public/uncertainty-calibration/footnote.js +++ /dev/null @@ -1,73 +0,0 @@ -!(() => { - var ttFnSel = d3.select('body').selectAppend('div.tooltip-footnote.tooltip-footnote-hidden') - - function index2superscipt(i){ - return (i + 1 + '') - .split('') - .map(num => '⁰¹²³⁴⁵⁶⁷⁸⁹'[num]) - .join('') - } - - var footendSel = d3.selectAll('.footend') - .each(function(d, i){ - var sel = d3.select(this) - var ogHTML = sel.parent().html() - - sel - .at({href: '#footstart-' + i, id: 'footend-' + i}) - .text(index2superscipt(i)) - .datum(ogHTML) - }) - - footendSel.parent().parent().selectAll('br').remove() - - var footstartSel = d3.selectAll('.footstart') - .each(function(d, i){ - d3.select(this) - .at({ - href: '#footend-' + i, - }) - .text(index2superscipt(i)) - .datum(footendSel.data()[i]) - .parent().at({id: 'footstart-' + i}) - }) - .call(addLockedTooltip) - - - function addLockedTooltip(sel){ - sel - .on('mouseover', function(d, i){ - ttFnSel - .classed('tooltip-footnote-hidden', 0) - .html(d).select('.footend').remove() - - var [x, y] = d3.mouse(d3.select('html').node()) - var bb = ttFnSel.node().getBoundingClientRect(), - left = d3.clamp(20, (x-bb.width/2), window.innerWidth - bb.width - 20), - top = innerHeight + scrollY > y + 20 + bb.height ? 
y + 20 : y - bb.height - 10; - - ttFnSel.st({left, top}) - }) - .on('mousemove', mousemove) - .on('mouseout', mouseout) - - ttFnSel - .on('mousemove', mousemove) - .on('mouseout', mouseout) - - function mousemove(){ - if (window.__ttfade) window.__ttfade.stop() - } - - function mouseout(){ - if (window.__ttfade) window.__ttfade.stop() - window.__ttfade = d3.timeout( - () => ttFnSel.classed('tooltip-footnote-hidden', 1), - 250 - ) - } - } - -})() - - diff --git a/spaces/merve/anonymization/source/third_party/simple-statistics.min.js b/spaces/merve/anonymization/source/third_party/simple-statistics.min.js deleted file mode 100644 index 9191046b7dc959d771a904875817c2b9c26ff0e5..0000000000000000000000000000000000000000 --- a/spaces/merve/anonymization/source/third_party/simple-statistics.min.js +++ /dev/null @@ -1,3 +0,0 @@ -// https://github.com/simple-statistics/simple-statistics Copyright (c) 2014, Tom MacWright - -!function(t,r){"object"==typeof exports&&"undefined"!=typeof module?r(exports):"function"==typeof define&&define.amd?define(["exports"],r):r(t.ss={})}(this,function(t){"use strict";function r(t){if(0===t.length)return 0;for(var r,n=t[0],e=0,a=1;a=Math.abs(t[a])?e+=n-r+t[a]:e+=t[a]-r+n,n=r;return n+e}function g(t){if(0===t.length)throw new Error("mean requires at least one data point");return r(t)/t.length}function n(t,r){var n,e,a=g(t),o=0;if(2===r)for(e=0;er&&(r=t[n]);return r}function i(t,r){var n=t.length*r;if(0===t.length)throw new Error("quantile requires at least one data point.");if(r<0||1f&&p(t,n,e);sf;)l--}t[n]===f?p(t,n,l):p(t,++l,e),l<=r&&(n=l+1),r<=l&&(e=l-1)}}function p(t,r,n){var e=t[r];t[r]=t[n],t[n]=e}function s(t,r){var n=t.slice();if(Array.isArray(r)){!function(t,r){for(var n=[0],e=0;et[t.length-1])return 1;var n=function(t,r){var n=0,e=0,a=t.length;for(;e>>1]?a=n:e=-~n;return e}(t,r);if(t[n]!==r)return n/t.length;n++;var e=function(t,r){var n=0,e=0,a=t.length;for(;e=t[n=e+a>>>1]?e=-~n:a=n;return e}(t,r);if(e===n)return n/t.length;var a=e-n+1;return a*(e+n)/2/a/t.length}function m(t){var r=s(t,.75),n=s(t,.25);if("number"==typeof r&&"number"==typeof n)return r-n}function d(t){return+s(t,.5)}function b(t){for(var r=d(t),n=[],e=0;e=e[n][u]);--g)(s=x(h,u,o,i)+e[n-1][h-1])n&&(n=t[e]),t[e]t.length)throw new Error("cannot generate more classes than there are data values");var n=f(t);if(1===y(n))return[n];var e=S(r,n.length),a=S(r,n.length);!function(t,r,n){for(var e,a=r[0].length,o=t[Math.floor(a/2)],i=[],u=[],h=0;h=Math.abs(a)&&(c+=1);else if("greater"===n)for(h=0;h<=e;h++)o[h]>=a&&(c+=1);else for(h=0;h<=e;h++)o[h]<=a&&(c+=1);return c/e},t.bisect=function(t,r,n,e,a){if("function"!=typeof t)throw new TypeError("func must be a function");for(var o=0;o - -Search, ranking and recommendation systems can help find useful documents in large datasets. However, these datasets reflect the biases of the society in which they were created and the systems risk re-entrenching those biases. For example, if someone who is not a white man searches for "CEO pictures" and sees a [page of white men](https://www.nytimes.com/interactive/2018/04/24/upshot/women-and-men-named-john.html), they may feel that only white men can be CEOs, further perpetuating lack of representation at companies' executive levels. - -Using the careful quantification outlined in a recent paper, [Diversity and Inclusion Metrics in Subset Selection](https://arxiv.org/pdf/2002.03256.pdf), we can quantify biases and push these systems to return a wider range of results. 
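To make "quantify biases" concrete, the basic ingredient is just the gap between how often an attribute shows up in the returned results and how often we want it to appear. Here is a minimal sketch with made-up results and a made-up 30% target (an illustration only, not the paper's reference implementation):

```python
# Toy sketch: how far is a set of results from a target share of one attribute?
# The results list and the 30% target below are invented for illustration.
results = ["green", "blue", "green", "blue", "blue", "blue", "blue", "blue", "blue", "blue"]
target_share = 0.30

actual_share = sum(item == "green" for item in results) / len(results)
difference = abs(actual_share - target_share)
print(f"actual={actual_share:.0%}  target={target_share:.0%}  difference={difference:.0%}")
# actual=20%  target=30%  difference=10%
```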
- -The mathematics of all this is a little easier to follow with abstract shapes. Let's take a look at some of them: - -
- -Suppose we want to return about 30% green boxes to reflect the distribution of some larger universe of shapes. Try clicking on the shapes below to select some of them — can you find a better subset to return? - -
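Framed as code, the little game above is a search over subsets: try the possible combinations and keep the one whose share of green items lands closest to the 30% target. This is a deliberately brute-force sketch with an invented pool of shapes, not the code behind the interactive diagram:

```python
from itertools import combinations

# Hypothetical pool of shapes: True = green, False = not green.
pool = [True, False, False, True, False, True, False, False, False, True]
target_green = 0.30
subset_size = 6

def green_difference(subset):
    """Absolute gap between the subset's share of green items and the target."""
    share = sum(subset) / len(subset)
    return abs(share - target_green)

best = min(combinations(pool, subset_size), key=green_difference)
print(f"best subset: {best}, difference: {green_difference(best):.2f}")
```

Brute force is fine for ten shapes; the point is only to make the target-matching objective explicit.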
- -Another diversity metric we care about is the percentage of dots... how close to 35% dots can you get? - -
- -If we can only return a single subset, how should we consider multiple diversity metrics? Sometimes it isn't possible to reduce the difference of every metric to zero. One natural approach: find the selection with the **lowest mean difference** across all the metrics to get as close as possible to all the targets. - -In other circumstances, like picking a panel of speakers, avoiding badly representing any single category might be more important. This can be done by finding the subset with the **lowest max difference**. Try minimizing both below: - -
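The two aggregations differ only in how the per-metric gaps get combined: average them, or take the worst one. A minimal sketch, with invented targets and candidate subsets rather than the article's actual data:

```python
# Hedged sketch: compare "mean difference" and "max difference" for a few
# hand-made candidate subsets. Each subset is summarized by its share of
# green / dot / small items; the targets are invented for illustration.
targets = {"green": 0.30, "dots": 0.35, "small": 0.60}

candidates = {
    "subset A": {"green": 0.30, "dots": 0.20, "small": 0.60},
    "subset B": {"green": 0.40, "dots": 0.30, "small": 0.50},
    "subset C": {"green": 0.10, "dots": 0.35, "small": 0.65},
}

def differences(shares):
    return [abs(shares[k] - targets[k]) for k in targets]

def mean_difference(shares):
    diffs = differences(shares)
    return sum(diffs) / len(diffs)

def max_difference(shares):
    return max(differences(shares))

best_by_mean = min(candidates, key=lambda name: mean_difference(candidates[name]))
best_by_max = min(candidates, key=lambda name: max_difference(candidates[name]))
print("lowest mean difference:", best_by_mean)
print("lowest max difference: ", best_by_max)
```

With these made-up numbers the two criteria disagree: subset A has the lowest mean gap, while subset B has the smallest worst-case gap, which is exactly the tension the next paragraph points at.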
- -Notice that minimizing the mean results in a different subset than minimizing the max; how else might using one over the other change the results? - -### Ranking Measures - -We can pull out more detail by showing how the mean difference and maximum difference rank lots of sets. Below, there are 20 sets of 10 shapes sorted by the two measures. Try adjusting the target slider on the left to see how the rankings change; each set's percentage of green, dots and small shapes are shown in the small histograms. - -
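Ranking is the same computation applied to many sets at once: score every candidate set under each measure and sort. The sketch below uses randomly generated toy sets (not the data behind the diagram) and also re-sorts under a 100% green target, the extreme case discussed next:

```python
import random

random.seed(0)
targets = {"green": 0.30, "dots": 0.35, "small": 0.60}

# 20 random toy sets of 10 shapes; each shape is a dict of binary attributes.
sets_of_shapes = [
    [{k: random.random() < 0.5 for k in targets} for _ in range(10)]
    for _ in range(20)
]

def gaps(shapes, targets):
    """Per-attribute gap between the set's share of that attribute and its target."""
    return {k: abs(sum(s[k] for s in shapes) / len(shapes) - t) for k, t in targets.items()}

def mean_gap(shapes, targets):
    g = gaps(shapes, targets)
    return sum(g.values()) / len(g)

def max_gap(shapes, targets):
    return max(gaps(shapes, targets).values())

by_mean = sorted(range(len(sets_of_shapes)), key=lambda i: mean_gap(sets_of_shapes[i], targets))
by_max = sorted(range(len(sets_of_shapes)), key=lambda i: max_gap(sets_of_shapes[i], targets))
print("ranked by mean difference:", by_mean[:5], "...")
print("ranked by max difference: ", by_max[:5], "...")

# Push the green target to 100% and re-rank to see how the ordering shifts.
all_green = dict(targets, green=1.0)
print("max difference, 100% green target:",
      sorted(range(len(sets_of_shapes)), key=lambda i: max_gap(sets_of_shapes[i], all_green))[:5], "...")
```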
- -At the extremes, the choice of measure can have a big impact: if we want to try and return all green results, we can shift the green target up to 100%. With this target, the minimum difference basically sorts the sets by the number of green items and uses the other targets as a tiebreaker. In contrast, sorting by the mean difference balances the green target more with the dot and small targets. - -
- -Beyond mean and max differences, there are more ways to combine diversity metrics, like taking the cross of two metrics to account for [intersectionality](https://en.wikipedia.org/wiki/Intersectionality). The absolute value of the difference in target and actual percentages can also be quantified in other ways — you might want to penalize undershooting more than overshooting, for example. It's important to keep in mind what exactly you're trying to maximize and the dataset that you're operating on. - -### Which Measure is Best? - -In a vacuum, all of these ranking methods are defensible. Picking one requires knowledge of the dataset and broader societal context. - -For example, the doctors on the left have more variance along the shirt color attribute, but they're less diverse by gender than the doctors on the right. With the shirt color and gender targets we've picked, the two subsets have the same mean and max differences However, in most applications, it's more important to have a representative sample of socially relevant characteristics, like gender, rather than something less salient, like clothing color. - -
- -Just selecting a diverse sample isn't sufficient either. [Diversity and Inclusion Metrics in Subset Selection](https://arxiv.org/pdf/2002.03256.pdf) introduces a way of measuring "inclusion" - how well does the searcher feel represented in the results? - -Below, we have gender diversity, without inclusion for women, in the “construction worker” image domain. Masculine-presenting individuals are shown in realistic, modern construction worker situations, while feminine-presenting individuals and other gender presentations are depicted as historic nostalgia, toys, clipart, or passive. - -
- -The context of the query and the searcher also plays in the quality of search results. A search for "work clothing" that shows a mixed palette of colors for men's clothing and only pink women's clothing might make the searcher feel that women need to appear stereotypically feminine in a professional setting. But the same set of women's clothes might be appropriate to show for a "pink women work clothes" search or if the searcher had previously expressed a preference for pink. - -We saw how a small switch from mean to max made a huge difference in what abstract shapes are returned – and how things can get even more complex when socially salient characteristics are layered in. Defaults and small decisions can encode our priorities and values; intentionally thinking about how diversity and inclusion are being measured and which characteristics are emphasized is a step towards designing more equitable systems. - -### More Reading - -The [Diversity and Inclusion Metrics](https://arxiv.org/pdf/2002.03256.pdf) paper has a [Colab](https://colab.research.google.com/github/PAIR-code/ai-explorables/blob/master/source/measuring-diversity/diversity-and-inclusion.ipynb) with a detailed desciption of the metrics, additional visualizations and a reference Python implementation. - -The difficulties of [measuring fairness](https://pair.withgoogle.com/explorables/measuring-fairness/) in general have been well studied; subset selection is still an active area of research. [Fairness of Exposure in Rankings](https://www.cs.cornell.edu/~tj/publications/singh_joachims_18a.pdf) proposes a ranking algorithm that incorporates fairness constraints. [Toward creating a fairer ranking in search engine results](https://www.ilab.cs.rutgers.edu/~rg522/publication/gao-2020-ipm/gao-2020-ipm.pdf) measures diversity bias in actual search results. - -Inferring user preferences is also tricky; you can checkout ways to design for user feedback and control over queries in the [People + AI Guidebook](https://pair.withgoogle.com/chapter/feedback-controls/). - -### Credits - -Adam Pearce, Dylan Baker, Ellen Jiang, Meg Mitchell\* and Timnit Gebru\* // March 2021 - -*Work done while at Google - -Thanks to Alex Hanna, Carey Radebaugh, Emily Denton, Fernanda Viégas, James Wexler, Jess Holbrook, Ludovic Peran, Martin Wattenberg, Michael Terry, Yannick Assogba and Zan Armstrong for their help with this piece. - - -

More Explorables

- -

- - - - - - - - - - - - - \ No newline at end of file diff --git a/spaces/merve/hidden-bias/public/third_party/d3_.js b/spaces/merve/hidden-bias/public/third_party/d3_.js deleted file mode 100644 index 9c4b6815ec3cdc0e9f8a072b2d05be7ad48fa703..0000000000000000000000000000000000000000 --- a/spaces/merve/hidden-bias/public/third_party/d3_.js +++ /dev/null @@ -1,143 +0,0 @@ -/** - * @license - * Lodash lodash.com/license | Underscore.js 1.8.3 underscorejs.org/LICENSE - */ -;(function(){function n(n,t){return n.set(t[0],t[1]),n}function t(n,t){return n.add(t),n}function r(n,t,r){switch(r.length){case 0:return n.call(t);case 1:return n.call(t,r[0]);case 2:return n.call(t,r[0],r[1]);case 3:return n.call(t,r[0],r[1],r[2])}return n.apply(t,r)}function e(n,t,r,e){for(var u=-1,i=null==n?0:n.length;++u"']/g,J=RegExp(G.source),Y=RegExp(H.source),Q=/<%-([\s\S]+?)%>/g,X=/<%([\s\S]+?)%>/g,nn=/<%=([\s\S]+?)%>/g,tn=/\.|\[(?:[^[\]]*|(["'])(?:(?!\1)[^\\]|\\.)*?\1)\]/,rn=/^\w*$/,en=/^\./,un=/[^.[\]]+|\[(?:(-?\d+(?:\.\d+)?)|(["'])((?:(?!\2)[^\\]|\\.)*?)\2)\]|(?=(?:\.|\[\])(?:\.|\[\]|$))/g,on=/[\\^$.*+?()[\]{}|]/g,fn=RegExp(on.source),cn=/^\s+|\s+$/g,an=/^\s+/,ln=/\s+$/,sn=/\{(?:\n\/\* \[wrapped with .+\] \*\/)?\n?/,hn=/\{\n\/\* \[wrapped with (.+)\] \*/,pn=/,? & /,_n=/[^\x00-\x2f\x3a-\x40\x5b-\x60\x7b-\x7f]+/g,vn=/\\(\\)?/g,gn=/\$\{([^\\}]*(?:\\.[^\\}]*)*)\}/g,dn=/\w*$/,yn=/^[-+]0x[0-9a-f]+$/i,bn=/^0b[01]+$/i,xn=/^\[object .+?Constructor\]$/,jn=/^0o[0-7]+$/i,wn=/^(?:0|[1-9]\d*)$/,mn=/[\xc0-\xd6\xd8-\xf6\xf8-\xff\u0100-\u017f]/g,An=/($^)/,kn=/['\n\r\u2028\u2029\\]/g,En="[\\ufe0e\\ufe0f]?(?:[\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff]|\\ud83c[\\udffb-\\udfff])?(?:\\u200d(?:[^\\ud800-\\udfff]|(?:\\ud83c[\\udde6-\\uddff]){2}|[\\ud800-\\udbff][\\udc00-\\udfff])[\\ufe0e\\ufe0f]?(?:[\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff]|\\ud83c[\\udffb-\\udfff])?)*",On="(?:[\\u2700-\\u27bf]|(?:\\ud83c[\\udde6-\\uddff]){2}|[\\ud800-\\udbff][\\udc00-\\udfff])"+En,Sn="(?:[^\\ud800-\\udfff][\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff]?|[\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff]|(?:\\ud83c[\\udde6-\\uddff]){2}|[\\ud800-\\udbff][\\udc00-\\udfff]|[\\ud800-\\udfff])",In=RegExp("['\u2019]","g"),Rn=RegExp("[\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff]","g"),zn=RegExp("\\ud83c[\\udffb-\\udfff](?=\\ud83c[\\udffb-\\udfff])|"+Sn+En,"g"),Wn=RegExp(["[A-Z\\xc0-\\xd6\\xd8-\\xde]?[a-z\\xdf-\\xf6\\xf8-\\xff]+(?:['\u2019](?:d|ll|m|re|s|t|ve))?(?=[\\xac\\xb1\\xd7\\xf7\\x00-\\x2f\\x3a-\\x40\\x5b-\\x60\\x7b-\\xbf\\u2000-\\u206f \\t\\x0b\\f\\xa0\\ufeff\\n\\r\\u2028\\u2029\\u1680\\u180e\\u2000\\u2001\\u2002\\u2003\\u2004\\u2005\\u2006\\u2007\\u2008\\u2009\\u200a\\u202f\\u205f\\u3000]|[A-Z\\xc0-\\xd6\\xd8-\\xde]|$)|(?:[A-Z\\xc0-\\xd6\\xd8-\\xde]|[^\\ud800-\\udfff\\xac\\xb1\\xd7\\xf7\\x00-\\x2f\\x3a-\\x40\\x5b-\\x60\\x7b-\\xbf\\u2000-\\u206f \\t\\x0b\\f\\xa0\\ufeff\\n\\r\\u2028\\u2029\\u1680\\u180e\\u2000\\u2001\\u2002\\u2003\\u2004\\u2005\\u2006\\u2007\\u2008\\u2009\\u200a\\u202f\\u205f\\u3000\\d+\\u2700-\\u27bfa-z\\xdf-\\xf6\\xf8-\\xffA-Z\\xc0-\\xd6\\xd8-\\xde])+(?:['\u2019](?:D|LL|M|RE|S|T|VE))?(?=[\\xac\\xb1\\xd7\\xf7\\x00-\\x2f\\x3a-\\x40\\x5b-\\x60\\x7b-\\xbf\\u2000-\\u206f \\t\\x0b\\f\\xa0\\ufeff\\n\\r\\u2028\\u2029\\u1680\\u180e\\u2000\\u2001\\u2002\\u2003\\u2004\\u2005\\u2006\\u2007\\u2008\\u2009\\u200a\\u202f\\u205f\\u3000]|[A-Z\\xc0-\\xd6\\xd8-\\xde](?:[a-z\\xdf-\\xf6\\xf8-\\xff]|[^\\ud800-\\udfff\\xac\\xb1\\xd7\\xf7\\x00-\\x2f\\x3a-\\x40\\x5b-\\x60\\x7b-\\xbf\\u2000-\\u206f 
\\t\\x0b\\f\\xa0\\ufeff\\n\\r\\u2028\\u2029\\u1680\\u180e\\u2000\\u2001\\u2002\\u2003\\u2004\\u2005\\u2006\\u2007\\u2008\\u2009\\u200a\\u202f\\u205f\\u3000\\d+\\u2700-\\u27bfa-z\\xdf-\\xf6\\xf8-\\xffA-Z\\xc0-\\xd6\\xd8-\\xde])|$)|[A-Z\\xc0-\\xd6\\xd8-\\xde]?(?:[a-z\\xdf-\\xf6\\xf8-\\xff]|[^\\ud800-\\udfff\\xac\\xb1\\xd7\\xf7\\x00-\\x2f\\x3a-\\x40\\x5b-\\x60\\x7b-\\xbf\\u2000-\\u206f \\t\\x0b\\f\\xa0\\ufeff\\n\\r\\u2028\\u2029\\u1680\\u180e\\u2000\\u2001\\u2002\\u2003\\u2004\\u2005\\u2006\\u2007\\u2008\\u2009\\u200a\\u202f\\u205f\\u3000\\d+\\u2700-\\u27bfa-z\\xdf-\\xf6\\xf8-\\xffA-Z\\xc0-\\xd6\\xd8-\\xde])+(?:['\u2019](?:d|ll|m|re|s|t|ve))?|[A-Z\\xc0-\\xd6\\xd8-\\xde]+(?:['\u2019](?:D|LL|M|RE|S|T|VE))?|\\d*(?:(?:1ST|2ND|3RD|(?![123])\\dTH)\\b)|\\d*(?:(?:1st|2nd|3rd|(?![123])\\dth)\\b)|\\d+",On].join("|"),"g"),Bn=RegExp("[\\u200d\\ud800-\\udfff\\u0300-\\u036f\\ufe20-\\ufe2f\\u20d0-\\u20ff\\ufe0e\\ufe0f]"),Ln=/[a-z][A-Z]|[A-Z]{2,}[a-z]|[0-9][a-zA-Z]|[a-zA-Z][0-9]|[^a-zA-Z0-9 ]/,Un="Array Buffer DataView Date Error Float32Array Float64Array Function Int8Array Int16Array Int32Array Map Math Object Promise RegExp Set String Symbol TypeError Uint8Array Uint8ClampedArray Uint16Array Uint32Array WeakMap _ clearTimeout isFinite parseInt setTimeout".split(" "),Cn={}; -Cn["[object Float32Array]"]=Cn["[object Float64Array]"]=Cn["[object Int8Array]"]=Cn["[object Int16Array]"]=Cn["[object Int32Array]"]=Cn["[object Uint8Array]"]=Cn["[object Uint8ClampedArray]"]=Cn["[object Uint16Array]"]=Cn["[object Uint32Array]"]=true,Cn["[object Arguments]"]=Cn["[object Array]"]=Cn["[object ArrayBuffer]"]=Cn["[object Boolean]"]=Cn["[object DataView]"]=Cn["[object Date]"]=Cn["[object Error]"]=Cn["[object Function]"]=Cn["[object Map]"]=Cn["[object Number]"]=Cn["[object Object]"]=Cn["[object RegExp]"]=Cn["[object Set]"]=Cn["[object String]"]=Cn["[object WeakMap]"]=false; -var Dn={};Dn["[object Arguments]"]=Dn["[object Array]"]=Dn["[object ArrayBuffer]"]=Dn["[object DataView]"]=Dn["[object Boolean]"]=Dn["[object Date]"]=Dn["[object Float32Array]"]=Dn["[object Float64Array]"]=Dn["[object Int8Array]"]=Dn["[object Int16Array]"]=Dn["[object Int32Array]"]=Dn["[object Map]"]=Dn["[object Number]"]=Dn["[object Object]"]=Dn["[object RegExp]"]=Dn["[object Set]"]=Dn["[object String]"]=Dn["[object Symbol]"]=Dn["[object Uint8Array]"]=Dn["[object Uint8ClampedArray]"]=Dn["[object Uint16Array]"]=Dn["[object Uint32Array]"]=true, -Dn["[object Error]"]=Dn["[object Function]"]=Dn["[object WeakMap]"]=false;var Mn,Tn={"\\":"\\","'":"'","\n":"n","\r":"r","\u2028":"u2028","\u2029":"u2029"},$n=parseFloat,Fn=parseInt,Nn=typeof global=="object"&&global&&global.Object===Object&&global,Pn=typeof self=="object"&&self&&self.Object===Object&&self,Zn=Nn||Pn||Function("return this")(),qn=typeof exports=="object"&&exports&&!exports.nodeType&&exports,Vn=qn&&typeof module=="object"&&module&&!module.nodeType&&module,Kn=Vn&&Vn.exports===qn,Gn=Kn&&Nn.process; -n:{try{Mn=Gn&&Gn.binding&&Gn.binding("util");break n}catch(n){}Mn=void 0}var Hn=Mn&&Mn.isArrayBuffer,Jn=Mn&&Mn.isDate,Yn=Mn&&Mn.isMap,Qn=Mn&&Mn.isRegExp,Xn=Mn&&Mn.isSet,nt=Mn&&Mn.isTypedArray,tt=j("length"),rt=w({"\xc0":"A","\xc1":"A","\xc2":"A","\xc3":"A","\xc4":"A","\xc5":"A","\xe0":"a","\xe1":"a","\xe2":"a","\xe3":"a","\xe4":"a","\xe5":"a","\xc7":"C","\xe7":"c","\xd0":"D","\xf0":"d","\xc8":"E","\xc9":"E","\xca":"E","\xcb":"E","\xe8":"e","\xe9":"e","\xea":"e","\xeb":"e","\xcc":"I","\xcd":"I","\xce":"I", 
-"\xcf":"I","\xec":"i","\xed":"i","\xee":"i","\xef":"i","\xd1":"N","\xf1":"n","\xd2":"O","\xd3":"O","\xd4":"O","\xd5":"O","\xd6":"O","\xd8":"O","\xf2":"o","\xf3":"o","\xf4":"o","\xf5":"o","\xf6":"o","\xf8":"o","\xd9":"U","\xda":"U","\xdb":"U","\xdc":"U","\xf9":"u","\xfa":"u","\xfb":"u","\xfc":"u","\xdd":"Y","\xfd":"y","\xff":"y","\xc6":"Ae","\xe6":"ae","\xde":"Th","\xfe":"th","\xdf":"ss","\u0100":"A","\u0102":"A","\u0104":"A","\u0101":"a","\u0103":"a","\u0105":"a","\u0106":"C","\u0108":"C","\u010a":"C", -"\u010c":"C","\u0107":"c","\u0109":"c","\u010b":"c","\u010d":"c","\u010e":"D","\u0110":"D","\u010f":"d","\u0111":"d","\u0112":"E","\u0114":"E","\u0116":"E","\u0118":"E","\u011a":"E","\u0113":"e","\u0115":"e","\u0117":"e","\u0119":"e","\u011b":"e","\u011c":"G","\u011e":"G","\u0120":"G","\u0122":"G","\u011d":"g","\u011f":"g","\u0121":"g","\u0123":"g","\u0124":"H","\u0126":"H","\u0125":"h","\u0127":"h","\u0128":"I","\u012a":"I","\u012c":"I","\u012e":"I","\u0130":"I","\u0129":"i","\u012b":"i","\u012d":"i", -"\u012f":"i","\u0131":"i","\u0134":"J","\u0135":"j","\u0136":"K","\u0137":"k","\u0138":"k","\u0139":"L","\u013b":"L","\u013d":"L","\u013f":"L","\u0141":"L","\u013a":"l","\u013c":"l","\u013e":"l","\u0140":"l","\u0142":"l","\u0143":"N","\u0145":"N","\u0147":"N","\u014a":"N","\u0144":"n","\u0146":"n","\u0148":"n","\u014b":"n","\u014c":"O","\u014e":"O","\u0150":"O","\u014d":"o","\u014f":"o","\u0151":"o","\u0154":"R","\u0156":"R","\u0158":"R","\u0155":"r","\u0157":"r","\u0159":"r","\u015a":"S","\u015c":"S", -"\u015e":"S","\u0160":"S","\u015b":"s","\u015d":"s","\u015f":"s","\u0161":"s","\u0162":"T","\u0164":"T","\u0166":"T","\u0163":"t","\u0165":"t","\u0167":"t","\u0168":"U","\u016a":"U","\u016c":"U","\u016e":"U","\u0170":"U","\u0172":"U","\u0169":"u","\u016b":"u","\u016d":"u","\u016f":"u","\u0171":"u","\u0173":"u","\u0174":"W","\u0175":"w","\u0176":"Y","\u0177":"y","\u0178":"Y","\u0179":"Z","\u017b":"Z","\u017d":"Z","\u017a":"z","\u017c":"z","\u017e":"z","\u0132":"IJ","\u0133":"ij","\u0152":"Oe","\u0153":"oe", -"\u0149":"'n","\u017f":"s"}),et=w({"&":"&","<":"<",">":">",'"':""","'":"'"}),ut=w({"&":"&","<":"<",">":">",""":'"',"'":"'"}),it=function w(En){function On(n){if(xu(n)&&!af(n)&&!(n instanceof Mn)){if(n instanceof zn)return n;if(ci.call(n,"__wrapped__"))return Pe(n)}return new zn(n)}function Sn(){}function zn(n,t){this.__wrapped__=n,this.__actions__=[],this.__chain__=!!t,this.__index__=0,this.__values__=F}function Mn(n){this.__wrapped__=n,this.__actions__=[],this.__dir__=1, -this.__filtered__=false,this.__iteratees__=[],this.__takeCount__=4294967295,this.__views__=[]}function Tn(n){var t=-1,r=null==n?0:n.length;for(this.clear();++t=t?n:t)),n}function dt(n,t,r,e,i,o){var f,c=1&t,a=2&t,l=4&t;if(r&&(f=i?r(n,e,i,o):r(n)),f!==F)return f;if(!bu(n))return n;if(e=af(n)){if(f=Ee(n),!c)return Mr(n,f)}else{var s=yo(n),h="[object Function]"==s||"[object GeneratorFunction]"==s;if(sf(n))return Wr(n,c);if("[object Object]"==s||"[object Arguments]"==s||h&&!i){if(f=a||h?{}:Oe(n),!c)return a?Fr(n,pt(f,n)):$r(n,ht(f,n))}else{if(!Dn[s])return i?n:{};f=Se(n,s,dt,c)}}if(o||(o=new Vn), -i=o.get(n))return i;o.set(n,f);var a=l?a?ye:de:a?Uu:Lu,p=e?F:a(n);return u(p||n,function(e,u){p&&(u=e,e=n[u]),at(f,u,dt(e,t,r,u,n,o))}),f}function yt(n){var t=Lu(n);return function(r){return bt(r,n,t)}}function bt(n,t,r){var e=r.length;if(null==n)return!e;for(n=ni(n);e--;){var u=r[e],i=t[u],o=n[u];if(o===F&&!(u in n)||!i(o))return false}return true}function xt(n,t,r){if(typeof n!="function")throw new ei("Expected a 
function");return jo(function(){n.apply(F,r)},t)}function jt(n,t,r,e){var u=-1,i=c,o=true,f=n.length,s=[],h=t.length; -if(!f)return s;r&&(t=l(t,S(r))),e?(i=a,o=false):200<=t.length&&(i=R,o=false,t=new qn(t));n:for(;++ut}function Bt(n,t){return null!=n&&ci.call(n,t)}function Lt(n,t){return null!=n&&t in ni(n)}function Ut(n,t,r){for(var e=r?a:c,u=n[0].length,i=n.length,o=i,f=Hu(i),s=1/0,h=[];o--;){var p=n[o];o&&t&&(p=l(p,S(t))),s=Mi(p.length,s),f[o]=!r&&(t||120<=u&&120<=p.length)?new qn(o&&p):F}var p=n[0],_=-1,v=f[0];n:for(;++_t.length?n:It(n,vr(t,0,-1)),t=null==n?n:n[$e(Ge(t))],null==t?F:r(t,n,e)}function Mt(n){return xu(n)&&"[object Arguments]"==zt(n)}function Tt(n){return xu(n)&&"[object ArrayBuffer]"==zt(n)}function $t(n){return xu(n)&&"[object Date]"==zt(n)}function Ft(n,t,r,e,u){if(n===t)t=true;else if(null==n||null==t||!xu(n)&&!xu(t))t=n!==n&&t!==t;else n:{ -var i=af(n),o=af(t),f=i?"[object Array]":yo(n),c=o?"[object Array]":yo(t),f="[object Arguments]"==f?"[object Object]":f,c="[object Arguments]"==c?"[object Object]":c,a="[object Object]"==f,o="[object Object]"==c;if((c=f==c)&&sf(n)){if(!sf(t)){t=false;break n}i=true,a=false}if(c&&!a)u||(u=new Vn),t=i||gf(n)?_e(n,t,r,e,Ft,u):ve(n,t,f,r,e,Ft,u);else{if(!(1&r)&&(i=a&&ci.call(n,"__wrapped__"),f=o&&ci.call(t,"__wrapped__"),i||f)){n=i?n.value():n,t=f?t.value():t,u||(u=new Vn),t=Ft(n,t,r,e,u);break n}if(c)t:if(u||(u=new Vn), -i=1&r,f=de(n),o=f.length,c=de(t).length,o==c||i){for(a=o;a--;){var l=f[a];if(!(i?l in t:ci.call(t,l))){t=false;break t}}if((c=u.get(n))&&u.get(t))t=c==t;else{c=true,u.set(n,t),u.set(t,n);for(var s=i;++at?r:0,Re(t,r)?n[t]:F}function rr(n,t,r){var e=-1;return t=l(t.length?t:[Nu],S(je())),n=Yt(n,function(n){return{a:l(t,function(t){return t(n)}),b:++e,c:n}}),A(n,function(n,t){var e;n:{e=-1;for(var u=n.a,i=t.a,o=u.length,f=r.length;++e=f?c:c*("desc"==r[e]?-1:1); -break n}}e=n.b-t.b}return e})}function er(n,t){return ur(n,t,function(t,r){return Bu(n,r)})}function ur(n,t,r){for(var e=-1,u=t.length,i={};++et||9007199254740991t&&(t=-t>u?0:u+t),r=r>u?u:r,0>r&&(r+=u),u=t>r?0:r-t>>>0,t>>>=0,r=Hu(u);++e=u){for(;e>>1,o=n[i];null!==o&&!Au(o)&&(r?o<=t:ot.length?n:It(n,vr(t,0,-1)), -null==n||delete n[$e(Ge(t))]}function Ar(n,t,r,e){for(var u=n.length,i=e?u:-1;(e?i--:++ie)return e?wr(n[0]):[];for(var u=-1,i=Hu(e);++u=e?n:vr(n,t,r)}function Wr(n,t){if(t)return n.slice();var r=n.length,r=yi?yi(r):new n.constructor(r);return n.copy(r),r}function Br(n){var t=new n.constructor(n.byteLength);return new di(t).set(new di(n)),t}function Lr(n,t){return new n.constructor(t?Br(n.buffer):n.buffer,n.byteOffset,n.length)}function Ur(n,t){ -if(n!==t){var r=n!==F,e=null===n,u=n===n,i=Au(n),o=t!==F,f=null===t,c=t===t,a=Au(t);if(!f&&!a&&!i&&n>t||i&&o&&c&&!f&&!a||e&&o&&c||!r&&c||!u)return 1;if(!e&&!i&&!a&&nu?F:i,u=1),t=ni(t);++eo&&f[0]!==a&&f[o-1]!==a?[]:C(f,a),o-=c.length,or?r?ar(t,n):t:(r=ar(t,Ri(n/T(t))),Bn.test(t)?zr($(r),0,n).join(""):r.slice(0,n))}function ue(n,t,e,u){function i(){for(var t=-1,c=arguments.length,a=-1,l=u.length,s=Hu(l+c),h=this&&this!==Zn&&this instanceof i?f:n;++at||e)&&(1&n&&(i[2]=h[2],t|=1&r?0:4),(r=h[3])&&(e=i[3],i[3]=e?Cr(e,r,h[4]):r,i[4]=e?C(i[3],"__lodash_placeholder__"):h[4]),(r=h[5])&&(e=i[5],i[5]=e?Dr(e,r,h[6]):r,i[6]=e?C(i[5],"__lodash_placeholder__"):h[6]),(r=h[7])&&(i[7]=r),128&n&&(i[8]=null==i[8]?h[8]:Mi(i[8],h[8])),null==i[9]&&(i[9]=h[9]),i[0]=h[0],i[1]=t),n=i[0],t=i[1], 
-r=i[2],e=i[3],u=i[4],f=i[9]=i[9]===F?c?0:n.length:Di(i[9]-a,0),!f&&24&t&&(t&=-25),De((h?lo:xo)(t&&1!=t?8==t||16==t?Jr(n,t,f):32!=t&&33!=t||u.length?Xr.apply(F,i):ue(n,t,r,e):Vr(n,t,r),i),n,t)}function se(n,t,r,e){return n===F||hu(n,ii[r])&&!ci.call(e,r)?t:n}function he(n,t,r,e,u,i){return bu(n)&&bu(t)&&(i.set(t,n),nr(n,t,F,he,i),i.delete(t)),n}function pe(n){return wu(n)?F:n}function _e(n,t,r,e,u,i){var o=1&r,f=n.length,c=t.length;if(f!=c&&!(o&&c>f))return false;if((c=i.get(n))&&i.get(t))return c==t;var c=-1,a=true,l=2&r?new qn:F; -for(i.set(n,t),i.set(t,n);++cr&&(r=Di(e+r,0)),g(n,je(t,3),r)):-1}function qe(n,t,r){var e=null==n?0:n.length;if(!e)return-1;var u=e-1;return r!==F&&(u=Ou(r),u=0>r?Di(e+u,0):Mi(u,e-1)), -g(n,je(t,3),u,true)}function Ve(n){return(null==n?0:n.length)?kt(n,1):[]}function Ke(n){return n&&n.length?n[0]:F}function Ge(n){var t=null==n?0:n.length;return t?n[t-1]:F}function He(n,t){return n&&n.length&&t&&t.length?or(n,t):n}function Je(n){return null==n?n:Ni.call(n)}function Ye(n){if(!n||!n.length)return[];var t=0;return n=f(n,function(n){if(_u(n))return t=Di(n.length,t),true}),E(t,function(t){return l(n,j(t))})}function Qe(n,t){if(!n||!n.length)return[];var e=Ye(n);return null==t?e:l(e,function(n){ -return r(t,F,n)})}function Xe(n){return n=On(n),n.__chain__=true,n}function nu(n,t){return t(n)}function tu(){return this}function ru(n,t){return(af(n)?u:oo)(n,je(t,3))}function eu(n,t){return(af(n)?i:fo)(n,je(t,3))}function uu(n,t){return(af(n)?l:Yt)(n,je(t,3))}function iu(n,t,r){return t=r?F:t,t=n&&null==t?n.length:t,le(n,128,F,F,F,F,t)}function ou(n,t){var r;if(typeof t!="function")throw new ei("Expected a function");return n=Ou(n),function(){return 0<--n&&(r=t.apply(this,arguments)),1>=n&&(t=F), -r}}function fu(n,t,r){return t=r?F:t,n=le(n,8,F,F,F,F,F,t),n.placeholder=fu.placeholder,n}function cu(n,t,r){return t=r?F:t,n=le(n,16,F,F,F,F,F,t),n.placeholder=cu.placeholder,n}function au(n,t,r){function e(t){var r=c,e=a;return c=a=F,_=t,s=n.apply(e,r)}function u(n){var r=n-p;return n-=_,p===F||r>=t||0>r||g&&n>=l}function i(){var n=Jo();if(u(n))return o(n);var r,e=jo;r=n-_,n=t-(n-p),r=g?Mi(n,l-r):n,h=e(i,r)}function o(n){return h=F,d&&c?e(n):(c=a=F,s)}function f(){var n=Jo(),r=u(n);if(c=arguments, -a=this,p=n,r){if(h===F)return _=n=p,h=jo(i,t),v?e(n):s;if(g)return h=jo(i,t),e(p)}return h===F&&(h=jo(i,t)),s}var c,a,l,s,h,p,_=0,v=false,g=false,d=true;if(typeof n!="function")throw new ei("Expected a function");return t=Iu(t)||0,bu(r)&&(v=!!r.leading,l=(g="maxWait"in r)?Di(Iu(r.maxWait)||0,t):l,d="trailing"in r?!!r.trailing:d),f.cancel=function(){h!==F&&ho(h),_=0,c=p=a=h=F},f.flush=function(){return h===F?s:o(Jo())},f}function lu(n,t){function r(){var e=arguments,u=t?t.apply(this,e):e[0],i=r.cache;return i.has(u)?i.get(u):(e=n.apply(this,e), -r.cache=i.set(u,e)||i,e)}if(typeof n!="function"||null!=t&&typeof t!="function")throw new ei("Expected a function");return r.cache=new(lu.Cache||Pn),r}function su(n){if(typeof n!="function")throw new ei("Expected a function");return function(){var t=arguments;switch(t.length){case 0:return!n.call(this);case 1:return!n.call(this,t[0]);case 2:return!n.call(this,t[0],t[1]);case 3:return!n.call(this,t[0],t[1],t[2])}return!n.apply(this,t)}}function hu(n,t){return n===t||n!==n&&t!==t}function pu(n){return null!=n&&yu(n.length)&&!gu(n); -}function _u(n){return xu(n)&&pu(n)}function vu(n){if(!xu(n))return false;var t=zt(n);return"[object Error]"==t||"[object DOMException]"==t||typeof n.message=="string"&&typeof n.name=="string"&&!wu(n)}function 
gu(n){return!!bu(n)&&(n=zt(n),"[object Function]"==n||"[object GeneratorFunction]"==n||"[object AsyncFunction]"==n||"[object Proxy]"==n)}function du(n){return typeof n=="number"&&n==Ou(n)}function yu(n){return typeof n=="number"&&-1=n}function bu(n){var t=typeof n;return null!=n&&("object"==t||"function"==t); -}function xu(n){return null!=n&&typeof n=="object"}function ju(n){return typeof n=="number"||xu(n)&&"[object Number]"==zt(n)}function wu(n){return!(!xu(n)||"[object Object]"!=zt(n))&&(n=bi(n),null===n||(n=ci.call(n,"constructor")&&n.constructor,typeof n=="function"&&n instanceof n&&fi.call(n)==hi))}function mu(n){return typeof n=="string"||!af(n)&&xu(n)&&"[object String]"==zt(n)}function Au(n){return typeof n=="symbol"||xu(n)&&"[object Symbol]"==zt(n)}function ku(n){if(!n)return[];if(pu(n))return mu(n)?$(n):Mr(n); -if(Ai&&n[Ai]){n=n[Ai]();for(var t,r=[];!(t=n.next()).done;)r.push(t.value);return r}return t=yo(n),("[object Map]"==t?L:"[object Set]"==t?D:Du)(n)}function Eu(n){return n?(n=Iu(n),n===N||n===-N?1.7976931348623157e308*(0>n?-1:1):n===n?n:0):0===n?n:0}function Ou(n){n=Eu(n);var t=n%1;return n===n?t?n-t:n:0}function Su(n){return n?gt(Ou(n),0,4294967295):0}function Iu(n){if(typeof n=="number")return n;if(Au(n))return P;if(bu(n)&&(n=typeof n.valueOf=="function"?n.valueOf():n,n=bu(n)?n+"":n),typeof n!="string")return 0===n?n:+n; -n=n.replace(cn,"");var t=bn.test(n);return t||jn.test(n)?Fn(n.slice(2),t?2:8):yn.test(n)?P:+n}function Ru(n){return Tr(n,Uu(n))}function zu(n){return null==n?"":jr(n)}function Wu(n,t,r){return n=null==n?F:It(n,t),n===F?r:n}function Bu(n,t){return null!=n&&ke(n,t,Lt)}function Lu(n){return pu(n)?Gn(n):Ht(n)}function Uu(n){if(pu(n))n=Gn(n,true);else if(bu(n)){var t,r=Le(n),e=[];for(t in n)("constructor"!=t||!r&&ci.call(n,t))&&e.push(t);n=e}else{if(t=[],null!=n)for(r in ni(n))t.push(r);n=t}return n}function Cu(n,t){ -if(null==n)return{};var r=l(ye(n),function(n){return[n]});return t=je(t),ur(n,r,function(n,r){return t(n,r[0])})}function Du(n){return null==n?[]:I(n,Lu(n))}function Mu(n){return Nf(zu(n).toLowerCase())}function Tu(n){return(n=zu(n))&&n.replace(mn,rt).replace(Rn,"")}function $u(n,t,r){return n=zu(n),t=r?F:t,t===F?Ln.test(n)?n.match(Wn)||[]:n.match(_n)||[]:n.match(t)||[]}function Fu(n){return function(){return n}}function Nu(n){return n}function Pu(n){return Gt(typeof n=="function"?n:dt(n,1))}function Zu(n,t,r){ -var e=Lu(t),i=St(t,e);null!=r||bu(t)&&(i.length||!e.length)||(r=t,t=n,n=this,i=St(t,Lu(t)));var o=!(bu(r)&&"chain"in r&&!r.chain),f=gu(n);return u(i,function(r){var e=t[r];n[r]=e,f&&(n.prototype[r]=function(){var t=this.__chain__;if(o||t){var r=n(this.__wrapped__);return(r.__actions__=Mr(this.__actions__)).push({func:e,args:arguments,thisArg:n}),r.__chain__=t,r}return e.apply(n,s([this.value()],arguments))})}),n}function qu(){}function Vu(n){return We(n)?j($e(n)):ir(n)}function Ku(){return[]}function Gu(){ -return false}En=null==En?Zn:it.defaults(Zn.Object(),En,it.pick(Zn,Un));var Hu=En.Array,Ju=En.Date,Yu=En.Error,Qu=En.Function,Xu=En.Math,ni=En.Object,ti=En.RegExp,ri=En.String,ei=En.TypeError,ui=Hu.prototype,ii=ni.prototype,oi=En["__core-js_shared__"],fi=Qu.prototype.toString,ci=ii.hasOwnProperty,ai=0,li=function(){var n=/[^.]+$/.exec(oi&&oi.keys&&oi.keys.IE_PROTO||"");return n?"Symbol(src)_1."+n:""}(),si=ii.toString,hi=fi.call(ni),pi=Zn._,_i=ti("^"+fi.call(ci).replace(on,"\\$&").replace(/hasOwnProperty|(function).*?(?=\\\()| for 
.+?(?=\\\])/g,"$1.*?")+"$"),vi=Kn?En.Buffer:F,gi=En.Symbol,di=En.Uint8Array,yi=vi?vi.f:F,bi=U(ni.getPrototypeOf,ni),xi=ni.create,ji=ii.propertyIsEnumerable,wi=ui.splice,mi=gi?gi.isConcatSpreadable:F,Ai=gi?gi.iterator:F,ki=gi?gi.toStringTag:F,Ei=function(){ -try{var n=Ae(ni,"defineProperty");return n({},"",{}),n}catch(n){}}(),Oi=En.clearTimeout!==Zn.clearTimeout&&En.clearTimeout,Si=Ju&&Ju.now!==Zn.Date.now&&Ju.now,Ii=En.setTimeout!==Zn.setTimeout&&En.setTimeout,Ri=Xu.ceil,zi=Xu.floor,Wi=ni.getOwnPropertySymbols,Bi=vi?vi.isBuffer:F,Li=En.isFinite,Ui=ui.join,Ci=U(ni.keys,ni),Di=Xu.max,Mi=Xu.min,Ti=Ju.now,$i=En.parseInt,Fi=Xu.random,Ni=ui.reverse,Pi=Ae(En,"DataView"),Zi=Ae(En,"Map"),qi=Ae(En,"Promise"),Vi=Ae(En,"Set"),Ki=Ae(En,"WeakMap"),Gi=Ae(ni,"create"),Hi=Ki&&new Ki,Ji={},Yi=Fe(Pi),Qi=Fe(Zi),Xi=Fe(qi),no=Fe(Vi),to=Fe(Ki),ro=gi?gi.prototype:F,eo=ro?ro.valueOf:F,uo=ro?ro.toString:F,io=function(){ -function n(){}return function(t){return bu(t)?xi?xi(t):(n.prototype=t,t=new n,n.prototype=F,t):{}}}();On.templateSettings={escape:Q,evaluate:X,interpolate:nn,variable:"",imports:{_:On}},On.prototype=Sn.prototype,On.prototype.constructor=On,zn.prototype=io(Sn.prototype),zn.prototype.constructor=zn,Mn.prototype=io(Sn.prototype),Mn.prototype.constructor=Mn,Tn.prototype.clear=function(){this.__data__=Gi?Gi(null):{},this.size=0},Tn.prototype.delete=function(n){return n=this.has(n)&&delete this.__data__[n], -this.size-=n?1:0,n},Tn.prototype.get=function(n){var t=this.__data__;return Gi?(n=t[n],"__lodash_hash_undefined__"===n?F:n):ci.call(t,n)?t[n]:F},Tn.prototype.has=function(n){var t=this.__data__;return Gi?t[n]!==F:ci.call(t,n)},Tn.prototype.set=function(n,t){var r=this.__data__;return this.size+=this.has(n)?0:1,r[n]=Gi&&t===F?"__lodash_hash_undefined__":t,this},Nn.prototype.clear=function(){this.__data__=[],this.size=0},Nn.prototype.delete=function(n){var t=this.__data__;return n=lt(t,n),!(0>n)&&(n==t.length-1?t.pop():wi.call(t,n,1), ---this.size,true)},Nn.prototype.get=function(n){var t=this.__data__;return n=lt(t,n),0>n?F:t[n][1]},Nn.prototype.has=function(n){return-1e?(++this.size,r.push([n,t])):r[e][1]=t,this},Pn.prototype.clear=function(){this.size=0,this.__data__={hash:new Tn,map:new(Zi||Nn),string:new Tn}},Pn.prototype.delete=function(n){return n=we(this,n).delete(n),this.size-=n?1:0,n},Pn.prototype.get=function(n){return we(this,n).get(n); -},Pn.prototype.has=function(n){return we(this,n).has(n)},Pn.prototype.set=function(n,t){var r=we(this,n),e=r.size;return r.set(n,t),this.size+=r.size==e?0:1,this},qn.prototype.add=qn.prototype.push=function(n){return this.__data__.set(n,"__lodash_hash_undefined__"),this},qn.prototype.has=function(n){return this.__data__.has(n)},Vn.prototype.clear=function(){this.__data__=new Nn,this.size=0},Vn.prototype.delete=function(n){var t=this.__data__;return n=t.delete(n),this.size=t.size,n},Vn.prototype.get=function(n){ -return this.__data__.get(n)},Vn.prototype.has=function(n){return this.__data__.has(n)},Vn.prototype.set=function(n,t){var r=this.__data__;if(r instanceof Nn){var e=r.__data__;if(!Zi||199>e.length)return e.push([n,t]),this.size=++r.size,this;r=this.__data__=new Pn(e)}return r.set(n,t),this.size=r.size,this};var oo=Zr(Et),fo=Zr(Ot,true),co=qr(),ao=qr(true),lo=Hi?function(n,t){return Hi.set(n,t),n}:Nu,so=Ei?function(n,t){return Ei(n,"toString",{configurable:true,enumerable:false,value:Fu(t),writable:true})}:Nu,ho=Oi||function(n){ -return Zn.clearTimeout(n)},po=Vi&&1/D(new Vi([,-0]))[1]==N?function(n){return new Vi(n)}:qu,_o=Hi?function(n){return 
Hi.get(n)}:qu,vo=Wi?function(n){return null==n?[]:(n=ni(n),f(Wi(n),function(t){return ji.call(n,t)}))}:Ku,go=Wi?function(n){for(var t=[];n;)s(t,vo(n)),n=bi(n);return t}:Ku,yo=zt;(Pi&&"[object DataView]"!=yo(new Pi(new ArrayBuffer(1)))||Zi&&"[object Map]"!=yo(new Zi)||qi&&"[object Promise]"!=yo(qi.resolve())||Vi&&"[object Set]"!=yo(new Vi)||Ki&&"[object WeakMap]"!=yo(new Ki))&&(yo=function(n){ -var t=zt(n);if(n=(n="[object Object]"==t?n.constructor:F)?Fe(n):"")switch(n){case Yi:return"[object DataView]";case Qi:return"[object Map]";case Xi:return"[object Promise]";case no:return"[object Set]";case to:return"[object WeakMap]"}return t});var bo=oi?gu:Gu,xo=Me(lo),jo=Ii||function(n,t){return Zn.setTimeout(n,t)},wo=Me(so),mo=function(n){n=lu(n,function(n){return 500===t.size&&t.clear(),n});var t=n.cache;return n}(function(n){var t=[];return en.test(n)&&t.push(""),n.replace(un,function(n,r,e,u){ -t.push(e?u.replace(vn,"$1"):r||n)}),t}),Ao=lr(function(n,t){return _u(n)?jt(n,kt(t,1,_u,true)):[]}),ko=lr(function(n,t){var r=Ge(t);return _u(r)&&(r=F),_u(n)?jt(n,kt(t,1,_u,true),je(r,2)):[]}),Eo=lr(function(n,t){var r=Ge(t);return _u(r)&&(r=F),_u(n)?jt(n,kt(t,1,_u,true),F,r):[]}),Oo=lr(function(n){var t=l(n,Sr);return t.length&&t[0]===n[0]?Ut(t):[]}),So=lr(function(n){var t=Ge(n),r=l(n,Sr);return t===Ge(r)?t=F:r.pop(),r.length&&r[0]===n[0]?Ut(r,je(t,2)):[]}),Io=lr(function(n){var t=Ge(n),r=l(n,Sr);return(t=typeof t=="function"?t:F)&&r.pop(), -r.length&&r[0]===n[0]?Ut(r,F,t):[]}),Ro=lr(He),zo=ge(function(n,t){var r=null==n?0:n.length,e=vt(n,t);return fr(n,l(t,function(n){return Re(n,r)?+n:n}).sort(Ur)),e}),Wo=lr(function(n){return wr(kt(n,1,_u,true))}),Bo=lr(function(n){var t=Ge(n);return _u(t)&&(t=F),wr(kt(n,1,_u,true),je(t,2))}),Lo=lr(function(n){var t=Ge(n),t=typeof t=="function"?t:F;return wr(kt(n,1,_u,true),F,t)}),Uo=lr(function(n,t){return _u(n)?jt(n,t):[]}),Co=lr(function(n){return Er(f(n,_u))}),Do=lr(function(n){var t=Ge(n);return _u(t)&&(t=F), -Er(f(n,_u),je(t,2))}),Mo=lr(function(n){var t=Ge(n),t=typeof t=="function"?t:F;return Er(f(n,_u),F,t)}),To=lr(Ye),$o=lr(function(n){var t=n.length,t=1=t}),cf=Mt(function(){return arguments}())?Mt:function(n){return xu(n)&&ci.call(n,"callee")&&!ji.call(n,"callee")},af=Hu.isArray,lf=Hn?S(Hn):Tt,sf=Bi||Gu,hf=Jn?S(Jn):$t,pf=Yn?S(Yn):Nt,_f=Qn?S(Qn):qt,vf=Xn?S(Xn):Vt,gf=nt?S(nt):Kt,df=oe(Jt),yf=oe(function(n,t){return n<=t}),bf=Pr(function(n,t){ -if(Le(t)||pu(t))Tr(t,Lu(t),n);else for(var r in t)ci.call(t,r)&&at(n,r,t[r])}),xf=Pr(function(n,t){Tr(t,Uu(t),n)}),jf=Pr(function(n,t,r,e){Tr(t,Uu(t),n,e)}),wf=Pr(function(n,t,r,e){Tr(t,Lu(t),n,e)}),mf=ge(vt),Af=lr(function(n){return n.push(F,se),r(jf,F,n)}),kf=lr(function(n){return n.push(F,he),r(Rf,F,n)}),Ef=ne(function(n,t,r){n[t]=r},Fu(Nu)),Of=ne(function(n,t,r){ci.call(n,t)?n[t].push(r):n[t]=[r]},je),Sf=lr(Dt),If=Pr(function(n,t,r){nr(n,t,r)}),Rf=Pr(function(n,t,r,e){nr(n,t,r,e)}),zf=ge(function(n,t){ -var r={};if(null==n)return r;var e=false;t=l(t,function(t){return t=Rr(t,n),e||(e=1--n)return t.apply(this,arguments)}},On.ary=iu,On.assign=bf,On.assignIn=xf,On.assignInWith=jf,On.assignWith=wf,On.at=mf,On.before=ou,On.bind=Yo,On.bindAll=Zf,On.bindKey=Qo,On.castArray=function(){if(!arguments.length)return[];var n=arguments[0];return af(n)?n:[n]}, -On.chain=Xe,On.chunk=function(n,t,r){if(t=(r?ze(n,t,r):t===F)?1:Di(Ou(t),0),r=null==n?0:n.length,!r||1>t)return[];for(var e=0,u=0,i=Hu(Ri(r/t));et?0:t,e)):[]},On.dropRight=function(n,t,r){var e=null==n?0:n.length;return 
e?(t=r||t===F?1:Ou(t),t=e-t,vr(n,0,0>t?0:t)):[]},On.dropRightWhile=function(n,t){return n&&n.length?Ar(n,je(t,3),true,true):[]},On.dropWhile=function(n,t){return n&&n.length?Ar(n,je(t,3),true):[]},On.fill=function(n,t,r,e){var u=null==n?0:n.length;if(!u)return[];for(r&&typeof r!="number"&&ze(n,t,r)&&(r=0,e=u),u=n.length,r=Ou(r),0>r&&(r=-r>u?0:u+r),e=e===F||e>u?u:Ou(e),0>e&&(e+=u),e=r>e?0:Su(e);r>>0,r?(n=zu(n))&&(typeof t=="string"||null!=t&&!_f(t))&&(t=jr(t), -!t&&Bn.test(n))?zr($(n),0,r):n.split(t,r):[]},On.spread=function(n,t){if(typeof n!="function")throw new ei("Expected a function");return t=null==t?0:Di(Ou(t),0),lr(function(e){var u=e[t];return e=zr(e,0,t),u&&s(e,u),r(n,this,e)})},On.tail=function(n){var t=null==n?0:n.length;return t?vr(n,1,t):[]},On.take=function(n,t,r){return n&&n.length?(t=r||t===F?1:Ou(t),vr(n,0,0>t?0:t)):[]},On.takeRight=function(n,t,r){var e=null==n?0:n.length;return e?(t=r||t===F?1:Ou(t),t=e-t,vr(n,0>t?0:t,e)):[]},On.takeRightWhile=function(n,t){ -return n&&n.length?Ar(n,je(t,3),false,true):[]},On.takeWhile=function(n,t){return n&&n.length?Ar(n,je(t,3)):[]},On.tap=function(n,t){return t(n),n},On.throttle=function(n,t,r){var e=true,u=true;if(typeof n!="function")throw new ei("Expected a function");return bu(r)&&(e="leading"in r?!!r.leading:e,u="trailing"in r?!!r.trailing:u),au(n,t,{leading:e,maxWait:t,trailing:u})},On.thru=nu,On.toArray=ku,On.toPairs=Bf,On.toPairsIn=Lf,On.toPath=function(n){return af(n)?l(n,$e):Au(n)?[n]:Mr(mo(zu(n)))},On.toPlainObject=Ru, -On.transform=function(n,t,r){var e=af(n),i=e||sf(n)||gf(n);if(t=je(t,4),null==r){var o=n&&n.constructor;r=i?e?new o:[]:bu(n)&&gu(o)?io(bi(n)):{}}return(i?u:Et)(n,function(n,e,u){return t(r,n,e,u)}),r},On.unary=function(n){return iu(n,1)},On.union=Wo,On.unionBy=Bo,On.unionWith=Lo,On.uniq=function(n){return n&&n.length?wr(n):[]},On.uniqBy=function(n,t){return n&&n.length?wr(n,je(t,2)):[]},On.uniqWith=function(n,t){return t=typeof t=="function"?t:F,n&&n.length?wr(n,F,t):[]},On.unset=function(n,t){return null==n||mr(n,t); -},On.unzip=Ye,On.unzipWith=Qe,On.update=function(n,t,r){return null==n?n:pr(n,t,Ir(r)(It(n,t)),void 0)},On.updateWith=function(n,t,r,e){return e=typeof e=="function"?e:F,null!=n&&(n=pr(n,t,Ir(r)(It(n,t)),e)),n},On.values=Du,On.valuesIn=function(n){return null==n?[]:I(n,Uu(n))},On.without=Uo,On.words=$u,On.wrap=function(n,t){return rf(Ir(t),n)},On.xor=Co,On.xorBy=Do,On.xorWith=Mo,On.zip=To,On.zipObject=function(n,t){return Or(n||[],t||[],at)},On.zipObjectDeep=function(n,t){return Or(n||[],t||[],pr); -},On.zipWith=$o,On.entries=Bf,On.entriesIn=Lf,On.extend=xf,On.extendWith=jf,Zu(On,On),On.add=nc,On.attempt=Pf,On.camelCase=Uf,On.capitalize=Mu,On.ceil=tc,On.clamp=function(n,t,r){return r===F&&(r=t,t=F),r!==F&&(r=Iu(r),r=r===r?r:0),t!==F&&(t=Iu(t),t=t===t?t:0),gt(Iu(n),t,r)},On.clone=function(n){return dt(n,4)},On.cloneDeep=function(n){return dt(n,5)},On.cloneDeepWith=function(n,t){return t=typeof t=="function"?t:F,dt(n,5,t)},On.cloneWith=function(n,t){return t=typeof t=="function"?t:F,dt(n,4,t)}, -On.conformsTo=function(n,t){return null==t||bt(n,t,Lu(t))},On.deburr=Tu,On.defaultTo=function(n,t){return null==n||n!==n?t:n},On.divide=rc,On.endsWith=function(n,t,r){n=zu(n),t=jr(t);var e=n.length,e=r=r===F?e:gt(Ou(r),0,e);return r-=t.length,0<=r&&n.slice(r,e)==t},On.eq=hu,On.escape=function(n){return(n=zu(n))&&Y.test(n)?n.replace(H,et):n},On.escapeRegExp=function(n){return(n=zu(n))&&fn.test(n)?n.replace(on,"\\$&"):n},On.every=function(n,t,r){var e=af(n)?o:wt;return r&&ze(n,t,r)&&(t=F),e(n,je(t,3)); 
-},On.find=Po,On.findIndex=Ze,On.findKey=function(n,t){return v(n,je(t,3),Et)},On.findLast=Zo,On.findLastIndex=qe,On.findLastKey=function(n,t){return v(n,je(t,3),Ot)},On.floor=ec,On.forEach=ru,On.forEachRight=eu,On.forIn=function(n,t){return null==n?n:co(n,je(t,3),Uu)},On.forInRight=function(n,t){return null==n?n:ao(n,je(t,3),Uu)},On.forOwn=function(n,t){return n&&Et(n,je(t,3))},On.forOwnRight=function(n,t){return n&&Ot(n,je(t,3))},On.get=Wu,On.gt=of,On.gte=ff,On.has=function(n,t){return null!=n&&ke(n,t,Bt); -},On.hasIn=Bu,On.head=Ke,On.identity=Nu,On.includes=function(n,t,r,e){return n=pu(n)?n:Du(n),r=r&&!e?Ou(r):0,e=n.length,0>r&&(r=Di(e+r,0)),mu(n)?r<=e&&-1r&&(r=Di(e+r,0)),d(n,t,r)):-1},On.inRange=function(n,t,r){return t=Eu(t),r===F?(r=t,t=0):r=Eu(r),n=Iu(n),n>=Mi(t,r)&&n=n},On.isSet=vf,On.isString=mu,On.isSymbol=Au,On.isTypedArray=gf,On.isUndefined=function(n){return n===F},On.isWeakMap=function(n){return xu(n)&&"[object WeakMap]"==yo(n)},On.isWeakSet=function(n){return xu(n)&&"[object WeakSet]"==zt(n)},On.join=function(n,t){ -return null==n?"":Ui.call(n,t)},On.kebabCase=Cf,On.last=Ge,On.lastIndexOf=function(n,t,r){var e=null==n?0:n.length;if(!e)return-1;var u=e;if(r!==F&&(u=Ou(r),u=0>u?Di(e+u,0):Mi(u,e-1)),t===t){for(r=u+1;r--&&n[r]!==t;);n=r}else n=g(n,b,u,true);return n},On.lowerCase=Df,On.lowerFirst=Mf,On.lt=df,On.lte=yf,On.max=function(n){return n&&n.length?mt(n,Nu,Wt):F},On.maxBy=function(n,t){return n&&n.length?mt(n,je(t,2),Wt):F},On.mean=function(n){return x(n,Nu)},On.meanBy=function(n,t){return x(n,je(t,2))},On.min=function(n){ -return n&&n.length?mt(n,Nu,Jt):F},On.minBy=function(n,t){return n&&n.length?mt(n,je(t,2),Jt):F},On.stubArray=Ku,On.stubFalse=Gu,On.stubObject=function(){return{}},On.stubString=function(){return""},On.stubTrue=function(){return true},On.multiply=uc,On.nth=function(n,t){return n&&n.length?tr(n,Ou(t)):F},On.noConflict=function(){return Zn._===this&&(Zn._=pi),this},On.noop=qu,On.now=Jo,On.pad=function(n,t,r){n=zu(n);var e=(t=Ou(t))?T(n):0;return!t||e>=t?n:(t=(t-e)/2,ee(zi(t),r)+n+ee(Ri(t),r))},On.padEnd=function(n,t,r){ -n=zu(n);var e=(t=Ou(t))?T(n):0;return t&&et){var e=n;n=t,t=e}return r||n%1||t%1?(r=Fi(),Mi(n+r*(t-n+$n("1e-"+((r+"").length-1))),t)):cr(n,t); -},On.reduce=function(n,t,r){var e=af(n)?h:m,u=3>arguments.length;return e(n,je(t,4),r,u,oo)},On.reduceRight=function(n,t,r){var e=af(n)?p:m,u=3>arguments.length;return e(n,je(t,4),r,u,fo)},On.repeat=function(n,t,r){return t=(r?ze(n,t,r):t===F)?1:Ou(t),ar(zu(n),t)},On.replace=function(){var n=arguments,t=zu(n[0]);return 3>n.length?t:t.replace(n[1],n[2])},On.result=function(n,t,r){t=Rr(t,n);var e=-1,u=t.length;for(u||(u=1,n=F);++en||9007199254740991=i)return n;if(i=r-T(e),1>i)return e; -if(r=o?zr(o,0,i).join(""):n.slice(0,i),u===F)return r+e;if(o&&(i+=r.length-i),_f(u)){if(n.slice(i).search(u)){var f=r;for(u.global||(u=ti(u.source,zu(dn.exec(u))+"g")),u.lastIndex=0;o=u.exec(f);)var c=o.index;r=r.slice(0,c===F?i:c)}}else n.indexOf(jr(u),i)!=i&&(u=r.lastIndexOf(u),-1e.__dir__?"Right":"")}),e},Mn.prototype[n+"Right"]=function(t){ -return this.reverse()[n](t).reverse()}}),u(["filter","map","takeWhile"],function(n,t){var r=t+1,e=1==r||3==r;Mn.prototype[n]=function(n){var t=this.clone();return t.__iteratees__.push({iteratee:je(n,3),type:r}),t.__filtered__=t.__filtered__||e,t}}),u(["head","last"],function(n,t){var r="take"+(t?"Right":"");Mn.prototype[n]=function(){return this[r](1).value()[0]}}),u(["initial","tail"],function(n,t){var r="drop"+(t?"":"Right");Mn.prototype[n]=function(){return 
this.__filtered__?new Mn(this):this[r](1); -}}),Mn.prototype.compact=function(){return this.filter(Nu)},Mn.prototype.find=function(n){return this.filter(n).head()},Mn.prototype.findLast=function(n){return this.reverse().find(n)},Mn.prototype.invokeMap=lr(function(n,t){return typeof n=="function"?new Mn(this):this.map(function(r){return Dt(r,n,t)})}),Mn.prototype.reject=function(n){return this.filter(su(je(n)))},Mn.prototype.slice=function(n,t){n=Ou(n);var r=this;return r.__filtered__&&(0t)?new Mn(r):(0>n?r=r.takeRight(-n):n&&(r=r.drop(n)), -t!==F&&(t=Ou(t),r=0>t?r.dropRight(-t):r.take(t-n)),r)},Mn.prototype.takeRightWhile=function(n){return this.reverse().takeWhile(n).reverse()},Mn.prototype.toArray=function(){return this.take(4294967295)},Et(Mn.prototype,function(n,t){var r=/^(?:filter|find|map|reject)|While$/.test(t),e=/^(?:head|last)$/.test(t),u=On[e?"take"+("last"==t?"Right":""):t],i=e||/^find/.test(t);u&&(On.prototype[t]=function(){function t(n){return n=u.apply(On,s([n],f)),e&&h?n[0]:n}var o=this.__wrapped__,f=e?[1]:arguments,c=o instanceof Mn,a=f[0],l=c||af(o); -l&&r&&typeof a=="function"&&1!=a.length&&(c=l=false);var h=this.__chain__,p=!!this.__actions__.length,a=i&&!h,c=c&&!p;return!i&&l?(o=c?o:new Mn(this),o=n.apply(o,f),o.__actions__.push({func:nu,args:[t],thisArg:F}),new zn(o,h)):a&&c?n.apply(this,f):(o=this.thru(t),a?e?o.value()[0]:o.value():o)})}),u("pop push shift sort splice unshift".split(" "),function(n){var t=ui[n],r=/^(?:push|sort|unshift)$/.test(n)?"tap":"thru",e=/^(?:pop|shift)$/.test(n);On.prototype[n]=function(){var n=arguments;if(e&&!this.__chain__){ -var u=this.value();return t.apply(af(u)?u:[],n)}return this[r](function(r){return t.apply(af(r)?r:[],n)})}}),Et(Mn.prototype,function(n,t){var r=On[t];if(r){var e=r.name+"";(Ji[e]||(Ji[e]=[])).push({name:t,func:r})}}),Ji[Xr(F,2).name]=[{name:"wrapper",func:F}],Mn.prototype.clone=function(){var n=new Mn(this.__wrapped__);return n.__actions__=Mr(this.__actions__),n.__dir__=this.__dir__,n.__filtered__=this.__filtered__,n.__iteratees__=Mr(this.__iteratees__),n.__takeCount__=this.__takeCount__,n.__views__=Mr(this.__views__), -n},Mn.prototype.reverse=function(){if(this.__filtered__){var n=new Mn(this);n.__dir__=-1,n.__filtered__=true}else n=this.clone(),n.__dir__*=-1;return n},Mn.prototype.value=function(){var n,t=this.__wrapped__.value(),r=this.__dir__,e=af(t),u=0>r,i=e?t.length:0;n=i;for(var o=this.__views__,f=0,c=-1,a=o.length;++c=this.__values__.length;return{done:n,value:n?F:this.__values__[this.__index__++]}},On.prototype.plant=function(n){for(var t,r=this;r instanceof Sn;){var e=Pe(r);e.__index__=0,e.__values__=F,t?u.__wrapped__=e:t=e;var u=e,r=r.__wrapped__}return u.__wrapped__=n,t},On.prototype.reverse=function(){var n=this.__wrapped__;return n instanceof Mn?(this.__actions__.length&&(n=new Mn(this)),n=n.reverse(),n.__actions__.push({func:nu,args:[Je],thisArg:F}),new zn(n,this.__chain__)):this.thru(Je); -},On.prototype.toJSON=On.prototype.valueOf=On.prototype.value=function(){return kr(this.__wrapped__,this.__actions__)},On.prototype.first=On.prototype.head,Ai&&(On.prototype[Ai]=tu),On}();typeof define=="function"&&typeof define.amd=="object"&&define.amd?(Zn._=it, define(function(){return it})):Vn?((Vn.exports=it)._=it,qn._=it):Zn._=it}).call(this);!function(t,n){"object"==typeof exports&&"undefined"!=typeof module?n(exports):"function"==typeof define&&define.amd?define(["exports"],n):n(t.d3=t.d3||{})}(this,function(t){"use strict";function n(t){return function(n,e){return Mf(t(n),e)}}function 
e(t,n){return[t,n]}function r(t,n,e){var r=(n-t)/Math.max(0,e),i=Math.floor(Math.log(r)/Math.LN10),o=r/Math.pow(10,i);return i>=0?(o>=If?10:o>=Hf?5:o>=Bf?2:1)*Math.pow(10,i):-Math.pow(10,-i)/(o>=If?10:o>=Hf?5:o>=Bf?2:1)}function i(t,n,e){var r=Math.abs(n-t)/Math.max(0,e),i=Math.pow(10,Math.floor(Math.log(r)/Math.LN10)),o=r/i;return o>=If?i*=10:o>=Hf?i*=5:o>=Bf&&(i*=2),n=0&&(e=t.slice(r+1),t=t.slice(0,r)),t&&!n.hasOwnProperty(t))throw new Error("unknown type: "+t);return{type:t,name:e}})}function m(t,n){for(var e,r=0,i=t.length;r=0&&(n=t.slice(e+1),t=t.slice(0,e)),{type:t,name:n}})}function A(t){return function(){var n=this.__on;if(n){for(var e,r=0,i=-1,o=n.length;rn?1:t>=n?0:NaN}function U(t){return function(){this.removeAttribute(t)}}function O(t){return function(){this.removeAttributeNS(t.space,t.local)}}function F(t,n){return function(){this.setAttribute(t,n)}}function Y(t,n){return function(){this.setAttributeNS(t.space,t.local,n)}}function I(t,n){return function(){var e=n.apply(this,arguments);null==e?this.removeAttribute(t):this.setAttribute(t,e)}}function H(t,n){return function(){var e=n.apply(this,arguments);null==e?this.removeAttributeNS(t.space,t.local):this.setAttributeNS(t.space,t.local,e)}}function B(t){return function(){this.style.removeProperty(t)}}function j(t,n,e){return function(){this.style.setProperty(t,n,e)}}function X(t,n,e){return function(){var r=n.apply(this,arguments);null==r?this.style.removeProperty(t):this.style.setProperty(t,r,e)}}function W(t,n){return t.style.getPropertyValue(n)||Gl(t).getComputedStyle(t,null).getPropertyValue(n)}function V(t){return function(){delete this[t]}}function $(t,n){return function(){this[t]=n}}function Z(t,n){return function(){var e=n.apply(this,arguments);null==e?delete this[t]:this[t]=e}}function G(t){return t.trim().split(/^|\s+/)}function Q(t){return t.classList||new J(t)}function J(t){this._node=t,this._names=G(t.getAttribute("class")||"")}function K(t,n){for(var e=Q(t),r=-1,i=n.length;++r>8&15|n>>4&240,n>>4&15|240&n,(15&n)<<4|15&n,1)):(n=Mh.exec(t))?Et(parseInt(n[1],16)):(n=Th.exec(t))?new Rt(n[1],n[2],n[3],1):(n=Nh.exec(t))?new Rt(255*n[1]/100,255*n[2]/100,255*n[3]/100,1):(n=kh.exec(t))?Ct(n[1],n[2],n[3],n[4]):(n=Sh.exec(t))?Ct(255*n[1]/100,255*n[2]/100,255*n[3]/100,n[4]):(n=Ah.exec(t))?Lt(n[1],n[2]/100,n[3]/100,1):(n=Eh.exec(t))?Lt(n[1],n[2]/100,n[3]/100,n[4]):Ch.hasOwnProperty(t)?Et(Ch[t]):"transparent"===t?new Rt(NaN,NaN,NaN,0):null}function Et(t){return new Rt(t>>16&255,t>>8&255,255&t,1)}function Ct(t,n,e,r){return r<=0&&(t=n=e=NaN),new Rt(t,n,e,r)}function zt(t){return t instanceof St||(t=At(t)),t?(t=t.rgb(),new Rt(t.r,t.g,t.b,t.opacity)):new Rt}function Pt(t,n,e,r){return 1===arguments.length?zt(t):new Rt(t,n,e,null==r?1:r)}function Rt(t,n,e,r){this.r=+t,this.g=+n,this.b=+e,this.opacity=+r}function Lt(t,n,e,r){return r<=0?t=n=e=NaN:e<=0||e>=1?t=n=NaN:n<=0&&(t=NaN),new Ut(t,n,e,r)}function Dt(t){if(t instanceof Ut)return new Ut(t.h,t.s,t.l,t.opacity);if(t instanceof St||(t=At(t)),!t)return new Ut;if(t instanceof Ut)return t;t=t.rgb();var n=t.r/255,e=t.g/255,r=t.b/255,i=Math.min(n,e,r),o=Math.max(n,e,r),u=NaN,a=o-i,c=(o+i)/2;return a?(u=n===o?(e-r)/a+6*(e0&&c<1?0:u,new Ut(u,a,c,t.opacity)}function qt(t,n,e,r){return 1===arguments.length?Dt(t):new Ut(t,n,e,null==r?1:r)}function Ut(t,n,e,r){this.h=+t,this.s=+n,this.l=+e,this.opacity=+r}function Ot(t,n,e){return 255*(t<60?n+(e-n)*t/60:t<180?e:t<240?n+(e-n)*(240-t)/60:n)}function Ft(t){if(t instanceof It)return new It(t.l,t.a,t.b,t.opacity);if(t instanceof $t){var 
n=t.h*zh;return new It(t.l,Math.cos(n)*t.c,Math.sin(n)*t.c,t.opacity)}t instanceof Rt||(t=zt(t));var e=Xt(t.r),r=Xt(t.g),i=Xt(t.b),o=Ht((.4124564*e+.3575761*r+.1804375*i)/Rh),u=Ht((.2126729*e+.7151522*r+.072175*i)/Lh);return new It(116*u-16,500*(o-u),200*(u-Ht((.0193339*e+.119192*r+.9503041*i)/Dh)),t.opacity)}function Yt(t,n,e,r){return 1===arguments.length?Ft(t):new It(t,n,e,null==r?1:r)}function It(t,n,e,r){this.l=+t,this.a=+n,this.b=+e,this.opacity=+r}function Ht(t){return t>Fh?Math.pow(t,1/3):t/Oh+qh}function Bt(t){return t>Uh?t*t*t:Oh*(t-qh)}function jt(t){return 255*(t<=.0031308?12.92*t:1.055*Math.pow(t,1/2.4)-.055)}function Xt(t){return(t/=255)<=.04045?t/12.92:Math.pow((t+.055)/1.055,2.4)}function Wt(t){if(t instanceof $t)return new $t(t.h,t.c,t.l,t.opacity);t instanceof It||(t=Ft(t));var n=Math.atan2(t.b,t.a)*Ph;return new $t(n<0?n+360:n,Math.sqrt(t.a*t.a+t.b*t.b),t.l,t.opacity)}function Vt(t,n,e,r){return 1===arguments.length?Wt(t):new $t(t,n,e,null==r?1:r)}function $t(t,n,e,r){this.h=+t,this.c=+n,this.l=+e,this.opacity=+r}function Zt(t){if(t instanceof Qt)return new Qt(t.h,t.s,t.l,t.opacity);t instanceof Rt||(t=zt(t));var n=t.r/255,e=t.g/255,r=t.b/255,i=(Vh*r+Xh*n-Wh*e)/(Vh+Xh-Wh),o=r-i,u=(jh*(e-i)-Hh*o)/Bh,a=Math.sqrt(u*u+o*o)/(jh*i*(1-i)),c=a?Math.atan2(u,o)*Ph-120:NaN;return new Qt(c<0?c+360:c,a,i,t.opacity)}function Gt(t,n,e,r){return 1===arguments.length?Zt(t):new Qt(t,n,e,null==r?1:r)}function Qt(t,n,e,r){this.h=+t,this.s=+n,this.l=+e,this.opacity=+r}function Jt(t,n,e,r,i){var o=t*t,u=o*t;return((1-3*t+3*o-u)*n+(4-6*o+3*u)*e+(1+3*t+3*o-3*u)*r+u*i)/6}function Kt(t,n){return function(e){return t+e*n}}function tn(t,n,e){return t=Math.pow(t,e),n=Math.pow(n,e)-t,e=1/e,function(r){return Math.pow(t+r*n,e)}}function nn(t,n){var e=n-t;return e?Kt(t,e>180||e<-180?e-360*Math.round(e/360):e):ep(isNaN(t)?n:t)}function en(t){return 1==(t=+t)?rn:function(n,e){return e-n?tn(n,e,t):ep(isNaN(n)?e:n)}}function rn(t,n){var e=n-t;return e?Kt(t,e):ep(isNaN(t)?n:t)}function on(t){return function(n){var e,r,i=n.length,o=new Array(i),u=new Array(i),a=new Array(i);for(e=0;e180?n+=360:n-t>180&&(t+=360),o.push({i:e.push(i(e)+"rotate(",null,r)-2,x:cp(t,n)})):n&&e.push(i(e)+"rotate("+n+r)}function a(t,n,e,o){t!==n?o.push({i:e.push(i(e)+"skewX(",null,r)-2,x:cp(t,n)}):n&&e.push(i(e)+"skewX("+n+r)}function c(t,n,e,r,o,u){if(t!==e||n!==r){var a=o.push(i(o)+"scale(",null,",",null,")");u.push({i:a-4,x:cp(t,e)},{i:a-2,x:cp(n,r)})}else 1===e&&1===r||o.push(i(o)+"scale("+e+","+r+")")}return function(n,e){var r=[],i=[];return n=t(n),e=t(e),o(n.translateX,n.translateY,e.translateX,e.translateY,r,i),u(n.rotate,e.rotate,r,i),a(n.skewX,e.skewX,r,i),c(n.scaleX,n.scaleY,e.scaleX,e.scaleY,r,i),n=e=null,function(t){for(var n,e=-1,o=i.length;++e=0&&n._call.call(null,t),n=n._next;--Ep}function Mn(){Lp=(Rp=qp.now())+Dp,Ep=Cp=0;try{wn()}finally{Ep=0,Nn(),Lp=0}}function Tn(){var t=qp.now(),n=t-Rp;n>Pp&&(Dp-=n,Rp=t)}function Nn(){for(var t,n,e=Jh,r=1/0;e;)e._call?(r>e._time&&(r=e._time),t=e,e=e._next):(n=e._next,e._next=null,e=t?t._next=n:Jh=n);Kh=t,kn(r)}function kn(t){if(!Ep){Cp&&(Cp=clearTimeout(Cp));t-Lp>24?(t<1/0&&(Cp=setTimeout(Mn,t-qp.now()-Dp)),zp&&(zp=clearInterval(zp))):(zp||(Rp=qp.now(),zp=setInterval(Tn,Pp)),Ep=1,Up(Mn))}}function Sn(t,n){var e=En(t,n);if(e.state>Hp)throw new Error("too late; already scheduled");return e}function An(t,n){var e=En(t,n);if(e.state>jp)throw new Error("too late; already started");return e}function En(t,n){var e=t.__transition;if(!e||!(e=e[n]))throw new Error("transition not 
found");return e}function Cn(t,n,e){function r(t){e.state=Bp,e.timer.restart(i,e.delay,e.time),e.delay<=t&&i(t-e.delay)}function i(r){var s,f,l,h;if(e.state!==Bp)return u();for(s in c)if(h=c[s],h.name===e.name){if(h.state===Xp)return Op(i);h.state===Wp?(h.state=$p,h.timer.stop(),h.on.call("interrupt",t,t.__data__,h.index,h.group),delete c[s]):+s=0&&(t=t.slice(0,n)),!t||"start"===t})}function $n(t,n,e){var r,i,o=Vn(n)?Sn:An;return function(){var u=o(this,t),a=u.on;a!==r&&(i=(r=a).copy()).on(n,e),u.on=i}}function Zn(t){return function(){var n=this.parentNode;for(var e in this.__transition)if(+e!==t)return;n&&n.removeChild(this)}}function Gn(t,n){var e,r,i;return function(){var o=W(this,t),u=(this.style.removeProperty(t),W(this,t));return o===u?null:o===e&&u===r?i:i=n(e=o,r=u)}}function Qn(t){return function(){this.style.removeProperty(t)}}function Jn(t,n,e){var r,i;return function(){var o=W(this,t);return o===e?null:o===r?i:i=n(r=o,e)}}function Kn(t,n,e){var r,i,o;return function(){var u=W(this,t),a=e(this);return null==a&&(this.style.removeProperty(t),a=W(this,t)),u===a?null:u===r&&a===i?o:o=n(r=u,i=a)}}function te(t,n,e){function r(){var r=this,i=n.apply(r,arguments);return i&&function(n){r.style.setProperty(t,i(n),e)}}return r._value=n,r}function ne(t){return function(){this.textContent=t}}function ee(t){return function(){var n=t(this);this.textContent=null==n?"":n}}function re(t,n,e,r){this._groups=t,this._parents=n,this._name=e,this._id=r}function ie(t){return _t().transition(t)}function oe(){return++yd}function ue(t){return+t}function ae(t){return t*t}function ce(t){return t*(2-t)}function se(t){return((t*=2)<=1?t*t:--t*(2-t)+1)/2}function fe(t){return t*t*t}function le(t){return--t*t*t+1}function he(t){return((t*=2)<=1?t*t*t:(t-=2)*t*t+2)/2}function pe(t){return 1-Math.cos(t*Md)}function de(t){return Math.sin(t*Md)}function ve(t){return(1-Math.cos(wd*t))/2}function ge(t){return Math.pow(2,10*t-10)}function ye(t){return 1-Math.pow(2,-10*t)}function _e(t){return((t*=2)<=1?Math.pow(2,10*t-10):2-Math.pow(2,10-10*t))/2}function me(t){return 1-Math.sqrt(1-t*t)}function xe(t){return Math.sqrt(1- --t*t)}function be(t){return((t*=2)<=1?1-Math.sqrt(1-t*t):Math.sqrt(1-(t-=2)*t)+1)/2}function we(t){return 1-Me(1-t)}function Me(t){return(t=+t)Math.abs(t[1]-O[1])?M=!0:w=!0),O=t,b=!0,Vd(),o()}function o(){var t;switch(m=O[0]-U[0],x=O[1]-U[1],k){case Zd:case $d:S&&(m=Math.max(P-l,Math.min(L-v,m)),h=l+m,g=v+m),A&&(x=Math.max(R-p,Math.min(D-y,x)),d=p+x,_=y+x);break;case Gd:S<0?(m=Math.max(P-l,Math.min(L-l,m)),h=l+m,g=v):S>0&&(m=Math.max(P-v,Math.min(L-v,m)),h=l,g=v+m),A<0?(x=Math.max(R-p,Math.min(D-p,x)),d=p+x,_=y):A>0&&(x=Math.max(R-y,Math.min(D-y,x)),d=p,_=y+x);break;case Qd:S&&(h=Math.max(P,Math.min(L,l-m*S)),g=Math.max(P,Math.min(L,v+m*S))),A&&(d=Math.max(R,Math.min(D,p-x*A)),_=Math.max(R,Math.min(D,y+x*A)))}g0&&(l=h-m),A<0?y=_-x:A>0&&(p=d-x),k=Zd,I.attr("cursor",nv.selection),o());break;default:return}Vd()}function s(){switch(t.event.keyCode){case 16:q&&(w=M=q=!1,o());break;case 18:k===Qd&&(S<0?v=g:S>0&&(l=h),A<0?y=_:A>0&&(p=d),k=Gd,o());break;case 32:k===Zd&&(t.event.altKey?(S&&(v=g-m*S,l=h+m*S),A&&(y=_-x*A,p=d+x*A),k=Qd):(S<0?v=g:S>0&&(l=h),A<0?y=_:A>0&&(p=d),k=Gd),I.attr("cursor",nv[N]),o());break;default:return}Vd()}if(t.event.touches){if(t.event.changedTouches.length=(o=(v+y)/2))?v=o:y=o,(f=e>=(u=(g+_)/2))?g=u:_=u,i=p,!(p=p[l=f<<1|s]))return i[l]=d,t;if(a=+t._x.call(null,p.data),c=+t._y.call(null,p.data),n===a&&e===c)return d.next=p,i?i[l]=d:t._root=d,t;do{i=i?i[l]=new Array(4):t._root=new 
Array(4),(s=n>=(o=(v+y)/2))?v=o:y=o,(f=e>=(u=(g+_)/2))?g=u:_=u}while((l=f<<1|s)==(h=(c>=u)<<1|a>=o));return i[h]=p,i[l]=d,t}function er(t){var n,e,r,i,o=t.length,u=new Array(o),a=new Array(o),c=1/0,s=1/0,f=-1/0,l=-1/0;for(e=0;ef&&(f=r),il&&(l=i));for(f",i=n[3]||"-",o=n[4]||"",u=!!n[5],a=n[6]&&+n[6],c=!!n[7],s=n[8]&&+n[8].slice(1),f=n[9]||"" -;"n"===f?(c=!0,f="g"):bg[f]||(f=""),(u||"0"===e&&"="===r)&&(u=!0,e="0",r="="),this.fill=e,this.align=r,this.sign=i,this.symbol=o,this.zero=u,this.width=a,this.comma=c,this.precision=s,this.type=f}function yr(n){return Mg=kg(n),t.format=Mg.format,t.formatPrefix=Mg.formatPrefix,Mg}function _r(){this.reset()}function mr(t,n,e){var r=t.s=n+e,i=r-n,o=r-i;t.t=n-o+(e-i)}function xr(t){return t>1?0:t<-1?fy:Math.acos(t)}function br(t){return t>1?ly:t<-1?-ly:Math.asin(t)}function wr(t){return(t=Ty(t/2))*t}function Mr(){}function Tr(t,n){t&&Ey.hasOwnProperty(t.type)&&Ey[t.type](t,n)}function Nr(t,n,e){var r,i=-1,o=t.length-e;for(n.lineStart();++i=0?1:-1,i=r*e,o=my(n),u=Ty(n),a=Dg*u,c=Lg*o+a*my(i),s=a*r*Ty(i);zy.add(_y(s,c)),Rg=t,Lg=o,Dg=u}function zr(t){return[_y(t[1],t[0]),br(t[2])]}function Pr(t){var n=t[0],e=t[1],r=my(e);return[r*my(n),r*Ty(n),Ty(e)]}function Rr(t,n){return t[0]*n[0]+t[1]*n[1]+t[2]*n[2]}function Lr(t,n){return[t[1]*n[2]-t[2]*n[1],t[2]*n[0]-t[0]*n[2],t[0]*n[1]-t[1]*n[0]]}function Dr(t,n){t[0]+=n[0],t[1]+=n[1],t[2]+=n[2]}function qr(t,n){return[t[0]*n,t[1]*n,t[2]*n]}function Ur(t){var n=ky(t[0]*t[0]+t[1]*t[1]+t[2]*t[2]);t[0]/=n,t[1]/=n,t[2]/=n}function Or(t,n){jg.push(Xg=[qg=t,Og=t]),nFg&&(Fg=n)}function Fr(t,n){var e=Pr([t*vy,n*vy]);if(Bg){var r=Lr(Bg,e),i=[r[1],-r[0],0],o=Lr(i,r);Ur(o),o=zr(o);var u,a=t-Yg,c=a>0?1:-1,s=o[0]*dy*c,f=gy(a)>180;f^(c*YgFg&&(Fg=u):(s=(s+360)%360-180,f^(c*YgFg&&(Fg=n))),f?tXr(qg,Og)&&(Og=t):Xr(t,Og)>Xr(qg,Og)&&(qg=t):Og>=qg?(tOg&&(Og=t)):t>Yg?Xr(qg,t)>Xr(qg,Og)&&(Og=t):Xr(t,Og)>Xr(qg,Og)&&(qg=t)}else jg.push(Xg=[qg=t,Og=t]);nFg&&(Fg=n),Bg=e,Yg=t}function Yr(){qy.point=Fr}function Ir(){Xg[0]=qg,Xg[1]=Og,qy.point=Or,Bg=null}function Hr(t,n){if(Bg){var e=t-Yg;Dy.add(gy(e)>180?e+(e>0?360:-360):e)}else Ig=t,Hg=n;Ry.point(t,n),Fr(t,n)}function Br(){Ry.lineStart()}function jr(){Hr(Ig,Hg),Ry.lineEnd(),gy(Dy)>sy&&(qg=-(Og=180)),Xg[0]=qg,Xg[1]=Og,Bg=null}function Xr(t,n){return(n-=t)<0?n+360:n}function Wr(t,n){return t[0]-n[0]}function Vr(t,n){return t[0]<=t[1]?t[0]<=n&&n<=t[1]:nfy?t-py:t<-fy?t+py:t,n]}function oi(t,n,e){return(t%=py)?n||e?Iy(ai(t),ci(n,e)):ai(t):n||e?ci(n,e):ii}function ui(t){return function(n,e){return n+=t,[n>fy?n-py:n<-fy?n+py:n,e]}}function ai(t){var n=ui(t);return n.invert=ui(-t),n}function ci(t,n){function e(t,n){var e=my(n),a=my(t)*e,c=Ty(t)*e,s=Ty(n),f=s*r+a*i;return[_y(c*o-f*u,a*r-s*i),br(f*o+c*u)]}var r=my(t),i=Ty(t),o=my(n),u=Ty(n);return e.invert=function(t,n){var e=my(n),a=my(t)*e,c=Ty(t)*e,s=Ty(n),f=s*o-c*u;return[_y(c*o+s*u,a*r+f*i),br(f*r-a*i)]},e}function si(t,n,e,r,i,o){if(e){var u=my(n),a=Ty(n),c=r*e;null==i?(i=n+r*py,o=n-c/2):(i=fi(u,i),o=fi(u,o),(r>0?io)&&(i+=r*py));for(var s,f=i;r>0?f>o:f1}function di(t,n){return((t=t.x)[0]<0?t[1]-ly-sy:ly-t[1])-((n=n.x)[0]<0?n[1]-ly-sy:ly-n[1])}function vi(t){var n,e=NaN,r=NaN,i=NaN;return{lineStart:function(){t.lineStart(),n=1},point:function(o,u){var a=o>0?fy:-fy,c=gy(o-e);gy(c-fy)0?ly:-ly),t.point(i,r),t.lineEnd(),t.lineStart(),t.point(a,r),t.point(o,r),n=0):i!==a&&c>=fy&&(gy(e-i)sy?yy((Ty(n)*(o=my(r))*Ty(e)-Ty(r)*(i=my(n))*Ty(t))/(i*o*u)):(n+r)/2}function yi(t,n,e,r){var 
i;if(null==t)i=e*ly,r.point(-fy,i),r.point(0,i),r.point(fy,i),r.point(fy,0),r.point(fy,-i),r.point(0,-i),r.point(-fy,-i),r.point(-fy,0),r.point(-fy,i);else if(gy(t[0]-n[0])>sy){var o=t[0]0)do{s.point(0===f||3===f?t:e,f>1?r:n)}while((f=(f+a+4)%4)!==l);else s.point(o[0],o[1])}function u(r,i){return gy(r[0]-t)0?0:3:gy(r[0]-e)0?2:1:gy(r[1]-n)0?1:0:i>0?3:2}function a(t,n){return c(t.x,n.x)}function c(t,n){var e=u(t,1),r=u(n,1);return e!==r?e-r:0===e?n[1]-t[1]:1===e?t[0]-n[0]:2===e?t[1]-n[1]:n[0]-t[0]}return function(u){function c(t,n){i(t,n)&&k.point(t,n)}function s(){for(var n=0,e=0,i=g.length;er&&(l-o)*(r-u)>(h-u)*(t-o)&&++n:h<=r&&(l-o)*(r-u)<(h-u)*(t-o)&&--n;return n}function f(){k=S,v=[],g=[],N=!0}function l(){var t=s(),n=N&&t,e=(v=Kf(v)).length;(n||e)&&(u.polygonStart(),n&&(u.lineStart(),o(null,null,1,u),u.lineEnd()),e&&r_(v,a,t,o,u),u.polygonEnd()),k=u,v=g=y=null}function h(){A.point=d,g&&g.push(y=[]),T=!0,M=!1,b=w=NaN}function p(){v&&(d(_,m),x&&M&&S.rejoin(),v.push(S.result())),A.point=c,M&&k.lineEnd()}function d(o,u){var a=i(o,u);if(g&&y.push([o,u]),T)_=o,m=u,x=a,T=!1,a&&(k.lineStart(),k.point(o,u));else if(a&&M)k.point(o,u);else{var c=[b=Math.max(l_,Math.min(f_,b)),w=Math.max(l_,Math.min(f_,w))],s=[o=Math.max(l_,Math.min(f_,o)),u=Math.max(l_,Math.min(f_,u))];s_(c,s,t,n,e,r)?(M||(k.lineStart(),k.point(c[0],c[1])),k.point(s[0],s[1]),a||k.lineEnd(),N=!1):a&&(k.lineStart(),k.point(o,u),N=!1)}b=o,w=u,M=a}var v,g,y,_,m,x,b,w,M,T,N,k=u,S=n_(),A={point:c,lineStart:h,lineEnd:p,polygonStart:f,polygonEnd:l};return A}}function mi(){d_.point=bi,d_.lineEnd=xi}function xi(){d_.point=d_.lineEnd=Mr}function bi(t,n){t*=vy,n*=vy,Hy=t,By=Ty(n),jy=my(n),d_.point=wi}function wi(t,n){t*=vy,n*=vy;var e=Ty(n),r=my(n),i=gy(t-Hy),o=my(i),u=Ty(i),a=r*u,c=jy*e-By*r*o,s=By*e+jy*r*o;p_.add(_y(ky(a*a+c*c),s)),Hy=t,By=e,jy=r}function Mi(t,n){return!(!t||!x_.hasOwnProperty(t.type))&&x_[t.type](t,n)}function Ti(t,n){return 0===__(t,n)}function Ni(t,n){var e=__(t[0],t[1]);return __(t[0],n)+__(n,t[1])<=e+sy}function ki(t,n){return!!o_(t.map(Si),Ai(n))}function Si(t){return t=t.map(Ai),t.pop(),t}function Ai(t){return[t[0]*vy,t[1]*vy]}function Ei(t,n,e){var r=Yf(t,n-sy,e).concat(n);return function(t){return r.map(function(n){return[t,n]})}}function Ci(t,n,e){var r=Yf(t,n-sy,e).concat(n);return function(t){return r.map(function(n){return[n,t]})}}function zi(){function t(){return{type:"MultiLineString",coordinates:n()}}function n(){return Yf(xy(o/g)*g,i,g).map(h).concat(Yf(xy(s/y)*y,c,y).map(p)).concat(Yf(xy(r/d)*d,e,d).filter(function(t){return gy(t%g)>sy}).map(f)).concat(Yf(xy(a/v)*v,u,v).filter(function(t){return gy(t%y)>sy}).map(l))}var e,r,i,o,u,a,c,s,f,l,h,p,d=10,v=d,g=90,y=360,_=2.5;return t.lines=function(){return n().map(function(t){return{type:"LineString",coordinates:t}})},t.outline=function(){return{type:"Polygon",coordinates:[h(o).concat(p(c).slice(1),h(i).reverse().slice(1),p(s).reverse().slice(1))]}},t.extent=function(n){return arguments.length?t.extentMajor(n).extentMinor(n):t.extentMinor()},t.extentMajor=function(n){return arguments.length?(o=+n[0][0],i=+n[1][0],s=+n[0][1],c=+n[1][1],o>i&&(n=o,o=i,i=n),s>c&&(n=s,s=c,c=n),t.precision(_)):[[o,s],[i,c]]},t.extentMinor=function(n){return arguments.length?(r=+n[0][0],e=+n[1][0],a=+n[0][1],u=+n[1][1],r>e&&(n=r,r=e,e=n),a>u&&(n=a,a=u,u=n),t.precision(_)):[[r,a],[e,u]]},t.step=function(n){return arguments.length?t.stepMajor(n).stepMinor(n):t.stepMinor()},t.stepMajor=function(n){return arguments.length?(g=+n[0],y=+n[1],t):[g,y]},t.stepMinor=function(n){return 
arguments.length?(d=+n[0],v=+n[1],t):[d,v]},t.precision=function(n){return arguments.length?(_=+n,f=Ei(a,u,90),l=Ci(r,e,_),h=Ei(s,c,90),p=Ci(o,i,_),t):_},t.extentMajor([[-180,-90+sy],[180,90-sy]]).extentMinor([[-180,-80-sy],[180,80+sy]])}function Pi(){return zi()()}function Ri(){k_.point=Li}function Li(t,n){k_.point=Di,Xy=Vy=t,Wy=$y=n}function Di(t,n){N_.add($y*t-Vy*n),Vy=t,$y=n}function qi(){Di(Xy,Wy)}function Ui(t,n){tE_&&(E_=t),nC_&&(C_=n)}function Oi(t,n){P_+=t,R_+=n,++L_}function Fi(){I_.point=Yi}function Yi(t,n){I_.point=Ii,Oi(Qy=t,Jy=n)}function Ii(t,n){var e=t-Qy,r=n-Jy,i=ky(e*e+r*r);D_+=i*(Qy+t)/2,q_+=i*(Jy+n)/2,U_+=i,Oi(Qy=t,Jy=n)}function Hi(){I_.point=Oi}function Bi(){I_.point=Xi}function ji(){Wi(Zy,Gy)}function Xi(t,n){I_.point=Wi,Oi(Zy=Qy=t,Gy=Jy=n)}function Wi(t,n){var e=t-Qy,r=n-Jy,i=ky(e*e+r*r);D_+=i*(Qy+t)/2,q_+=i*(Jy+n)/2,U_+=i,i=Jy*t-Qy*n,O_+=i*(Qy+t),F_+=i*(Jy+n),Y_+=3*i,Oi(Qy=t,Jy=n)}function Vi(t){this._context=t}function $i(t,n){$_.point=Zi,B_=X_=t,j_=W_=n}function Zi(t,n){X_-=t,W_-=n,V_.add(ky(X_*X_+W_*W_)),X_=t,W_=n}function Gi(){this._string=[]}function Qi(t){return"m0,"+t+"a"+t+","+t+" 0 1,1 0,"+-2*t+"a"+t+","+t+" 0 1,1 0,"+2*t+"z"}function Ji(t){return function(n){var e=new Ki;for(var r in t)e[r]=t[r];return e.stream=n,e}}function Ki(){}function to(t,n,e){var r=t.clipExtent&&t.clipExtent();return t.scale(150).translate([0,0]),null!=r&&t.clipExtent(null),Cy(e,t.stream(z_)),n(z_.result()),null!=r&&t.clipExtent(r),t}function no(t,n,e){return to(t,function(e){var r=n[1][0]-n[0][0],i=n[1][1]-n[0][1],o=Math.min(r/(e[1][0]-e[0][0]),i/(e[1][1]-e[0][1])),u=+n[0][0]+(r-o*(e[1][0]+e[0][0]))/2,a=+n[0][1]+(i-o*(e[1][1]+e[0][1]))/2;t.scale(150*o).translate([u,a])},e)}function eo(t,n,e){return no(t,[[0,0],n],e)}function ro(t,n,e){return to(t,function(e){var r=+n,i=r/(e[1][0]-e[0][0]),o=(r-i*(e[1][0]+e[0][0]))/2,u=-i*e[0][1];t.scale(150*i).translate([o,u])},e)}function io(t,n,e){return to(t,function(e){var r=+n,i=r/(e[1][1]-e[0][1]),o=-i*e[0][0],u=(r-i*(e[1][1]+e[0][1]))/2;t.scale(150*i).translate([o,u])},e)}function oo(t){return Ji({point:function(n,e){n=t(n,e),this.stream.point(n[0],n[1])}})}function uo(t,n){function e(r,i,o,u,a,c,s,f,l,h,p,d,v,g){var y=s-r,_=f-i,m=y*y+_*_;if(m>4*n&&v--){var x=u+h,b=a+p,w=c+d,M=ky(x*x+b*b+w*w),T=br(w/=M),N=gy(gy(w)-1)n||gy((y*E+_*C)/m-.5)>.3||u*h+a*p+c*d2?t[2]%360*vy:0,i()):[b*dy,w*dy,M*dy]},n.precision=function(t){return arguments.length?(E=K_(r,A=t*t),o()):ky(A)},n.fitExtent=function(t,e){return no(n,t,e)},n.fitSize=function(t,e){return eo(n,t,e)},n.fitWidth=function(t,e){return ro(n,t,e)},n.fitHeight=function(t,e){return io(n,t,e)},function(){return u=t.apply(this,arguments),n.invert=u.invert&&e,i()}}function fo(t){var n=0,e=fy/3,r=so(t),i=r(n,e);return i.parallels=function(t){return arguments.length?r(n=t[0]*vy,e=t[1]*vy):[n*dy,e*dy]},i}function lo(t){function n(t,n){return[t*e,Ty(n)/e]}var e=my(t);return n.invert=function(t,n){return[t/e,br(n*e)]},n}function ho(t,n){function e(t,n){var e=ky(o-2*i*Ty(n))/i;return[e*Ty(t*=i),u-e*my(t)]}var r=Ty(t),i=(r+Ty(n))/2;if(gy(i)0?n<-ly+sy&&(n=-ly+sy):n>ly-sy&&(n=ly-sy);var e=o/My(mo(n),i);return[e*Ty(i*t),o-e*my(i*t)]}var r=my(t),i=t===n?Ty(t):wy(r/my(n))/wy(mo(n)/mo(t)),o=r*My(mo(t),i)/i;return i?(e.invert=function(t,n){var e=o-n,r=Ny(i)*ky(t*t+e*e);return[_y(t,gy(e))/i*Ny(e),2*yy(My(o/r,1/i))-ly]},e):yo}function bo(t,n){return[t,n]}function wo(t,n){function e(t,n){var e=o-n,r=i*t;return[e*Ty(r),o-e*my(r)]}var r=my(t),i=t===n?Ty(t):(r-my(n))/(n-t),o=r/i+t;return gy(i)=0;)n+=e[r].value;else 
n=1;t.value=n}function Uo(t,n){if(t===n)return t;var e=t.ancestors(),r=n.ancestors(),i=null;for(t=e.pop(),n=r.pop();t===n;)i=t,t=e.pop(),n=r.pop();return i}function Oo(t,n){var e,r,i,o,u,a=new Bo(t),c=+t.value&&(a.value=t.value),s=[a];for(null==n&&(n=Yo);e=s.pop();)if(c&&(e.value=+e.data.value),(i=n(e.data))&&(u=i.length))for(e.children=new Array(u),o=u-1;o>=0;--o)s.push(r=e.children[o]=new Bo(i[o])),r.parent=e,r.depth=e.depth+1;return a.eachBefore(Ho)}function Fo(){return Oo(this).eachBefore(Io)}function Yo(t){return t.children}function Io(t){t.data=t.data.data}function Ho(t){var n=0;do{t.height=n}while((t=t.parent)&&t.height<++n)}function Bo(t){this.data=t,this.depth=this.height=0,this.parent=null}function jo(t){for(var n,e,r=t.length;r;)e=Math.random()*r--|0,n=t[r],t[r]=t[e],t[e]=n;return t}function Xo(t,n){var e,r;if($o(n,t))return[n];for(e=0;e0&&e*e>r*r+i*i}function $o(t,n){for(var e=0;ee*e+r*r}function nu(t){var n=t._,e=t.next._,r=n.r+e.r,i=(n.x*e.r+e.x*n.r)/r,o=(n.y*e.r+e.y*n.r)/r;return i*i+o*o}function eu(t){this._=t,this.next=null,this.previous=null}function ru(t){if(!(i=t.length))return 0;var n,e,r,i,o,u,a,c,s,f,l;if(n=t[0],n.x=0,n.y=0,!(i>1))return n.r;if(e=t[1],n.x=-e.r,e.x=n.r,e.y=0,!(i>2))return n.r+e.r;Ko(e,n,r=t[2]),n=new eu(n),e=new eu(e),r=new eu(r),n.next=r.previous=e,e.next=n.previous=r,r.next=e.previous=n;t:for(a=3;a=0;)n=i[o],n.z+=e,n.m+=e,e+=n.s+(r+=n.c)}function _u(t,n,e){return t.a.parent===n.parent?t.a:e}function mu(t,n){this._=t,this.parent=null,this.children=null,this.A=null,this.a=this,this.z=0,this.m=0,this.c=0,this.s=0,this.t=null,this.i=n}function xu(t){for(var n,e,r,i,o,u=new mu(t,0),a=[u];n=a.pop();)if(r=n._.children)for(n.children=new Array(o=r.length),i=o-1;i>=0;--i)a.push(e=n.children[i]=new mu(r[i],i)),e.parent=n;return(u.parent=new mu(null,0)).children=[u],u}function bu(t,n,e,r,i,o){for(var u,a,c,s,f,l,h,p,d,v,g,y=[],_=n.children,m=0,x=0,b=_.length,w=n.value;mh&&(h=a),g=f*f*v,(p=Math.max(h/g,g/l))>d){f-=a;break}d=p}y.push(u={value:f,dice:c1&&Jm(t[e[r-2]],t[e[r-1]],t[i])<=0;)--r;e[r++]=i}return e.slice(0,r)}function Tu(t){this._size=t,this._call=this._error=null,this._tasks=[],this._data=[],this._waiting=this._active=this._ended=this._start=0}function Nu(t){if(!t._start)try{ku(t)}catch(n){if(t._tasks[t._ended+t._active-1])Au(t,n);else if(!t._data)throw n}}function ku(t){for(;t._start=t._waiting&&t._active=0;)if((e=t._tasks[r])&&(t._tasks[r]=null,e.abort))try{e.abort()}catch(n){}t._active=NaN,Eu(t)}function Eu(t){if(!t._active&&t._call){var n=t._data;t._data=void 0,t._call(t._error,n)}}function Cu(t){if(null==t)t=1/0;else if(!((t=+t)>=1))throw new Error("invalid concurrency");return new Tu(t)}function zu(t){return function(n,e){t(null==n?e:null)}}function Pu(t){var n=t.responseType;return n&&"text"!==n?t.response:t.responseText}function Ru(t,n){return function(e){return t(e.responseText,n)}}function Lu(t){function n(n){var o=n+"",u=e.get(o);if(!u){if(i!==Mx)return i;e.set(o,u=r.push(n))}return t[(u-1)%t.length]}var e=Xe(),r=[],i=Mx;return t=null==t?[]:wx.call(t),n.domain=function(t){if(!arguments.length)return r.slice();r=[],e=Xe();for(var i,o,u=-1,a=t.length;++u=e?1:r(t)}}}function Yu(t){return function(n,e){var r=t(n=+n,e=+e);return function(t){return t<=0?n:t>=1?e:r(t)}}}function Iu(t,n,e,r){var i=t[0],o=t[1],u=n[0],a=n[1];return o2?Hu:Iu,o=u=null,r}function r(n){return(o||(o=i(a,c,f?Fu(t):t,s)))(+n)}var i,o,u,a=kx,c=kx,s=pp,f=!1;return r.invert=function(t){return(u||(u=i(c,a,Ou,f?Yu(n):n)))(+t)},r.domain=function(t){return 
arguments.length?(a=bx.call(t,Nx),e()):a.slice()},r.range=function(t){return arguments.length?(c=wx.call(t),e()):c.slice()},r.rangeRound=function(t){return c=wx.call(t),s=dp,e()},r.clamp=function(t){return arguments.length?(f=!!t,e()):f},r.interpolate=function(t){return arguments.length?(s=t,e()):s},e()}function Xu(t){var n=t.domain;return t.ticks=function(t){var e=n();return jf(e[0],e[e.length-1],null==t?10:t)},t.tickFormat=function(t,e){return Sx(n(),t,e)},t.nice=function(e){null==e&&(e=10);var i,o=n(),u=0,a=o.length-1,c=o[u],s=o[a];return s0?(c=Math.floor(c/i)*i,s=Math.ceil(s/i)*i,i=r(c,s,e)):i<0&&(c=Math.ceil(c*i)/i,s=Math.floor(s*i)/i,i=r(c,s,e)),i>0?(o[u]=Math.floor(c/i)*i,o[a]=Math.ceil(s/i)*i,n(o)):i<0&&(o[u]=Math.ceil(c*i)/i,o[a]=Math.floor(s*i)/i,n(o)),t},t}function Wu(){var t=ju(Ou,cp);return t.copy=function(){return Bu(t,Wu())},Xu(t)}function Vu(){function t(t){return+t}var n=[0,1];return t.invert=t,t.domain=t.range=function(e){return arguments.length?(n=bx.call(e,Nx),t):n.slice()},t.copy=function(){return Vu().domain(n)},Xu(t)}function $u(t,n){return(n=Math.log(n/t))?function(e){return Math.log(e/t)/n}:Tx(n)}function Zu(t,n){return t<0?function(e){return-Math.pow(-n,e)*Math.pow(-t,1-e)}:function(e){return Math.pow(n,e)*Math.pow(t,1-e)}}function Gu(t){return isFinite(t)?+("1e"+t):t<0?0:t}function Qu(t){return 10===t?Gu:t===Math.E?Math.exp:function(n){return Math.pow(t,n)}}function Ju(t){return t===Math.E?Math.log:10===t&&Math.log10||2===t&&Math.log2||(t=Math.log(t),function(n){return Math.log(n)/t})}function Ku(t){return function(n){return-t(-n)}}function ta(){function n(){return o=Ju(i),u=Qu(i),r()[0]<0&&(o=Ku(o),u=Ku(u)),e}var e=ju($u,Zu).domain([1,10]),r=e.domain,i=10,o=Ju(10),u=Qu(10);return e.base=function(t){return arguments.length?(i=+t,n()):i},e.domain=function(t){return arguments.length?(r(t),n()):r()},e.ticks=function(t){var n,e=r(),a=e[0],c=e[e.length-1];(n=c0){for(;hc)break;v.push(l)}}else for(;h=1;--f)if(!((l=s*f)c)break;v.push(l)}}else v=jf(h,p,Math.min(p-h,d)).map(u);return n?v.reverse():v},e.tickFormat=function(n,r){if(null==r&&(r=10===i?".0e":","),"function"!=typeof r&&(r=t.format(r)),n===1/0)return r;null==n&&(n=10);var a=Math.max(1,i*n/e.ticks().length);return function(t){var n=t/u(Math.round(o(t)));return n*i0?i[n-1]:e[0],n=i?[o[i-1],r]:[o[n-1],o[n]]},t.copy=function(){return oa().domain([e,r]).range(u)},Xu(t)}function ua(){function t(t){if(t<=t)return e[kf(n,t,0,r)]}var n=[.5],e=[0,1],r=1;return t.domain=function(i){return arguments.length?(n=wx.call(i),r=Math.min(n.length,e.length-1),t):n.slice()},t.range=function(i){return arguments.length?(e=wx.call(i),r=Math.min(n.length,e.length-1),t):e.slice()},t.invertExtent=function(t){var r=e.indexOf(t);return[n[r-1],n[r]]},t.copy=function(){return ua().domain(n).range(e)},t}function aa(t,n,e,r){function i(n){return t(n=new Date(+n)),n}return i.floor=i,i.ceil=function(e){return t(e=new Date(e-1)),n(e,1),t(e),e},i.round=function(t){var n=i(t),e=i.ceil(t);return t-n0))return a;do{a.push(u=new Date(+e)),n(e,o),t(e)}while(u=n)for(;t(n),!e(n);)n.setTime(n-1)},function(t,r){if(t>=t)if(r<0)for(;++r<=0;)for(;n(t,-1),!e(t););else for(;--r>=0;)for(;n(t,1),!e(t););})},e&&(i.count=function(n,r){return Ex.setTime(+n),Cx.setTime(+r),t(Ex),t(Cx),Math.floor(e(Ex,Cx))},i.every=function(t){return t=Math.floor(t),isFinite(t)&&t>0?t>1?i.filter(r?function(n){return r(n)%t==0 -}:function(n){return i.count(0,n)%t==0}):i:null}),i}function ca(t){return 
aa(function(n){n.setDate(n.getDate()-(n.getDay()+7-t)%7),n.setHours(0,0,0,0)},function(t,n){t.setDate(t.getDate()+7*n)},function(t,n){return(n-t-(n.getTimezoneOffset()-t.getTimezoneOffset())*Rx)/Lx})}function sa(t){return aa(function(n){n.setUTCDate(n.getUTCDate()-(n.getUTCDay()+7-t)%7),n.setUTCHours(0,0,0,0)},function(t,n){t.setUTCDate(t.getUTCDate()+7*n)},function(t,n){return(n-t)/Lx})}function fa(t){if(0<=t.y&&t.y<100){var n=new Date(-1,t.m,t.d,t.H,t.M,t.S,t.L);return n.setFullYear(t.y),n}return new Date(t.y,t.m,t.d,t.H,t.M,t.S,t.L)}function la(t){if(0<=t.y&&t.y<100){var n=new Date(Date.UTC(-1,t.m,t.d,t.H,t.M,t.S,t.L));return n.setUTCFullYear(t.y),n}return new Date(Date.UTC(t.y,t.m,t.d,t.H,t.M,t.S,t.L))}function ha(t){return{y:t,m:0,d:1,H:0,M:0,S:0,L:0}}function pa(t){function n(t,n){return function(e){var r,i,o,u=[],a=-1,c=0,s=t.length;for(e instanceof Date||(e=new Date(+e));++a53)return null;"w"in u||(u.w=1),"Z"in u?(i=la(ha(u.y)),o=i.getUTCDay(),i=o>4||0===o?db.ceil(i):db(i),i=lb.offset(i,7*(u.V-1)),u.y=i.getUTCFullYear(),u.m=i.getUTCMonth(),u.d=i.getUTCDate()+(u.w+6)%7):(i=n(ha(u.y)),o=i.getDay(),i=o>4||0===o?jx.ceil(i):jx(i),i=Ix.offset(i,7*(u.V-1)),u.y=i.getFullYear(),u.m=i.getMonth(),u.d=i.getDate()+(u.w+6)%7)}else("W"in u||"U"in u)&&("w"in u||(u.w="u"in u?u.u%7:"W"in u?1:0),o="Z"in u?la(ha(u.y)).getUTCDay():n(ha(u.y)).getDay(),u.m=0,u.d="W"in u?(u.w+6)%7+7*u.W-(o+5)%7:u.w+7*u.U-(o+6)%7);return"Z"in u?(u.H+=u.Z/100|0,u.M+=u.Z%100,la(u)):n(u)}}function r(t,n,e,r){for(var i,o,u=0,a=n.length,c=e.length;u=c)return-1;if(37===(i=n.charCodeAt(u++))){if(i=n.charAt(u++),!(o=H[i in Pb?n.charAt(u++):i])||(r=o(t,e,r))<0)return-1}else if(i!=e.charCodeAt(r++))return-1}return r}function i(t,n,e){var r=C.exec(n.slice(e));return r?(t.p=z[r[0].toLowerCase()],e+r[0].length):-1}function o(t,n,e){var r=L.exec(n.slice(e));return r?(t.w=D[r[0].toLowerCase()],e+r[0].length):-1}function u(t,n,e){var r=P.exec(n.slice(e));return r?(t.w=R[r[0].toLowerCase()],e+r[0].length):-1}function a(t,n,e){var r=O.exec(n.slice(e));return r?(t.m=F[r[0].toLowerCase()],e+r[0].length):-1}function c(t,n,e){var r=q.exec(n.slice(e));return r?(t.m=U[r[0].toLowerCase()],e+r[0].length):-1}function s(t,n,e){return r(t,w,n,e)}function f(t,n,e){return r(t,M,n,e)}function l(t,n,e){return r(t,T,n,e)}function h(t){return S[t.getDay()]}function p(t){return k[t.getDay()]}function d(t){return E[t.getMonth()]}function v(t){return A[t.getMonth()]}function g(t){return N[+(t.getHours()>=12)]}function y(t){return S[t.getUTCDay()]}function _(t){return k[t.getUTCDay()]}function m(t){return E[t.getUTCMonth()]}function x(t){return A[t.getUTCMonth()]}function b(t){return N[+(t.getUTCHours()>=12)]}var w=t.dateTime,M=t.date,T=t.time,N=t.periods,k=t.days,S=t.shortDays,A=t.months,E=t.shortMonths,C=ga(N),z=ya(N),P=ga(k),R=ya(k),L=ga(S),D=ya(S),q=ga(A),U=ya(A),O=ga(E),F=ya(E),Y={a:h,A:p,b:d,B:v,c:null,d:Ua,e:Ua,f:Ha,H:Oa,I:Fa,j:Ya,L:Ia,m:Ba,M:ja,p:g,Q:_c,s:mc,S:Xa,u:Wa,U:Va,V:$a,w:Za,W:Ga,x:null,X:null,y:Qa,Y:Ja,Z:Ka,"%":yc},I={a:y,A:_,b:m,B:x,c:null,d:tc,e:tc,f:oc,H:nc,I:ec,j:rc,L:ic,m:uc,M:ac,p:b,Q:_c,s:mc,S:cc,u:sc,U:fc,V:lc,w:hc,W:pc,x:null,X:null,y:dc,Y:vc,Z:gc,"%":yc},H={a:o,A:u,b:a,B:c,c:s,d:Sa,e:Sa,f:Ra,H:Ea,I:Ea,j:Aa,L:Pa,m:ka,M:Ca,p:i,Q:Da,s:qa,S:za,u:ma,U:xa,V:ba,w:_a,W:wa,x:f,X:l,y:Ta,Y:Ma,Z:Na,"%":La};return Y.x=n(M,Y),Y.X=n(T,Y),Y.c=n(w,Y),I.x=n(M,I),I.X=n(T,I),I.c=n(w,I),{format:function(t){var e=n(t+="",Y);return e.toString=function(){return t},e},parse:function(t){var n=e(t+="",fa);return n.toString=function(){return 
t},n},utcFormat:function(t){var e=n(t+="",I);return e.toString=function(){return t},e},utcParse:function(t){var n=e(t,la);return n.toString=function(){return t},n}}}function da(t,n,e){var r=t<0?"-":"",i=(r?-t:t)+"",o=i.length;return r+(o68?1900:2e3),e+r[0].length):-1}function Na(t,n,e){var r=/^(Z)|([+-]\d\d)(?::?(\d\d))?/.exec(n.slice(e,e+6));return r?(t.Z=r[1]?0:-(r[2]+(r[3]||"00")),e+r[0].length):-1}function ka(t,n,e){var r=Rb.exec(n.slice(e,e+2));return r?(t.m=r[0]-1,e+r[0].length):-1}function Sa(t,n,e){var r=Rb.exec(n.slice(e,e+2));return r?(t.d=+r[0],e+r[0].length):-1}function Aa(t,n,e){var r=Rb.exec(n.slice(e,e+3));return r?(t.m=0,t.d=+r[0],e+r[0].length):-1}function Ea(t,n,e){var r=Rb.exec(n.slice(e,e+2));return r?(t.H=+r[0],e+r[0].length):-1}function Ca(t,n,e){var r=Rb.exec(n.slice(e,e+2));return r?(t.M=+r[0],e+r[0].length):-1}function za(t,n,e){var r=Rb.exec(n.slice(e,e+2));return r?(t.S=+r[0],e+r[0].length):-1}function Pa(t,n,e){var r=Rb.exec(n.slice(e,e+3));return r?(t.L=+r[0],e+r[0].length):-1}function Ra(t,n,e){var r=Rb.exec(n.slice(e,e+6));return r?(t.L=Math.floor(r[0]/1e3),e+r[0].length):-1}function La(t,n,e){var r=Lb.exec(n.slice(e,e+1));return r?e+r[0].length:-1}function Da(t,n,e){var r=Rb.exec(n.slice(e));return r?(t.Q=+r[0],e+r[0].length):-1}function qa(t,n,e){var r=Rb.exec(n.slice(e));return r?(t.Q=1e3*+r[0],e+r[0].length):-1}function Ua(t,n){return da(t.getDate(),n,2)}function Oa(t,n){return da(t.getHours(),n,2)}function Fa(t,n){return da(t.getHours()%12||12,n,2)}function Ya(t,n){return da(1+Ix.count(ob(t),t),n,3)}function Ia(t,n){return da(t.getMilliseconds(),n,3)}function Ha(t,n){return Ia(t,n)+"000"}function Ba(t,n){return da(t.getMonth()+1,n,2)}function ja(t,n){return da(t.getMinutes(),n,2)}function Xa(t,n){return da(t.getSeconds(),n,2)}function Wa(t){var n=t.getDay();return 0===n?7:n}function Va(t,n){return da(Bx.count(ob(t),t),n,2)}function $a(t,n){var e=t.getDay();return t=e>=4||0===e?Vx(t):Vx.ceil(t),da(Vx.count(ob(t),t)+(4===ob(t).getDay()),n,2)}function Za(t){return t.getDay()}function Ga(t,n){return da(jx.count(ob(t),t),n,2)}function Qa(t,n){return da(t.getFullYear()%100,n,2)}function Ja(t,n){return da(t.getFullYear()%1e4,n,4)}function Ka(t){var n=t.getTimezoneOffset();return(n>0?"-":(n*=-1,"+"))+da(n/60|0,"0",2)+da(n%60,"0",2)}function tc(t,n){return da(t.getUTCDate(),n,2)}function nc(t,n){return da(t.getUTCHours(),n,2)}function ec(t,n){return da(t.getUTCHours()%12||12,n,2)}function rc(t,n){return da(1+lb.count(Eb(t),t),n,3)}function ic(t,n){return da(t.getUTCMilliseconds(),n,3)}function oc(t,n){return ic(t,n)+"000"}function uc(t,n){return da(t.getUTCMonth()+1,n,2)}function ac(t,n){return da(t.getUTCMinutes(),n,2)}function cc(t,n){return da(t.getUTCSeconds(),n,2)}function sc(t){var n=t.getUTCDay();return 0===n?7:n}function fc(t,n){return da(pb.count(Eb(t),t),n,2)}function lc(t,n){var e=t.getUTCDay();return t=e>=4||0===e?yb(t):yb.ceil(t),da(yb.count(Eb(t),t)+(4===Eb(t).getUTCDay()),n,2)}function hc(t){return t.getUTCDay()}function pc(t,n){return da(db.count(Eb(t),t),n,2)}function dc(t,n){return da(t.getUTCFullYear()%100,n,2)}function vc(t,n){return da(t.getUTCFullYear()%1e4,n,4)}function gc(){return"+0000"}function yc(){return"%"}function _c(t){return+t}function mc(t){return Math.floor(+t/1e3)}function xc(n){return Cb=pa(n),t.timeFormat=Cb.format,t.timeParse=Cb.parse,t.utcFormat=Cb.utcFormat,t.utcParse=Cb.utcParse,Cb}function bc(t){return t.toISOString()}function wc(t){var n=new Date(t);return isNaN(n)?null:n}function Mc(t){return new Date(t)}function 
Tc(t){return t instanceof Date?+t:+new Date(+t)}function Nc(t,n,e,r,o,u,a,c,s){function f(i){return(a(i)1?0:t<-1?gw:Math.acos(t)}function Ec(t){return t>=1?yw:t<=-1?-yw:Math.asin(t)}function Cc(t){return t.innerRadius}function zc(t){return t.outerRadius}function Pc(t){return t.startAngle}function Rc(t){return t.endAngle}function Lc(t){return t&&t.padAngle}function Dc(t,n,e,r,i,o,u,a){var c=e-t,s=r-n,f=u-i,l=a-o,h=(f*(n-o)-l*(t-i))/(l*c-f*s);return[t+h*c,n+h*s]}function qc(t,n,e,r,i,o,u){var a=t-e,c=n-r,s=(u?o:-o)/dw(a*a+c*c),f=s*c,l=-s*a,h=t+f,p=n+l,d=e+f,v=r+l,g=(h+d)/2,y=(p+v)/2,_=d-h,m=v-p,x=_*_+m*m,b=i-o,w=h*v-d*p,M=(m<0?-1:1)*dw(lw(0,b*b*x-w*w)),T=(w*m-_*M)/x,N=(-w*_-m*M)/x,k=(w*m+_*M)/x,S=(-w*_+m*M)/x,A=T-g,E=N-y,C=k-g,z=S-y;return A*A+E*E>C*C+z*z&&(T=k,N=S),{cx:T,cy:N,x01:-f,y01:-l,x11:T*(i/b-1),y11:N*(i/b-1)}}function Uc(t){this._context=t}function Oc(t){return t[0]}function Fc(t){return t[1]}function Yc(t){this._curve=t}function Ic(t){function n(n){return new Yc(t(n))}return n._curve=t,n}function Hc(t){var n=t.curve;return t.angle=t.x,delete t.x,t.radius=t.y,delete t.y,t.curve=function(t){return arguments.length?n(Ic(t)):n()._curve},t}function Bc(t){return t.source}function jc(t){return t.target}function Xc(t){function n(){var n,a=Cw.call(arguments),c=e.apply(this,a),s=r.apply(this,a);if(u||(u=n=Oe()),t(u,+i.apply(this,(a[0]=c,a)),+o.apply(this,a),+i.apply(this,(a[0]=s,a)),+o.apply(this,a)),n)return u=null,n+""||null}var e=Bc,r=jc,i=Oc,o=Fc,u=null;return n.source=function(t){return arguments.length?(e=t,n):e},n.target=function(t){return arguments.length?(r=t,n):r},n.x=function(t){return arguments.length?(i="function"==typeof t?t:aw(+t),n):i},n.y=function(t){return arguments.length?(o="function"==typeof t?t:aw(+t),n):o},n.context=function(t){return arguments.length?(u=null==t?null:t,n):u},n}function Wc(t,n,e,r,i){t.moveTo(n,e),t.bezierCurveTo(n=(n+r)/2,e,n,i,r,i)}function Vc(t,n,e,r,i){t.moveTo(n,e),t.bezierCurveTo(n,e=(e+i)/2,r,e,r,i)}function $c(t,n,e,r,i){var o=Ew(n,e),u=Ew(n,e=(e+i)/2),a=Ew(r,e),c=Ew(r,i);t.moveTo(o[0],o[1]),t.bezierCurveTo(u[0],u[1],a[0],a[1],c[0],c[1])}function Zc(){return Xc(Wc)}function Gc(){return Xc(Vc)}function Qc(){var t=Xc($c);return t.angle=t.x,delete t.x,t.radius=t.y,delete t.y,t}function Jc(t,n,e){t._context.bezierCurveTo((2*t._x0+t._x1)/3,(2*t._y0+t._y1)/3,(t._x0+2*t._x1)/3,(t._y0+2*t._y1)/3,(t._x0+4*t._x1+n)/6,(t._y0+4*t._y1+e)/6)}function Kc(t){this._context=t}function ts(t){this._context=t}function ns(t){this._context=t}function es(t,n){this._basis=new Kc(t),this._beta=n}function rs(t,n,e){t._context.bezierCurveTo(t._x1+t._k*(t._x2-t._x0),t._y1+t._k*(t._y2-t._y0),t._x2+t._k*(t._x1-n),t._y2+t._k*(t._y1-e),t._x2,t._y2)}function is(t,n){this._context=t,this._k=(1-n)/6}function os(t,n){this._context=t,this._k=(1-n)/6}function us(t,n){this._context=t,this._k=(1-n)/6}function as(t,n,e){var r=t._x1,i=t._y1,o=t._x2,u=t._y2;if(t._l01_a>vw){var a=2*t._l01_2a+3*t._l01_a*t._l12_a+t._l12_2a,c=3*t._l01_a*(t._l01_a+t._l12_a);r=(r*a-t._x0*t._l12_2a+t._x2*t._l01_2a)/c,i=(i*a-t._y0*t._l12_2a+t._y2*t._l01_2a)/c}if(t._l23_a>vw){var s=2*t._l23_2a+3*t._l23_a*t._l12_a+t._l12_2a,f=3*t._l23_a*(t._l23_a+t._l12_a);o=(o*s+t._x1*t._l23_2a-n*t._l12_2a)/f,u=(u*s+t._y1*t._l23_2a-e*t._l12_2a)/f}t._context.bezierCurveTo(r,i,o,u,t._x2,t._y2)}function cs(t,n){this._context=t,this._alpha=n}function ss(t,n){this._context=t,this._alpha=n}function fs(t,n){this._context=t,this._alpha=n}function ls(t){this._context=t}function hs(t){return t<0?-1:1}function ps(t,n,e){var 
r=t._x1-t._x0,i=n-t._x1,o=(t._y1-t._y0)/(r||i<0&&-0),u=(e-t._y1)/(i||r<0&&-0),a=(o*i+u*r)/(r+i);return(hs(o)+hs(u))*Math.min(Math.abs(o),Math.abs(u),.5*Math.abs(a))||0}function ds(t,n){var e=t._x1-t._x0;return e?(3*(t._y1-t._y0)/e-n)/2:n}function vs(t,n,e){var r=t._x0,i=t._y0,o=t._x1,u=t._y1,a=(o-r)/3;t._context.bezierCurveTo(r+a,i+a*n,o-a,u-a*e,o,u)}function gs(t){this._context=t}function ys(t){this._context=new _s(t)}function _s(t){this._context=t}function ms(t){return new gs(t)}function xs(t){return new ys(t)}function bs(t){this._context=t}function ws(t){var n,e,r=t.length-1,i=new Array(r),o=new Array(r),u=new Array(r);for(i[0]=0,o[0]=2,u[0]=t[0]+2*t[1],n=1;n=0;--n)i[n]=(u[n]-i[n+1])/o[n];for(o[r-1]=(t[r]+i[r-1])/2,n=0;n0)){if(o/=d,d<0){if(o0){if(o>p)return;o>h&&(h=o)}if(o=r-c,d||!(o<0)){if(o/=d,d<0){if(o>p)return;o>h&&(h=o)}else if(d>0){if(o0)){if(o/=v,v<0){if(o0){if(o>p)return;o>h&&(h=o)}if(o=i-s,v||!(o<0)){if(o/=v,v<0){if(o>p)return;o>h&&(h=o)}else if(v>0){if(o0||p<1)||(h>0&&(t[0]=[c+h*d,s+h*v]),p<1&&(t[1]=[c+p*d,s+p*v]),!0)}}}}}function Fs(t,n,e,r,i){var o=t[1];if(o)return!0;var u,a,c=t[0],s=t.left,f=t.right,l=s[0],h=s[1],p=f[0],d=f[1],v=(l+p)/2,g=(h+d)/2;if(d===h){if(v=r)return;if(l>p){if(c){if(c[1]>=i)return}else c=[v,e];o=[v,i]}else{if(c){if(c[1]1)if(l>p){if(c){if(c[1]>=i)return}else c=[(e-a)/u,e];o=[(i-a)/u,i]}else{if(c){if(c[1]=r)return}else c=[n,u*n+a];o=[r,u*r+a]}else{if(c){if(c[0]EM||Math.abs(i[0][1]-i[1][1])>EM)||delete kM[o]}function Is(t){return TM[t.index]={site:t,halfedges:[]}}function Hs(t,n){var e=t.site,r=n.left,i=n.right;return e===i&&(i=r,r=e),i?Math.atan2(i[1]-r[1],i[0]-r[0]):(e===r?(r=n[1],i=n[0]):(r=n[0],i=n[1]),Math.atan2(r[0]-i[0],i[1]-r[1]))}function Bs(t,n){return n[+(n.left!==t.site)]}function js(t,n){return n[+(n.left===t.site)]}function Xs(){for(var t,n,e,r,i=0,o=TM.length;iEM||Math.abs(v-h)>EM)&&(c.splice(a,0,kM.push(qs(u,p,Math.abs(d-t)EM?[t,Math.abs(l-t)EM?[Math.abs(h-r)EM?[e,Math.abs(l-e)EM?[Math.abs(h-n)=-CM)){var p=c*c+s*s,d=f*f+l*l,v=(l*p-s*d)/h,g=(c*d-f*p)/h,y=SM.pop()||new Vs;y.arc=t,y.site=i,y.x=v+u,y.y=(y.cy=g+a)+Math.sqrt(v*v+g*g),t.circle=y;for(var _=null,m=NM._;m;)if(y.yEM)a=a.L;else{if(!((i=o-ef(a,u))>EM)){r>-EM?(n=a.P,e=a):i>-EM?(n=a,e=a.N):n=e=a;break}if(!a.R){n=a;break}a=a.R}Is(t);var c=Qs(t);if(MM.insert(n,c),n||e){if(n===e)return Zs(n),e=Qs(n.site),MM.insert(c,e),c.edge=e.edge=Ds(n.site,c.site),$s(n),void $s(e);if(!e)return void(c.edge=Ds(n.site,c.site));Zs(n),Zs(e);var s=n.site,f=s[0],l=s[1],h=t[0]-f,p=t[1]-l,d=e.site,v=d[0]-f,g=d[1]-l,y=2*(h*g-p*v),_=h*h+p*p,m=v*v+g*g,x=[(g*_-p*m)/y+f,(h*m-v*_)/y+l];Us(e.edge,s,d,x),c.edge=Ds(s,t,null,x),e.edge=Ds(t,d,null,x),$s(n),$s(e)}}function nf(t,n){var e=t.site,r=e[0],i=e[1],o=i-n;if(!o)return r;var u=t.P;if(!u)return-1/0;e=u.site;var a=e[0],c=e[1],s=c-n;if(!s)return a;var f=a-r,l=1/o-1/s,h=f/s;return l?(-h+Math.sqrt(h*h-2*l*(f*f/(-2*s)-c+s/2+i-o/2)))/l+r:(r+a)/2}function ef(t,n){var e=t.N;if(e)return nf(e,n);var r=t.site;return r[1]===n?r[0]:1/0}function rf(t,n,e){return(t[0]-e[0])*(n[1]-t[1])-(t[0]-n[0])*(e[1]-t[1])}function of(t,n){return n[1]-t[1]||n[0]-t[0]}function uf(t,n){var e,r,i,o=t.sort(of).pop();for(kM=[],TM=new Array(t.length),MM=new Cs,NM=new Cs;;)if(i=wM,o&&(!i||o[1]r?(r+i)/2:Math.min(0,r)||Math.max(0,i),u>o?(o+u)/2:Math.min(0,o)||Math.max(0,u))}function yf(){return null}function _f(){for(var t=arguments,n=0,e=t.length;no;h!=n&&(h=n,p.classed("graph-scroll-below",h));var 
e=!h&&pageYOffset>d;l!=e&&(l=e,p.classed("graph-scroll-fixed",l)),h&&(t=i-1),c!=t&&(a.classed("graph-scroll-active",function(n,e){return e===t}),u.call("active",null,t),c=t)}function e(){s=[];var t;a.each(function(n,e){e||(t=this.getBoundingClientRect().top),s.push(this.getBoundingClientRect().top-t)});var n=p.node().getBoundingClientRect(),e=f.node()?f.node().getBoundingClientRect().height:0;d=n.top+pageYOffset,o=n.bottom-e+pageYOffset}function r(){if(l){var n;switch(t.event.keyCode){case 39:if(t.event.metaKey)return;case 40:case 34:n=t.event.metaKey?1/0:1;break;case 37:if(t.event.metaKey)return;case 38:case 33:n=t.event.metaKey?-1/0:-1;break;case 32:n=t.event.shiftKey?-1:1;break;default:return}var e=Math.max(0,Math.min(c+n,i-1));e!=c&&(fh(document.documentElement).interrupt().transition().duration(500).tween("scroll",function(){var t=cp(pageYOffset,s[e]+d);return function(n){scrollTo(0,t(n))}}),t.event.preventDefault())}}var i,o,u=g("scroll","active"),a=fh("null"),c=NaN,s=[],f=fh("null"),l=null,h=null,p=fh("body"),d=0,v=Math.random(),y=200,_={};return _.container=function(t){return t?(p=t,_):p},_.graph=function(t){return t?(f=t,_):f},_.eventId=function(t){return t?(v=t,_):v},_.sections=function(t){return t?(a=t,i=a.size(),fh(window).on("scroll.gscroll"+v,n).on("resize.gscroll"+v,e).on("keydown.gscroll"+v,r),e(),window["gscrollTimer"+v]&&window["gscrollTimer"+v].stop(),window["gscrollTimer"+v]=bn(n),_):a},_.on=function(){var t=u.on.apply(u,arguments);return t===u?_:t},_.offset=function(t){return t?(y=t,_):y},_}var Mf=function(t,n){return tn?1:t>=n?0:NaN},Tf=function(t){return 1===t.length&&(t=n(t)),{left:function(n,e,r,i){for(null==r&&(r=0),null==i&&(i=n.length);r>>1;t(n[o],e)<0?r=o+1:i=o}return r},right:function(n,e,r,i){for(null==r&&(r=0),null==i&&(i=n.length);r>>1;t(n[o],e)>0?i=o:r=o+1}return r}}},Nf=Tf(Mf),kf=Nf.right,Sf=Nf.left,Af=function(t,n){null==n&&(n=e);for(var r=0,i=t.length-1,o=t[0],u=new Array(i<0?0:i);rt?1:n>=t?0:NaN},zf=function(t){return null===t?NaN:+t},Pf=function(t,n){var e,r,i=t.length,o=0,u=-1,a=0,c=0;if(null==n)for(;++u1)return c/(o-1)},Rf=function(t,n){var e=Pf(t,n);return e?Math.sqrt(e):e},Lf=function(t,n){var e,r,i,o=t.length,u=-1;if(null==n){for(;++u=e)for(r=i=e;++ue&&(r=e),i=e)for(r=i=e;++ue&&(r=e),i0)return[t];if((i=n0)for(t=Math.ceil(t/a),n=Math.floor(n/a),u=new Array(o=Math.ceil(n-t+1));++cl;)h.pop(),--p;var d,v=new Array(p+1);for(o=0;o<=p;++o)d=v[o]=[],d.x0=o>0?h[o-1]:f,d.x1=o=1)return+e(t[r-1],r-1,t);var r,i=(r-1)*n,o=Math.floor(i),u=+e(t[o],o,t);return u+(+e(t[o+1],o+1,t)-u)*(i-o)}},$f=function(t,n,e){return t=Uf.call(t,zf).sort(Mf),Math.ceil((e-n)/(2*(Vf(t,.75)-Vf(t,.25))*Math.pow(t.length,-1/3)))},Zf=function(t,n,e){return Math.ceil((e-n)/(3.5*Rf(t)*Math.pow(t.length,-1/3)))},Gf=function(t,n){var e,r,i=t.length,o=-1;if(null==n){for(;++o=e)for(r=e;++or&&(r=e)}else for(;++o=e)for(r=e;++or&&(r=e);return r},Qf=function(t,n){var e,r=t.length,i=r,o=-1,u=0;if(null==n)for(;++o=0;)for(r=t[i],n=r.length;--n>=0;)e[--u]=r[n];return e},tl=function(t,n){var e,r,i=t.length,o=-1;if(null==n){for(;++o=e)for(r=e;++oe&&(r=e)}else for(;++o=e)for(r=e;++oe&&(r=e);return r},nl=function(t,n){for(var e=n.length,r=new Array(e);e--;)r[e]=t[n[e]];return r},el=function(t,n){if(e=t.length){var e,r,i=0,o=0,u=t[o];for(null==n&&(n=Mf);++i0)for(var e,r,i=new Array(e),o=0;o=0&&"xmlns"!==(n=t.slice(0,e))&&(t=t.slice(e+1)),gl.hasOwnProperty(n)?{space:gl[n],local:t}:t},_l=function(t){var n=yl(t);return(n.local?w:b)(n)},ml=0;T.prototype=M.prototype={constructor:T,get:function(t){for(var 
n=this._;!(n in t);)if(!(t=t.parentNode))return;return t[n]},set:function(t,n){return t[this._]=n},remove:function(t){return this._ in t&&delete t[this._]},toString:function(){return this._}};var xl=function(t){return function(){return this.matches(t)}};if("undefined"!=typeof document){var bl=document.documentElement;if(!bl.matches){var wl=bl.webkitMatchesSelector||bl.msMatchesSelector||bl.mozMatchesSelector||bl.oMatchesSelector;xl=function(t){return function(){return wl.call(this,t)}}}}var Ml=xl,Tl={};if(t.event=null,"undefined"!=typeof document){"onmouseenter"in document.documentElement||(Tl={mouseenter:"mouseover",mouseleave:"mouseout"})}var Nl=function(t,n,e){var r,i,o=S(t+""),u=o.length;{if(!(arguments.length<2)){for(a=n?E:A,null==e&&(e=!1),r=0;r=x&&(x=m+1);!(_=g[x])&&++x=0;)(r=i[o])&&(u&&u!==r.nextSibling&&u.parentNode.insertBefore(r,u),u=r);return this},Hl=function(t){function n(n,e){return n&&e?t(n.__data__,e.__data__):!n-!e}t||(t=q);for(var e=this._groups,r=e.length,i=new Array(r),o=0;o1?this.each((null==n?B:"function"==typeof n?X:j)(t,n,null==e?"":e)):W(this.node(),t)},Jl=function(t,n){return arguments.length>1?this.each((null==n?V:"function"==typeof n?Z:$)(t,n)):this.node()[t]};J.prototype={add:function(t){this._names.indexOf(t)<0&&(this._names.push(t),this._node.setAttribute("class",this._names.join(" ")))},remove:function(t){var n=this._names.indexOf(t);n>=0&&(this._names.splice(n,1),this._node.setAttribute("class",this._names.join(" ")))},contains:function(t){return this._names.indexOf(t)>=0}};var Kl=function(t,n){var e=G(t+"");if(arguments.length<2){for(var r=Q(this.node()),i=-1,o=e.length;++ib}_.mouse("drag")}function i(){fh(t.event.view).on("mousemove.drag mouseup.drag",null),xt(t.event.view,l),dh(),_.mouse("end")}function o(){if(p.apply(this,arguments)){var n,e,r=t.event.changedTouches,i=d.apply(this,arguments),o=r.length;for(n=0;n=240?t-240:t+120,i,r),Ot(t,i,r),Ot(t<120?t+240:t-120,i,r),this.opacity)},displayable:function(){return(0<=this.s&&this.s<=1||isNaN(this.s))&&0<=this.l&&this.l<=1&&0<=this.opacity&&this.opacity<=1}}));var zh=Math.PI/180,Ph=180/Math.PI,Rh=.95047,Lh=1,Dh=1.08883,qh=4/29,Uh=6/29,Oh=3*Uh*Uh,Fh=Uh*Uh*Uh;_h(It,Yt,kt(St,{brighter:function(t){return new It(this.l+18*(null==t?1:t),this.a,this.b,this.opacity)},darker:function(t){return new It(this.l-18*(null==t?1:t),this.a,this.b,this.opacity)},rgb:function(){var t=(this.l+16)/116,n=isNaN(this.a)?t:t+this.a/500,e=isNaN(this.b)?t:t-this.b/200;return t=Lh*Bt(t),n=Rh*Bt(n),e=Dh*Bt(e),new Rt(jt(3.2404542*n-1.5371385*t-.4985314*e),jt(-.969266*n+1.8760108*t+.041556*e),jt(.0556434*n-.2040259*t+1.0572252*e),this.opacity)}})),_h($t,Vt,kt(St,{brighter:function(t){return new $t(this.h,this.c,this.l+18*(null==t?1:t),this.opacity)},darker:function(t){return new $t(this.h,this.c,this.l-18*(null==t?1:t),this.opacity)},rgb:function(){return Ft(this).rgb()}}));var Yh=-.14861,Ih=1.78277,Hh=-.29227,Bh=-.90649,jh=1.97294,Xh=jh*Bh,Wh=jh*Ih,Vh=Ih*Hh-Bh*Yh;_h(Qt,Gt,kt(St,{brighter:function(t){return t=null==t?1/.7:Math.pow(1/.7,t),new Qt(this.h,this.s,this.l*t,this.opacity)},darker:function(t){return t=null==t?.7:Math.pow(.7,t),new Qt(this.h,this.s,this.l*t,this.opacity)},rgb:function(){var t=isNaN(this.h)?0:(this.h+120)*zh,n=+this.l,e=isNaN(this.s)?0:this.s*n*(1-n),r=Math.cos(t),i=Math.sin(t);return new Rt(255*(n+e*(Yh*r+Ih*i)),255*(n+e*(Hh*r+Bh*i)),255*(n+e*(jh*r)),this.opacity)}}));var $h,Zh,Gh,Qh,Jh,Kh,tp=function(t){var n=t.length-1;return function(e){var 
r=e<=0?e=0:e>=1?(e=1,n-1):Math.floor(e*n),i=t[r],o=t[r+1],u=r>0?t[r-1]:2*i-o,a=ro&&(i=n.slice(o,i),a[u]?a[u]+=i:a[++u]=i),(e=e[0])===(r=r[0])?a[u]?a[u]+=r:a[++u]=r:(a[++u]=null,c.push({i:u,x:cp(e,r)})),o=lp.lastIndex;return ojp&&e.stateBp&&e.name===n)return new re([[t]],Bd,n,+r)}return null},Xd=function(t){return function(){return t}},Wd=function(t,n,e){this.target=t,this.type=n,this.selection=e},Vd=function(){t.event.preventDefault(),t.event.stopImmediatePropagation()},$d={name:"drag"},Zd={name:"space"},Gd={name:"handle"},Qd={name:"center"},Jd={name:"x",handles:["e","w"].map(Se),input:function(t,n){return t&&[[t[0],n[0][1]],[t[1],n[1][1]]]},output:function(t){return t&&[t[0][0],t[1][0]]}},Kd={name:"y",handles:["n","s"].map(Se),input:function(t,n){return t&&[[n[0][0],t[0]],[n[1][0],t[1]]]},output:function(t){return t&&[t[0][1],t[1][1]]}},tv={name:"xy",handles:["n","e","s","w","nw","ne","se","sw"].map(Se),input:function(t){return t},output:function(t){return t}},nv={overlay:"crosshair",selection:"move",n:"ns-resize",e:"ew-resize",s:"ns-resize",w:"ew-resize",nw:"nwse-resize",ne:"nesw-resize",se:"nwse-resize",sw:"nesw-resize"},ev={e:"w",w:"e",nw:"ne",ne:"nw",se:"sw",sw:"se"},rv={n:"s",s:"n",nw:"sw",ne:"se",se:"ne",sw:"nw"},iv={overlay:1,selection:1,n:null,e:1,s:null,w:-1,nw:-1,ne:1,se:1,sw:-1},ov={overlay:1,selection:1,n:-1,e:null,s:1,w:null,nw:-1,ne:-1,se:1,sw:1},uv=function(){return De(tv)},av=Math.cos,cv=Math.sin,sv=Math.PI,fv=sv/2,lv=2*sv,hv=Math.max,pv=function(){function t(t){var o,u,a,c,s,f,l=t.length,h=[],p=Yf(l),d=[],v=[],g=v.groups=new Array(l),y=new Array(l*l);for(o=0,s=-1;++s1e-6)if(Math.abs(f*a-c*s)>1e-6&&i){var h=e-o,p=r-u,d=a*a+c*c,v=h*h+p*p,g=Math.sqrt(d),y=Math.sqrt(l),_=i*Math.tan((gv-Math.acos((d+l-v)/(2*g*y)))/2),m=_/y,x=_/g;Math.abs(m-1)>1e-6&&(this._+="L"+(t+m*s)+","+(n+m*f)),this._+="A"+i+","+i+",0,0,"+ +(f*h>s*p)+","+(this._x1=t+x*a)+","+(this._y1=n+x*c)}else this._+="L"+(this._x1=t)+","+(this._y1=n);else;},arc:function(t,n,e,r,i,o){t=+t,n=+n,e=+e;var u=e*Math.cos(r),a=e*Math.sin(r),c=t+u,s=n+a,f=1^o,l=o?r-i:i-r;if(e<0)throw new Error("negative radius: "+e);null===this._x1?this._+="M"+c+","+s:(Math.abs(this._x1-c)>1e-6||Math.abs(this._y1-s)>1e-6)&&(this._+="L"+c+","+s),e&&(l<0&&(l=l%yv+yv),l>_v?this._+="A"+e+","+e+",0,1,"+f+","+(t-u)+","+(n-a)+"A"+e+","+e+",0,1,"+f+","+(this._x1=c)+","+(this._y1=s):l>1e-6&&(this._+="A"+e+","+e+",0,"+ +(l>=gv)+","+f+","+(this._x1=t+e*Math.cos(i))+","+(this._y1=n+e*Math.sin(i))))},rect:function(t,n,e,r){this._+="M"+(this._x0=this._x1=+t)+","+(this._y0=this._y1=+n)+"h"+ +e+"v"+ +r+"h"+-e+"Z"},toString:function(){return this._}};var mv=function(){function t(){var t,a=dv.call(arguments),c=n.apply(this,a),s=e.apply(this,a),f=+r.apply(this,(a[0]=c,a)),l=i.apply(this,a)-fv,h=o.apply(this,a)-fv,p=f*av(l),d=f*cv(l),v=+r.apply(this,(a[0]=s,a)),g=i.apply(this,a)-fv,y=o.apply(this,a)-fv;if(u||(u=t=Oe()),u.moveTo(p,d),u.arc(0,0,f,l,h),l===g&&h===y||(u.quadraticCurveTo(0,0,v*av(g),v*cv(g)),u.arc(0,0,v,g,y)),u.quadraticCurveTo(0,0,p,d),u.closePath(),t)return u=null,t+""||null}var n=Fe,e=Ye,r=Ie,i=He,o=Be,u=null;return t.radius=function(n){return arguments.length?(r="function"==typeof n?n:vv(+n),t):r},t.startAngle=function(n){return arguments.length?(i="function"==typeof n?n:vv(+n),t):i},t.endAngle=function(n){return arguments.length?(o="function"==typeof n?n:vv(+n),t):o},t.source=function(e){return arguments.length?(n=e,t):n},t.target=function(n){return arguments.length?(e=n,t):e},t.context=function(n){return 
arguments.length?(u=null==n?null:n,t):u},t};je.prototype=Xe.prototype={constructor:je,has:function(t){return"$"+t in this},get:function(t){return this["$"+t]},set:function(t,n){return this["$"+t]=n,this},remove:function(t){var n="$"+t;return n in this&&delete this[n]},clear:function(){for(var t in this)"$"===t[0]&&delete this[t]},keys:function(){var t=[];for(var n in this)"$"===n[0]&&t.push(n.slice(1));return t},values:function(){var t=[];for(var n in this)"$"===n[0]&&t.push(this[n]);return t},entries:function(){var t=[];for(var n in this)"$"===n[0]&&t.push({key:n.slice(1),value:this[n]});return t},size:function(){var t=0;for(var n in this)"$"===n[0]&&++t;return t},empty:function(){for(var t in this)if("$"===t[0])return!1;return!0},each:function(t){for(var n in this)"$"===n[0]&&t(this[n],n.slice(1),this)}};var xv=function(){function t(n,i,u,a){if(i>=o.length)return null!=e&&n.sort(e),null!=r?r(n):n;for(var c,s,f,l=-1,h=n.length,p=o[i++],d=Xe(),v=u();++lo.length)return t;var i,a=u[e-1];return null!=r&&e>=o.length?i=t.entries():(i=[],t.each(function(t,r){i.push({key:r,values:n(t,e)})})),null!=a?i.sort(function(t,n){return a(t.key,n.key)}):i}var e,r,i,o=[],u=[];return i={object:function(n){return t(n,0,We,Ve)},map:function(n){return t(n,0,$e,Ze)},entries:function(e){return n(t(e,0,$e,Ze),0)},key:function(t){return o.push(t),i},sortKeys:function(t){return u[o.length-1]=t,i},sortValues:function(t){return e=t,i},rollup:function(t){return r=t,i}}},bv=Xe.prototype;Ge.prototype=Qe.prototype={constructor:Ge,has:bv.has,add:function(t){return t+="",this["$"+t]=t,this},remove:bv.remove,clear:bv.clear,values:bv.keys,size:bv.size,empty:bv.empty,each:bv.each};var wv=function(t){var n=[];for(var e in t)n.push(e);return n},Mv=function(t){var n=[];for(var e in t)n.push(t[e]);return n},Tv=function(t){var n=[];for(var e in t)n.push({key:e,value:t[e]});return n},Nv={},kv={},Sv=34,Av=10,Ev=13,Cv=function(t){function n(t,n){var r,i,o=e(t,function(t,e){if(r)return r(t,e-1);i=t,r=n?Ke(t,n):Je(t)});return o.columns=i||[],o}function e(t,n){function e(){if(s)return kv;if(f)return f=!1,Nv;var n,e,r=u;if(t.charCodeAt(r)===Sv){for(;u++=o?s=!0:(e=t.charCodeAt(u++))===Av?f=!0:e===Ev&&(f=!0,t.charCodeAt(u)===Av&&++u),t.slice(r+1,n-1).replace(/""/g,'"')}for(;ut||t>i||r>n||n>o))return this;var u,a,c=i-e,s=this._root;switch(a=(n<(r+o)/2)<<1|t<(e+i)/2){case 0:do{u=new Array(4),u[a]=s,s=u}while(c*=2,i=e+c,o=r+c,t>i||n>o);break;case 1:do{u=new Array(4),u[a]=s,s=u}while(c*=2,e=i-c,o=r+c,e>t||n>o);break;case 2:do{u=new Array(4),u[a]=s,s=u}while(c*=2,i=e+c,r=o-c,t>i||r>n);break;case 3:do{u=new Array(4),u[a]=s,s=u}while(c*=2,e=i-c,r=o-c,e>t||r>n)}this._root&&this._root.length&&(this._root=s)}return this._x0=e,this._y0=r,this._x1=i,this._y1=o,this},Wv=function(){var t=[];return this.visit(function(n){if(!n.length)do{t.push(n.data)}while(n=n.next)}),t},Vv=function(t){return arguments.length?this.cover(+t[0][0],+t[0][1]).cover(+t[1][0],+t[1][1]):isNaN(this._x0)?void 0:[[this._x0,this._y0],[this._x1,this._y1]]},$v=function(t,n,e,r,i){this.node=t,this.x0=n,this.y0=e,this.x1=r,this.y1=i},Zv=function(t,n,e){var r,i,o,u,a,c,s,f=this._x0,l=this._y0,h=this._x1,p=this._y1,d=[],v=this._root;for(v&&d.push(new $v(v,f,l,h,p)),null==e?e=1/0:(f=t-e,l=n-e,h=t+e,p=n+e,e*=e);c=d.pop();)if(!(!(v=c.node)||(i=c.x0)>h||(o=c.y0)>p||(u=c.x1)=y)<<1|t>=g)&&(c=d[d.length-1],d[d.length-1]=d[d.length-1-s],d[d.length-1-s]=c)}else{var 
_=t-+this._x.call(null,v.data),m=n-+this._y.call(null,v.data),x=_*_+m*m;if(x=(a=(d+g)/2))?d=a:g=a,(f=u>=(c=(v+y)/2))?v=c:y=c,n=p,!(p=p[l=f<<1|s]))return this;if(!p.length)break;(n[l+1&3]||n[l+2&3]||n[l+3&3])&&(e=n,h=l)}for(;p.data!==t;)if(r=p,!(p=p.next))return this;return(i=p.next)&&delete p.next,r?(i?r.next=i:delete r.next,this):n?(i?n[l]=i:delete n[l],(p=n[0]||n[1]||n[2]||n[3])&&p===(n[3]||n[2]||n[1]||n[0])&&!p.length&&(e?e[h]=p:this._root=p),this):(this._root=i,this)},Qv=function(){return this._root},Jv=function(){var t=0;return this.visit(function(n){if(!n.length)do{++t}while(n=n.next)}),t},Kv=function(t){var n,e,r,i,o,u,a=[],c=this._root;for(c&&a.push(new $v(c,this._x0,this._y0,this._x1,this._y1));n=a.pop();)if(!t(c=n.node,r=n.x0,i=n.y0,o=n.x1,u=n.y1)&&c.length){var s=(r+o)/2,f=(i+u)/2;(e=c[3])&&a.push(new $v(e,s,f,o,u)),(e=c[2])&&a.push(new $v(e,r,f,s,u)),(e=c[1])&&a.push(new $v(e,s,i,o,f)),(e=c[0])&&a.push(new $v(e,r,i,s,f))}return this},tg=function(t){var n,e=[],r=[];for(this._root&&e.push(new $v(this._root,this._x0,this._y0,this._x1,this._y1));n=e.pop();){var i=n.node;if(i.length){var o,u=n.x0,a=n.y0,c=n.x1,s=n.y1,f=(u+c)/2,l=(a+s)/2;(o=i[0])&&e.push(new $v(o,u,a,f,l)),(o=i[1])&&e.push(new $v(o,f,a,c,l)),(o=i[2])&&e.push(new $v(o,u,l,f,s)),(o=i[3])&&e.push(new $v(o,f,l,c,s))}r.push(n)}for(;n=r.pop();)t(n.node,n.x0,n.y0,n.x1,n.y1);return this},ng=function(t){return arguments.length?(this._x=t,this):this._x},eg=function(t){return arguments.length?(this._y=t,this):this._y},rg=ur.prototype=ar.prototype;rg.copy=function(){var t,n,e=new ar(this._x,this._y,this._x0,this._y0,this._x1,this._y1),r=this._root;if(!r)return e;if(!r.length)return e._root=cr(r),e;for(t=[{source:r,target:e._root=new Array(4)}];r=t.pop();)for(var i=0;i<4;++i)(n=r.source[i])&&(n.length?t.push({source:n,target:r.target[i]=new Array(4)}):r.target[i]=cr(n));return e},rg.add=jv,rg.addAll=er,rg.cover=Xv,rg.data=Wv,rg.extent=Vv,rg.find=Zv,rg.remove=Gv,rg.removeAll=rr,rg.root=Qv,rg.size=Jv,rg.visit=Kv,rg.visitAfter=tg,rg.x=ng,rg.y=eg;var ig,og=function(t){function n(){function t(t,n,e,r,i){var o=t.data,a=t.r,p=l+a;{if(!o)return n>s+p||rf+p||ic.index){var d=s-o.x-o.vx,v=f-o.y-o.vy,g=d*d+v*v;gt.r&&(t.r=t[n].r)}function r(){if(i){var n,e,r=i.length;for(o=new Array(r),n=0;n1?(null==n?l.remove(t):l.set(t,i(n)),o):l.get(t)},find:function(n,e,r){var i,o,u,a,c,s=0,f=t.length;for(null==r?r=1/0:r*=r,s=0;s1?(p.on(t,n),o):p.on(t)}}},fg=function(){function t(t){var n,a=i.length,c=ur(i,pr,dr).visitAfter(e);for(u=t,n=0;n=f)){(t.data!==o||t.next)&&(0===i&&(i=Bv(),p+=i*i),0===c&&(c=Bv(),p+=c*c),p1?r[0]+r.slice(2):r,+t.slice(e+1)]},vg=function(t){return t=dg(Math.abs(t)),t?t[1]:NaN},gg=function(t,n){return function(e,r){for(var i=e.length,o=[],u=0,a=t[0],c=0;i>0&&a>0&&(c+a+1>r&&(a=Math.max(1,r-c)),o.push(e.substring(i-=a,i+a)),!((c+=a+1)>r));)a=t[u=(u+1)%t.length];return o.reverse().join(n)}},yg=function(t){return function(n){return n.replace(/[0-9]/g,function(n){return t[+n]})}},_g=function(t,n){t=t.toPrecision(n);t:for(var e,r=t.length,i=1,o=-1;i0&&(o=0)}return o>0?t.slice(0,o)+t.slice(e+1):t},mg=function(t,n){var e=dg(t,n);if(!e)return t+"";var r=e[0],i=e[1],o=i-(ig=3*Math.max(-8,Math.min(8,Math.floor(i/3))))+1,u=r.length;return o===u?r:o>u?r+new Array(o-u+1).join("0"):o>0?r.slice(0,o)+"."+r.slice(o):"0."+new Array(1-o).join("0")+dg(t,Math.max(0,n+o-1))[0]},xg=function(t,n){var e=dg(t,n);if(!e)return t+"";var r=e[0],i=e[1];return i<0?"0."+new Array(-i).join("0")+r:r.length>i+1?r.slice(0,i+1)+"."+r.slice(i+1):r+new 
Array(i-r.length+2).join("0")},bg={"":_g,"%":function(t,n){return(100*t).toFixed(n)},b:function(t){return Math.round(t).toString(2)},c:function(t){return t+""},d:function(t){return Math.round(t).toString(10)},e:function(t,n){return t.toExponential(n)},f:function(t,n){return t.toFixed(n)},g:function(t,n){return t.toPrecision(n)},o:function(t){return Math.round(t).toString(8)},p:function(t,n){return xg(100*t,n)},r:xg,s:mg,X:function(t){return Math.round(t).toString(16).toUpperCase()},x:function(t){return Math.round(t).toString(16)}},wg=/^(?:(.)?([<>=^]))?([+\-\( ])?([$#])?(0)?(\d+)?(,)?(\.\d+)?([a-z%])?$/i;vr.prototype=gr.prototype,gr.prototype.toString=function(){return this.fill+this.align+this.sign+this.symbol+(this.zero?"0":"")+(null==this.width?"":Math.max(1,0|this.width))+(this.comma?",":"")+(null==this.precision?"":"."+Math.max(0,0|this.precision))+this.type};var Mg,Tg=function(t){return t},Ng=["y","z","a","f","p","n","µ","m","","k","M","G","T","P","E","Z","Y"],kg=function(t){function n(t){function n(t){var n,i,a,f=g,x=y;if("c"===v)x=_(t)+x,t="";else{t=+t;var b=t<0;if(t=_(Math.abs(t),d),b&&0==+t&&(b=!1),f=(b?"("===s?s:"-":"-"===s||"("===s?"":s)+f,x=x+("s"===v?Ng[8+ig/3]:"")+(b&&"("===s?")":""),m)for(n=-1,i=t.length;++n(a=t.charCodeAt(n))||a>57){x=(46===a?o+t.slice(n+1):t.slice(n))+x,t=t.slice(0,n);break}}p&&!l&&(t=r(t,1/0));var w=f.length+t.length+x.length,M=w>1)+f+t+x+M.slice(w);break;default:t=M+f+t+x}return u(t)}t=vr(t);var e=t.fill,c=t.align,s=t.sign,f=t.symbol,l=t.zero,h=t.width,p=t.comma,d=t.precision,v=t.type,g="$"===f?i[0]:"#"===f&&/[boxX]/.test(v)?"0"+v.toLowerCase():"",y="$"===f?i[1]:/[%p]/.test(v)?a:"",_=bg[v],m=!v||/[defgprs%]/.test(v);return d=null==d?v?6:12:/[gprs]/.test(v)?Math.max(1,Math.min(21,d)):Math.max(0,Math.min(20,d)),n.toString=function(){return t+""},n}function e(t,e){var r=n((t=vr(t),t.type="f",t)),i=3*Math.max(-8,Math.min(8,Math.floor(vg(e)/3))),o=Math.pow(10,-i),u=Ng[8+i/3];return function(t){return r(o*t)+u}}var r=t.grouping&&t.thousands?gg(t.grouping,t.thousands):Tg,i=t.currency,o=t.decimal,u=t.numerals?yg(t.numerals):Tg,a=t.percent||"%";return{format:n,formatPrefix:e}};yr({decimal:".",thousands:",",grouping:[3],currency:["$",""]});var Sg=function(t){return Math.max(0,-vg(Math.abs(t)))},Ag=function(t,n){return Math.max(0,3*Math.max(-8,Math.min(8,Math.floor(vg(n)/3)))-vg(Math.abs(t)))},Eg=function(t,n){return t=Math.abs(t),n=Math.abs(n)-t,Math.max(0,vg(n)-vg(t))+1},Cg=function(){return new _r};_r.prototype={constructor:_r,reset:function(){this.s=this.t=0},add:function(t){mr(cy,t,this.t),mr(this,cy.s,this.s),this.s?this.t+=cy.t:this.s=cy.t},valueOf:function(){return this.s}};var zg,Pg,Rg,Lg,Dg,qg,Ug,Og,Fg,Yg,Ig,Hg,Bg,jg,Xg,Wg,Vg,$g,Zg,Gg,Qg,Jg,Kg,ty,ny,ey,ry,iy,oy,uy,ay,cy=new _r,sy=1e-6,fy=Math.PI,ly=fy/2,hy=fy/4,py=2*fy,dy=180/fy,vy=fy/180,gy=Math.abs,yy=Math.atan,_y=Math.atan2,my=Math.cos,xy=Math.ceil,by=Math.exp,wy=Math.log,My=Math.pow,Ty=Math.sin,Ny=Math.sign||function(t){return t>0?1:t<0?-1:0},ky=Math.sqrt,Sy=Math.tan,Ay={Feature:function(t,n){Tr(t.geometry,n)},FeatureCollection:function(t,n){for(var e=t.features,r=-1,i=e.length;++rsy?Fg=90:Dy<-sy&&(Ug=-90),Xg[0]=qg,Xg[1]=Og}},Uy=function(t){var n,e,r,i,o,u,a;if(Fg=Og=-(qg=Ug=1/0),jg=[],Cy(t,qy),e=jg.length){for(jg.sort(Wr),n=1,r=jg[0],o=[r];nXr(r[0],r[1])&&(r[1]=i[1]),Xr(i[0],r[1])>Xr(r[0],r[1])&&(r[0]=i[0])):o.push(r=i);for(u=-1/0,e=o.length-1,n=0,r=o[e];n<=e;r=i,++n)i=o[n],(a=Xr(r[1],i[0]))>u&&(u=a,qg=i[0],Og=r[1])}return 
jg=Xg=null,qg===1/0||Ug===1/0?[[NaN,NaN],[NaN,NaN]]:[[qg,Ug],[Og,Fg]]},Oy={sphere:Mr,point:$r,lineStart:Gr,lineEnd:Kr,polygonStart:function(){Oy.lineStart=ti,Oy.lineEnd=ni},polygonEnd:function(){Oy.lineStart=Gr,Oy.lineEnd=Kr}},Fy=function(t){Wg=Vg=$g=Zg=Gg=Qg=Jg=Kg=ty=ny=ey=0,Cy(t,Oy);var n=ty,e=ny,r=ey,i=n*n+e*e+r*r;return i<1e-12&&(n=Qg,e=Jg,r=Kg,Vg2?t[2]*vy:0),n.invert=function(n){return n=t.invert(n[0]*vy,n[1]*vy),n[0]*=dy,n[1]*=dy,n},n},t_=function(){function t(t,n){e.push(t=r(t,n)),t[0]*=dy,t[1]*=dy}function n(){var t=i.apply(this,arguments),n=o.apply(this,arguments)*vy,c=u.apply(this,arguments)*vy;return e=[],r=oi(-t[0]*vy,-t[1]*vy,0).invert,si(a,n,c,1),t={type:"Polygon",coordinates:[e]},e=r=null,t}var e,r,i=Yy([0,0]),o=Yy(90),u=Yy(6),a={point:t};return n.center=function(t){return arguments.length?(i="function"==typeof t?t:Yy([+t[0],+t[1]]),n):i},n.radius=function(t){return arguments.length?(o="function"==typeof t?t:Yy(+t),n):o},n.precision=function(t){return arguments.length?(u="function"==typeof t?t:Yy(+t),n):u},n},n_=function(){var t,n=[];return{point:function(n,e){t.push([n,e])},lineStart:function(){n.push(t=[])},lineEnd:Mr,rejoin:function(){n.length>1&&n.push(n.pop().concat(n.shift()))},result:function(){var e=n;return n=[],t=null,e}}},e_=function(t,n){return gy(t[0]-n[0])=0;--o)i.point((f=s[o])[0],f[1]);else r(h.x,h.p.x,-1,i);h=h.p}h=h.o,s=h.z,p=!p}while(!h.v);i.lineEnd()}}},i_=Cg(),o_=function(t,n){var e=n[0],r=n[1],i=[Ty(e),-my(e),0],o=0,u=0;i_.reset();for(var a=0,c=t.length;a=0?1:-1,T=M*w,N=T>fy,k=d*x;if(i_.add(_y(k*M*Ty(T),v*b+k*my(T))),o+=N?w+M*py:w,N^h>=e^_>=e){var S=Lr(Pr(l),Pr(y));Ur(S);var A=Lr(i,S);Ur(A);var E=(N^w>=0?-1:1)*br(A[2]);(r>E||r===E&&(S[0]||S[1]))&&(u+=N^w>=0?1:-1)}}return(o<-sy||o0){for(_||(i.polygonStart(),_=!0),i.lineStart(),t=0;t1&&2&o&&u.push(u.pop().concat(u.shift())),p.push(u.filter(pi))}var h,p,d,v=n(i),g=n_(),y=n(g),_=!1,m={point:o,lineStart:a,lineEnd:c,polygonStart:function(){m.point=s,m.lineStart=f,m.lineEnd=l,p=[],h=[]},polygonEnd:function(){m.point=o,m.lineStart=a,m.lineEnd=c,p=Kf(p);var t=o_(h,r);p.length?(_||(i.polygonStart(),_=!0),r_(p,di,t,e,i)):t&&(_||(i.polygonStart(),_=!0),i.lineStart(),e(null,null,1,i),i.lineEnd()),_&&(i.polygonEnd(),_=!1),p=h=null},sphere:function(){i.polygonStart(),i.lineStart(),e(null,null,1,i),i.lineEnd(),i.polygonEnd()}};return m}},a_=u_(function(){return!0},vi,yi,[-fy,-ly]),c_=function(t){function n(n,e,r,i){si(i,t,a,r,n,e)}function e(t,n){return my(t)*my(n)>u}function r(t){var n,r,u,a,f;return{lineStart:function(){a=u=!1,f=1},point:function(l,h){var p,d=[l,h],v=e(l,h),g=c?v?0:o(l,h):v?o(l+(l<0?fy:-fy),h):0;if(!n&&(a=u=v)&&t.lineStart(),v!==u&&(!(p=i(n,d))||e_(n,p)||e_(d,p))&&(d[0]+=sy,d[1]+=sy,v=e(d[0],d[1])),v!==u)f=0,v?(t.lineStart(),p=i(d,n),t.point(p[0],p[1])):(p=i(n,d),t.point(p[0],p[1]),t.lineEnd()),n=p;else if(s&&n&&c^v){var y;g&r||!(y=i(d,n,!0))||(f=0,c?(t.lineStart(),t.point(y[0][0],y[0][1]),t.point(y[1][0],y[1][1]),t.lineEnd()):(t.point(y[1][0],y[1][1]),t.lineEnd(),t.lineStart(),t.point(y[0][0],y[0][1])))}!v||n&&e_(n,d)||t.point(d[0],d[1]),n=d,u=v,r=g},lineEnd:function(){u&&t.lineEnd(),n=null},clean:function(){return f|(a&&u)<<1}}}function i(t,n,e){var r=Pr(t),i=Pr(n),o=[1,0,0],a=Lr(r,i),c=Rr(a,a),s=a[0],f=c-s*s;if(!f)return!e&&t;var l=u*c/f,h=-u*s/f,p=Lr(o,a),d=qr(o,l);Dr(d,qr(a,h));var v=p,g=Rr(d,v),y=Rr(v,v),_=g*g-y*(Rr(d,d)-1);if(!(_<0)){var m=ky(_),x=qr(v,(-g-m)/y);if(Dr(x,d),x=zr(x),!e)return x;var b,w=t[0],M=n[0],T=t[1],N=n[1];M0^x[1]<(gy(x[0]-w)fy^(w<=x[0]&&x[0]<=M)){var 
E=qr(v,(-g+m)/y);return Dr(E,d),[x,zr(E)]}}}function o(n,e){var r=c?t:fy-t,i=0;return n<-r?i|=1:n>r&&(i|=2),e<-r?i|=4:e>r&&(i|=8),i}var u=my(t),a=6*vy,c=u>0,s=gy(u)>sy;return u_(e,r,n,c?[0,-t]:[-fy,t-fy])},s_=function(t,n,e,r,i,o){var u,a=t[0],c=t[1],s=n[0],f=n[1],l=0,h=1,p=s-a,d=f-c;if(u=e-a,p||!(u>0)){if(u/=p,p<0){if(u0){if(u>h)return;u>l&&(l=u)}if(u=i-a,p||!(u<0)){if(u/=p,p<0){if(u>h)return;u>l&&(l=u)}else if(p>0){if(u0)){if(u/=d,d<0){if(u0){if(u>h)return;u>l&&(l=u)}if(u=o-c,d||!(u<0)){if(u/=d,d<0){if(u>h)return;u>l&&(l=u)}else if(d>0){if(u0&&(t[0]=a+l*p,t[1]=c+l*d),h<1&&(n[0]=a+h*p,n[1]=c+h*d),!0}}}}},f_=1e9,l_=-f_,h_=function(){var t,n,e,r=0,i=0,o=960,u=500;return e={stream:function(e){return t&&n===e?t:t=_i(r,i,o,u)(n=e)},extent:function(a){return arguments.length?(r=+a[0][0],i=+a[0][1],o=+a[1][0],u=+a[1][1],t=n=null,e):[[r,i],[o,u]]}}},p_=Cg(),d_={sphere:Mr,point:Mr,lineStart:mi,lineEnd:Mr,polygonStart:Mr,polygonEnd:Mr},v_=function(t){return p_.reset(),Cy(t,d_),+p_},g_=[null,null],y_={type:"LineString",coordinates:g_},__=function(t,n){return g_[0]=t,g_[1]=n,v_(y_)},m_={Feature:function(t,n){return Mi(t.geometry,n)},FeatureCollection:function(t,n){for(var e=t.features,r=-1,i=e.length;++r=.12&&i<.234&&r>=-.425&&r<-.214?s:i>=.166&&i<.234&&r>=-.214&&r<-.115?f:c).invert(t)},t.stream=function(t){return e&&r===t?e:e=po([c.stream(r=t),s.stream(t),f.stream(t)])},t.precision=function(t){return arguments.length?(c.precision(t),s.precision(t),f.precision(t),n()):c.precision()},t.scale=function(n){return arguments.length?(c.scale(n),s.scale(.35*n),f.scale(n),t.translate(c.translate())):c.scale()},t.translate=function(t){if(!arguments.length)return c.translate();var e=c.scale(),r=+t[0],a=+t[1];return i=c.translate(t).clipExtent([[r-.455*e,a-.238*e],[r+.455*e,a+.238*e]]).stream(l),o=s.translate([r-.307*e,a+.201*e]).clipExtent([[r-.425*e+sy,a+.12*e+sy],[r-.214*e-sy,a+.234*e-sy]]).stream(l),u=f.translate([r-.205*e,a+.212*e]).clipExtent([[r-.214*e+sy,a+.166*e+sy],[r-.115*e-sy,a+.234*e-sy]]).stream(l),n()},t.fitExtent=function(n,e){return no(t,n,e)},t.fitSize=function(n,e){return eo(t,n,e)},t.fitWidth=function(n,e){return ro(t,n,e)},t.fitHeight=function(n,e){return io(t,n,e)},t.scale(1070)},im=vo(function(t){return ky(2/(1+t))});im.invert=go(function(t){return 2*br(t/2)});var om=function(){return co(im).scale(124.75).clipAngle(179.999)},um=vo(function(t){return(t=xr(t))&&t/Ty(t)});um.invert=go(function(t){return t});var am=function(){return co(um).scale(79.4188).clipAngle(179.999)};yo.invert=function(t,n){return[t,2*yy(by(n))-ly]};var cm=function(){return _o(yo).scale(961/py)},sm=function(){return fo(xo).scale(109.5).parallels([30,30])};bo.invert=bo;var fm=function(){return co(bo).scale(152.63)},lm=function(){return fo(wo).scale(131.154).center([0,13.9389])};Mo.invert=go(yy);var hm=function(){return co(Mo).scale(144.049).clipAngle(60)},pm=function(){function t(){return i=o=null,u}var n,e,r,i,o,u,a=1,c=0,s=0,f=1,l=1,h=M_,p=null,d=M_;return u={stream:function(t){return i&&o===t?i:i=h(d(o=t))},postclip:function(i){return arguments.length?(d=i,p=n=e=r=null,t()):d},clipExtent:function(i){return arguments.length?(d=null==i?(p=n=e=r=null,M_):_i(p=+i[0][0],n=+i[0][1],e=+i[1][0],r=+i[1][1]),t()):null==p?null:[[p,n],[e,r]]},scale:function(n){return arguments.length?(h=To((a=+n)*f,a*l,c,s),t()):a},translate:function(n){return arguments.length?(h=To(a*f,a*l,c=+n[0],s=+n[1]),t()):[c,s]},reflectX:function(n){return arguments.length?(h=To(a*(f=n?-1:1),a*l,c,s),t()):f<0},reflectY:function(n){return 
arguments.length?(h=To(a*f,a*(l=n?-1:1),c,s),t()):l<0},fitExtent:function(t,n){return no(u,t,n)},fitSize:function(t,n){return eo(u,t,n)},fitWidth:function(t,n){return ro(u,t,n)},fitHeight:function(t,n){return io(u,t,n)}}};No.invert=function(t,n){ -var e,r=n,i=25;do{var o=r*r,u=o*o;r-=e=(r*(1.007226+o*(.015085+u*(.028874*o-.044475-.005916*u)))-n)/(1.007226+o*(.045255+u*(.259866*o-.311325-.005916*11*u)))}while(gy(e)>sy&&--i>0);return[t/(.8707+(o=r*r)*(o*(o*o*o*(.003971-.001529*o)-.013791)-.131979)),r]};var dm=function(){return co(No).scale(175.295)};ko.invert=go(br);var vm=function(){return co(ko).scale(249.5).clipAngle(90+sy)};So.invert=go(function(t){return 2*yy(t)});var gm=function(){return co(So).scale(250).clipAngle(142)};Ao.invert=function(t,n){return[-n,2*yy(by(t))-ly]};var ym=function(){var t=_o(Ao),n=t.center,e=t.rotate;return t.center=function(t){return arguments.length?n([-t[1],t[0]]):(t=n(),[t[1],-t[0]])},t.rotate=function(t){return arguments.length?e([t[0],t[1],t.length>2?t[2]+90:90]):(t=e(),[t[0],t[1],t[2]-90])},e([0,0,90]).scale(159.155)},_m=function(){function t(t){var o,u=0;t.eachAfter(function(t){var e=t.children;e?(t.x=Co(e),t.y=Po(e)):(t.x=o?u+=n(t,o):0,t.y=0,o=t)});var a=Lo(t),c=Do(t),s=a.x-n(a,c)/2,f=c.x+n(c,a)/2;return t.eachAfter(i?function(n){n.x=(n.x-t.x)*e,n.y=(t.y-n.y)*r}:function(n){n.x=(n.x-s)/(f-s)*e,n.y=(1-(t.y?n.y/t.y:1))*r})}var n=Eo,e=1,r=1,i=!1;return t.separation=function(e){return arguments.length?(n=e,t):n},t.size=function(n){return arguments.length?(i=!1,e=+n[0],r=+n[1],t):i?null:[e,r]},t.nodeSize=function(n){return arguments.length?(i=!0,e=+n[0],r=+n[1],t):i?[e,r]:null},t},mm=function(){return this.eachAfter(qo)},xm=function(t){var n,e,r,i,o=this,u=[o];do{for(n=u.reverse(),u=[];o=n.pop();)if(t(o),e=o.children)for(r=0,i=e.length;r=0;--e)i.push(n[e]);return this},wm=function(t){for(var n,e,r,i=this,o=[i],u=[];i=o.pop();)if(u.push(i),n=i.children)for(e=0,r=n.length;e=0;)e+=r[i].value;n.value=e})},Tm=function(t){return this.eachBefore(function(n){n.children&&n.children.sort(t)})},Nm=function(t){for(var n=this,e=Uo(n,t),r=[n];n!==e;)n=n.parent,r.push(n);for(var i=r.length;t!==e;)r.splice(i,0,t),t=t.parent;return r},km=function(){for(var t=this,n=[t];t=t.parent;)n.push(t);return n},Sm=function(){var t=[];return this.each(function(n){t.push(n)}),t},Am=function(){var t=[];return this.eachBefore(function(n){n.children||t.push(n)}),t},Em=function(){var t=this,n=[];return t.each(function(e){e!==t&&n.push({source:e.parent,target:e})}),n};Bo.prototype=Oo.prototype={constructor:Bo,count:mm,each:xm,eachAfter:wm,eachBefore:bm,sum:Mm,sort:Tm,path:Nm,ancestors:km,descendants:Sm,leaves:Am,links:Em,copy:Fo};var Cm=Array.prototype.slice,zm=function(t){for(var n,e,r=0,i=(t=jo(Cm.call(t))).length,o=[];r0)throw new Error("cycle");return o}var n=lu,e=hu;return t.id=function(e){return arguments.length?(n=ou(e),t):n},t.parentId=function(n){return arguments.length?(e=ou(n),t):e},t};mu.prototype=Object.create(Bo.prototype);var Hm=function(){function t(t){var r=xu(t);if(r.eachAfter(n),r.parent.m=-r.z,r.eachBefore(e),c)t.eachBefore(i);else{var s=t,f=t,l=t;t.eachBefore(function(t){t.xf.x&&(f=t),t.depth>l.depth&&(l=t)});var h=s===f?1:o(s,f)/2,p=h-s.x,d=u/(f.x+h+p),v=a/(l.depth||1);t.eachBefore(function(t){t.x=(t.x+p)*d,t.y=t.depth*v})}return t}function n(t){var n=t.children,e=t.parent.children,i=t.i?e[t.i-1]:null;if(n){yu(t);var u=(n[0].z+n[n.length-1].z)/2;i?(t.z=i.z+o(t._,i._),t.m=t.z-u):t.z=u}else i&&(t.z=i.z+o(t._,i._));t.parent.A=r(t,i,t.parent.A||e[0])}function 
e(t){t._.x=t.z+t.parent.m,t.m+=t.parent.m}function r(t,n,e){if(n){for(var r,i=t,u=t,a=n,c=i.parent.children[0],s=i.m,f=u.m,l=a.m,h=c.m;a=vu(a),i=du(i),a&&i;)c=du(c),u=vu(u),u.a=t,r=a.z+l-i.z-s+o(a._,i._),r>0&&(gu(_u(a,t,e),t,r),s+=r,f+=r),l+=a.m,s+=i.m,h+=c.m,f+=u.m;a&&!vu(u)&&(u.t=a,u.m+=l-f),i&&!du(c)&&(c.t=i,c.m+=s-h,e=t)}return e}function i(t){t.x*=u,t.y=t.depth*a}var o=pu,u=1,a=1,c=null;return t.separation=function(n){return arguments.length?(o=n,t):o},t.size=function(n){return arguments.length?(c=!1,u=+n[0],a=+n[1],t):c?null:[u,a]},t.nodeSize=function(n){return arguments.length?(c=!0,u=+n[0],a=+n[1],t):c?[u,a]:null},t},Bm=function(t,n,e,r,i){for(var o,u=t.children,a=-1,c=u.length,s=t.value&&(i-e)/t.value;++a1?n:1)},e}(jm),Wm=function(){function t(t){return t.x0=t.y0=0,t.x1=i,t.y1=o,t.eachBefore(n),u=[0],r&&t.eachBefore(Dm),t}function n(t){var n=u[t.depth],r=t.x0+n,i=t.y0+n,o=t.x1-n,h=t.y1-n;o=n-1){var s=c[t];return s.x0=r,s.y0=i,s.x1=u,s.y1=a,void 0}for(var l=f[t],h=e/2+l,p=t+1,d=n-1;p>>1;f[v]a-i){var _=(r*y+u*g)/e;o(t,p,g,r,i,_,a),o(p,n,y,_,i,u,a)}else{var m=(i*y+a*g)/e;o(t,p,g,r,i,u,m),o(p,n,y,r,m,u,a)}}var u,a,c=t.children,s=c.length,f=new Array(s+1);for(f[0]=a=u=0;u1?n:1)},e}(jm),Gm=function(t){for(var n,e=-1,r=t.length,i=t[r-1],o=0;++e=0;--n)s.push(t[r[o[n]][2]]);for(n=+a;na!=s>a&&u<(c-e)*(a-r)/(s-r)+e&&(f=!f),c=e,s=r;return f},nx=function(t){for(var n,e,r=-1,i=t.length,o=t[i-1],u=o[0],a=o[1],c=0;++r1);return t+e*o*Math.sqrt(-2*Math.log(i)/i)}}return e.source=t,e}(ix),ax=function t(n){function e(){var t=ux.source(n).apply(this,arguments);return function(){return Math.exp(t())}}return e.source=t,e}(ix),cx=function t(n){function e(t){return function(){for(var e=0,r=0;r=200&&e<300||304===e){if(o)try{n=o.call(r,s)}catch(t){return void a.call("error",r,t)}else n=s;a.call("load",r,n)}else a.call("error",r,t)}var r,i,o,u,a=g("beforesend","progress","load","error"),c=Xe(),s=new XMLHttpRequest,f=null,l=null,h=0;if("undefined"==typeof XDomainRequest||"withCredentials"in s||!/^(http(s)?:)?\/\//.test(t)||(s=new XDomainRequest),"onload"in s?s.onload=s.onerror=s.ontimeout=e:s.onreadystatechange=function(t){s.readyState>3&&e(t)},s.onprogress=function(t){a.call("progress",r,t)},r={header:function(t,n){return t=(t+"").toLowerCase(),arguments.length<2?c.get(t):(null==n?c.remove(t):c.set(t,n+""),r)},mimeType:function(t){return arguments.length?(i=null==t?null:t+"",r):i},responseType:function(t){return arguments.length?(u=t,r):u},timeout:function(t){return arguments.length?(h=+t,r):h},user:function(t){return arguments.length<1?f:(f=null==t?null:t+"",r)},password:function(t){return arguments.length<1?l:(l=null==t?null:t+"",r)},response:function(t){return o=t,r},get:function(t,n){return r.send("GET",t,n)},post:function(t,n){return r.send("POST",t,n)},send:function(n,e,o){return s.open(n,t,!0,f,l),null==i||c.has("accept")||c.set("accept",i+",*/*"),s.setRequestHeader&&c.each(function(t,n){s.setRequestHeader(n,t)}),null!=i&&s.overrideMimeType&&s.overrideMimeType(i),null!=u&&(s.responseType=u),h>0&&(s.timeout=h),null==o&&"function"==typeof e&&(o=e,e=null),null!=o&&1===o.length&&(o=zu(o)),null!=o&&r.on("error",o).on("load",function(t){o(null,t)}),a.call("beforesend",r,s),s.send(null==e?null:e),r},abort:function(){return s.abort(),r},on:function(){var t=a.on.apply(a,arguments);return t===a?r:t}},null!=n){if("function"!=typeof n)throw new Error("invalid callback: "+n);return r.get(n)}return r},hx=function(t,n){return function(e,r){var i=lx(e).mimeType(t).response(n);if(null!=r){if("function"!=typeof r)throw 
new Error("invalid callback: "+r);return i.get(r)}return i}},px=hx("text/html",function(t){return document.createRange().createContextualFragment(t.responseText)}),dx=hx("application/json",function(t){return JSON.parse(t.responseText)}),vx=hx("text/plain",function(t){return t.responseText}),gx=hx("application/xml",function(t){var n=t.responseXML;if(!n)throw new Error("parse error");return n}),yx=function(t,n){return function(e,r,i){arguments.length<3&&(i=r,r=null);var o=lx(e).mimeType(t);return o.row=function(t){return arguments.length?o.response(Ru(n,r=t)):r},o.row(r),i?o.get(i):o}},_x=yx("text/csv",Pv),mx=yx("text/tab-separated-values",Uv),xx=Array.prototype,bx=xx.map,wx=xx.slice,Mx={name:"implicit"},Tx=function(t){return function(){return t}},Nx=function(t){return+t},kx=[0,1],Sx=function(n,e,r){var o,u=n[0],a=n[n.length-1],c=i(u,a,null==e?10:e);switch(r=vr(null==r?",f":r),r.type){case"s":var s=Math.max(Math.abs(u),Math.abs(a));return null!=r.precision||isNaN(o=Ag(c,s))||(r.precision=o),t.formatPrefix(r,s);case"":case"e":case"g":case"p":case"r":null!=r.precision||isNaN(o=Eg(c,Math.max(Math.abs(u),Math.abs(a))))||(r.precision=o-("e"===r.type));break;case"f":case"%":null!=r.precision||isNaN(o=Sg(c))||(r.precision=o-2*("%"===r.type))}return t.format(r)},Ax=function(t,n){t=t.slice();var e,r=0,i=t.length-1,o=t[r],u=t[i];return u0?t>1?aa(function(n){n.setTime(Math.floor(n/t)*t)},function(n,e){n.setTime(+n+e*t)},function(n,e){return(e-n)/t}):zx:null};var Px=zx.range,Rx=6e4,Lx=6048e5,Dx=aa(function(t){t.setTime(1e3*Math.floor(t/1e3))},function(t,n){t.setTime(+t+1e3*n)},function(t,n){return(n-t)/1e3},function(t){return t.getUTCSeconds()}),qx=Dx.range,Ux=aa(function(t){t.setTime(Math.floor(t/Rx)*Rx)},function(t,n){t.setTime(+t+n*Rx)},function(t,n){return(n-t)/Rx},function(t){return t.getMinutes()}),Ox=Ux.range,Fx=aa(function(t){var n=t.getTimezoneOffset()*Rx%36e5;n<0&&(n+=36e5),t.setTime(36e5*Math.floor((+t-n)/36e5)+n)},function(t,n){t.setTime(+t+36e5*n)},function(t,n){return(n-t)/36e5},function(t){return t.getHours()}),Yx=Fx.range,Ix=aa(function(t){t.setHours(0,0,0,0)},function(t,n){t.setDate(t.getDate()+n)},function(t,n){return(n-t-(n.getTimezoneOffset()-t.getTimezoneOffset())*Rx)/864e5},function(t){return t.getDate()-1}),Hx=Ix.range,Bx=ca(0),jx=ca(1),Xx=ca(2),Wx=ca(3),Vx=ca(4),$x=ca(5),Zx=ca(6),Gx=Bx.range,Qx=jx.range,Jx=Xx.range,Kx=Wx.range,tb=Vx.range,nb=$x.range,eb=Zx.range,rb=aa(function(t){t.setDate(1),t.setHours(0,0,0,0)},function(t,n){t.setMonth(t.getMonth()+n)},function(t,n){return n.getMonth()-t.getMonth()+12*(n.getFullYear()-t.getFullYear())},function(t){return t.getMonth()}),ib=rb.range,ob=aa(function(t){t.setMonth(0,1),t.setHours(0,0,0,0)},function(t,n){t.setFullYear(t.getFullYear()+n)},function(t,n){return n.getFullYear()-t.getFullYear()},function(t){return t.getFullYear()});ob.every=function(t){return isFinite(t=Math.floor(t))&&t>0?aa(function(n){n.setFullYear(Math.floor(n.getFullYear()/t)*t),n.setMonth(0,1),n.setHours(0,0,0,0)},function(n,e){n.setFullYear(n.getFullYear()+e*t)}):null};var ub=ob.range,ab=aa(function(t){t.setUTCSeconds(0,0)},function(t,n){t.setTime(+t+n*Rx)},function(t,n){return(n-t)/Rx},function(t){return t.getUTCMinutes()}),cb=ab.range,sb=aa(function(t){t.setUTCMinutes(0,0,0)},function(t,n){t.setTime(+t+36e5*n)},function(t,n){return(n-t)/36e5},function(t){return t.getUTCHours()}),fb=sb.range,lb=aa(function(t){t.setUTCHours(0,0,0,0)},function(t,n){t.setUTCDate(t.getUTCDate()+n)},function(t,n){return(n-t)/864e5},function(t){return 
t.getUTCDate()-1}),hb=lb.range,pb=sa(0),db=sa(1),vb=sa(2),gb=sa(3),yb=sa(4),_b=sa(5),mb=sa(6),xb=pb.range,bb=db.range,wb=vb.range,Mb=gb.range,Tb=yb.range,Nb=_b.range,kb=mb.range,Sb=aa(function(t){t.setUTCDate(1),t.setUTCHours(0,0,0,0)},function(t,n){t.setUTCMonth(t.getUTCMonth()+n)},function(t,n){return n.getUTCMonth()-t.getUTCMonth()+12*(n.getUTCFullYear()-t.getUTCFullYear())},function(t){return t.getUTCMonth()}),Ab=Sb.range,Eb=aa(function(t){t.setUTCMonth(0,1),t.setUTCHours(0,0,0,0)},function(t,n){t.setUTCFullYear(t.getUTCFullYear()+n)},function(t,n){return n.getUTCFullYear()-t.getUTCFullYear()},function(t){return t.getUTCFullYear()});Eb.every=function(t){return isFinite(t=Math.floor(t))&&t>0?aa(function(n){n.setUTCFullYear(Math.floor(n.getUTCFullYear()/t)*t),n.setUTCMonth(0,1),n.setUTCHours(0,0,0,0)},function(n,e){n.setUTCFullYear(n.getUTCFullYear()+e*t)}):null};var Cb,zb=Eb.range,Pb={"-":"",_:" ",0:"0"},Rb=/^\s*\d+/,Lb=/^%/,Db=/[\\^$*+?|[\]().{}]/g;xc({dateTime:"%x, %X",date:"%-m/%-d/%Y",time:"%-I:%M:%S %p",periods:["AM","PM"],days:["Sunday","Monday","Tuesday","Wednesday","Thursday","Friday","Saturday"],shortDays:["Sun","Mon","Tue","Wed","Thu","Fri","Sat"],months:["January","February","March","April","May","June","July","August","September","October","November","December"],shortMonths:["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]});var qb=Date.prototype.toISOString?bc:t.utcFormat("%Y-%m-%dT%H:%M:%S.%LZ"),Ub=+new Date("2000-01-01T00:00:00.000Z")?wc:t.utcParse("%Y-%m-%dT%H:%M:%S.%LZ"),Ob=1e3,Fb=60*Ob,Yb=60*Fb,Ib=24*Yb,Hb=7*Ib,Bb=30*Ib,jb=365*Ib,Xb=function(){return Nc(ob,rb,Bx,Ix,Fx,Ux,Dx,zx,t.timeFormat).domain([new Date(2e3,0,1),new Date(2e3,0,2)])},Wb=function(){return Nc(Eb,Sb,pb,lb,sb,ab,Dx,zx,t.utcFormat).domain([Date.UTC(2e3,0,1),Date.UTC(2e3,0,2)])},Vb=function(t){return t.match(/.{6}/g).map(function(t){return"#"+t})},$b=Vb("1f77b4ff7f0e2ca02cd627289467bd8c564be377c27f7f7fbcbd2217becf"),Zb=Vb("393b795254a36b6ecf9c9ede6379398ca252b5cf6bcedb9c8c6d31bd9e39e7ba52e7cb94843c39ad494ad6616be7969c7b4173a55194ce6dbdde9ed6"),Gb=Vb("3182bd6baed69ecae1c6dbefe6550dfd8d3cfdae6bfdd0a231a35474c476a1d99bc7e9c0756bb19e9ac8bcbddcdadaeb636363969696bdbdbdd9d9d9"),Qb=Vb("1f77b4aec7e8ff7f0effbb782ca02c98df8ad62728ff98969467bdc5b0d58c564bc49c94e377c2f7b6d27f7f7fc7c7c7bcbd22dbdb8d17becf9edae5"),Jb=Sp(Gt(300,.5,0),Gt(-240,.5,1)),Kb=Sp(Gt(-100,.75,.35),Gt(80,1.5,.8)),tw=Sp(Gt(260,.75,.35),Gt(80,1.5,.8)),nw=Gt(),ew=function(t){(t<0||t>1)&&(t-=Math.floor(t));var n=Math.abs(t-.5);return 
nw.h=360*t-100,nw.s=1.5-1.5*n,nw.l=.8-.9*n,nw+""},rw=kc(Vb("44015444025645045745055946075a46085c460a5d460b5e470d60470e6147106347116447136548146748166848176948186a481a6c481b6d481c6e481d6f481f70482071482173482374482475482576482677482878482979472a7a472c7a472d7b472e7c472f7d46307e46327e46337f463480453581453781453882443983443a83443b84433d84433e85423f854240864241864142874144874045884046883f47883f48893e49893e4a893e4c8a3d4d8a3d4e8a3c4f8a3c508b3b518b3b528b3a538b3a548c39558c39568c38588c38598c375a8c375b8d365c8d365d8d355e8d355f8d34608d34618d33628d33638d32648e32658e31668e31678e31688e30698e306a8e2f6b8e2f6c8e2e6d8e2e6e8e2e6f8e2d708e2d718e2c718e2c728e2c738e2b748e2b758e2a768e2a778e2a788e29798e297a8e297b8e287c8e287d8e277e8e277f8e27808e26818e26828e26828e25838e25848e25858e24868e24878e23888e23898e238a8d228b8d228c8d228d8d218e8d218f8d21908d21918c20928c20928c20938c1f948c1f958b1f968b1f978b1f988b1f998a1f9a8a1e9b8a1e9c891e9d891f9e891f9f881fa0881fa1881fa1871fa28720a38620a48621a58521a68522a78522a88423a98324aa8325ab8225ac8226ad8127ad8128ae8029af7f2ab07f2cb17e2db27d2eb37c2fb47c31b57b32b67a34b67935b77937b87838b9773aba763bbb753dbc743fbc7340bd7242be7144bf7046c06f48c16e4ac16d4cc26c4ec36b50c46a52c56954c56856c66758c7655ac8645cc8635ec96260ca6063cb5f65cb5e67cc5c69cd5b6ccd5a6ece5870cf5773d05675d05477d1537ad1517cd2507fd34e81d34d84d44b86d54989d5488bd6468ed64590d74393d74195d84098d83e9bd93c9dd93ba0da39a2da37a5db36a8db34aadc32addc30b0dd2fb2dd2db5de2bb8de29bade28bddf26c0df25c2df23c5e021c8e020cae11fcde11dd0e11cd2e21bd5e21ad8e219dae319dde318dfe318e2e418e5e419e7e419eae51aece51befe51cf1e51df4e61ef6e620f8e621fbe723fde725")),iw=kc(Vb("00000401000501010601010802010902020b02020d03030f03031204041405041606051806051a07061c08071e0907200a08220b09240c09260d0a290e0b2b100b2d110c2f120d31130d34140e36150e38160f3b180f3d19103f1a10421c10441d11471e114920114b21114e22115024125325125527125829115a2a115c2c115f2d11612f116331116533106734106936106b38106c390f6e3b0f703d0f713f0f72400f74420f75440f764510774710784910784a10794c117a4e117b4f127b51127c52137c54137d56147d57157e59157e5a167e5c167f5d177f5f187f601880621980641a80651a80671b80681c816a1c816b1d816d1d816e1e81701f81721f817320817521817621817822817922827b23827c23827e24828025828125818326818426818627818827818928818b29818c29818e2a81902a81912b81932b80942c80962c80982d80992d809b2e7f9c2e7f9e2f7fa02f7fa1307ea3307ea5317ea6317da8327daa337dab337cad347cae347bb0357bb2357bb3367ab5367ab73779b83779ba3878bc3978bd3977bf3a77c03a76c23b75c43c75c53c74c73d73c83e73ca3e72cc3f71cd4071cf4070d0416fd2426fd3436ed5446dd6456cd8456cd9466bdb476adc4869de4968df4a68e04c67e24d66e34e65e44f64e55064e75263e85362e95462ea5661eb5760ec5860ed5a5fee5b5eef5d5ef05f5ef1605df2625df2645cf3655cf4675cf4695cf56b5cf66c5cf66e5cf7705cf7725cf8745cf8765cf9785df9795df97b5dfa7d5efa7f5efa815ffb835ffb8560fb8761fc8961fc8a62fc8c63fc8e64fc9065fd9266fd9467fd9668fd9869fd9a6afd9b6bfe9d6cfe9f6dfea16efea36ffea571fea772fea973feaa74feac76feae77feb078feb27afeb47bfeb67cfeb77efeb97ffebb81febd82febf84fec185fec287fec488fec68afec88cfeca8dfecc8ffecd90fecf92fed194fed395fed597fed799fed89afdda9cfddc9efddea0fde0a1fde2a3fde3a5fde5a7fde7a9fde9aafdebacfcecaefceeb0fcf0b2fcf2b4fcf4b6fcf6b8fcf7b9fcf9bbfcfbbdfcfdbf")),ow=kc(Vb("00000401000501010601010802010a02020c02020e03021004031204031405041706041907051b08051d09061f0a07220b07240c08260d08290e092b10092d110a30120a32140b34150b37160b39180c3c190c3e1b0c411c0c431e0c451f0c48210c4a230c4c240c4f260c51280b53290b552b0b572d0b592f0a5b310a5c320a5e340a5f3609613809623909633b09643d09653e0966400a67420a68440a68450a69470b6a490b6a4a0c6b4c0c6b4d0d6c4f0d6c510e6c520e6d540f6d550f6d57106e59106
e5a116e5c126e5d126e5f136e61136e62146e64156e65156e67166e69166e6a176e6c186e6d186e6f196e71196e721a6e741a6e751b6e771c6d781c6d7a1d6d7c1d6d7d1e6d7f1e6c801f6c82206c84206b85216b87216b88226a8a226a8c23698d23698f24699025689225689326679526679727669827669a28659b29649d29649f2a63a02a63a22b62a32c61a52c60a62d60a82e5fa92e5eab2f5ead305dae305cb0315bb1325ab3325ab43359b63458b73557b93556ba3655bc3754bd3853bf3952c03a51c13a50c33b4fc43c4ec63d4dc73e4cc83f4bca404acb4149cc4248ce4347cf4446d04545d24644d34743d44842d54a41d74b3fd84c3ed94d3dda4e3cdb503bdd513ade5238df5337e05536e15635e25734e35933e45a31e55c30e65d2fe75e2ee8602de9612bea632aeb6429eb6628ec6726ed6925ee6a24ef6c23ef6e21f06f20f1711ff1731df2741cf3761bf37819f47918f57b17f57d15f67e14f68013f78212f78410f8850ff8870ef8890cf98b0bf98c0af98e09fa9008fa9207fa9407fb9606fb9706fb9906fb9b06fb9d07fc9f07fca108fca309fca50afca60cfca80dfcaa0ffcac11fcae12fcb014fcb216fcb418fbb61afbb81dfbba1ffbbc21fbbe23fac026fac228fac42afac62df9c72ff9c932f9cb35f8cd37f8cf3af7d13df7d340f6d543f6d746f5d949f5db4cf4dd4ff4df53f4e156f3e35af3e55df2e661f2e865f2ea69f1ec6df1ed71f1ef75f1f179f2f27df2f482f3f586f3f68af4f88ef5f992f6fa96f8fb9af9fc9dfafda1fcffa4")),uw=kc(Vb("0d088710078813078916078a19068c1b068d1d068e20068f2206902406912605912805922a05932c05942e05952f059631059733059735049837049938049a3a049a3c049b3e049c3f049c41049d43039e44039e46039f48039f4903a04b03a14c02a14e02a25002a25102a35302a35502a45601a45801a45901a55b01a55c01a65e01a66001a66100a76300a76400a76600a76700a86900a86a00a86c00a86e00a86f00a87100a87201a87401a87501a87701a87801a87a02a87b02a87d03a87e03a88004a88104a78305a78405a78606a68707a68808a68a09a58b0aa58d0ba58e0ca48f0da4910ea3920fa39410a29511a19613a19814a099159f9a169f9c179e9d189d9e199da01a9ca11b9ba21d9aa31e9aa51f99a62098a72197a82296aa2395ab2494ac2694ad2793ae2892b02991b12a90b22b8fb32c8eb42e8db52f8cb6308bb7318ab83289ba3388bb3488bc3587bd3786be3885bf3984c03a83c13b82c23c81c33d80c43e7fc5407ec6417dc7427cc8437bc9447aca457acb4679cc4778cc4977cd4a76ce4b75cf4c74d04d73d14e72d24f71d35171d45270d5536fd5546ed6556dd7566cd8576bd9586ada5a6ada5b69db5c68dc5d67dd5e66de5f65de6164df6263e06363e16462e26561e26660e3685fe4695ee56a5de56b5de66c5ce76e5be76f5ae87059e97158e97257ea7457eb7556eb7655ec7754ed7953ed7a52ee7b51ef7c51ef7e50f07f4ff0804ef1814df1834cf2844bf3854bf3874af48849f48948f58b47f58c46f68d45f68f44f79044f79143f79342f89441f89540f9973ff9983ef99a3efa9b3dfa9c3cfa9e3bfb9f3afba139fba238fca338fca537fca636fca835fca934fdab33fdac33fdae32fdaf31fdb130fdb22ffdb42ffdb52efeb72dfeb82cfeba2cfebb2bfebd2afebe2afec029fdc229fdc328fdc527fdc627fdc827fdca26fdcb26fccd25fcce25fcd025fcd225fbd324fbd524fbd724fad824fada24f9dc24f9dd25f8df25f8e125f7e225f7e425f6e626f6e826f5e926f5eb27f4ed27f3ee27f3f027f2f227f1f426f1f525f0f724f0f921")),aw=function(t){return function(){return t}},cw=Math.abs,sw=Math.atan2,fw=Math.cos,lw=Math.max,hw=Math.min,pw=Math.sin,dw=Math.sqrt,vw=1e-12,gw=Math.PI,yw=gw/2,_w=2*gw,mw=function(){function t(){var t,s,f=+n.apply(this,arguments),l=+e.apply(this,arguments),h=o.apply(this,arguments)-yw,p=u.apply(this,arguments)-yw,d=cw(p-h),v=p>h;if(c||(c=t=Oe()),lvw)if(d>_w-vw)c.moveTo(l*fw(h),l*pw(h)),c.arc(0,0,l,h,p,!v),f>vw&&(c.moveTo(f*fw(p),f*pw(p)),c.arc(0,0,f,p,h,v));else{var g,y,_=h,m=p,x=h,b=p,w=d,M=d,T=a.apply(this,arguments)/2,N=T>vw&&(i?+i.apply(this,arguments):dw(f*f+l*l)),k=hw(cw(l-f)/2,+r.apply(this,arguments)),S=k,A=k;if(N>vw){var E=Ec(N/f*pw(T)),C=Ec(N/l*pw(T));(w-=2*E)>vw?(E*=v?1:-1,x+=E,b-=E):(w=0,x=b=(h+p)/2),(M-=2*C)>vw?(C*=v?1:-1,_+=C,m-=C):(M=0,_=m=(h+p)/2)}var z=l*fw(_),P=l*pw(_),R=f*fw(b),L=f*pw(b);if(k>vw){var 
D=l*fw(m),q=l*pw(m),U=f*fw(x),O=f*pw(x);if(dvw?Dc(z,P,U,O,D,q,R,L):[R,L],Y=z-F[0],I=P-F[1],H=D-F[0],B=q-F[1],j=1/pw(Ac((Y*H+I*B)/(dw(Y*Y+I*I)*dw(H*H+B*B)))/2),X=dw(F[0]*F[0]+F[1]*F[1]);S=hw(k,(f-X)/(j-1)),A=hw(k,(l-X)/(j+1))}}M>vw?A>vw?(g=qc(U,O,z,P,l,A,v),y=qc(D,q,R,L,l,A,v),c.moveTo(g.cx+g.x01,g.cy+g.y01),Avw&&w>vw?S>vw?(g=qc(R,L,D,q,f,-S,v),y=qc(z,P,U,O,f,-S,v),c.lineTo(g.cx+g.x01,g.cy+g.y01),S=f;--l)s.point(g[l],y[l]);s.lineEnd(),s.areaEnd()}v&&(g[n]=+e(h,n,t),y[n]=+i(h,n,t),s.point(r?+r(h,n,t):g[n],o?+o(h,n,t):y[n]))}if(p)return s=null,p+""||null}function n(){return bw().defined(u).curve(c).context(a)}var e=Oc,r=null,i=aw(0),o=Fc,u=aw(!0),a=null,c=xw,s=null;return t.x=function(n){return arguments.length?(e="function"==typeof n?n:aw(+n),r=null,t):e},t.x0=function(n){return arguments.length?(e="function"==typeof n?n:aw(+n),t):e},t.x1=function(n){return arguments.length?(r=null==n?null:"function"==typeof n?n:aw(+n),t):r},t.y=function(n){return arguments.length?(i="function"==typeof n?n:aw(+n),o=null,t):i},t.y0=function(n){return arguments.length?(i="function"==typeof n?n:aw(+n),t):i},t.y1=function(n){return arguments.length?(o=null==n?null:"function"==typeof n?n:aw(+n),t):o},t.lineX0=t.lineY0=function(){return n().x(e).y(i)},t.lineY1=function(){return n().x(e).y(o)},t.lineX1=function(){return n().x(r).y(i)},t.defined=function(n){return arguments.length?(u="function"==typeof n?n:aw(!!n),t):u},t.curve=function(n){return arguments.length?(c=n,null!=a&&(s=c(a)),t):c},t.context=function(n){return arguments.length?(null==n?a=s=null:s=c(a=n),t):a},t},Mw=function(t,n){return nt?1:n>=t?0:NaN},Tw=function(t){return t},Nw=function(){function t(t){var a,c,s,f,l,h=t.length,p=0,d=new Array(h),v=new Array(h),g=+i.apply(this,arguments),y=Math.min(_w,Math.max(-_w,o.apply(this,arguments)-g)),_=Math.min(Math.abs(y)/h,u.apply(this,arguments)),m=_*(y<0?-1:1);for(a=0;a0&&(p+=l);for(null!=e?d.sort(function(t,n){return e(v[t],v[n])}):null!=r&&d.sort(function(n,e){return r(t[n],t[e])}),a=0,s=p?(y-h*m)/p:0;a0?l*s:0)+m,v[c]={data:t[c],index:a,value:l,startAngle:g,endAngle:f,padAngle:_};return v}var n=Tw,e=Mw,r=null,i=aw(0),o=aw(_w),u=aw(0);return t.value=function(e){return arguments.length?(n="function"==typeof e?e:aw(+e),t):n},t.sortValues=function(n){return arguments.length?(e=n,r=null,t):e}, -t.sort=function(n){return arguments.length?(r=n,e=null,t):r},t.startAngle=function(n){return arguments.length?(i="function"==typeof n?n:aw(+n),t):i},t.endAngle=function(n){return arguments.length?(o="function"==typeof n?n:aw(+n),t):o},t.padAngle=function(n){return arguments.length?(u="function"==typeof n?n:aw(+n),t):u},t},kw=Ic(xw);Yc.prototype={areaStart:function(){this._curve.areaStart()},areaEnd:function(){this._curve.areaEnd()},lineStart:function(){this._curve.lineStart()},lineEnd:function(){this._curve.lineEnd()},point:function(t,n){this._curve.point(n*Math.sin(t),n*-Math.cos(t))}};var Sw=function(){return Hc(bw().curve(kw))},Aw=function(){var t=ww().curve(kw),n=t.curve,e=t.lineX0,r=t.lineX1,i=t.lineY0,o=t.lineY1;return t.angle=t.x,delete t.x,t.startAngle=t.x0,delete t.x0,t.endAngle=t.x1,delete t.x1,t.radius=t.y,delete t.y,t.innerRadius=t.y0,delete t.y0,t.outerRadius=t.y1,delete t.y1,t.lineStartAngle=function(){return Hc(e())},delete t.lineX0,t.lineEndAngle=function(){return Hc(r())},delete t.lineX1,t.lineInnerRadius=function(){return Hc(i())},delete t.lineY0,t.lineOuterRadius=function(){return Hc(o())},delete t.lineY1,t.curve=function(t){return 
arguments.length?n(Ic(t)):n()._curve},t},Ew=function(t,n){return[(n=+n)*Math.cos(t-=Math.PI/2),n*Math.sin(t)]},Cw=Array.prototype.slice,zw={draw:function(t,n){var e=Math.sqrt(n/gw);t.moveTo(e,0),t.arc(0,0,e,0,_w)}},Pw={draw:function(t,n){var e=Math.sqrt(n/5)/2;t.moveTo(-3*e,-e),t.lineTo(-e,-e),t.lineTo(-e,-3*e),t.lineTo(e,-3*e),t.lineTo(e,-e),t.lineTo(3*e,-e),t.lineTo(3*e,e),t.lineTo(e,e),t.lineTo(e,3*e),t.lineTo(-e,3*e),t.lineTo(-e,e),t.lineTo(-3*e,e),t.closePath()}},Rw=Math.sqrt(1/3),Lw=2*Rw,Dw={draw:function(t,n){var e=Math.sqrt(n/Lw),r=e*Rw;t.moveTo(0,-e),t.lineTo(r,0),t.lineTo(0,e),t.lineTo(-r,0),t.closePath()}},qw=Math.sin(gw/10)/Math.sin(7*gw/10),Uw=Math.sin(_w/10)*qw,Ow=-Math.cos(_w/10)*qw,Fw={draw:function(t,n){var e=Math.sqrt(.8908130915292852*n),r=Uw*e,i=Ow*e;t.moveTo(0,-e),t.lineTo(r,i);for(var o=1;o<5;++o){var u=_w*o/5,a=Math.cos(u),c=Math.sin(u);t.lineTo(c*e,-a*e),t.lineTo(a*r-c*i,c*r+a*i)}t.closePath()}},Yw={draw:function(t,n){var e=Math.sqrt(n),r=-e/2;t.rect(r,r,e,e)}},Iw=Math.sqrt(3),Hw={draw:function(t,n){var e=-Math.sqrt(n/(3*Iw));t.moveTo(0,2*e),t.lineTo(-Iw*e,-e),t.lineTo(Iw*e,-e),t.closePath()}},Bw=-.5,jw=Math.sqrt(3)/2,Xw=1/Math.sqrt(12),Ww=3*(Xw/2+1),Vw={draw:function(t,n){var e=Math.sqrt(n/Ww),r=e/2,i=e*Xw,o=r,u=e*Xw+e,a=-o,c=u;t.moveTo(r,i),t.lineTo(o,u),t.lineTo(a,c),t.lineTo(Bw*r-jw*i,jw*r+Bw*i),t.lineTo(Bw*o-jw*u,jw*o+Bw*u),t.lineTo(Bw*a-jw*c,jw*a+Bw*c),t.lineTo(Bw*r+jw*i,Bw*i-jw*r),t.lineTo(Bw*o+jw*u,Bw*u-jw*o),t.lineTo(Bw*a+jw*c,Bw*c-jw*a),t.closePath()}},$w=[zw,Pw,Dw,Yw,Fw,Hw,Vw],Zw=function(){function t(){var t;if(r||(r=t=Oe()),n.apply(this,arguments).draw(r,+e.apply(this,arguments)),t)return r=null,t+""||null}var n=aw(zw),e=aw(64),r=null;return t.type=function(e){return arguments.length?(n="function"==typeof e?e:aw(e),t):n},t.size=function(n){return arguments.length?(e="function"==typeof n?n:aw(+n),t):e},t.context=function(n){return arguments.length?(r=null==n?null:n,t):r},t},Gw=function(){};Kc.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._y0=this._y1=NaN,this._point=0},lineEnd:function(){switch(this._point){case 3:Jc(this,this._x1,this._y1);case 2:this._context.lineTo(this._x1,this._y1)}(this._line||0!==this._line&&1===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){switch(t=+t,n=+n,this._point){case 0:this._point=1,this._line?this._context.lineTo(t,n):this._context.moveTo(t,n);break;case 1:this._point=2;break;case 2:this._point=3,this._context.lineTo((5*this._x0+this._x1)/6,(5*this._y0+this._y1)/6);default:Jc(this,t,n)}this._x0=this._x1,this._x1=t,this._y0=this._y1,this._y1=n}};var Qw=function(t){return new Kc(t)};ts.prototype={areaStart:Gw,areaEnd:Gw,lineStart:function(){this._x0=this._x1=this._x2=this._x3=this._x4=this._y0=this._y1=this._y2=this._y3=this._y4=NaN,this._point=0},lineEnd:function(){switch(this._point){case 1:this._context.moveTo(this._x2,this._y2),this._context.closePath();break;case 2:this._context.moveTo((this._x2+2*this._x3)/3,(this._y2+2*this._y3)/3),this._context.lineTo((this._x3+2*this._x2)/3,(this._y3+2*this._y2)/3),this._context.closePath();break;case 3:this.point(this._x2,this._y2),this.point(this._x3,this._y3),this.point(this._x4,this._y4)}},point:function(t,n){switch(t=+t,n=+n,this._point){case 0:this._point=1,this._x2=t,this._y2=n;break;case 1:this._point=2,this._x3=t,this._y3=n;break;case 
2:this._point=3,this._x4=t,this._y4=n,this._context.moveTo((this._x0+4*this._x1+t)/6,(this._y0+4*this._y1+n)/6);break;default:Jc(this,t,n)}this._x0=this._x1,this._x1=t,this._y0=this._y1,this._y1=n}};var Jw=function(t){return new ts(t)};ns.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._y0=this._y1=NaN,this._point=0},lineEnd:function(){(this._line||0!==this._line&&3===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){switch(t=+t,n=+n,this._point){case 0:this._point=1;break;case 1:this._point=2;break;case 2:this._point=3;var e=(this._x0+4*this._x1+t)/6,r=(this._y0+4*this._y1+n)/6;this._line?this._context.lineTo(e,r):this._context.moveTo(e,r);break;case 3:this._point=4;default:Jc(this,t,n)}this._x0=this._x1,this._x1=t,this._y0=this._y1,this._y1=n}};var Kw=function(t){return new ns(t)};es.prototype={lineStart:function(){this._x=[],this._y=[],this._basis.lineStart()},lineEnd:function(){var t=this._x,n=this._y,e=t.length-1;if(e>0)for(var r,i=t[0],o=n[0],u=t[e]-i,a=n[e]-o,c=-1;++c<=e;)r=c/e,this._basis.point(this._beta*t[c]+(1-this._beta)*(i+r*u),this._beta*n[c]+(1-this._beta)*(o+r*a));this._x=this._y=null,this._basis.lineEnd()},point:function(t,n){this._x.push(+t),this._y.push(+n)}};var tM=function t(n){function e(t){return 1===n?new Kc(t):new es(t,n)}return e.beta=function(n){return t(+n)},e}(.85);is.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._x2=this._y0=this._y1=this._y2=NaN,this._point=0},lineEnd:function(){switch(this._point){case 2:this._context.lineTo(this._x2,this._y2);break;case 3:rs(this,this._x1,this._y1)}(this._line||0!==this._line&&1===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){switch(t=+t,n=+n,this._point){case 0:this._point=1,this._line?this._context.lineTo(t,n):this._context.moveTo(t,n);break;case 1:this._point=2,this._x1=t,this._y1=n;break;case 2:this._point=3;default:rs(this,t,n)}this._x0=this._x1,this._x1=this._x2,this._x2=t,this._y0=this._y1,this._y1=this._y2,this._y2=n}};var nM=function t(n){function e(t){return new is(t,n)}return e.tension=function(n){return t(+n)},e}(0);os.prototype={areaStart:Gw,areaEnd:Gw,lineStart:function(){this._x0=this._x1=this._x2=this._x3=this._x4=this._x5=this._y0=this._y1=this._y2=this._y3=this._y4=this._y5=NaN,this._point=0},lineEnd:function(){switch(this._point){case 1:this._context.moveTo(this._x3,this._y3),this._context.closePath();break;case 2:this._context.lineTo(this._x3,this._y3),this._context.closePath();break;case 3:this.point(this._x3,this._y3),this.point(this._x4,this._y4),this.point(this._x5,this._y5)}},point:function(t,n){switch(t=+t,n=+n,this._point){case 0:this._point=1,this._x3=t,this._y3=n;break;case 1:this._point=2,this._context.moveTo(this._x4=t,this._y4=n);break;case 2:this._point=3,this._x5=t,this._y5=n;break;default:rs(this,t,n)}this._x0=this._x1,this._x1=this._x2,this._x2=t,this._y0=this._y1,this._y1=this._y2,this._y2=n}};var eM=function t(n){function e(t){return new os(t,n)}return e.tension=function(n){return t(+n)},e}(0);us.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._x2=this._y0=this._y1=this._y2=NaN,this._point=0},lineEnd:function(){(this._line||0!==this._line&&3===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){switch(t=+t,n=+n,this._point){case 
0:this._point=1;break;case 1:this._point=2;break;case 2:this._point=3,this._line?this._context.lineTo(this._x2,this._y2):this._context.moveTo(this._x2,this._y2);break;case 3:this._point=4;default:rs(this,t,n)}this._x0=this._x1,this._x1=this._x2,this._x2=t,this._y0=this._y1,this._y1=this._y2,this._y2=n}};var rM=function t(n){function e(t){return new us(t,n)}return e.tension=function(n){return t(+n)},e}(0);cs.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._x2=this._y0=this._y1=this._y2=NaN,this._l01_a=this._l12_a=this._l23_a=this._l01_2a=this._l12_2a=this._l23_2a=this._point=0},lineEnd:function(){switch(this._point){case 2:this._context.lineTo(this._x2,this._y2);break;case 3:this.point(this._x2,this._y2)}(this._line||0!==this._line&&1===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){if(t=+t,n=+n,this._point){var e=this._x2-t,r=this._y2-n;this._l23_a=Math.sqrt(this._l23_2a=Math.pow(e*e+r*r,this._alpha))}switch(this._point){case 0:this._point=1,this._line?this._context.lineTo(t,n):this._context.moveTo(t,n);break;case 1:this._point=2;break;case 2:this._point=3;default:as(this,t,n)}this._l01_a=this._l12_a,this._l12_a=this._l23_a,this._l01_2a=this._l12_2a,this._l12_2a=this._l23_2a,this._x0=this._x1,this._x1=this._x2,this._x2=t,this._y0=this._y1,this._y1=this._y2,this._y2=n}};var iM=function t(n){function e(t){return n?new cs(t,n):new is(t,0)}return e.alpha=function(n){return t(+n)},e}(.5);ss.prototype={areaStart:Gw,areaEnd:Gw,lineStart:function(){this._x0=this._x1=this._x2=this._x3=this._x4=this._x5=this._y0=this._y1=this._y2=this._y3=this._y4=this._y5=NaN,this._l01_a=this._l12_a=this._l23_a=this._l01_2a=this._l12_2a=this._l23_2a=this._point=0},lineEnd:function(){switch(this._point){case 1:this._context.moveTo(this._x3,this._y3),this._context.closePath();break;case 2:this._context.lineTo(this._x3,this._y3),this._context.closePath();break;case 3:this.point(this._x3,this._y3),this.point(this._x4,this._y4),this.point(this._x5,this._y5)}},point:function(t,n){if(t=+t,n=+n,this._point){var e=this._x2-t,r=this._y2-n;this._l23_a=Math.sqrt(this._l23_2a=Math.pow(e*e+r*r,this._alpha))}switch(this._point){case 0:this._point=1,this._x3=t,this._y3=n;break;case 1:this._point=2,this._context.moveTo(this._x4=t,this._y4=n);break;case 2:this._point=3,this._x5=t,this._y5=n;break;default:as(this,t,n)}this._l01_a=this._l12_a,this._l12_a=this._l23_a,this._l01_2a=this._l12_2a,this._l12_2a=this._l23_2a,this._x0=this._x1,this._x1=this._x2,this._x2=t,this._y0=this._y1,this._y1=this._y2,this._y2=n}};var oM=function t(n){function e(t){return n?new ss(t,n):new os(t,0)}return e.alpha=function(n){return t(+n)},e}(.5);fs.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._x2=this._y0=this._y1=this._y2=NaN,this._l01_a=this._l12_a=this._l23_a=this._l01_2a=this._l12_2a=this._l23_2a=this._point=0},lineEnd:function(){(this._line||0!==this._line&&3===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){if(t=+t,n=+n,this._point){var e=this._x2-t,r=this._y2-n;this._l23_a=Math.sqrt(this._l23_2a=Math.pow(e*e+r*r,this._alpha))}switch(this._point){case 0:this._point=1;break;case 1:this._point=2;break;case 2:this._point=3,this._line?this._context.lineTo(this._x2,this._y2):this._context.moveTo(this._x2,this._y2);break;case 
3:this._point=4;default:as(this,t,n)}this._l01_a=this._l12_a,this._l12_a=this._l23_a,this._l01_2a=this._l12_2a,this._l12_2a=this._l23_2a,this._x0=this._x1,this._x1=this._x2,this._x2=t,this._y0=this._y1,this._y1=this._y2,this._y2=n}};var uM=function t(n){function e(t){return n?new fs(t,n):new us(t,0)}return e.alpha=function(n){return t(+n)},e}(.5);ls.prototype={areaStart:Gw,areaEnd:Gw,lineStart:function(){this._point=0},lineEnd:function(){this._point&&this._context.closePath()},point:function(t,n){t=+t,n=+n,this._point?this._context.lineTo(t,n):(this._point=1,this._context.moveTo(t,n))}};var aM=function(t){return new ls(t)};gs.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x0=this._x1=this._y0=this._y1=this._t0=NaN,this._point=0},lineEnd:function(){switch(this._point){case 2:this._context.lineTo(this._x1,this._y1);break;case 3:vs(this,this._t0,ds(this,this._t0))}(this._line||0!==this._line&&1===this._point)&&this._context.closePath(),this._line=1-this._line},point:function(t,n){var e=NaN;if(t=+t,n=+n,t!==this._x1||n!==this._y1){switch(this._point){case 0:this._point=1,this._line?this._context.lineTo(t,n):this._context.moveTo(t,n);break;case 1:this._point=2;break;case 2:this._point=3,vs(this,ds(this,e=ps(this,t,n)),e);break;default:vs(this,this._t0,e=ps(this,t,n))}this._x0=this._x1,this._x1=t,this._y0=this._y1,this._y1=n,this._t0=e}}},(ys.prototype=Object.create(gs.prototype)).point=function(t,n){gs.prototype.point.call(this,n,t)},_s.prototype={moveTo:function(t,n){this._context.moveTo(n,t)},closePath:function(){this._context.closePath()},lineTo:function(t,n){this._context.lineTo(n,t)},bezierCurveTo:function(t,n,e,r,i,o){this._context.bezierCurveTo(n,t,r,e,o,i)}},bs.prototype={areaStart:function(){this._line=0},areaEnd:function(){this._line=NaN},lineStart:function(){this._x=[],this._y=[]},lineEnd:function(){var t=this._x,n=this._y,e=t.length;if(e)if(this._line?this._context.lineTo(t[0],n[0]):this._context.moveTo(t[0],n[0]),2===e)this._context.lineTo(t[1],n[1]);else for(var r=ws(t),i=ws(n),o=0,u=1;u=0&&(this._t=1-this._t,this._line=1-this._line)},point:function(t,n){switch(t=+t,n=+n,this._point){case 0:this._point=1,this._line?this._context.lineTo(t,n):this._context.moveTo(t,n);break;case 1:this._point=2;default:if(this._t<=0)this._context.lineTo(this._x,n),this._context.lineTo(t,n);else{var e=this._x*(1-this._t)+t*this._t;this._context.lineTo(e,this._y),this._context.lineTo(e,n)}}this._x=t,this._y=n}};var sM=function(t){return new Ms(t,.5)},fM=function(t,n){if((i=t.length)>1)for(var e,r,i,o=1,u=t[n[0]],a=u.length;o=0;)e[n]=n;return e},hM=function(){function t(t){var o,u,a=n.apply(this,arguments),c=t.length,s=a.length,f=new Array(s);for(o=0;o0){for(var e,r,i,o=0,u=t[0].length;o1)for(var e,r,i,o,u,a,c=0,s=t[n[0]].length;c=0?(r[0]=o,r[1]=o+=i):i<0?(r[1]=u,r[0]=u+=i):r[0]=o},vM=function(t,n){if((e=t.length)>0){for(var e,r=0,i=t[n[0]],o=i.length;r0&&(r=(e=t[n[0]]).length)>0){for(var e,r,i,o=0,u=1;u=a)return null;var c=t-i.site[0],s=n-i.site[1],f=c*c+s*s;do{i=o.cells[r=u],u=null,i.halfedges.forEach(function(e){var r=o.edges[e],a=r.left;if(a!==i.site&&a||(a=r.right)){var c=t-a[0],s=n-a[1],l=c*c+s*s;lz}i.zoom("mouse",m(r(i.that.__zoom,i.mouse[0]=Al(i.that),i.mouse[1]),i.extent,M))}function e(){o.on("mousemove.zoom mouseup.zoom",null),xt(t.event.view,i.moved),LM(),i.end()}if(!v&&y.apply(this,arguments)){var 
i=u(this,arguments),o=fh(t.event.view).on("mousemove.zoom",n,!0).on("mouseup.zoom",e,!0),a=Al(this),c=t.event.clientX,s=t.event.clientY;vh(t.event.view),ff(),i.mouse=[a,this.__zoom.invert(a)],Gp(this),i.start()}}function f(){if(y.apply(this,arguments)){var i=this.__zoom,u=Al(this),a=i.invert(u),c=i.k*(t.event.shiftKey?.5:2),s=m(r(e(i,c),u,a),_.apply(this,arguments),M);LM(),T>0?fh(this).transition().duration(T).call(o,s,u):fh(this).call(n.transform,s)}}function l(){if(y.apply(this,arguments)){var n,e,r,i,o=u(this,arguments),a=t.event.changedTouches,c=a.length;for(ff(),e=0;e-1)&&(t.push(this.parentNode),!0)}).select(function(){return this.parentNode})},IM=function(t){var n,e=El(t),r=UM(t);t=_l(r.tag),n=this.select(function(){return e.apply(this,arguments)||this.appendChild(t.apply(this,arguments))});for(var i in r.attr)n.attr(i,r.attr[i]);return n},HM=function(t,n){return this.selectAll("tspan").data(function(n){return("function"==typeof t?t(n):t).map(function(t){return{line:t,parent:n}})}).enter().append("tspan").text(function(t){return t.line}).attr("x",0).attr("dy",function(t,e){return e?("function"==typeof n?n(t.parent,t.line,e):n)||15:0})},BM=function(t,n){if("string"==typeof n){console.warn("DEPRECATED: jetpack's appendMany order of arguments has changed. It's appendMany('div', data) from now on");var e=n;n=t,t=e}return this.selectAll(null).data(n).enter().append(t)},jM=function(t,n){if("object"==typeof t){for(var e in t)this.attr(e.replace(/([a-z\d])([A-Z])/g,"$1-$2").toLowerCase(),t[e]);return this}return 1==arguments.length?this.attr(t):this.attr(t,n)};_f.not=function(t){return!t},_f.run=function(t){return t()},_f.objToFn=function(t,n){return 1==arguments.length&&(n=void 0),function(e){return void 0!==t[e]?t[e]:n}};var XM=function(t,n){function e(t,n,e){return n=n.replace(/([a-z\d])([A-Z])/g,"$1-$2").toLowerCase(),~"top left bottom right padding-top padding-left padding-bottom padding-right border-top b-width border-left-width border-botto-width m border-right-width margin-top margin-left margin-bottom margin-right font-size width height stroke-width line-height margin padding border border-radius max-width min-width".indexOf(n)?t.style(n,"function"==typeof e?i(e):r(e)):t.style(n,e),t}function r(t){return t.match?t:t+"px"}function i(t){return function(){return r(t.apply(this,arguments))}}if("object"==typeof t){for(var o in t)e(this,o,t[o]);return this}return 1==arguments.length?this.style(t):e(this,t,n)},WM={A:7,a:7,B:8,b:7,C:8,c:6,D:9,d:7,E:7,e:7,F:7,f:4,G:9,g:7,H:9,h:7,I:3,i:3,J:5,j:3,K:8,k:6,L:7,l:3,M:11,m:11,N:9,n:7,O:9,o:7,P:8,p:7,Q:9,q:7,R:8,r:4,S:8,s:6,T:7,t:4,U:9,u:7,V:7,v:6,W:11,w:9,X:7,x:6,Y:7,y:6,Z:7,z:5,".":2,",":2,":":2,";":2},VM=function(t,n,e,r){function i(t){return!r&&WM[t]||WM.a}function o(t){return t.length}function u(t,n){return t-n}var a,c,s,f,l,h,p=[],d=[],v=[];return c=t.split(" "),c.forEach(function(t,n){var e=t.split("-");e.length>1?e.forEach(function(t,n){d.push(t+(nl&&a>h&&(p.push(v.join("")),v.length=0,a=0),a+=n,v.push(t)}),v.length&&p.push(v.join("")),p.filter(function(t){return""!==t})},$M=function(t){return"function"==typeof t?function(n,e){return t(n)t(e)?1:t(n)>=t(e)?0:NaN}:function(n,e){return n[t]e[t]?1:n[t]>=e[t]?0:NaN}},ZM=function(t){return"function"==typeof t?function(n,e){return t(e)t(n)?1:t(e)>=t(n)?0:NaN}:function(n,e){return e[t]n[t]?1:e[t]>=n[t]?0:NaN}},GM=function(t){t=t||{},t.margin=t.margin||{},["top","right","bottom","left"].forEach(function(n){t.margin[n]||0===t.margin[n]||(t.margin[n]=20)}),t.parentSel&&(t.sel=t.parentSel);var 
n=t.sel&&t.sel.node();return t.totalWidth=t.totalWidth||n&&n.offsetWidth||960,t.totalHeight=t.totalHeight||n&&n.offsetHeight||500,t.width=t.width||t.totalWidth-t.margin.left-t.margin.right,t.height=t.height||t.totalHeight-t.margin.top-t.margin.bottom, -t.totalWidth=t.width+t.margin.left+t.margin.right,t.totalHeight=t.height+t.margin.top+t.margin.bottom,t.sel=t.sel||fh("body"),t.sel.st({position:"relative",height:t.totalHeight,width:t.totalWidth}),t.x=t.x||Wu().range([0,t.width]),t.y=t.y||Wu().range([t.height,0]),t.xAxis=t.xAxis||d().scale(t.x),t.yAxis=t.yAxis||v().scale(t.y),t.layers=(t.layers||"s").split("").map(function(n){var e;if("s"==n)e=t.sel.append("svg").st({position:t.layers?"absolute":""}).attr("width",t.totalWidth).attr("height",t.totalHeight).append("g").attr("transform","translate("+t.margin.left+","+t.margin.top+")"),t.svg||(t.svg=e);else if("c"==n){var r=window.devicePixelRatio||1;e=t.sel.append("canvas").at({width:t.totalWidth*r,height:t.totalHeight*r}).st({width:t.totalWidth,height:t.totalHeight}).st({position:"absolute"}).node().getContext("2d"),e.scale(r,r),e.translate(t.margin.left,t.margin.top)}else"d"==n&&(e=t.sel.append("div").st({position:"absolute",left:t.margin.left,top:t.margin.top,width:t.width,height:t.height}));return e}),t},QM=function(t){return{xAxisSel:t.svg.append("g").attr("class","x axis").attr("transform","translate(0,"+t.height+")").call(t.xAxis),yAxisSel:t.svg.append("g").attr("class","y axis").call(t.yAxis)}},JM=function(t,n,e){return Math.max(t,Math.min(e,n))},KM=function(n,e,r){function i(t){e.classed("tooltip-hidden",!1).html("").appendMany("div",r).html(function(n){return n(t)}),fh(this).classed("tooltipped",!0)}function o(n){if(e.size()){var r=t.event,i=r.clientX,o=r.clientY,u=e.node().getBoundingClientRect(),a=JM(20,i-u.width/2,window.innerWidth-u.width-20),c=innerHeight>o+20+u.height?o+20:o-u.height-20;e.style("left",a+"px").style("top",c+"px")}}function u(t){e.classed("tooltip-hidden",!0),lh(".tooltipped").classed("tooltipped",!1)}if(n.size()){e=e||fh(".tooltip"),n.on("mouseover.attachTooltip",i).on("mousemove.attachTooltip",o).on("mouseout.attachTooltip",u).on("click.attachTooltip",function(t){console.log(t)});var a=n.datum();r=r||wv(a).filter(function(t){return"object"!=typeof a[t]&&"array"!=a[t]}).map(function(t){return function(n){return t+": "+n[t]+""}})}},tT=function(){var t=Cu(),n=[].slice.call(arguments),e=n.slice(0,n.length-1),r=n[n.length-1];e.forEach(function(n){var e=n.split("?")[0].split(".").reverse()[0],i={csv:_x,tsv:mx,json:dx}[e];if(!i)return r(new Error("Invalid type",n));t.defer(i,n)}),t.awaitAll(r)},nT=function(t,n){return xv().key(n).entries(t).map(function(t){return t.values.key=t.key,t.values})},eT=function(t,n){return n?Math.round(t*(n=Math.pow(10,n)))/n:Math.round(t)},rT=function(t,n){for(var e,r,i,o,u,a,c=bf(n),s=-1,f=t.length-bf(t),l=t[f-1];++s d.isSick ? lcolors.sick : lcolors.well, - rectOpacity: d => 0, - threshold: .8, - fpAxisOpacity: 0, - sexAxisOpacity: 0, - brAxisOpacity: 0, - truthAxisOpacity: 0, - mlAxisOpacity: 0, - pos: 'all', - botAxisY: c.width + 80, - }, - - { - textFill: d => d.isSick ? colors.sick : colors.well, - truthAxisOpacity: 1, - }, - - { - rectOpacity: d => 1, - mlAxisOpacity: 1, - - }, - - { - rectFill: d => d.grade > gs.curSlide.threshold ? lcolors.sick : lcolors.well, - textStroke: d => d.grade > gs.curSlide.threshold == d.isSick ? 
0 : .6, - fpAxisOpacity: 1, - }, - - { - threshold: .61, - animateThreshold: true, - }, - - { - threshold: .89, - animateThreshold: true, - }, - - { - pos: 'sex', - fpAxisOpacity: 0, - sexAxisOpacity: 1, - threshold: .7508, - animateThreshold: false, - botAxisY: c.width + 150, - - }, - - { - brAxisOpacity: 1, - sexAxisOpacity: 0, - - }, - - { - - } - - ] - - var keys = [] - slides.forEach(d => keys = keys.concat(d3.keys(d))) - _.uniq(keys).forEach(str => { - var prev = null - slides.forEach(d => { - if (typeof(d[str]) === 'undefined'){ - d[str] = prev - } - prev = d[str] - }) - }) - - return slides -} - - - -if (window.init) window.init() diff --git a/spaces/merve/uncertainty-calibration/source/_posts/2019-11-04-data-leak.md b/spaces/merve/uncertainty-calibration/source/_posts/2019-11-04-data-leak.md deleted file mode 100644 index 51d319aa89abc8783bed834081df6553af17a08d..0000000000000000000000000000000000000000 --- a/spaces/merve/uncertainty-calibration/source/_posts/2019-11-04-data-leak.md +++ /dev/null @@ -1,102 +0,0 @@ ---- -template: post.html -title: Why Some Models Leak Data -shorttitle: Why Some Models Leak Data -summary: Machine learning models use large amounts of data, some of which can be sensitive. If they're not trained correctly, sometimes that data is inadvertently revealed. -socialsummary: Machine learning models use large amounts of data, some of which can be sensitive. If they're not trained correctly, sometimes that data is inadvertently revealed. -permalink: /data-leak/ -shareimg: https://pair.withgoogle.com/explorables/images/model-inversion.png -date: 2020-12-01 ---- - - - - - -Let's take a look at a game of soccer. - - -
- -

- -Using the position of each player as training data, we can teach a model to predict which team would get to a loose ball first at each spot on the field, indicated by the color of the pixel. - -
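To make that concrete, here is a minimal sketch of one way such a model could work. It is an illustration only, not the model used in the piece, and the player coordinates and grid size are invented: treat every player as a training point and color each cell of the field by whichever player is closest, assuming everyone runs at the same speed.

```js
// Hypothetical player positions (coordinates and teams are made up).
const players = [
  {x: 12, y: 30, team: 'red'},
  {x: 40, y: 55, team: 'red'},
  {x: 52, y: 18, team: 'red'},
  {x: 70, y: 22, team: 'yellow'},
  {x: 84, y: 60, team: 'yellow'},
  {x: 95, y: 34, team: 'yellow'},
]

// Which team reaches a loose ball at (x, y) first? If everyone runs at the
// same speed, it is simply the team of the closest player.
function predictTeam(x, y) {
  let closest = null
  let closestDist = Infinity
  players.forEach(p => {
    const d = Math.hypot(p.x - x, p.y - y)
    if (d < closestDist) {
      closestDist = d
      closest = p.team
    }
  })
  return closest
}

// Color every cell of a coarse grid over the field: this is the pixel map.
const grid = []
for (let x = 0; x < 100; x += 2) {
  for (let y = 0; y < 60; y += 2) {
    grid.push({x, y, team: predictTeam(x, y)})
  }
}
```

Dragging a player amounts to editing one training point and recomputing the grid, which is why a map like this can update in real time.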
- -It updates in real-time—drag the players around to see the model change. - -

- -This model reveals quite a lot about the data used to train it. Even without the actual positions of the players, it is simple to see where players might be. - -
- -Click this button to move the players - -Take a guess at where the yellow team's goalie is now, then check their actual position. How close were you? - -

Sensitive Salary Data

- -In this specific soccer example, being able to make educated guesses about the data a model was trained on doesn't matter too much. But what if our data points represent something more sensitive? - -
- -We’ve fed the same numbers into the model, but now they represent salary data instead of soccer data. Building models like this is a common technique to [detect discrimination](https://www.eeoc.gov/laws/guidance/section-10-compensation-discrimination#c.%20Using%20More%20Sophisticated%20Statistical%20Techniques%20to%20Evaluate). A union might test if a company is paying men and women fairly by building a salary model that takes into account years of experience. They can then [publish](https://postguild.org/2019-pay-study/) the results to bring pressure for change or show improvement. - -In this hypothetical salary study, even though no individual salaries have been published, it is easy to infer the salary of the newest male hire. And carefully cross referencing public start dates on LinkedIn with the model could almost perfectly reveal everyone's salary. - -Because the model here is so flexible (there are hundreds of square patches with independently calculated predictions) and we have so few data points (just 22 people), it is able to "memorize" individual data points. If we're looking to share information about patterns in salaries, a simpler and more constrained model like a linear regression might be more appropriate. - -
- -By boiling down the 22 data points to two lines we're able to see broad trends without being able to guess anyone's salary. - -
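Here is a small, self-contained numerical sketch of that trade-off (not part of the original interactive; the 22 salary figures below are synthetic stand-ins): a model flexible enough to memorize its training points can echo an individual's exact salary back, while a single fitted line only reflects the overall trend.

```python
import numpy as np

rng = np.random.default_rng(1)

# 22 synthetic (years of experience, salary) records, loosely mirroring the example above.
years = rng.uniform(0, 10, size=22)
salary = 50 + 4 * years + rng.normal(0, 3, size=22)

# Flexible "model": answer with the salary of the closest training record (pure memorization).
def nearest_neighbor(x):
    return salary[np.argmin(np.abs(years - x))]

# Constrained model: one straight line fit to all 22 points.
slope, intercept = np.polyfit(years, salary, deg=1)

x = years[0]  # query the model at an employee's exact tenure
print("actual salary:    ", round(salary[0], 1))
print("memorizing model: ", round(nearest_neighbor(x), 1))    # reproduces the record exactly
print("linear model:     ", round(slope * x + intercept, 1))  # only reflects the trend
```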

Subtle Leaks

- -Removing complexity isn't a complete solution though. Depending on how the data is distributed, even a simple line can inadvertently reveal information. - -
- -In this company, almost all the men started several years ago, so the slope of the line is especially sensitive to the salary of the new hire. - -Is their salary higher or lower than average? Based on the line, we can make a pretty good guess. - -Notice that changing the salary of someone with a more common tenure barely moves the line. In general, more typical data points are less susceptible to being leaked. This sets up a tricky trade off: we want models to learn about edge cases while being sure they haven't memorized individual data points. - -
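A quick way to see this numerically (again a sketch with made-up numbers, not the data behind the chart): the fitted slope moves far more when we perturb the lone new hire than when we perturb someone with a typical tenure.

```python
import numpy as np

# Synthetic company: one new hire (0.5 years) among long-tenured employees.
tenure = np.array([0.5, 7.0, 7.5, 8.0, 8.5, 9.0, 9.5, 10.0])
salary = np.array([55, 80, 82, 85, 84, 88, 90, 92], dtype=float)

def slope(x, y):
    return np.polyfit(x, y, deg=1)[0]

base = slope(tenure, salary)

new_hire = salary.copy(); new_hire[0] += 20   # bump the outlier's salary
typical  = salary.copy(); typical[4] += 20    # bump a typical employee's salary

print("slope shift from the new hire: ", round(slope(tenure, new_hire) - base, 3))
print("slope shift from a typical row:", round(slope(tenure, typical) - base, 3))
```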

Real World Data

- -Models of real world data are often quite complex—this can improve accuracy, but makes them [more susceptible](https://blog.tensorflow.org/2020/06/introducing-new-privacy-testing-library.html) to unexpectedly leaking information. Medical models have inadvertently revealed [patients' genetic markers](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4827719/). Language models have memorized [credit card numbers](https://bair.berkeley.edu/blog/2019/08/13/memorization/). Faces can even be [reconstructed](https://rist.tech.cornell.edu/papers/mi-ccs.pdf) from image models: - -
- -[Fredrikson et al](https://rist.tech.cornell.edu/papers/mi-ccs.pdf) were able to extract the image on the left by repeatedly querying a facial recognition API. It isn't an exact match with the individual's actual face (on the right), but this attack only required access to the model's predictions, not its internal state. - -

Protecting Private Data

- -Training models with [differential privacy](http://www.cleverhans.io/privacy/2018/04/29/privacy-and-machine-learning.html) stops the training data from leaking by limiting how much the model can learn from any one data point. Differentially private models are still at the cutting edge of research, but they're being packaged into [machine learning frameworks](https://blog.tensorflow.org/2019/03/introducing-tensorflow-privacy-learning.html), making them much easier to use. When it isn't possible to train differentially private models, there are also tools that can [measure](https://github.com/tensorflow/privacy/tree/master/tensorflow_privacy/privacy/membership_inference_attack) how much data is the model memorizing. Also, standard techniques such as aggregation and limiting how much data a single source can contribute are still useful and usually improve the privacy of the model. - -As we saw in the [Collecting Sensitive Information Explorable](https://pair.withgoogle.com/explorables/anonymization/), adding enough random noise with differential privacy to protect outliers like the new hire can increase the amount of data required to reach a good level of accuracy. Depending on the application, the constraints of differential privacy could even improve the model—for instance, not learning too much from one data point can help prevent [overfitting](https://openreview.net/forum?id=r1xyx3R9tQ). - -Given the increasing utility of machine learning models for many real-world tasks, it’s clear that more and more systems, devices and apps will be powered, to some extent, by machine learning in the future. While [standard privacy best practices](https://owasp.org/www-project-top-ten/) developed for non-machine learning systems still apply to those with machine learning, the introduction of machine learning introduces new challenges, including the ability of the model to memorize some specific training data points and thus be vulnerable to privacy attacks that seek to extract this data from the model. Fortunately, techniques such as differential privacy exist that can be helpful in overcoming this specific challenge. Just as with other areas of [Responsible AI](https://ai.google/responsibilities/responsible-ai-practices/), it’s important to be aware of these new challenges that come along with machine learning and what steps can be taken to mitigate them. - - -
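To make the differential-privacy intuition above a bit more concrete, here is a minimal sketch (not taken from any of the linked libraries; the salary figures, clipping bounds and epsilon are made up) of the Laplace mechanism applied to a released average — each person's influence on the output is bounded, and calibrated noise hides whatever remains.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical salaries (in thousands), including one outlier like the new hire.
salaries = np.array([52, 55, 58, 61, 63, 65, 68, 70, 120], dtype=float)

def dp_mean(values, lower, upper, epsilon):
    """Release a differentially private mean via the Laplace mechanism."""
    clipped = np.clip(values, lower, upper)           # bound any one person's influence
    sensitivity = (upper - lower) / len(clipped)      # max change one record can cause
    noise = rng.laplace(scale=sensitivity / epsilon)  # noise calibrated to that sensitivity
    return clipped.mean() + noise

print("exact mean:          ", round(salaries.mean(), 2))
print("private mean (eps=1):", round(dp_mean(salaries, 40, 130, epsilon=1.0), 2))
```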

Credits

- -Adam Pearce and Ellen Jiang // December 2020 - -Thanks to Andreas Terzis, Ben Wedin, Carey Radebaugh, David Weinberger, Emily Reif, Fernanda Viégas, Hal Abelson, Kristen Olson, Martin Wattenberg, Michael Terry, Miguel Guevara, Thomas Steinke, Yannick Assogba, Zan Armstrong and our other colleagues at Google for their help with this piece. - - -

More Explorables

- -

- - - - - - - - - \ No newline at end of file diff --git a/spaces/mhmdrza/stabilityai-stable-diffusion-2/README.md b/spaces/mhmdrza/stabilityai-stable-diffusion-2/README.md deleted file mode 100644 index 8c262855512b301bd2e6aeb5cbf2c0c2090c6cb6..0000000000000000000000000000000000000000 --- a/spaces/mhmdrza/stabilityai-stable-diffusion-2/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Stabilityai Stable Diffusion 2 -emoji: 🐨 -colorFrom: indigo -colorTo: gray -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/mishtert/tracer/cid.py b/spaces/mishtert/tracer/cid.py deleted file mode 100644 index 2b8e840dc683e677f59b27f5a7e7343188a54e25..0000000000000000000000000000000000000000 --- a/spaces/mishtert/tracer/cid.py +++ /dev/null @@ -1,21 +0,0 @@ -class CaseInsensitiveDict(dict): - def __init__(self, *args, **kwargs): - self._keystore = {} - d = dict(*args, **kwargs) - for k in list(d.keys()): - self._keystore[self._get_lower(k)] = k - return super(CaseInsensitiveDict,self).__init__(*args,**kwargs) - - def __setitem__(self, k, v): - self._keystore[self._get_lower(k)] = k - return super(CaseInsensitiveDict, self).__setitem__(k, v) - - def __getitem__(self, k): - return super(CaseInsensitiveDict, - self).__getitem__(self._keystore[self._get_lower(k)]) - @staticmethod - def _get_lower(k): - if isinstance(k,str): - return k.lower() - else: - return k \ No newline at end of file diff --git a/spaces/mmecheri/Rakuten_Streamlit/page_descriptions/data_exp_viz_txt.md b/spaces/mmecheri/Rakuten_Streamlit/page_descriptions/data_exp_viz_txt.md deleted file mode 100644 index b067354dab8e949069798df95b2cb16621e7fbac..0000000000000000000000000000000000000000 --- a/spaces/mmecheri/Rakuten_Streamlit/page_descriptions/data_exp_viz_txt.md +++ /dev/null @@ -1,36 +0,0 @@ -#### Exploration des données - -##### Texte ->>- Les données Texte sont en 3 langues(Français, Anglais et Allemand), présence de caractères spéciaux, de balises HTML,... ->>- Tous les articles ont un champ « designation » renseigné et ne comprend pas de valeurs manquantes ->>- Présence de valeurs manquantes dans la colonne «description» (35% de l'ensemble de la clonne): ----Insersetion--- - -##### Images - ->>- Les images sont en .JPG ->>- Les images sont en couleur et toutes de dimensions(500 x 500) pixels ->>- Certaine images présentent un fond blanc important ----Insersetion--- ->>- Une forte ressemblance entre certaines classes et parfois deux classes différentes peuvent contenir le même type d’acticles ->>>Exemples: 1180 et 1140, 50 et 60 , 1280 et 1302, 10 et 2280 - ----Insersetion--- - -#### Quelques visualisations ----Insersetion--- - -La représentation ci-dessous montre la répartition des différents produits par classes: - ----Insersetion--- -La proportion des produits par catégorie est déséquilibrée. Les classes minoritaires « 1180, 1940 , 1301, 2220, 60» représentent entre 7 et 8 % du nombre d’échantillons dans la classe majoritaire « 2583 ». Les autres classes présentent des écarts moins importants. Dans l’ensemble, -le jeu de données présente un déséquilibre plutôt modéré ! 
----Insersetion--- -Ci-dessous nous affichons quelques visualisations de type WordCloud et quelques échantillons d’images par classe de produit : - - ----Insersetion--- -L’exploration visuelle et l’utilisation des WordCould par classe nous ont permis d’identifier les catégories de produits suivantes : ----Insersetion--- - - diff --git a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/path.sh b/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/path.sh deleted file mode 100644 index 1a6fb5f891b55d9fd978cfe54565f112f7eedce7..0000000000000000000000000000000000000000 --- a/spaces/mshukor/UnIVAL/fairseq/examples/wav2vec/unsupervised/kaldi_self_train/st/path.sh +++ /dev/null @@ -1,5 +0,0 @@ -export KALDI_ROOT=`pwd`/../../.. -export PATH=$PWD/utils/:$KALDI_ROOT/tools/openfst/bin:$PWD:$PATH -[ ! -f $KALDI_ROOT/tools/config/common_path.sh ] && echo >&2 "The standard file $KALDI_ROOT/tools/config/common_path.sh is not present -> Exit!" && exit 1 -. $KALDI_ROOT/tools/config/common_path.sh -export LC_ALL=C diff --git a/spaces/mygyasir/Real-Time-Voice-Cloning/encoder/data_objects/utterance.py b/spaces/mygyasir/Real-Time-Voice-Cloning/encoder/data_objects/utterance.py deleted file mode 100644 index 0768c3420f422a7464f305b4c1fb6752c57ceda7..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/Real-Time-Voice-Cloning/encoder/data_objects/utterance.py +++ /dev/null @@ -1,26 +0,0 @@ -import numpy as np - - -class Utterance: - def __init__(self, frames_fpath, wave_fpath): - self.frames_fpath = frames_fpath - self.wave_fpath = wave_fpath - - def get_frames(self): - return np.load(self.frames_fpath) - - def random_partial(self, n_frames): - """ - Crops the frames into a partial utterance of n_frames - - :param n_frames: The number of frames of the partial utterance - :return: the partial utterance frames and a tuple indicating the start and end of the - partial utterance in the complete utterance. 
- """ - frames = self.get_frames() - if frames.shape[0] == n_frames: - start = 0 - else: - start = np.random.randint(0, frames.shape[0] - n_frames) - end = start + n_frames - return frames[start:end], (start, end) \ No newline at end of file diff --git a/spaces/mygyasir/XL/README.md b/spaces/mygyasir/XL/README.md deleted file mode 100644 index 463dafd0589c5d5ba5f8dfb5ff38e24a7d7140d3..0000000000000000000000000000000000000000 --- a/spaces/mygyasir/XL/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: XL -emoji: 🐢 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/paper_runfiles/generate_test_celeba-hq.sh b/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/paper_runfiles/generate_test_celeba-hq.sh deleted file mode 100644 index 7e04bba426f1c6c0528d88a0e28a5da0dde7ca3e..0000000000000000000000000000000000000000 --- a/spaces/myrad01/Inpaint-Anything/third_party/lama/bin/paper_runfiles/generate_test_celeba-hq.sh +++ /dev/null @@ -1,17 +0,0 @@ -#!/usr/bin/env bash - -# paths to data are valid for mml-ws01 -OUT_DIR="/media/inpainting/paper_data/CelebA-HQ_val_test" - -source "$(dirname $0)/env.sh" - -for datadir in "val" "test" -do - for conf in random_thin_256 random_medium_256 random_thick_256 random_thin_512 random_medium_512 random_thick_512 - do - "$BINDIR/gen_mask_dataset_hydra.py" -cn $conf datadir=$datadir location=mml-ws01-celeba-hq \ - location.out_dir=$OUT_DIR cropping.out_square_crop=False - - "$BINDIR/calc_dataset_stats.py" --samples-n 20 "$OUT_DIR/$datadir/$conf" "$OUT_DIR/$datadir/${conf}_stats" - done -done diff --git a/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/loggers/wandb/README.md b/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/loggers/wandb/README.md deleted file mode 100644 index 63d999859e6d97684f6ec4ca46345d2e077c124d..0000000000000000000000000000000000000000 --- a/spaces/nakamura196/yolov5-kunshujo/ultralytics/yolov5/utils/loggers/wandb/README.md +++ /dev/null @@ -1,152 +0,0 @@ -📚 This guide explains how to use **Weights & Biases** (W&B) with YOLOv5 🚀. UPDATED 29 September 2021. -* [About Weights & Biases](#about-weights-&-biases) -* [First-Time Setup](#first-time-setup) -* [Viewing runs](#viewing-runs) -* [Disabling wandb](#disabling-wandb) -* [Advanced Usage: Dataset Versioning and Evaluation](#advanced-usage) -* [Reports: Share your work with the world!](#reports) - -## About Weights & Biases -Think of [W&B](https://wandb.ai/site?utm_campaign=repo_yolo_wandbtutorial) like GitHub for machine learning models. With a few lines of code, save everything you need to debug, compare and reproduce your models — architecture, hyperparameters, git commits, model weights, GPU usage, and even datasets and predictions. - -Used by top researchers including teams at OpenAI, Lyft, Github, and MILA, W&B is part of the new standard of best practices for machine learning. 
How W&B can help you optimize your machine learning workflows: - - * [Debug](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Free-2) model performance in real time - * [GPU usage](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#System-4) visualized automatically - * [Custom charts](https://wandb.ai/wandb/customizable-charts/reports/Powerful-Custom-Charts-To-Debug-Model-Peformance--VmlldzoyNzY4ODI) for powerful, extensible visualization - * [Share insights](https://wandb.ai/wandb/getting-started/reports/Visualize-Debug-Machine-Learning-Models--VmlldzoyNzY5MDk#Share-8) interactively with collaborators - * [Optimize hyperparameters](https://docs.wandb.com/sweeps) efficiently - * [Track](https://docs.wandb.com/artifacts) datasets, pipelines, and production models - -## First-Time Setup -
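Before the setup steps below, here is a minimal, standalone sketch of what W&B experiment logging looks like in plain Python — the project name and metric values are placeholders, and YOLOv5's train.py makes the equivalent calls for you automatically:

```python
import wandb

# Standalone illustration only; YOLOv5 wires this up internally when you run train.py.
run = wandb.init(project="YOLOv5", name="my-first-run")  # asks for your API key on first use

for epoch in range(3):
    # Placeholder values standing in for real training metrics.
    wandb.log({"epoch": epoch, "metrics/mAP_0.5": 0.30 + 0.05 * epoch})

run.finish()
```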
- Toggle Details -When you first train, W&B will prompt you to create a new account and will generate an **API key** for you. If you are an existing user you can retrieve your key from https://wandb.ai/authorize. This key is used to tell W&B where to log your data. You only need to supply your key once, and then it is remembered on the same device. - -W&B will create a cloud **project** (default is 'YOLOv5') for your training runs, and each new training run will be provided a unique run **name** within that project as project/name. You can also manually set your project and run name as: - - ```shell - $ python train.py --project ... --name ... - ``` - -YOLOv5 notebook example: Open In Colab Open In Kaggle -Screen Shot 2021-09-29 at 10 23 13 PM - - -
- -## Viewing Runs -
- Toggle Details -Run information streams from your environment to the W&B cloud console as you train. This allows you to monitor and even cancel runs in realtime . All important information is logged: - - * Training & Validation losses - * Metrics: Precision, Recall, mAP@0.5, mAP@0.5:0.95 - * Learning Rate over time - * A bounding box debugging panel, showing the training progress over time - * GPU: Type, **GPU Utilization**, power, temperature, **CUDA memory usage** - * System: Disk I/0, CPU utilization, RAM memory usage - * Your trained model as W&B Artifact - * Environment: OS and Python types, Git repository and state, **training command** - -

Weights & Biases dashboard

-
- - ## Disabling wandb -* training after running `wandb disabled` inside that directory creates no wandb run -![Screenshot (84)](https://user-images.githubusercontent.com/15766192/143441777-c780bdd7-7cb4-4404-9559-b4316030a985.png) - -* To enable wandb again, run `wandb online` -![Screenshot (85)](https://user-images.githubusercontent.com/15766192/143441866-7191b2cb-22f0-4e0f-ae64-2dc47dc13078.png) - -## Advanced Usage -You can leverage W&B artifacts and Tables integration to easily visualize and manage your datasets, models and training evaluations. Here are some quick examples to get you started. -
-

1: Train and Log Evaluation simultaneously

- This is an extension of the previous section, but it'll also training after uploading the dataset. This also evaluation Table - Evaluation table compares your predictions and ground truths across the validation set for each epoch. It uses the references to the already uploaded datasets, - so no images will be uploaded from your system more than once. -
- Usage - Code $ python train.py --upload_data val - -![Screenshot from 2021-11-21 17-40-06](https://user-images.githubusercontent.com/15766192/142761183-c1696d8c-3f38-45ab-991a-bb0dfd98ae7d.png) -
- -

2. Visualize and Version Datasets

- Log, visualize, dynamically query, and understand your data with W&B Tables. You can use the following command to log your dataset as a W&B Table. This will generate a {dataset}_wandb.yaml file which can be used to train from dataset artifact. -
- Usage - Code $ python utils/logger/wandb/log_dataset.py --project ... --name ... --data .. - - ![Screenshot (64)](https://user-images.githubusercontent.com/15766192/128486078-d8433890-98a3-4d12-8986-b6c0e3fc64b9.png) -
- -

3: Train using dataset artifact

- When you upload a dataset as described in the first section, you get a new config file with an added `_wandb` to its name. This file contains the information that - can be used to train a model directly from the dataset artifact. This also logs evaluation -
- Usage - Code $ python train.py --data {data}_wandb.yaml - -![Screenshot (72)](https://user-images.githubusercontent.com/15766192/128979739-4cf63aeb-a76f-483f-8861-1c0100b938a5.png) -
- -

4: Save model checkpoints as artifacts

- To enable saving and versioning checkpoints of your experiment, pass `--save_period n` with the base cammand, where `n` represents checkpoint interval. - You can also log both the dataset and model checkpoints simultaneously. If not passed, only the final model will be logged - -
- Usage - Code $ python train.py --save_period 1 - -![Screenshot (68)](https://user-images.githubusercontent.com/15766192/128726138-ec6c1f60-639d-437d-b4ee-3acd9de47ef3.png) -
- -
- -

5: Resume runs from checkpoint artifacts.

-Any run can be resumed using artifacts if the --resume argument starts with wandb-artifact:// prefix followed by the run path, i.e, wandb-artifact://username/project/runid . This doesn't require the model checkpoint to be present on the local system. - -
- Usage - Code $ python train.py --resume wandb-artifact://{run_path} - -![Screenshot (70)](https://user-images.githubusercontent.com/15766192/128728988-4e84b355-6c87-41ae-a591-14aecf45343e.png) -
- -

6: Resume runs from dataset artifact & checkpoint artifacts.

- Local dataset or model checkpoints are not required. This can be used to resume runs directly on a different device - The syntax is same as the previous section, but you'll need to lof both the dataset and model checkpoints as artifacts, i.e, set bot --upload_dataset or - train from _wandb.yaml file and set --save_period - -
- Usage - Code $ python train.py --resume wandb-artifact://{run_path} - -![Screenshot (70)](https://user-images.githubusercontent.com/15766192/128728988-4e84b355-6c87-41ae-a591-14aecf45343e.png) -
- - - -

Reports

-W&B Reports can be created from your saved runs for sharing online. Once a report is created you will receive a link you can use to publically share your results. Here is an example report created from the COCO128 tutorial trainings of all four YOLOv5 models ([link](https://wandb.ai/glenn-jocher/yolov5_tutorial/reports/YOLOv5-COCO128-Tutorial-Results--VmlldzozMDI5OTY)). - -Weights & Biases Reports - - -## Environments - -YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled): - -- **Google Colab and Kaggle** notebooks with free GPU: Open In Colab Open In Kaggle -- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart) -- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart) -- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart) Docker Pulls - - -## Status - -![CI CPU testing](https://github.com/ultralytics/yolov5/workflows/CI%20CPU%20testing/badge.svg) - -If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), validation ([val.py](https://github.com/ultralytics/yolov5/blob/master/val.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit. 
diff --git a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/benchmarks.py b/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/benchmarks.py deleted file mode 100644 index 446248c03f685bf5dd7a1e4fdc2677541030211f..0000000000000000000000000000000000000000 --- a/spaces/nakamura196/yolov5-ndl-layout/ultralytics/yolov5/utils/benchmarks.py +++ /dev/null @@ -1,104 +0,0 @@ -# YOLOv5 🚀 by Ultralytics, GPL-3.0 license -""" -Run YOLOv5 benchmarks on all supported export formats - -Format | `export.py --include` | Model ---- | --- | --- -PyTorch | - | yolov5s.pt -TorchScript | `torchscript` | yolov5s.torchscript -ONNX | `onnx` | yolov5s.onnx -OpenVINO | `openvino` | yolov5s_openvino_model/ -TensorRT | `engine` | yolov5s.engine -CoreML | `coreml` | yolov5s.mlmodel -TensorFlow SavedModel | `saved_model` | yolov5s_saved_model/ -TensorFlow GraphDef | `pb` | yolov5s.pb -TensorFlow Lite | `tflite` | yolov5s.tflite -TensorFlow Edge TPU | `edgetpu` | yolov5s_edgetpu.tflite -TensorFlow.js | `tfjs` | yolov5s_web_model/ - -Requirements: - $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime openvino-dev tensorflow-cpu # CPU - $ pip install -r requirements.txt coremltools onnx onnx-simplifier onnxruntime-gpu openvino-dev tensorflow # GPU - $ pip install -U nvidia-tensorrt --index-url https://pypi.ngc.nvidia.com # TensorRT - -Usage: - $ python utils/benchmarks.py --weights yolov5s.pt --img 640 -""" - -import argparse -import sys -import time -from pathlib import Path - -import pandas as pd - -FILE = Path(__file__).resolve() -ROOT = FILE.parents[1] # YOLOv5 root directory -if str(ROOT) not in sys.path: - sys.path.append(str(ROOT)) # add ROOT to PATH -# ROOT = ROOT.relative_to(Path.cwd()) # relative - -import export -import val -from utils import notebook_init -from utils.general import LOGGER, print_args -from utils.torch_utils import select_device - - -def run(weights=ROOT / 'yolov5s.pt', # weights path - imgsz=640, # inference size (pixels) - batch_size=1, # batch size - data=ROOT / 'data/coco128.yaml', # dataset.yaml path - device='', # cuda device, i.e. 
0 or 0,1,2,3 or cpu - half=False, # use FP16 half-precision inference - ): - y, t = [], time.time() - formats = export.export_formats() - device = select_device(device) - for i, (name, f, suffix, gpu) in formats.iterrows(): # index, (name, file, suffix, gpu-capable) - try: - if device.type != 'cpu': - assert gpu, f'{name} inference not supported on GPU' - if f == '-': - w = weights # PyTorch format - else: - w = export.run(weights=weights, imgsz=[imgsz], include=[f], device=device, half=half)[-1] # all others - assert suffix in str(w), 'export failed' - result = val.run(data, w, batch_size, imgsz, plots=False, device=device, task='benchmark', half=half) - metrics = result[0] # metrics (mp, mr, map50, map, *losses(box, obj, cls)) - speeds = result[2] # times (preprocess, inference, postprocess) - y.append([name, round(metrics[3], 4), round(speeds[1], 2)]) # mAP, t_inference - except Exception as e: - LOGGER.warning(f'WARNING: Benchmark failure for {name}: {e}') - y.append([name, None, None]) # mAP, t_inference - - # Print results - LOGGER.info('\n') - parse_opt() - notebook_init() # print system info - py = pd.DataFrame(y, columns=['Format', 'mAP@0.5:0.95', 'Inference time (ms)']) - LOGGER.info(f'\nBenchmarks complete ({time.time() - t:.2f}s)') - LOGGER.info(str(py)) - return py - - -def parse_opt(): - parser = argparse.ArgumentParser() - parser.add_argument('--weights', type=str, default=ROOT / 'yolov5s.pt', help='weights path') - parser.add_argument('--imgsz', '--img', '--img-size', type=int, default=640, help='inference size (pixels)') - parser.add_argument('--batch-size', type=int, default=1, help='batch size') - parser.add_argument('--data', type=str, default=ROOT / 'data/coco128.yaml', help='dataset.yaml path') - parser.add_argument('--device', default='', help='cuda device, i.e. 0 or 0,1,2,3 or cpu') - parser.add_argument('--half', action='store_true', help='use FP16 half-precision inference') - opt = parser.parse_args() - print_args(FILE.stem, opt) - return opt - - -def main(opt): - run(**vars(opt)) - - -if __name__ == "__main__": - opt = parse_opt() - main(opt) diff --git a/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/data/datasets/object_dataset.py b/spaces/nasa-cisto-data-science-group/satvision-base-demo/pytorch-caney/pytorch_caney/data/datasets/object_dataset.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/nateraw/text-generation/Dockerfile b/spaces/nateraw/text-generation/Dockerfile deleted file mode 100644 index 55b67bca9bb7304b8ab0898ff9d4c82002dcd53b..0000000000000000000000000000000000000000 --- a/spaces/nateraw/text-generation/Dockerfile +++ /dev/null @@ -1,27 +0,0 @@ -# Use the official Python 3.9 image -FROM python:3.9 - -# Set the working directory to /code -WORKDIR /code - -# Copy the current directory contents into the container at /code -COPY ./requirements.txt /code/requirements.txt - -# Install requirements.txt -RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt - -# Set up a new user named "user" with user ID 1000 -RUN useradd -m -u 1000 user -# Switch to the "user" user -USER user -# Set home to the user's home directory -ENV HOME=/home/user \ - PATH=/home/user/.local/bin:$PATH - -# Set the working directory to the user's home directory -WORKDIR $HOME/app - -# Copy the current directory contents into the container at $HOME/app setting the owner to the user -COPY --chown=user . 
$HOME/app - -CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "7860"] \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/All Data V.9.8 InstallCD (use Ver.9.5 Crack) Setup Free.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/All Data V.9.8 InstallCD (use Ver.9.5 Crack) Setup Free.md deleted file mode 100644 index 6a3f4bb6ca429adeb506f0f831a7da0c2885f837..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/All Data V.9.8 InstallCD (use Ver.9.5 Crack) Setup Free.md +++ /dev/null @@ -1,127 +0,0 @@ -
-

All Data V.9.8 InstallCD (use Ver.9.5 Crack) Setup Free

-

If you are looking for a comprehensive and reliable software for repairing and maintaining your vehicle, you might have heard of All Data V.9.8 InstallCD. This is a powerful software that provides you with detailed information on diagnostics, repair procedures, wiring diagrams, service bulletins, and more for over 33,000 vehicles from 1982 to present.

-

All Data V.9.8 InstallCD (use Ver.9.5 Crack) Setup Free


DOWNLOAD ———>>> https://urlcod.com/2uIbKJ



-

However, All Data V.9.8 InstallCD is not a free software, and you need to buy a subscription or a license to use it legally. But what if you want to try it out for free without paying anything? Is there a way to do that? The answer is yes, but you need to use a crack file called Ver.9.5 Crack that can bypass the activation process and let you install All Data V.9.8 InstallCD for free.

-

In this article, we will show you how to download All Data V.9.8 InstallCD, how to use Ver.9.5 Crack to install it for free, how to use the software after installation, and some tips and warnings for using cracked software.

-

How to download All Data V.9.8 InstallCD

-

The first step is to download the contents of the All Data V.9.8 installation CD, which is a must for using the data discs that contain the vehicle information.

-

There are two ways to download All Data V.9.8 InstallCD: either by using a torrent file or by using a direct download link.

-

If you want to use a torrent file, you need to have a torrent client installed on your computer, such as uTorrent or BitTorrent.

-

Then, you can go to a torrent site like The Pirate Bay and search for "All Data v 9 8 installCD". You will find a torrent file that has been uploaded by caKraKer in 2009.

-

-

Download this torrent file and open it with your torrent client.

-

You will see that the file size is 152 MB and that there are three seeders and zero leechers.

-

Start downloading the file and wait until it is completed.

-

If you want to use a direct download link, you can go to a file-sharing site like MediaFire or Mega.nz and search for "All Data v 9 8 installCD". You will find a link that has been uploaded by someone else.

-

Click on this link and download the file to your computer.

-

You will see that the file size is also 152 MB and that it has the same contents as the torrent file.

-

After downloading the file, you need to extract it using a software like WinRAR or 7-Zip.

-

You will get a folder called "All Data v 9 8 installCD" that contains two files: "All Data v 9 8 installCD.iso" and "Ver.9.5 Crack.rar".

-

The first file is the image file of the installation CD, and the second file is the crack file that you will need later.

-

Before you proceed to the next step, you need to check if your computer meets the system requirements for running All Data V.9.8 InstallCD.

-

According to the official website of All Data, these are the minimum system requirements:

- - - -
Operating SystemProcessorMemoryHard Drive SpaceOptical Drive
Windows XP, Vista, 7, 8, or 10Pentium 4 or higher512 MB RAM or higher10 GB free space or higherDVD-ROM drive
-

If your computer meets these requirements, you can proceed to the next step. If not, you may need to upgrade your hardware or use a different computer.

-

How to use Ver.9.5 Crack to install All Data V.9.8 InstallCD for free

-

The next step is to use Ver.9.5 Crack to install All Data V.9.8 InstallCD for free on your computer.

-

This crack file is a modified version of Cubase LE AI Elements 8.exe, which is a music production software that is used by All Data V.9.8 InstallCD as a local software.

-

By using this crack file, you can trick All Data V.9.8 InstallCD into thinking that you have a valid license for Cubase and thus bypass the activation process.

-

To use Ver.9.5 Crack, you need to follow these steps:

-
    -
  1. Mount the All Data v 9 8 installCD.iso file using a software like Daemon Tools or Virtual CloneDrive.
  2. -
  3. Open the mounted drive and run the Start_Center.exe file.
  4. -
  5. Choose your preferred language and click OK.
  6. -
  7. You will see a window that shows the available local software and updates for All Data V.9.8 InstallCD.
  8. -
  9. Select Cubase LE AI Elements 8 and click Install.
  10. -
  11. Follow the instructions on the screen and complete the installation.
  12. -
  13. Do not launch Cubase after installation.
  14. -
  15. Extract the Ver.9.5 Crack.rar file using a software like WinRAR or 7-Zip.
  16. -
  17. You will get a folder called "Ver.9.5 Crack" that contains one file: "Cubase LE AI Elements 8.exe".
  18. -
  19. Copy this file and paste it into the installation folder of Cubase, which is usually located at C:\Program Files\Steinberg\Cubase LE AI Elements 8.
  20. -
  21. Replace the original Cubase LE AI Elements 8.exe file with the crack file.
  22. -
  23. To avoid updating Cubase to upcoming builds, go to C:\Windows\System32\drivers\etc and open the hosts file with Notepad.
  24. -
  25. Add these two lines at the end of the hosts file and save it:
  26. -
    127.0.0.1 www.steinberg.net 127.0.0.1 support.steinberg.de 
    -
  27. Congratulations! You have successfully installed All Data V.9.8 InstallCD for free using Ver.9.5 Crack.
  28. -
-

How to use All Data V.9.8 InstallCD after installation

-

The final step is to use All Data V.9.8 InstallCD after installation and enjoy its features and benefits.

-

To use All Data V.9.8 InstallCD, you need to have the data discs that contain the vehicle information for different makes and models.

-

You can either buy these data discs from All Data or download them from torrent sites or file-sharing sites.

-

The data discs are usually labeled as "AllData (year) (make) Disc #". For example, "AllData 2014 Ford Disc 1".

-

You need to mount these data discs using a software like Daemon Tools or Virtual CloneDrive, just like you did with the installation CD.

-

Then, you can launch All Data V.9.8 InstallCD from the Start menu or the desktop shortcut.

-

You will see a window that shows the All Data logo and the Cubase logo.

-

Click on the All Data logo to access the data discs and the repair features.

-

You will see a window that shows the vehicle selection menu.

-

You can choose the year, make, model, and engine of the vehicle that you want to repair or maintain.

-

Then, you can choose the category of information that you want to view, such as diagnostics, repair procedures, wiring diagrams, service bulletins, specifications, etc.

-

You will see a window that shows the detailed information for the selected vehicle and category.

-

You can use the navigation buttons, the search box, or the bookmarks to find the information that you need.

-

You can also print, save, or email the information as PDF files.

-

If you encounter any issues or errors while using All Data V.9.8 InstallCD, you can try these troubleshooting steps:

-
    -
  • Make sure that you have mounted the correct data disc for the selected vehicle.
  • -
  • Make sure that you have enough free space on your hard drive and enough RAM on your computer.
  • -
  • Make sure that you have disabled your antivirus software and firewall before running All Data V.9.8 InstallCD.
  • -
  • Make sure that you have not updated Cubase to any newer builds.
  • -
  • Make sure that you have not modified or deleted any files from the installation folder of All Data V.9.8 InstallCD or Cubase.
  • -
  • If none of these steps work, you may need to uninstall and reinstall All Data V.9.8 InstallCD and Ver.9.5 Crack.
  • -
-

Conclusion

-

All Data V.9.8 InstallCD is a useful software for anyone who wants to repair and maintain their vehicle with professional and accurate information.

-

However, it is not a free software, and you need to pay for a subscription or a license to use it legally.

-

If you want to try it out for free without paying anything, you can use Ver.9.5 Crack to install it for free on your computer.

-

This crack file is a modified version of Cubase LE AI Elements 8.exe, which is a music production software that is used by All Data V.9.8 InstallCD as a local software.

-

By using this crack file, you can bypass the activation process and enjoy all the features and benefits of All Data V.9.8 InstallCD.

-

However, using cracked software comes with some risks and drawbacks, such as viruses, malware, legal issues, performance issues, compatibility issues, etc.

-

Therefore, we recommend that you use All Data V.9.8 InstallCD with Ver.9.5 Crack only for educational purposes and not for commercial purposes.

-

If you find All Data V.9.8 InstallCD useful and valuable, we suggest that you buy a subscription or a license from All Data and support their work and development.

-

FAQs

-

What is the difference between All Data V.9.8 InstallCD and other versions of All Data?

-

All Data V.9.8 InstallCD is one of the older versions of All Data that was released in 2009.

-

It contains information for over 33,000 vehicles from 1982 to present.

-

Other versions of All Data include All Data Repair S3000 (released in 2011), All Data Repair S3500 (released in 2014), All Data Repair S4000 (released in 2017), and All Data Repair S4500 (released in 2020).

-

These versions contain more updated and comprehensive information for more vehicles and more features and functions than All Data V.9.8 InstallCD.

-

Is it legal to use Ver.9.5 Crack to install All Data V.9.8 InstallCD for free?

No, it is not legal to use Ver.9.5 Crack to install All Data V.9.8 InstallCD for free.

-

Ver.9.5 Crack is a pirated software that violates the terms and conditions of All Data and Cubase.

-

By using Ver.9.5 Crack, you are infringing the intellectual property rights and the copyright laws of the software developers and owners.

-

You may face legal consequences such as fines, lawsuits, or even jail time if you are caught using Ver.9.5 Crack to install All Data V.9.8 InstallCD for free.

-

Therefore, we advise you to use Ver.9.5 Crack only for educational purposes and not for commercial purposes.

-

We also advise you to buy a subscription or a license from All Data and Cubase if you want to use their software legally and ethically.

-

What are the risks of using cracked software?

-

Using cracked software such as Ver.9.5 Crack to install All Data V.9.8 InstallCD for free comes with some risks and drawbacks that you should be aware of.

-

Some of these risks and drawbacks are:

-
    -
  • Viruses and malware: Cracked software may contain malicious code that can harm your computer or steal your personal information.
  • -
  • Performance issues: Cracked software may not work properly or efficiently, causing errors, crashes, or slowdowns on your computer.
  • -
  • Compatibility issues: Cracked software may not be compatible with your operating system, your hardware, or other software on your computer, causing conflicts or incompatibilities.
  • -
  • Lack of support and updates: Cracked software may not receive any support or updates from the software developers or owners, leaving you with outdated or buggy software.
  • -
  • Lack of features and functions: Cracked software may not have all the features and functions that the original software has, limiting your options and capabilities.
  • -
  • Lack of security and privacy: Cracked software may expose your computer to hackers, cyberattacks, or data breaches, compromising your security and privacy.
  • -
-

Therefore, we recommend that you use cracked software with caution and at your own risk.

-

We also recommend that you use a reliable antivirus software and a firewall to protect your computer from viruses and malware.

-

How can I backup my data before using All Data V.9.8 InstallCD?

-

Before using All Data V.9.8 InstallCD, it is a good idea to backup your data in case something goes wrong or you lose your data.

-

You can backup your data using various methods, such as:

-
    -
  • Using an external hard drive or a USB flash drive: You can copy and paste your data from your computer to an external hard drive or a USB flash drive, which you can then store in a safe place or carry with you.
  • -
  • Using a cloud storage service: You can upload your data to a cloud storage service such as Google Drive, Dropbox, or OneDrive, which you can then access from any device or location with an internet connection.
  • -
  • Using a backup software: You can use a backup software such as EaseUS Todo Backup, Acronis True Image, or Macrium Reflect, which can automatically backup your data to a local or online destination at regular intervals or on demand.
  • -
-

By backing up your data, you can ensure that you have a copy of your data in case you need to restore it or recover it later.

-

What are some alternatives to All Data V.9.8 InstallCD?

-

If you are looking for some alternatives to All Data V.9.8 InstallCD, you may want to check out these other software that offer similar features and benefits:

-
    -
  • Mitchell 1 OnDemand5: This is another popular software for vehicle repair and maintenance that provides information on diagnostics, repair procedures, wiring diagrams, service bulletins, specifications, etc. for over 38,000 vehicles from 1983 to present.
  • -
  • AUTODATA: This is another comprehensive software for vehicle repair and maintenance that provides information on diagnostics, repair procedures, wiring diagrams, service bulletins, specifications, etc. for over 34,000 vehicles from 1982 to present.
  • -
  • HAYNES PRO: This is another reliable software for vehicle repair and maintenance that provides information on diagnostics, repair procedures, wiring diagrams, service bulletins, specifications, etc. for over 25,000 vehicles from 1982 to present.
  • -
-

b2dd77e56b
-
-
\ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/El Gran Libro Del Postre Peruano.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/El Gran Libro Del Postre Peruano.md deleted file mode 100644 index 27379540c69aef8047f327f883b3803e1e071677..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/El Gran Libro Del Postre Peruano.md +++ /dev/null @@ -1,17 +0,0 @@ - -Here is a possible title and article for the keyword "El Gran Libro Del Postre Peruano": - -

Reseña: El Gran Libro Del Postre Peruano de Sandra Plevisani

-

Si eres un amante de los postres y la gastronomía peruana, no puedes dejar de leer El Gran Libro Del Postre Peruano, el último libro de la reconocida repostera Sandra Plevisani. En este libro, Plevisani nos ofrece más de 200 recetas de postres tradicionales e innovadores, que reflejan la riqueza y diversidad de la cultura culinaria peruana.

-

El Gran Libro Del Postre Peruano


Download File ->>->>->> https://urlcod.com/2uIaVf



-

Pero El Gran Libro Del Postre Peruano no es solo un libro de recetas, sino también un viaje por la historia y el origen de cada uno de los dulces que nos presenta. Plevisani nos cuenta cómo el azúcar llegó a América y al Perú, cómo se fusionaron los ingredientes y técnicas de las diferentes culturas que conforman el país, y cómo se crearon y evolucionaron los postres más emblemáticos de la repostería peruana.

-

Además, el libro está ilustrado con hermosas fotografías que nos muestran el paso a paso de cada receta, así como el resultado final. También incluye consejos prácticos, trucos y secretos para lograr postres perfectos y deliciosos. Desde los clásicos como el suspiro a la limeña, el arroz con leche o el turrón de Doña Pepa, hasta las creaciones más originales como el cheesecake de lúcuma, el mousse de maracuyá o el brownie de quinua, El Gran Libro Del Postre Peruano es una obra imprescindible para los amantes del dulce y la cocina peruana.

Here are some more paragraphs for the article: - -

En El Gran Libro Del Postre Peruano, Plevisani nos invita a redescubrir y valorar el patrimonio dulcero de nuestro país, que es parte de nuestra identidad y memoria colectiva. Nos muestra cómo los postres peruanos son el resultado de una mezcla de influencias y sabores que han ido enriqueciéndose a lo largo de los siglos. Nos enseña cómo prepararlos con ingredientes locales y de calidad, respetando las recetas originales pero también adaptándolas a los gustos y necesidades actuales.

-

-

Así, El Gran Libro Del Postre Peruano se convierte en un homenaje a la repostería peruana, que es una de las más variadas y deliciosas del mundo. Un libro que no solo nos deleita con sus recetas, sino que también nos educa y nos emociona con sus historias. Un libro que nos hace sentir orgullosos de nuestra gastronomía y de nuestra cultura. Un libro que no puede faltar en la biblioteca de ningún peruano o de cualquier persona que quiera conocer y disfrutar de los postres peruanos.

Here are some more paragraphs for the article: - -

Para escribir El Gran Libro Del Postre Peruano, Plevisani realizó una exhaustiva investigación y recopilación de recetas, que le tomó más de cinco años. Visitó diferentes regiones del país, conversó con expertos y maestros reposteros, consultó libros y documentos antiguos, y probó y experimentó con diversos ingredientes y técnicas. Todo ello con el objetivo de ofrecernos un libro completo, riguroso y actualizado sobre la repostería peruana.

-

El Gran Libro Del Postre Peruano es, sin duda, el libro más ambicioso y esperado de Sandra Plevisani, quien es considerada una de las mejores reposteras del Perú y una referente en el ámbito gastronómico nacional e internacional. Con más de 25 años de trayectoria, Plevisani ha publicado varios libros exitosos, como Dulces Secretos, Dulces Tentaciones y Dulces Celebraciones. También ha conducido programas de televisión y radio, y ha participado en eventos y festivales culinarios alrededor del mundo.

7196e7f11a
-
-
\ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Film Impact Transitions Crack Macbook !!TOP!!.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Film Impact Transitions Crack Macbook !!TOP!!.md deleted file mode 100644 index 9c4e37d48b1aee05d5f06cf24ccd403ad3439d3c..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Film Impact Transitions Crack Macbook !!TOP!!.md +++ /dev/null @@ -1,35 +0,0 @@ - -

Film Impact Transitions Crack Macbook: What You Need to Know

If you are a video editor who uses Adobe Premiere Pro, you might have heard of Film Impact Transitions. These are professional video transitions that can enhance your videos with smooth and elegant effects. However, these transitions are not free, and you might be tempted to look for a cracked version online. But is it worth it? In this article, we will tell you everything you need to know about Film Impact Transitions Crack Macbook, including what they are, why you should use them, how to get them for free, what are the risks of using cracked software, and what are the alternatives to using cracked software.

What are Film Impact Transitions?

Film Impact Transitions are a collection of premium video transitions for Adobe Premiere Pro. They are designed by Film Impact, a company that specializes in creating high-quality video effects for professional editors. Film Impact Transitions offer a variety of effects, such as lights and blurs, distortions, transformers, animations, smart tools, and more. They are easy to use, customizable, and compatible with Windows and Mac operating systems.

-

Film Impact Transitions Crack Macbook


Download Ziphttps://urlcod.com/2uIavw



Film Impact Transitions are not included in Adobe Premiere Pro by default. You have to purchase them from the official website or other trusted sources. The price varies depending on the number of transitions you want to buy. For example, you can buy a single transition for $19, a bundle of 10 transitions for $99, or a bundle of 61 transitions for $299.

Why use Film Impact Transitions for Adobe Premiere Pro?

Film Impact Transitions can help you create stunning videos with minimal effort. Here are some of the benefits of using Film Impact Transitions for Adobe Premiere Pro:

  • They are professional and high-quality. Film Impact Transitions are made by experts who know how to create video effects that look realistic and appealing. They use advanced techniques such as motion blur, alpha channel handling, color grading, and more.
  • They are fast and seamless. Film Impact Transitions integrate smoothly into Adobe Premiere Pro, just like the built-in video transitions. You can drag and drop them to your timeline, adjust their duration and parameters, and preview them in real-time. They also have a high-performance render engine that reduces your export times.
  • They are creative and versatile. Film Impact Transitions offer a wide range of effects that can suit any style and mood. You can use them to reveal your next scene, add some drama or excitement, or simply spice up your videos. You can also apply them to texts, logos, photos, and other graphic elements.

How to get Film Impact Transitions for free?

If you want to try Film Impact Transitions before buying them, you can sign up for a 30-day free trial on their website. You will get access to all 61 transitions and 4 bonus transitions without any limitations or watermarks. You can also cancel your subscription anytime without any charges.

However, if you want to get Film Impact Transitions for free permanently, you might be tempted to look for a cracked version online. A cracked version is a modified version of the original software that bypasses its security features and allows you to use it without paying for it. There are many websites that claim to offer Film Impact Transitions Crack Macbook for free download. However, we strongly advise you not to use these websites, as they pose many risks and disadvantages. Here are some of the reasons why you should avoid using cracked software.

What are the risks of using cracked software?

Using cracked software is not only illegal, but also dangerous and unreliable. Here are some of the risks of using cracked software:

Legal issues

Cracked software is a form of piracy, which is a violation of intellectual property rights. By downloading and using cracked software, you are infringing on the rights of the software developers and distributors, who have invested time and money to create and sell their products. You could face legal consequences, such as fines, lawsuits, or even criminal charges, if you are caught using cracked software. You could also damage your reputation and credibility as a video editor, as you are using stolen tools to create your work.

Malware and viruses

Cracked software often comes with hidden malware and viruses that can harm your computer and your data. Malware and viruses can infect your system, steal your personal information, corrupt your files, slow down your performance, or even lock your computer and demand ransom. You could lose your valuable work, compromise your security, or expose yourself to identity theft or fraud. You could also spread the malware and viruses to other devices or networks that you connect to, putting others at risk as well.

-

Poor performance and compatibility

Cracked software often has bugs, errors, or missing features that affect its functionality and quality. Cracked software may not work properly, crash frequently, or produce poor results. You could waste your time and effort trying to fix the problems or redoing your work. Cracked software may also not be compatible with your operating system, your Adobe Premiere Pro version, or other software that you use. You could experience conflicts, glitches, or incompatibilities that prevent you from completing your projects.

What are the alternatives to using cracked software?

If you want to use Film Impact Transitions for Adobe Premiere Pro, but you don't want to pay for them or use cracked software, you have some alternatives that are legal, safe, and reliable. Here are some of the alternatives to using cracked software:

Use the official website or trusted sources

The best way to get Film Impact Transitions is to buy them from the official website or other trusted sources. This way, you can ensure that you get the original and updated version of the software, with all the features and benefits that it offers. You can also get technical support, customer service, and updates from the developers. You can also avoid any legal issues, malware and viruses, or performance and compatibility problems that come with cracked software.

Use free trials or discounts

If you want to try Film Impact Transitions before buying them, you can use the 30-day free trial that they offer on their website. You can also look for discounts or promotions that they might have from time to time. This way, you can save some money while still getting the full experience of using Film Impact Transitions.

Use open-source or freeware alternatives

If you don't want to spend any money at all on Film Impact Transitions, you can look for open-source or freeware alternatives that offer similar video transitions for Adobe Premiere Pro. Open-source or freeware alternatives are free software that anyone can use or modify without any restrictions. However, they may not have the same quality, variety, or support as Film Impact Transitions. Some examples of open-source or freeware alternatives are Blender VSE Transitions Addon, Shotcut Video Editor, and OpenShot Video Editor.

Conclusion

Film Impact Transitions are a great way to enhance your videos with professional video transitions for Adobe Premiere Pro. However, they are not free, and using cracked software to get them is not a good idea. Cracked software poses many risks and disadvantages, such as legal issues, malware and viruses, poor performance and compatibility. Instead of using cracked software, you should consider using the official website or trusted sources, free trials or discounts, or open-source or freeware alternatives. These alternatives are legal, safe, and reliable ways to get Film Impact Transitions or similar video transitions for Adobe Premiere Pro.

-

We hope this article has helped you understand what Film Impact Transitions Crack Macbook is and why you should avoid it. If you have any questions or comments about this topic, feel free to leave them below. Thank you for reading!

-

F

FAQs

-

Here are some of the frequently asked questions about Film Impact Transitions Crack Macbook:

| Question | Answer |
| --- | --- |
| What is Film Impact Transitions? | Film Impact Transitions are a collection of premium video transitions for Adobe Premiere Pro. They offer a variety of effects, such as lights and blurs, distortions, transformers, animations, smart tools, and more. |
| What is Film Impact Transitions Crack Macbook? | Film Impact Transitions Crack Macbook is a modified version of the original software that bypasses its security features and allows you to use it without paying for it. It is a form of piracy and is illegal to use. |
| What are the risks of using Film Impact Transitions Crack Macbook? | The main risks are legal issues, malware and viruses, and poor performance and compatibility. You could face fines, lawsuits, or criminal charges; lose your data or compromise your security; or experience bugs, errors, or conflicts. |
| What are the alternatives to using Film Impact Transitions Crack Macbook? | Use the official website or trusted sources, use free trials or discounts, or use open-source or freeware alternatives. These are legal, safe, and reliable ways to get Film Impact Transitions or similar video transitions for Adobe Premiere Pro. |
| How can I contact Film Impact for support or feedback? | You can contact Film Impact by visiting their website and filling out their contact form, or follow them on social media such as Facebook, Twitter, YouTube, and Instagram. |

-
-
\ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Jim Brickman Greatest Hits (2004) [APE].rarl.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Jim Brickman Greatest Hits (2004) [APE].rarl.md deleted file mode 100644 index 7d54d3500cce41a3a76172ad00a4cf3edc575033..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Jim Brickman Greatest Hits (2004) [APE].rarl.md +++ /dev/null @@ -1,15 +0,0 @@ -
-

Jim Brickman Greatest Hits (2004) [APE].rarl: A Review

-

Jim Brickman is a popular American pianist and composer who has released several albums of soothing and romantic instrumental music. His greatest hits album, released in 2004, features 13 tracks of his most beloved songs, performed by him and some guest vocalists. The album is available in APE format, which is a lossless audio compression format that preserves the original sound quality of the recordings. The file name ends with .rarl, which indicates that it is a RAR archive that has been split into multiple parts for easier downloading.
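For readers who end up with a legitimately obtained copy in this split-archive form, unpacking it and converting the APE tracks to a more widely supported lossless format is easy to script. The following is a minimal Python sketch rather than an official procedure: it assumes ffmpeg is installed along with the patool and pydub packages, and the file names are placeholders.

```python
# Minimal sketch: unpack a split RAR archive and convert .ape tracks to FLAC.
# Assumes ffmpeg (which can decode Monkey's Audio) is on PATH and that
# `pip install patool pydub` has been run. File names are placeholders.
from pathlib import Path

import patoolib                 # thin wrapper around command-line unrar/7z
from pydub import AudioSegment  # wraps ffmpeg for decoding/encoding

archive = "Jim Brickman - Greatest Hits (2004).part1.rar"  # first volume of the split archive
out_dir = Path("extracted")
out_dir.mkdir(exist_ok=True)

# Given the first volume, the underlying unrar tool picks up the remaining parts.
patoolib.extract_archive(archive, outdir=str(out_dir))

# Re-encode each lossless .ape file as FLAC (also lossless, but more widely supported).
for ape_file in out_dir.rglob("*.ape"):
    audio = AudioSegment.from_file(str(ape_file))
    audio.export(str(ape_file.with_suffix(".flac")), format="flac")
    print(f"converted {ape_file.name}")
```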

-

The album includes some of Brickman's signature tunes, such as "Angel Eyes", "Rocket to the Moon", and "The Gift". It also features collaborations with singers such as Martina McBride, Michael W. Smith, and Roch Voisine. The songs are mostly gentle and melodic, with a blend of piano, strings, and soft percussion. The album showcases Brickman's talent for creating memorable melodies and harmonies that evoke various emotions and moods.

-

Jim Brickman Greatest Hits (2004) [APE].rarl


DOWNLOAD ⚹⚹⚹ https://urlcod.com/2uIcmK



-

The album is a great choice for fans of new age, piano jazz, or relaxing music. It can also serve as a background music for romantic occasions, meditation, or sleep. The album has received positive reviews from critics and listeners alike, who praise Brickman's musicality and versatility. The album is available for download from various online sources[^1^] [^2^] [^3^] [^4^], or for streaming from Apple Music[^4^].

-

If you are looking for a soothing and enjoyable musical experience, you might want to check out Jim Brickman Greatest Hits (2004) [APE].rarl. It is a collection of some of the best works of one of the most successful contemporary instrumental composers.

-

- -

Jim Brickman was born and raised in Cleveland, Ohio, where he started playing piano at the age of five. He studied composition and performance at the Cleveland Institute of Music and Case Western Reserve University. He also founded his own advertising music company, writing jingles for various brands and products. [^1^]

-

Brickman signed to Windham Hill Records in 1994 and released his first album, No Words, which featured his first solo instrumental hit, "Rocket to the Moon". He followed with several more albums that showcased his distinctive piano style and his collaborations with vocalists from different genres. He has earned two Grammy nominations, a Dove Award, a Canadian Country Music Award, and two SESAC Songwriter of the Year Awards. He is also a member of Pandora's "2 Billion Streams" Club. [^2^]

-

Brickman is not only a successful musician, but also a best-selling author, a TV personality, and a radio host. He has written three books on topics such as creativity, relationships, and wellness. He has starred in five TV concert specials that aired on PBS. He has hosted his own syndicated radio show, The Jim Brickman Show, since 1997, which features music, interviews, and lifestyle segments. [^2^] [^3^]

-

Brickman is known for his positive and uplifting messages that inspire hope, faith, and peace. He believes that music is the soundtrack to life's most memorable moments. He continues to create new music and perform live for his fans around the world. [^2^]

-
-
\ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Libro Artes Visuales 1 Secundaria Pdf.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Libro Artes Visuales 1 Secundaria Pdf.md deleted file mode 100644 index b4a254be544fd542c90ee70be68ce1f2a7b5e1fa..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Libro Artes Visuales 1 Secundaria Pdf.md +++ /dev/null @@ -1,37 +0,0 @@ - -Here is a possible title and article with SEO optimization and HTML formatting for the keyword "Libro Artes Visuales 1 Secundaria Pdf": - -

What is the Libro Artes Visuales 1 Secundaria Pdf and why should you read it?

-

The Libro Artes Visuales 1 Secundaria Pdf is a teaching resource that takes an integrated approach to the four axes around which the Visual Arts discipline is organized: perception, expression, appreciation, and contextualization. The book is designed to help first-grade secondary-school students develop their artistic skills and aesthetic sensibility, as well as their capacity for critical reflection on art and culture.

-

The Libro Artes Visuales 1 Secundaria Pdf is made up of two sections: the first presents and encourages the analysis of significant Mexican and international works of art; the second asks students to apply the skills they have acquired through free expression and creation. The book also includes resources such as images of works by artists from Mexico and around the world, along with links to up-to-date information on different areas of visual art.

-

Libro Artes Visuales 1 Secundaria Pdf


Download File ——— https://urlcod.com/2uIcfm



-

The Libro Artes Visuales 1 Secundaria Pdf is an excellent option for students who want to learn more about visual art and its many forms, and for teachers looking for a pedagogical tool that makes it easier to plan, deliver, and assess their classes. The book is based on the official secondary-school Visual Arts syllabus and meets educational quality standards.

-

If you want to get the Libro Artes Visuales 1 Secundaria Pdf, you can do so through the following options:

-
    -
  • Ediciones Castillo: This publisher offers the book in print and digital formats, at an affordable price and with wide distribution. You can find more information on its website[^1^].
  • -
  • Academia.edu: This academic platform lets you download the book as a free PDF after registering with an email address. You can access the book at this link[^4^].
  • -
  • Sway.office.com: This site offers a review of the book and a link to download it as a PDF. However, the link may not be safe or trustworthy, so use it with caution. You can read the review on this site[^3^].
  • -
-

We hope this article has been useful for learning more about the Libro Artes Visuales 1 Secundaria Pdf and its benefits for learning visual art. We invite you to read the book and share your opinion with us.

-


What are the Visual Arts and why are they important?

-

The Visual Arts are a form of artistic expression that uses visual elements such as color, shape, line, space, light, and movement to create works that convey a message or an emotion. They encompass disciplines such as painting, drawing, sculpture, photography, film, video, graphic design, illustration, and digital art.

-

The Visual Arts are important because they let us communicate with others through a universal language that needs no words. They also help us develop our creativity, imagination, sensibility, and critical thinking. They bring us closer to other cultures and eras, show us different ways of seeing the world, and invite us to reflect on our reality and our identity.

-

What do you learn in the Libro Artes Visuales 1 Secundaria Pdf?

-

The Libro Artes Visuales 1 Secundaria Pdf is divided into four blocks corresponding to the four axes of the Visual Arts discipline: perception, expression, appreciation, and contextualization. Each block covers topics and activities designed so that the student:

-
    -
  • Perceives the visual elements that make up a work of art and the principles that organize them.
  • -
  • Expresses ideas and feelings by creating artworks with different techniques and materials.
  • -
  • Appreciates artworks from an aesthetic, analytical, and interpretive perspective.
  • -
  • Contextualizes artworks within their historical, cultural, and social setting.
  • -
-

The Libro Artes Visuales 1 Secundaria Pdf also includes complementary sections such as:

-
    -
  • Para empezar: an introduction to the block's topic that activates the student's prior knowledge.
  • -
  • Para saber más: an extension of the block's topic that goes deeper into a relevant or curious aspect.
  • -
  • Para reflexionar: a series of questions that stimulate the student's critical thinking and self-assessment.
  • -
  • Para practicar: a hands-on activity that encourages the student's experimentation and artistic expression.
  • -

-
-
\ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Motorola Cp200 Cps Software Download !!INSTALL!!.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Motorola Cp200 Cps Software Download !!INSTALL!!.md deleted file mode 100644 index 1008a4ef1bf26715834be0de53bb0a8ca31f6f58..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/Motorola Cp200 Cps Software Download !!INSTALL!!.md +++ /dev/null @@ -1,98 +0,0 @@ -
-

Motorola CP200 CPS Software Download: A Complete Guide

-

If you are looking for a simple, reliable, and cost-effective communication solution for your business, you might want to consider the Motorola CP200 radio. This portable two-way radio offers increased communication flexibility with features such as push-to-talk ID, selective call, powerful audio output, and long-lasting battery life. It is also compatible with both conventional and LTR trunking systems, giving you the freedom to choose the best option for your needs.

-

motorola cp200 cps software download


Download ⇒⇒⇒ https://urlcod.com/2uIbp0



-

However, to get the most out of your Motorola CP200 radio, you need to program it according to your preferences and requirements. This is where the Motorola CP200 CPS Software comes in handy. This software allows you to customize and configure your radio settings, such as frequencies, channels, tones, power levels, scan lists, and more. You can also use it to update and upgrade your radio firmware, as well as troubleshoot any issues that might arise.

-

In this article, we will show you how to download and install the Motorola CP200 CPS Software, how to program your Motorola CP200 radio using the software, and some tips and tricks for using the software effectively. By the end of this article, you will be able to use your Motorola CP200 radio with confidence and efficiency.

-

What is Motorola CP200 CPS Software?

-

The Motorola CP200 CPS Software is a complimentary download for programming and provisioning business radios. CPS stands for Customer Programming Software: it lets you customize your radio settings to suit your needs. The software is compatible with several models of Motorola radios, including:

-
    -
  • CLS series
  • -
  • CLP series
  • -
  • Curve
  • -
  • DLR series
  • -
  • DTR series
  • -
  • RDX series
  • -
  • RM series
  • -
-

The software is available in different versions depending on the region and the language. The latest version for North America is R09.05. You can download it from the official website of Motorola Solutions or from other trusted sources online.

-

Features and benefits of Motorola CP200 CPS Software

-

The Motorola CP200 CPS Software offers several features and benefits that make it a useful tool for managing your radios. Some of them are:

-
    -
  • It allows you to program up to 16 channels on your radio, each with its own frequency, tone, power level, scan list, etc.
  • -
  • It allows you to assign different functions to the two programmable buttons on your radio, such as monitor, scan, squelch level, etc.
  • -
  • It allows you to enable or disable various features on your radio, such as voice scramble, VOX, battery save mode, etc.
  • -
  • It allows you to set up different profiles for different users or scenarios, such as indoor, outdoor, noisy, quiet, etc.
  • -
  • It allows you to backup and restore your radio settings, as well as clone them to other radios of the same model.
  • -
  • It allows you to update and upgrade your radio firmware to the latest version, as well as troubleshoot any issues that might occur.
  • -
-

With the Motorola CP200 CPS Software, you can optimize your radio performance and enhance your communication efficiency and productivity.

-

Compatible radios and accessories for Motorola CP200 CPS Software

-

The Motorola CP200 CPS Software is compatible with several models of Motorola radios, as mentioned above. However, not all radios have the same features and capabilities. For example, some radios are analog only, while others are digital or dual mode. Some radios have more channels than others, and some have more programmable buttons than others. Therefore, you need to check the specifications of your radio model before using the software.

-

Moreover, you need to have the right accessories to connect your radio to your computer and use the software. The most common accessory is the USB programming cable, which plugs into the accessory port of your radio and the USB port of your computer. You might also need a driver for the cable to work properly. Alternatively, you can use a Bluetooth adapter to connect your radio wirelessly to your computer, if your radio supports Bluetooth connectivity.

-

-

In addition, you might need other accessories to enhance your radio usage, such as batteries, chargers, antennas, earpieces, microphones, speakers, etc. You can find a wide range of compatible accessories for your Motorola CP200 radio on the official website of Motorola Solutions or from other authorized dealers online.

-

How to download and install Motorola CP200 CPS Software?

-

Now that you know what the Motorola CP200 CPS Software is and what it can do for you, let's see how you can download and install it on your computer. The process is fairly simple and straightforward, but you need to follow some requirements and precautions before you start.

-

Requirements and precautions for downloading and installing Motorola CP200 CPS Software

-

Before you download and install the Motorola CP200 CPS Software, you need to make sure that:

-
    -
  • You have a compatible computer with Windows operating system (Windows 7 or later) and enough disk space (at least 500 MB).
  • -
  • You have a compatible radio with a fully charged battery and a compatible programming cable or Bluetooth adapter.
  • -
  • You have an internet connection to download the software and update your radio firmware (if needed).
  • -
  • You have administrator rights on your computer to install the software and run it.
  • -
  • You have read and agreed to the terms and conditions of the software license agreement.
  • -
-

Also, you need to take some precautions before you download and install the Motorola CP200 CPS Software, such as:

-
    -
  • Back up your radio settings before using the software, in case something goes wrong or you want to restore them later.
  • -
  • Close any other applications or programs that might interfere with the software or the radio communication.
  • -
  • Do not disconnect or turn off your radio or your computer while using the software or updating your radio firmware.
  • -
  • Do not use the software or update your radio firmware in an environment with high temperature, humidity, dust, or electromagnetic interference.
  • -
-

Step-by-step instructions for downloading and installing Motorola CP200 CPS Software

-

Once you have met the requirements and taken the precautions above, you can proceed with downloading and installing the Motorola CP200 CPS Software by following these steps:

-
    -
  1. Go to the official website of Motorola Solutions and navigate to the Business Radio Support page. Alternatively, you can use this direct link to access the page.
  2. -
  3. On the Business Radio Support page, scroll down to find the section "Download Customer Programming Software". Click on the link "Download CPS" under this section.
  4. -
  5. You will be redirected to a login page where you need to enter your username and password. If you do not have an account yet, you can create one by clicking on "Register Now". You will need to provide some basic information such as your name, email address, country, etc. After creating your account, you will receive a confirmation email with a link to activate it.
  6. -
  7. After logging in with your account credentials, you will be able to access the download page, where you can see different versions of the Motorola CP200 CPS Software for different regions and languages. Choose the version that matches your region and language preference. For example, if you are in North America and speak English, choose "R09.05" from the list.
  8. -
  9. Click on the "Download" button next to the version you have chosen. You will see a pop-up window asking you to accept the terms and conditions of the software license agreement. Read the agreement carefully and click on "I Agree" if you agree with it.
  10. -
  11. The download will start automatically and you will see a progress bar showing the status of the download. Depending on your internet speed and the size of the file, the download might take a few minutes to complete.
  12. -
  13. After the download is complete, you will see a file with the extension ".exe" in your download folder. This is the installer file for the Motorola CP200 CPS Software. Double-click on this file to launch the installation wizard.
  14. -
  15. The installation wizard will guide you through the steps of installing the software on your computer. You will need to choose a destination folder, a start menu folder, and a desktop shortcut for the software. You can also choose to install additional components such as help files, sample codeplugs, etc.
  16. -
  17. After you have made your choices, click on "Install" to start the installation process. You will see a progress bar showing the status of the installation. The installation might take a few minutes to complete.
  18. -
  19. After the installation is complete, you will see a message saying "Installation Complete". Click on "Finish" to exit the installation wizard.
  20. -
  21. You have successfully downloaded and installed the Motorola CP200 CPS Software on your computer. You can now launch it from your start menu or your desktop shortcut.
  22. -
-

How to program Motorola CP200 radio using Motorola CP200 CPS Software?

-

Now that you have downloaded and installed the Motorola CP200 CPS Software on your computer, you can use it to program your Motorola CP200 radio according to your preferences and requirements. The process is easy and intuitive, but you need to follow some steps and instructions carefully.

-

Basic settings and options for programming Motorola CP200 radio

-

The basic settings and options for programming your Motorola CP200 radio include:

-
    -
  • Connecting your radio to your computer using a programming cable or a Bluetooth adapter.
  • -
  • Reading your radio settings from your radio to your computer using the software.
  • -
  • Editing your radio settings on your computer using the software.
  • -
  • Writing your radio settings from your computer to your radio using the software.
  • -
-

To perform these steps, follow these instructions:

-
    -
  1. Connect your radio to your computer using a programming cable or a Bluetooth adapter. Make sure that both devices are turned on and that the cable or the adapter is plugged into the correct ports. If you are using a cable, you might need to install a driver for it to work properly. If you are using a Bluetooth adapter, you might need to pair it with your radio first.
  2. -
  3. Launch the Motorola CP200 CPS Software on your computer. You will see a main window with several tabs and buttons. Click on the "Read Radio" button on the toolbar or go to "Device > Read Device" from the menu bar. You will see a pop-up window asking you to select a communication port for your radio. Choose the port that corresponds to your cable or your adapter and click on "OK".
  4. -
  5. The software will start reading your radio settings from your radio to your computer. You will see a progress bar showing the status of the reading process. The reading process might take a few seconds to complete.
  6. -
  7. After the reading process is complete, you will see a message saying "Read Complete". Click on "OK" to close the message window. You will also see your radio settings displayed on the main window of the software. You can now edit your radio settings on your computer using the software.
  8. -
  9. To edit your radio settings, you can use the tabs and buttons on the main window of the software. Each tab corresponds to a different category of settings, such as "General", "Channels", "Buttons", etc. You can click on each tab to see and modify the settings under that category. You can also use the buttons on the toolbar or the menu bar to access other functions, such as "Save", "Print", "Help", etc.
  10. -
  11. For example, if you want to edit the channel settings of your radio, you can click on the "Channels" tab on the main window. You will see a table with 16 rows and several columns, each representing a channel and its settings. You can click on each cell to change its value, such as frequency, tone, power level, scan list, etc. You can also use the buttons below the table to add, delete, copy, paste, or import channels.
  12. -
  13. Similarly, if you want to edit the button settings of your radio, you can click on the "Buttons" tab on the main window. You will see two drop-down menus, one for each programmable button on your radio. You can click on each menu to select a function for that button, such as monitor, scan, squelch level, etc.
  14. -
  15. After you have edited your radio settings to your satisfaction, you can write them from your computer to your radio using the software. To do this, click on the "Write Radio" button on the toolbar or go to "Device > Write Device" from the menu bar. You will see a pop-up window asking you to confirm your action. Click on "OK" to proceed.
  16. -
  17. The software will start writing your radio settings from your computer to your radio. You will see a progress bar showing the status of the writing process. The writing process might take a few seconds to complete.
  18. -
  19. After the writing process is complete, you will see a message saying "Write Complete". Click on "OK" to close the message window. You have successfully programmed your Motorola CP200 radio using the Motorola CP200 CPS Software.
  20. -
-

Advanced settings and options for programming Motorola CP200 radio

-

The basic settings and options for programming your Motorola CP200 radio are sufficient for most users and scenarios. However, if you want to access more advanced settings and options for programming your radio, you can use some additional features of the Motorola CP200 CPS Software. Some of them are:

-
    -
  • Codeplug password: This feature allows you to set a password for your radio codeplug, which is the file that contains all your radio settings. This way, you can protect your codeplug from unauthorized access or modification by others. To set a codeplug password, go to "Device > Codeplug Password" from the menu bar and enter a password of your choice.
  • -
  • Radio firmware update: This feature allows you to update your radio firmware to the latest version available from Motorola Solutions. This way, you can improve your radio performance and compatibility with new features and accessories. To update your radio firmware, go to "Device > Update Firmware" from the menu bar and follow the instructions on the screen.
  • -
  • Radio cloning: This feature allows you to clone your radio settings from one radio to another of the same model. This way, you can save time and effort by avoiding repeating the same programming steps on each additional radio.

    -
    -
    \ No newline at end of file diff --git a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/TOP Julie Mcknight How Soon Is Now Acapella Zippy.md b/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/TOP Julie Mcknight How Soon Is Now Acapella Zippy.md deleted file mode 100644 index 3975e5b34654b98334d5c6d4dcceb43cd490694f..0000000000000000000000000000000000000000 --- a/spaces/netiMophi/DreamlikeArt-Diffusion-1.0/TOP Julie Mcknight How Soon Is Now Acapella Zippy.md +++ /dev/null @@ -1,25 +0,0 @@ - -

    How to Download Julie Mcknight's How Soon Is Now Acapella from Zippyshare

    -

    If you are a fan of Julie Mcknight, you might be interested in downloading her acapella version of How Soon Is Now, a classic song by The Smiths. This acapella was released in 2009 as part of the Dirty South remix of the song, and it showcases Julie's powerful vocals and soulful style.

    -

    |TOP| Julie Mcknight How Soon Is Now Acapella Zippy


    Download Ziphttps://urlcod.com/2uIco5



    -

    However, finding this acapella online can be tricky, as it is not available on most streaming platforms or online stores. Fortunately, there is a way to download it for free from Zippyshare, a popular file-sharing website. Here are the steps to do so:

    -
      -
    1. Go to Zippyshare.com and type "|TOP| Julie Mcknight How Soon Is Now Acapella" in the search box.
    2. -
    3. Click on the first result that appears. It should be a file named "|TOP| Julie Mcknight - How Soon Is Now (Acapella).mp3".
    4. -
    5. On the next page, scroll down until you see a green button that says "Download Now". Click on it and wait for a few seconds.
    6. -
    7. A new window will pop up with a captcha. Solve it and click on "Continue".
    8. -
    9. The download will start automatically. Save the file to your device and enjoy!
    10. -
    -

    Note: Zippyshare is a free service that hosts files uploaded by users. Therefore, the quality and legality of the files may vary. We do not endorse or promote piracy and we advise you to use this service at your own risk.

    - -

    Julie Mcknight is a singer and songwriter who has collaborated with some of the most renowned DJs and producers in the dance music scene. She is best known for her vocals on songs like Finally by Kings of Tomorrow, Diamond Life by Louie Vega and Jay Sinister Sealee, and Home by Knee Deep and DJ Spen.

    -

    How Soon Is Now is a song originally written and performed by The Smiths, an influential British rock band from the 1980s. The song was released in 1985 as a single and later appeared on their compilation album Hatful of Hollow. The song is considered one of their signature tunes and has been covered by many artists from different genres.

    -

    In 2009, Dirty South, an Australian DJ and producer, remixed How Soon Is Now with vocals by Julie Mcknight. The remix was a hit in the clubs and festivals around the world, and it received positive reviews from critics and fans alike. The remix also featured an acapella version of Julie's vocals, which added a new dimension to the song.

    -

    - -

    If you want to download Julie Mcknight's How Soon Is Now acapella from Zippyshare, you can follow the steps we have outlined above. However, if you want to support the artist and the original creators of the song, you can also buy or stream the remix from other platforms. You can find the remix on Spotify, Apple Music, Beatport, and other online stores. You can also check out Julie Mcknight's official website and social media accounts for more information about her music and upcoming projects.

    - -

Downloading Julie Mcknight's How Soon Is Now acapella from Zippyshare can be a great way to enjoy her amazing voice and create your own remixes or mashups. However, you should also be aware of the risks and drawbacks of using this service. Zippyshare is not a legal or authorized source of music, and the files it hosts may contain viruses, malware, or other harmful content. You should always scan the files you download with reliable antivirus software and avoid clicking on any suspicious links or ads.
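For listeners who want to experiment with mashups using material they are licensed to use, the basic mixing step is easy to prototype. Here is a minimal pydub sketch rather than a full workflow; it assumes ffmpeg is installed, and the file names and timing values are placeholders.

```python
# Minimal sketch: layer an acapella over an instrumental with pydub.
# Assumes ffmpeg is installed and both files are ones you are licensed to use;
# file names are placeholders.
from pydub import AudioSegment

acapella = AudioSegment.from_file("how_soon_is_now_acapella.mp3")
instrumental = AudioSegment.from_file("instrumental.mp3")

# Drop the instrumental by 3 dB and lift the vocal by 2 dB so the voice sits on top,
# then start the vocal 4 seconds into the beat (overlay positions are in milliseconds).
mix = (instrumental - 3).overlay(acapella + 2, position=4000)

mix.export("rough_mashup.mp3", format="mp3", bitrate="192k")
```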

    -

    Another issue with downloading Julie Mcknight's How Soon Is Now acapella from Zippyshare is that you may be violating the intellectual property rights of the artist and the original songwriters. By downloading and using the acapella without permission or payment, you are depriving them of their rightful royalties and recognition. You may also face legal consequences if you distribute or share the acapella with others. Therefore, you should always respect the work of the creators and use the acapella for personal and non-commercial purposes only.

    -
    -
    \ No newline at end of file diff --git a/spaces/nickmuchi/Earnings-Call-Analysis-Whisperer/functions.py b/spaces/nickmuchi/Earnings-Call-Analysis-Whisperer/functions.py deleted file mode 100644 index 0a5460d0f6d8a9ef238683e93e7e7d0bcffad2c7..0000000000000000000000000000000000000000 --- a/spaces/nickmuchi/Earnings-Call-Analysis-Whisperer/functions.py +++ /dev/null @@ -1,711 +0,0 @@ -import whisper -import os -import random -import openai -import yt_dlp -from pytube import YouTube, extract -import pandas as pd -import plotly_express as px -import nltk -import plotly.graph_objects as go -from optimum.onnxruntime import ORTModelForSequenceClassification -from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification, AutoModelForSeq2SeqLM, AutoModelForTokenClassification -from sentence_transformers import SentenceTransformer, CrossEncoder, util -import streamlit as st -import en_core_web_lg -import validators -import re -import itertools -import numpy as np -from bs4 import BeautifulSoup -import base64, time -from annotated_text import annotated_text -import pickle, math -import wikipedia -from pyvis.network import Network -import torch -from pydub import AudioSegment -from langchain.docstore.document import Document -from langchain.embeddings import HuggingFaceEmbeddings, HuggingFaceBgeEmbeddings, HuggingFaceInstructEmbeddings -from langchain.vectorstores import FAISS -from langchain.text_splitter import RecursiveCharacterTextSplitter -from langchain.chat_models import ChatOpenAI -from langchain.chains import QAGenerationChain - -from langchain.callbacks import StreamlitCallbackHandler -from langchain.agents import OpenAIFunctionsAgent, AgentExecutor -from langchain.agents.agent_toolkits import create_retriever_tool -from langchain.agents.openai_functions_agent.agent_token_buffer_memory import ( - AgentTokenBufferMemory, -) -from langchain.prompts import MessagesPlaceholder - -from langchain.prompts.chat import ( - ChatPromptTemplate, - SystemMessagePromptTemplate, - AIMessagePromptTemplate, - HumanMessagePromptTemplate, -) -from langchain.schema import ( - AIMessage, - HumanMessage, - SystemMessage -) - -from langchain.prompts import PromptTemplate - -from langsmith import Client - -client = Client() - -nltk.download('punkt') - - -from nltk import sent_tokenize - -OPEN_AI_KEY = os.environ.get('OPEN_AI_KEY') -time_str = time.strftime("%d%m%Y-%H%M%S") -HTML_WRAPPER = """
    {}
    """ - - -###################### Functions ####################################################################################### - -#load all required models and cache -@st.cache_resource -def load_models(): - - '''Load and cache all the models to be used''' - q_model = ORTModelForSequenceClassification.from_pretrained("nickmuchi/quantized-optimum-finbert-tone") - ner_model = AutoModelForTokenClassification.from_pretrained("xlm-roberta-large-finetuned-conll03-english") - q_tokenizer = AutoTokenizer.from_pretrained("nickmuchi/quantized-optimum-finbert-tone") - ner_tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-large-finetuned-conll03-english") - sent_pipe = pipeline("text-classification",model=q_model, tokenizer=q_tokenizer) - sum_pipe = pipeline("summarization",model="philschmid/flan-t5-base-samsum",clean_up_tokenization_spaces=True) - ner_pipe = pipeline("ner", model=ner_model, tokenizer=ner_tokenizer, grouped_entities=True) - cross_encoder = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-12-v2') #cross-encoder/ms-marco-MiniLM-L-12-v2 - sbert = SentenceTransformer('all-MiniLM-L6-v2') - - return sent_pipe, sum_pipe, ner_pipe, cross_encoder, sbert - -@st.cache_data -def load_asr_model(model_name): - - '''Load the open source whisper model in cases where the API is not working''' - model = whisper.load_model(model_name) - - return model - -@st.cache_resource -def get_spacy(): - nlp = en_core_web_lg.load() - return nlp - -nlp = get_spacy() - -sent_pipe, sum_pipe, ner_pipe, cross_encoder, sbert = load_models() - -@st.cache_data -def get_yt_audio(url): - - '''Get YT video from given URL link''' - yt = YouTube(url) - - title = yt.title - - # Get the first available audio stream and download it - audio_stream = yt.streams.filter(progressive=True, file_extension='mp4').order_by('resolution').desc().first().download() - - return audio_stream, title - -@st.cache_data -def get_yt_audio_dl(url): - - '''Back up for when pytube is down''' - - temp_audio_file = os.path.join('output', 'audio') - - ydl_opts = { - 'format': 'bestaudio/best', - 'postprocessors': [{ - 'key': 'FFmpegExtractAudio', - 'preferredcodec': 'mp3', - 'preferredquality': '192', - }], - 'outtmpl': temp_audio_file, - 'quiet': True, - } - - with yt_dlp.YoutubeDL(ydl_opts) as ydl: - - info = ydl.extract_info(url, download=False) - title = info.get('title', None) - ydl.download([url]) - - #with open(temp_audio_file+'.mp3', 'rb') as file: - audio_file = os.path.join('output', 'audio.mp3') - - return audio_file, title - - -@st.cache_data -def load_whisper_api(audio): - - '''Transcribe YT audio to text using Open AI API''' - file = open(audio, "rb") - transcript = openai.Audio.translate("whisper-1", file) - - return transcript - -@st.cache_data -def transcribe_yt_video(link, py_tube=True): - '''Transcribe YouTube video''' - - if py_tube: - - audio_file, title = get_yt_audio(link) - - print(f'audio_file:{audio_file}') - - st.session_state['audio'] = audio_file - - print(f"audio_file_session_state:{st.session_state['audio'] }") - - #Get size of audio file - audio_size = round(os.path.getsize(st.session_state['audio'])/(1024*1024),1) - - #Check if file is > 24mb, if not then use Whisper API - if audio_size <= 25: - - st.info("`Transcribing YT audio...`") - - #Use whisper API - results = load_whisper_api(st.session_state['audio'])['text'] - - else: - - st.warning('File size larger than 24mb, applying chunking and transcription',icon="⚠️") - - song = AudioSegment.from_file(st.session_state['audio'], format='mp4') - - # PyDub handles 
time in milliseconds - twenty_minutes = 20 * 60 * 1000 - - chunks = song[::twenty_minutes] - - transcriptions = [] - - video_id = extract.video_id(link) - - for i, chunk in enumerate(chunks): - chunk.export(f'output/chunk_{i}_{video_id}.mp4', format='mp4') - transcriptions.append(load_whisper_api(f'output/chunk_{i}_{video_id}.mp4')['text']) - - results = ','.join(transcriptions) - - else: - - audio_file, title = get_yt_audio_dl(link) - - print(f'audio_file:{audio_file}') - - st.session_state['audio'] = audio_file - - print(f"audio_file_session_state:{st.session_state['audio'] }") - - #Get size of audio file - audio_size = round(os.path.getsize(st.session_state['audio'])/(1024*1024),1) - - #Check if file is > 24mb, if not then use Whisper API - if audio_size <= 25: - - st.info("`Transcribing YT audio...`") - - #Use whisper API - results = load_whisper_api(st.session_state['audio'])['text'] - - else: - - st.warning('File size larger than 24mb, applying chunking and transcription',icon="⚠️") - - song = AudioSegment.from_file(st.session_state['audio'], format='mp3') - - # PyDub handles time in milliseconds - twenty_minutes = 20 * 60 * 1000 - - chunks = song[::twenty_minutes] - - transcriptions = [] - - video_id = extract.video_id(link) - - for i, chunk in enumerate(chunks): - chunk.export(f'output/chunk_{i}_{video_id}.mp3', format='mp3') - transcriptions.append(load_whisper_api(f'output/chunk_{i}_{video_id}.mp3')['text']) - - results = ','.join(transcriptions) - - - st.info("`YT Video transcription process complete...`") - - return results, title - -@st.cache_data -def inference(link, upload): - '''Convert Youtube video or Audio upload to text''' - - try: - - if validators.url(link): - - st.info("`Downloading YT audio...`") - - results, title = transcribe_yt_video(link) - - return results, title - - elif _upload: - - #Get size of audio file - audio_size = round(os.path.getsize(_upload)/(1024*1024),1) - - #Check if file is > 24mb, if not then use Whisper API - if audio_size <= 25: - - st.info("`Transcribing uploaded audio...`") - - #Use whisper API - results = load_whisper_api(_upload)['text'] - - else: - - st.write('File size larger than 24mb, applying chunking and transcription') - - song = AudioSegment.from_file(_upload) - - # PyDub handles time in milliseconds - twenty_minutes = 20 * 60 * 1000 - - chunks = song[::twenty_minutes] - - transcriptions = [] - - st.info("`Transcribing uploaded audio...`") - - for i, chunk in enumerate(chunks): - chunk.export(f'output/chunk_{i}.mp4', format='mp4') - transcriptions.append(load_whisper_api(f'output/chunk_{i}.mp4')['text']) - - results = ','.join(transcriptions) - - st.info("`Uploaded audio transcription process complete...`") - - return results, "Transcribed Earnings Audio" - - except Exception as e: - - st.error(f'''PyTube Error: {e}, - Using yt_dlp module, might take longer than expected''',icon="🚨") - - results, title = transcribe_yt_video(link, py_tube=False) - - # results = _asr_model.transcribe(st.session_state['audio'], task='transcribe', language='en') - - return results, title - -@st.cache_resource -def send_feedback(run_id, score): - client.create_feedback(run_id, "user_score", score=score) - -@st.cache_data -def clean_text(text): - '''Clean all text after inference''' - - text = text.encode("ascii", "ignore").decode() # unicode - text = re.sub(r"https*\S+", " ", text) # url - text = re.sub(r"@\S+", " ", text) # mentions - text = re.sub(r"#\S+", " ", text) # hastags - text = re.sub(r"\s{2,}", " ", text) # over spaces - - return text - 
-@st.cache_data -def chunk_long_text(text,threshold,window_size=3,stride=2): - '''Preprocess text and chunk for sentiment analysis''' - - #Convert cleaned text into sentences - sentences = sent_tokenize(text) - out = [] - - #Limit the length of each sentence to a threshold - for chunk in sentences: - if len(chunk.split()) < threshold: - out.append(chunk) - else: - words = chunk.split() - num = int(len(words)/threshold) - for i in range(0,num*threshold+1,threshold): - out.append(' '.join(words[i:threshold+i])) - - passages = [] - - #Combine sentences into a window of size window_size - for paragraph in [out]: - for start_idx in range(0, len(paragraph), stride): - end_idx = min(start_idx+window_size, len(paragraph)) - passages.append(" ".join(paragraph[start_idx:end_idx])) - - return passages - -@st.cache_data -def sentiment_pipe(earnings_text): - '''Determine the sentiment of the text''' - - earnings_sentences = chunk_long_text(earnings_text,150,1,1) - earnings_sentiment = sent_pipe(earnings_sentences) - - return earnings_sentiment, earnings_sentences - -@st.cache_data -def chunk_and_preprocess_text(text, model_name= 'philschmid/flan-t5-base-samsum'): - - '''Chunk and preprocess text for summarization''' - - tokenizer = AutoTokenizer.from_pretrained(model_name) - sentences = sent_tokenize(text) - - # initialize - length = 0 - chunk = "" - chunks = [] - count = -1 - - for sentence in sentences: - count += 1 - combined_length = len(tokenizer.tokenize(sentence)) + length # add the no. of sentence tokens to the length counter - - if combined_length <= tokenizer.max_len_single_sentence: # if it doesn't exceed - chunk += sentence + " " # add the sentence to the chunk - length = combined_length # update the length counter - - # if it is the last sentence - if count == len(sentences) - 1: - chunks.append(chunk) # save the chunk - - else: - chunks.append(chunk) # save the chunk - # reset - length = 0 - chunk = "" - - # take care of the overflow sentence - chunk += sentence + " " - length = len(tokenizer.tokenize(sentence)) - - return chunks - -@st.cache_data -def summarize_text(text_to_summarize,max_len,min_len): - '''Summarize text with HF model''' - - summarized_text = sum_pipe(text_to_summarize, - max_length=max_len, - min_length=min_len, - do_sample=False, - early_stopping=True, - num_beams=4) - summarized_text = ' '.join([summ['summary_text'] for summ in summarized_text]) - - return summarized_text - -@st.cache_data -def get_all_entities_per_sentence(text): - doc = nlp(''.join(text)) - - sentences = list(doc.sents) - - entities_all_sentences = [] - for sentence in sentences: - entities_this_sentence = [] - - # SPACY ENTITIES - for entity in sentence.ents: - entities_this_sentence.append(str(entity)) - - # XLM ENTITIES - entities_xlm = [entity["word"] for entity in ner_pipe(str(sentence))] - for entity in entities_xlm: - entities_this_sentence.append(str(entity)) - - entities_all_sentences.append(entities_this_sentence) - - return entities_all_sentences - -@st.cache_data -def get_all_entities(text): - all_entities_per_sentence = get_all_entities_per_sentence(text) - return list(itertools.chain.from_iterable(all_entities_per_sentence)) - -@st.cache_data -def get_and_compare_entities(article_content,summary_output): - - all_entities_per_sentence = get_all_entities_per_sentence(article_content) - entities_article = list(itertools.chain.from_iterable(all_entities_per_sentence)) - - all_entities_per_sentence = get_all_entities_per_sentence(summary_output) - entities_summary = 
list(itertools.chain.from_iterable(all_entities_per_sentence)) - - matched_entities = [] - unmatched_entities = [] - for entity in entities_summary: - if any(entity.lower() in substring_entity.lower() for substring_entity in entities_article): - matched_entities.append(entity) - elif any( - np.inner(sbert.encode(entity, show_progress_bar=False), - sbert.encode(art_entity, show_progress_bar=False)) > 0.9 for - art_entity in entities_article): - matched_entities.append(entity) - else: - unmatched_entities.append(entity) - - matched_entities = list(dict.fromkeys(matched_entities)) - unmatched_entities = list(dict.fromkeys(unmatched_entities)) - - matched_entities_to_remove = [] - unmatched_entities_to_remove = [] - - for entity in matched_entities: - for substring_entity in matched_entities: - if entity != substring_entity and entity.lower() in substring_entity.lower(): - matched_entities_to_remove.append(entity) - - for entity in unmatched_entities: - for substring_entity in unmatched_entities: - if entity != substring_entity and entity.lower() in substring_entity.lower(): - unmatched_entities_to_remove.append(entity) - - matched_entities_to_remove = list(dict.fromkeys(matched_entities_to_remove)) - unmatched_entities_to_remove = list(dict.fromkeys(unmatched_entities_to_remove)) - - for entity in matched_entities_to_remove: - matched_entities.remove(entity) - for entity in unmatched_entities_to_remove: - unmatched_entities.remove(entity) - - return matched_entities, unmatched_entities - -@st.cache_data -def highlight_entities(article_content,summary_output): - - markdown_start_red = "" - markdown_start_green = "" - markdown_end = "" - - matched_entities, unmatched_entities = get_and_compare_entities(article_content,summary_output) - - for entity in matched_entities: - summary_output = re.sub(f'({entity})(?![^rgb\(]*\))',markdown_start_green + entity + markdown_end,summary_output) - - for entity in unmatched_entities: - summary_output = re.sub(f'({entity})(?![^rgb\(]*\))',markdown_start_red + entity + markdown_end,summary_output) - - print("") - print("") - - soup = BeautifulSoup(summary_output, features="html.parser") - - return HTML_WRAPPER.format(soup) - -def summary_downloader(raw_text): - '''Download the summary generated''' - - b64 = base64.b64encode(raw_text.encode()).decode() - new_filename = "new_text_file_{}_.txt".format(time_str) - st.markdown("#### Download Summary as a File ###") - href = f'Click to Download!!' 
- st.markdown(href,unsafe_allow_html=True) - -@st.cache_data -def generate_eval(raw_text, N, chunk): - - # Generate N questions from context of chunk chars - # IN: text, N questions, chunk size to draw question from in the doc - # OUT: eval set as JSON list - - # raw_text = ','.join(raw_text) - - update = st.empty() - ques_update = st.empty() - update.info("`Generating sample questions ...`") - n = len(raw_text) - starting_indices = [random.randint(0, n-chunk) for _ in range(N)] - sub_sequences = [raw_text[i:i+chunk] for i in starting_indices] - chain = QAGenerationChain.from_llm(ChatOpenAI(temperature=0)) - eval_set = [] - - for i, b in enumerate(sub_sequences): - try: - qa = chain.run(b) - eval_set.append(qa) - ques_update.info(f"Creating Question: {i+1}") - - except Exception as e: - print(e) - st.warning(f'Error in generating Question: {i+1}...', icon="⚠️") - continue - - eval_set_full = list(itertools.chain.from_iterable(eval_set)) - - update.empty() - ques_update.empty() - - return eval_set_full - -@st.cache_resource -def create_prompt_and_llm(): - '''Create prompt''' - - llm = ChatOpenAI(temperature=0, streaming=True, model="gpt-4") - - message = SystemMessage( - content=( - "You are a helpful chatbot who is tasked with answering questions acuurately about earnings call transcript provided. " - "Unless otherwise explicitly stated, it is probably fair to assume that questions are about the earnings call transcript. " - "If there is any ambiguity, you probably assume they are about that." - "Do not use any information not provided in the earnings context and remember you are a to speak like a finance expert." - "If you don't know the answer, just say 'There is no relevant answer in the given earnings call transcript'" - "don't try to make up an answer" - ) - ) - - prompt = OpenAIFunctionsAgent.create_prompt( - system_message=message, - extra_prompt_messages=[MessagesPlaceholder(variable_name="history")], - ) - - return prompt, llm - -@st.cache_resource -def gen_embeddings(embedding_model): - - '''Generate embeddings for given model''' - - if 'hkunlp' in embedding_model: - - embeddings = HuggingFaceInstructEmbeddings(model_name=embedding_model, - query_instruction='Represent the Financial question for retrieving supporting paragraphs: ', - embed_instruction='Represent the Financial paragraph for retrieval: ') - - elif 'mpnet' in embedding_model: - - embeddings = HuggingFaceEmbeddings(model_name=embedding_model) - - elif 'FlagEmbedding' in embedding_model: - - encode_kwargs = {'normalize_embeddings': True} - embeddings = HuggingFaceBgeEmbeddings(model_name=embedding_model, - encode_kwargs = encode_kwargs - ) - - return embeddings - -@st.cache_data -def create_vectorstore(corpus, title, embedding_model, chunk_size=1000, overlap=50): - - '''Process text for Semantic Search''' - - text_splitter = RecursiveCharacterTextSplitter(chunk_size=chunk_size,chunk_overlap=overlap) - - texts = text_splitter.split_text(corpus) - - embeddings = gen_embeddings(embedding_model) - - vectorstore = FAISS.from_texts(texts, embeddings, metadatas=[{"source": i} for i in range(len(texts))]) - - return vectorstore - -@st.cache_data -def create_memory_and_agent(_docsearch): - - '''Embed text and generate semantic search scores''' - - #create vectorstore - vectorstore = _docsearch.as_retriever(search_kwargs={"k": 4}) - - #create retriever tool - tool = create_retriever_tool( - vectorstore, - "earnings_call_search", - "Searches and returns documents using the earnings context provided as a source, relevant to the 
user input question.", - ) - - tools = [tool] - - prompt,llm = create_prompt_and_llm() - - agent = OpenAIFunctionsAgent(llm=llm, tools=tools, prompt=prompt) - - agent_executor = AgentExecutor( - agent=agent, - tools=tools, - verbose=True, - return_intermediate_steps=True, - ) - - memory = AgentTokenBufferMemory(llm=llm) - - return memory, agent_executor - -@st.cache_data -def gen_sentiment(text): - '''Generate sentiment of given text''' - return sent_pipe(text)[0]['label'] - -@st.cache_data -def gen_annotated_text(df): - '''Generate annotated text''' - - tag_list=[] - for row in df.itertuples(): - label = row[2] - text = row[1] - if label == 'Positive': - tag_list.append((text,label,'#8fce00')) - elif label == 'Negative': - tag_list.append((text,label,'#f44336')) - else: - tag_list.append((text,label,'#000000')) - - return tag_list - - -def display_df_as_table(model,top_k,score='score'): - '''Display the df with text and scores as a table''' - - df = pd.DataFrame([(hit[score],passages[hit['corpus_id']]) for hit in model[0:top_k]],columns=['Score','Text']) - df['Score'] = round(df['Score'],2) - - return df - - -def make_spans(text,results): - results_list = [] - for i in range(len(results)): - results_list.append(results[i]['label']) - facts_spans = [] - facts_spans = list(zip(sent_tokenizer(text),results_list)) - return facts_spans - -##Fiscal Sentiment by Sentence -def fin_ext(text): - results = remote_clx(sent_tokenizer(text)) - return make_spans(text,results) - -## Knowledge Graphs code - -def get_article(url): - article = Article(url) - article.download() - article.parse() - return article - diff --git a/spaces/niizam/sovits-models/inference_main.py b/spaces/niizam/sovits-models/inference_main.py deleted file mode 100644 index 3b2c32ac9e29e6b016e656e937fede5d2c23e7e6..0000000000000000000000000000000000000000 --- a/spaces/niizam/sovits-models/inference_main.py +++ /dev/null @@ -1,130 +0,0 @@ -import io -import logging -import time -from pathlib import Path - -import librosa -import matplotlib.pyplot as plt -import numpy as np -import soundfile - -from inference import infer_tool -from inference import slicer -from inference.infer_tool import Svc - -logging.getLogger('numba').setLevel(logging.WARNING) -chunks_dict = infer_tool.read_temp("inference/chunks_temp.json") - - - -def main(): - import argparse - - parser = argparse.ArgumentParser(description='sovits4 inference') - - # 一定要设置的部分 - parser.add_argument('-m', '--model_path', type=str, default="logs/44k/G_0.pth", help='模型路径') - parser.add_argument('-c', '--config_path', type=str, default="configs/config.json", help='配置文件路径') - parser.add_argument('-cl', '--clip', type=float, default=0, help='音频强制切片,默认0为自动切片,单位为秒/s') - parser.add_argument('-n', '--clean_names', type=str, nargs='+', default=["君の知らない物語-src.wav"], help='wav文件名列表,放在raw文件夹下') - parser.add_argument('-t', '--trans', type=int, nargs='+', default=[0], help='音高调整,支持正负(半音)') - parser.add_argument('-s', '--spk_list', type=str, nargs='+', default=['nen'], help='合成目标说话人名称') - - # 可选项部分 - parser.add_argument('-a', '--auto_predict_f0', action='store_true', default=False,help='语音转换自动预测音高,转换歌声时不要打开这个会严重跑调') - parser.add_argument('-cm', '--cluster_model_path', type=str, default="logs/44k/kmeans_10000.pt", help='聚类模型路径,如果没有训练聚类则随便填') - parser.add_argument('-cr', '--cluster_infer_ratio', type=float, default=0, help='聚类方案占比,范围0-1,若没有训练聚类模型则默认0即可') - parser.add_argument('-lg', '--linear_gradient', type=float, default=0, help='两段音频切片的交叉淡入长度,如果强制切片后出现人声不连贯可调整该数值,如果连贯建议采用默认值0,单位为秒') - 
parser.add_argument('-fmp', '--f0_mean_pooling', type=bool, default=False, help='是否对F0使用均值滤波器(池化),对部分哑音有改善。注意,启动该选项会导致推理速度下降,默认关闭') - - # 不用动的部分 - parser.add_argument('-sd', '--slice_db', type=int, default=-40, help='默认-40,嘈杂的音频可以-30,干声保留呼吸可以-50') - parser.add_argument('-d', '--device', type=str, default=None, help='推理设备,None则为自动选择cpu和gpu') - parser.add_argument('-ns', '--noice_scale', type=float, default=0.4, help='噪音级别,会影响咬字和音质,较为玄学') - parser.add_argument('-p', '--pad_seconds', type=float, default=0.5, help='推理音频pad秒数,由于未知原因开头结尾会有异响,pad一小段静音段后就不会出现') - parser.add_argument('-wf', '--wav_format', type=str, default='flac', help='音频输出格式') - parser.add_argument('-lgr', '--linear_gradient_retain', type=float, default=0.75, help='自动音频切片后,需要舍弃每段切片的头尾。该参数设置交叉长度保留的比例,范围0-1,左开右闭') - - args = parser.parse_args() - - svc_model = Svc(args.model_path, args.config_path, args.device, args.cluster_model_path) - infer_tool.mkdir(["raw", "results"]) - clean_names = args.clean_names - trans = args.trans - spk_list = args.spk_list - slice_db = args.slice_db - wav_format = args.wav_format - auto_predict_f0 = args.auto_predict_f0 - cluster_infer_ratio = args.cluster_infer_ratio - noice_scale = args.noice_scale - pad_seconds = args.pad_seconds - clip = args.clip - lg = args.linear_gradient - lgr = args.linear_gradient_retain - F0_mean_pooling = args.f0_mean_pooling - - infer_tool.fill_a_to_b(trans, clean_names) - for clean_name, tran in zip(clean_names, trans): - raw_audio_path = f"raw/{clean_name}" - if "." not in raw_audio_path: - raw_audio_path += ".wav" - infer_tool.format_wav(raw_audio_path) - wav_path = Path(raw_audio_path).with_suffix('.wav') - chunks = slicer.cut(wav_path, db_thresh=slice_db) - audio_data, audio_sr = slicer.chunks2audio(wav_path, chunks) - per_size = int(clip*audio_sr) - lg_size = int(lg*audio_sr) - lg_size_r = int(lg_size*lgr) - lg_size_c_l = (lg_size-lg_size_r)//2 - lg_size_c_r = lg_size-lg_size_r-lg_size_c_l - lg = np.linspace(0,1,lg_size_r) if lg_size!=0 else 0 - - for spk in spk_list: - audio = [] - for (slice_tag, data) in audio_data: - print(f'#=====segment start, {round(len(data) / audio_sr, 3)}s======') - - length = int(np.ceil(len(data) / audio_sr * svc_model.target_sample)) - if slice_tag: - print('jump empty segment') - _audio = np.zeros(length) - audio.extend(list(infer_tool.pad_array(_audio, length))) - continue - if per_size != 0: - datas = infer_tool.split_list_by_n(data, per_size,lg_size) - else: - datas = [data] - for k,dat in enumerate(datas): - per_length = int(np.ceil(len(dat) / audio_sr * svc_model.target_sample)) if clip!=0 else length - if clip!=0: print(f'###=====segment clip start, {round(len(dat) / audio_sr, 3)}s======') - # padd - pad_len = int(audio_sr * pad_seconds) - dat = np.concatenate([np.zeros([pad_len]), dat, np.zeros([pad_len])]) - raw_path = io.BytesIO() - soundfile.write(raw_path, dat, audio_sr, format="wav") - raw_path.seek(0) - out_audio, out_sr = svc_model.infer(spk, tran, raw_path, - cluster_infer_ratio=cluster_infer_ratio, - auto_predict_f0=auto_predict_f0, - noice_scale=noice_scale, - F0_mean_pooling = F0_mean_pooling - ) - _audio = out_audio.cpu().numpy() - pad_len = int(svc_model.target_sample * pad_seconds) - _audio = _audio[pad_len:-pad_len] - _audio = infer_tool.pad_array(_audio, per_length) - if lg_size!=0 and k!=0: - lg1 = audio[-(lg_size_r+lg_size_c_r):-lg_size_c_r] if lgr != 1 else audio[-lg_size:] - lg2 = _audio[lg_size_c_l:lg_size_c_l+lg_size_r] if lgr != 1 else _audio[0:lg_size] - lg_pre = lg1*(1-lg)+lg2*lg - audio = 
audio[0:-(lg_size_r+lg_size_c_r)] if lgr != 1 else audio[0:-lg_size] - audio.extend(lg_pre) - _audio = _audio[lg_size_c_l+lg_size_r:] if lgr != 1 else _audio[lg_size:] - audio.extend(list(_audio)) - key = "auto" if auto_predict_f0 else f"{tran}key" - cluster_name = "" if cluster_infer_ratio == 0 else f"_{cluster_infer_ratio}" - res_path = f'./results/{clean_name}_{key}_{spk}{cluster_name}.{wav_format}' - soundfile.write(res_path, audio, svc_model.target_sample, format=wav_format) - -if __name__ == '__main__': - main() diff --git a/spaces/nomic-ai/GAIR_lima/index.html b/spaces/nomic-ai/GAIR_lima/index.html deleted file mode 100644 index b1a8219962c4659c9db09681fc0e695d33459b71..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/GAIR_lima/index.html +++ /dev/null @@ -1,42 +0,0 @@ - - - - GAIR/lima - - - - -
    - -
    - - - \ No newline at end of file diff --git a/spaces/nomic-ai/bigcode_ta-prompt/README.md b/spaces/nomic-ai/bigcode_ta-prompt/README.md deleted file mode 100644 index 82077093a46046992e4f8ad828b28539e4cce995..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/bigcode_ta-prompt/README.md +++ /dev/null @@ -1,10 +0,0 @@ ---- -title: bigcode/ta-prompt -emoji: 🗺️ -colorFrom: purple -colorTo: red -sdk: static -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/nomic-ai/neulab_conala/style.css b/spaces/nomic-ai/neulab_conala/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/neulab_conala/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/nomic-ai/tweet_eval/index.html b/spaces/nomic-ai/tweet_eval/index.html deleted file mode 100644 index b3d80b7ec132a8524013d5358e8d1a1ca16ff74f..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/tweet_eval/index.html +++ /dev/null @@ -1,42 +0,0 @@ - - - - tweet_eval - - - - -
    - -
    - - - \ No newline at end of file diff --git a/spaces/nomic-ai/wikiann/index.html b/spaces/nomic-ai/wikiann/index.html deleted file mode 100644 index 1657f5e9bc3f5fedd3066f318374e262698a05bb..0000000000000000000000000000000000000000 --- a/spaces/nomic-ai/wikiann/index.html +++ /dev/null @@ -1,42 +0,0 @@ - - - - wikiann - - - - -
    - -
    - - - \ No newline at end of file diff --git a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/os/coop_threads.h b/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/os/coop_threads.h deleted file mode 100644 index 9aefa614ea945d20f1699866e7931994b27d5842..0000000000000000000000000000000000000000 --- a/spaces/ntt123/vietnam-male-voice-wavegru-tts/sparse_matmul/os/coop_threads.h +++ /dev/null @@ -1,179 +0,0 @@ -/* - * Copyright 2021 Google LLC - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#ifndef LYRA_CODEC_SPARSE_MATMUL_OS_COOP_THREADS_H_ -#define LYRA_CODEC_SPARSE_MATMUL_OS_COOP_THREADS_H_ - -#include -#include // NOLINT -#include - -#define _COOP_THREADS_USE_STD_THREAD 1 - -#include "absl/memory/memory.h" -#include "glog/logging.h" - -namespace csrblocksparse { - -// A re-usable barrier. Keeps threads in extremely tight sync without -// relinquishing control. All memory writes _before_ this barrier are visible -// to all threads _after_ this barrier. Similar in spirit to -// pthreads_barrier. If you expect arrival times at this barrier to be varied -// by more than microseconds, this is probably not the right synchronization -// primitive for you. If |num_threads| exceeds the number of physical threads -// that can run simultaneously, then using this is certainly a bad idea -// (although it should still be correct). -// -// Callers MUST NOT call barrier from more threads than |num_threads|. The -// result is undefined behavior. -class SpinBarrier { - public: - explicit SpinBarrier(int num_threads) - : num_threads_(num_threads), threads_at_barrier_(0), barrier_step_(0) {} - - void barrier(); - - private: - const int num_threads_; - std::atomic threads_at_barrier_; - std::atomic barrier_step_; // unsigned to make overflow defined. -}; - -// Producer-consumer API using the same underlying mechanism as SpinBarrier. -// This class is intended to allow >=1 producers to produce data for >=1 -// consumers, without blocking the producers. -// The consumer will block if it is ready before all the producer(s) have -// produced. -// WARNING: By design this lock does not work without some other barrier that -// prevents any producer from producing again, or consumer from consuming again -// until all consumers have consumed. Basically any loop that uses -// ProducerConsumer must have at least two consume() calls in each thread (on -// different instances) in order for the lock to work correctly. -class ProducerConsumer { - public: - ProducerConsumer(int num_producers, int num_consumers) - : num_producers_(num_producers), - num_consumers_(num_consumers), - producers_ready_(0), - consumers_passed_(0) {} - - // Indicates that the data produced by this thread is ready. Does NOT block. - // NOTE that some other lock must exist between the call to this produce and - // looping back to call produce again on the same ProducerConsumer, that - // depends on all consumers having called consume. 
One such candidate would - // be a call to SpinBarrier above by all producers and consumers. - // Another candidate would be a separate ProducerConsumer object in which - // these producers consume some data produced by the threads that consume - // the data produced here. Eg. - // tid 0 1 2 3 - // action 1 produce produce consume consume (on ProducerConsumer 1) - // action 2 consume consume produce produce (on ProducerConsumer 2) - // action 3 produce produce consume consume (on ProducerConsumer 3) - // action 4 consume consume produce produce (on ProducerConsumer 4) - // loop back to action 1. - // NOTE: It is inadequate to loop back after action2, as thread 0 could loop - // back and consume again on PC2 while thread 1 is still completing its call - // to consume. It is still inadequate to loop back after action 3 for the same - // reason (but tsan doesn't seem to pick this up.) - inline void produce() { - producers_ready_.fetch_add(1, std::memory_order_acq_rel); - } - - // Waits if necessary for all producers to have produced before proceeding. - // The ProducerConsumer cannot be reused until all consumers have consumed. - // See detailed comment and example on produce(). - inline void consume() { - // We can't do anything until all the producers have produced. - while (producers_ready_.load(std::memory_order_acquire) < num_producers_) { -#if defined __aarch64__ || defined __arm__ - asm volatile("yield\n" ::: "memory"); -#else - // No pause for x86! The pause instruction on Skylake takes 141 clock - // cycles, which in an AVX2-down-clocked CPU is getting on for 70ns. -#endif - } - // NOTE: It is tempting to move this fetch_add to before the wait loop to - // reduce contention for the memory location, but that would break the lock, - // as then the last to arrive could zero out the producers_ready before the - // other consumers have noticed that all producers have produced. - // With the fetch_add after the wait loop, we are guaranteed that all - // producers have produced AND all consumers have noticed that they have - // produced before we zero out the counters. - int consumers = consumers_passed_.fetch_add(1, std::memory_order_acq_rel); - if (consumers == num_consumers_ - 1) { - // The last consumer to pass has to reset everything for the next time. - producers_ready_.store(0, std::memory_order_relaxed); - consumers_passed_.store(0, std::memory_order_relaxed); - } - } - int num_producers() const { return num_producers_; } - int num_consumers() const { return num_consumers_; } - - private: - const int num_producers_; - const int num_consumers_; - std::atomic producers_ready_; - std::atomic consumers_passed_; -}; - -// We define Thread here, so we can easily change its type later. - -using Thread = std::thread; -using ThreadId = std::thread::id; - -// Creates (|num_threads|-1) threads and executes a total of |num_threads| -// copies of |func| (executes one on the calling thread). -// -// Useful for long running func bodies that are intended to run in lock step. -// A possible use case for this style parallelism over a thread pool is when -// we want tight control over which memory is resident in the L2 cache of a -// processor. With a pool we have no control over which thread gets assigned -// which portion of the computation resulting in L2 thrashing. With this -// breakdown we can make sure each thread only acceses a specific L2-sized -// portion of memory. 
-// -// func's signature must be (SpinBarrier*, int thread_id, ...); -template -void LaunchOnThreadsWithBarrier(int num_threads, Function&& func, - Args&&... args) { - SpinBarrier spin_barrier(num_threads); - - std::vector> threads; - threads.reserve(num_threads); - for (int tid = 1; tid < num_threads; ++tid) { - auto f = [&, tid]() { func(&spin_barrier, tid, args...); }; - - threads.emplace_back(absl::make_unique(f)); -#ifndef _COOP_THREADS_USE_STD_THREAD - CHECK_OK(threads.back()->Start()); -#endif - } - - const int kLocalTid = 0; - func(&spin_barrier, kLocalTid, args...); - - for (auto& thread : threads) { -#ifdef _COOP_THREADS_USE_STD_THREAD - thread->join(); -#else - CHECK_OK(thread->Join()); -#endif - } -} - -} // namespace csrblocksparse - -#endif // LYRA_CODEC_SPARSE_MATMUL_OS_COOP_THREADS_H_ diff --git a/spaces/openkg/llm_leaderboard/constants.py b/spaces/openkg/llm_leaderboard/constants.py deleted file mode 100644 index 271c8dcf2374b96e936942a386a3b2e9b804e550..0000000000000000000000000000000000000000 --- a/spaces/openkg/llm_leaderboard/constants.py +++ /dev/null @@ -1,161 +0,0 @@ -# this is .py for store constants -MODEL_INFO = ["Model Name", "Language Model"] -AVG_INFO = ["Avg. All"] -ME_INFO=["Method Name", "Language Model"] - -# KE 固定信息 -KE_Data_INFO = ["FewNERD", "FewRel", "InstructIE-en", "MAVEN","WikiEvents"] - -KE_TASK_INFO = ["Avg. All", "FewNERD", "FewRel", "InstructIE-en", "MAVEN","WikiEvents"] -KE_DATA_TITILE_TYPE = ["markdown", "markdown", "number", "number", "number", "number", "number", "number"] -KE_CSV_DIR = "./ke_files/result-kgc.csv" -KE_COLUMN_NAMES = MODEL_INFO + KE_TASK_INFO -KE_TABLE_INTRODUCTION = """In the table below, we summarize each task performance of all the models. We use F1 score(%) as the primary evaluation metric for each tasks. - """ - -# KBQA 固定信息 -KBQA_TASK_INFO = ["Avg. All", "KQApro", "LC-quad2", "WQSP", "CWQ","GrailQA","GraphQ","QALD-9","MKQA"] -KBQA_DATA_TITILE_TYPE = ["markdown", "markdown", "number", "number", "number", "number", "number", "number", "number", "number", "number"] -KBQA_CSV_DIR = "./kbqa_files/result-kbqa.csv" -KBQA_COLUMN_NAMES = MODEL_INFO + KBQA_TASK_INFO -KBQA_TABLE_INTRODUCTION = """In the table below, we summarize each task performance of all the models. We use accuracy(%) as the primary evaluation metric for each tasks. - """ - -# ME 固定信息 -ME_Data_INFO = ["accuracy","portability","locality"] - -ME_TASK_INFO = ["accuracy","portability","locality"] -ME_DATA_TITILE_TYPE = ["markdown", "markdown", "number","number","number"] -ME_CSV_DIR = "./me_files/result-me.csv" -ME_COLUMN_NAMES = ME_INFO + ME_TASK_INFO -ME_TABLE_INTRODUCTION = """In the table below, we summarize each task performance of all the models. We use F1 score(%) as the primary evaluation metric for each tasks. - """ - - -TITLE = """#

    OpenKG LLM Leaderboard

    """ - -LEADERBORAD_INTRODUCTION = """ - ### Welcome to the leaderboard of the OpenKG-Bench! 🏆 - 🐨 OpenKG LLM Leaderboard aims to track, rank, and evaluate the performance of released Large Language Models on traditional KBQA/KGQA, KGC(Knowledge Graph Construction/Reasoning), Model Edit datasets. The data on this page is sourced from a research paper. If you intend to use the data from this page, please remember to cite the source in the last part of the page. We compare the current SOTA traditional KBQA models (fine-tuned (FT) and zero-shot (ZS)). - - """ - -SUBMIT_INTRODUCTION = """# Submit Introduction - 1. You can refer to our [KGC github repository](https://github.com/zjunlp/DeepKE/tree/main/example/llm/InstructKGC) for evaluation to obtain JSON file as preds.json. - 2. If you want to update model performance by uploading new results, please ensure 'Model Name Revision' is the same as what's shown in the leaderboard. For example, if you want to modify KnowLLM's performance, you need to fill in 'KnowLLM' in 'Revision Model Name'. - 3. After clicking 'Submit Eval', you can click 'Refresh' to obtain the latest result in the leaderboard. - - ## Submit Example - For example, if you want to upload InstructBLIP's result in the leaderboard, you need to: - 1. Fill in 'KnowLLM' in 'Model Name' if it is your first time to submit your result (You can leave 'Revision Model Name' blank). - 2. Fill in 'KnowLLM' in 'Revision Model Name' if you want to update your result (You can leave 'Model Name' blank). - 3. Select 'LLaMA-7B' in 'LLM Type'. - 4. Upload preds.json. - 5. Click the 'Submit Eval' button. - 6. Click 'Refresh' to obtain the uploaded leaderboard. -""" - - - - - -LEADERBORAD_INFO = """ -The required format for submitting JSON files for this leaderboard should be as follows: {"kgc_example_list": [], "kbqa": [], "me": []}. Below is an introduction to the submission content for each specific subtask: -Please ensure that your JSON file adheres to this format and includes the appropriate submission content for each subtask. 
- -## Knowledge Extraction Leaderboard Submission Instructions - -- Submission format: Organize the experimental results according to the following format and submit: -``` - { - "data_id": dataset name with specific id, - "input": input of the test example - "pred": present triples as list - }, - # for example: - entity extraction: [["Huntelaar", "athlete"], ["Guus Hiddink", "athlete"]]; - relation extraction: [["Saddam", "religion", "Sunni"]]; - event argument extraction: ["Employee#Silva", "Employee#Tsarnaev", "Place#pool"] -``` -- The list of all dataset names is as follows: - -``` -["FewNERD", "FewRel", "InstructIE-en", "MAVEN", "WikiEvents"]``` - - -## KBQA Leaderboard Submission Instructions - -- Submission format: Organize the experimental results according to the following format and submit: -``` - { - "task": task name, - "dataset": Name of the evaluation dataset - "id": Test sample id, - "answer": Generated answer - }, -``` -- The list of all dataset names is as follows: - -``` -["CWQ", "GrailQA", "GraphQuestions", "KQAPro", "LC-quad2", "MKQA", "QALD-9", "WQSP"] -``` - -## Knowledge Editing Leaderboard Submission Instructions - -- Submission format: Organize the experimental results according to the following format and submit: -``` -"subject": "the subject of the text", - "target_new": "", - "prompt": "", - "ground_truth": [ - "" - ], - "rephrase_prompt": "", - "cond": "", - "locality": { - "Neighbor": [ - { - "prompt": "", - "ground_truth": "" - } - ] - }, - "portability": { - "Reasoning": [ - { - "prompt": "", - "ground_truth": "" - } - ] - } - }, -``` -- The list of all evaluation metrics is as follows: - -``` -["accuracy","portability","locality"] -``` -""" -""" -""" - -CITATION_BUTTON_LABEL = "Copy the following snippet to cite these results" -CITATION_BUTTON_TEXT = r"""@article{tan2023evaluation, - title={Evaluation of ChatGPT as a question answering system for answering complex questions}, - author={Yiming Tan and Dehai Min and Yu Li and Wenbo Li and Nan Hu and Yongrui Chen and Guilin Qi}, - journal={arXiv preprint arXiv:2303.07992}, - year={2023} -} -@article{gui2023InstructIE, - author = {Honghao Gui and Jintian Zhang and Hongbin Ye and Ningyu Zhang}, - title = {InstructIE: {A} Chinese Instruction-based Information Extraction Dataset}, - journal = {arXiv preprint arXiv:2303.07992}, - year = {2023} -} -@article{yao2023edit, - author = {Yunzhi Yao and Peng Wang and Bozhong Tian and Siyuan Cheng and Zhoubo Li and Shumin Deng and Huajun Chen and Ningyu Zhang}, - title = {Editing Large Language Models: Problems, Methods, and Opportunities}, - journal = {arXiv preprint arXiv:2305.13172}, - year = {2023} -} -""" diff --git "a/spaces/oskarvanderwal/MT-bias-demo/results/simple_m\303\251rn\303\266k_de_aggregate.html" "b/spaces/oskarvanderwal/MT-bias-demo/results/simple_m\303\251rn\303\266k_de_aggregate.html" deleted file mode 100644 index 20015c89611d5a0b4c0d363788603937bddf1717..0000000000000000000000000000000000000000 --- "a/spaces/oskarvanderwal/MT-bias-demo/results/simple_m\303\251rn\303\266k_de_aggregate.html" +++ /dev/null @@ -1,46 +0,0 @@ -
    0th instance:
    - -
    -
    -
    - -
    -
    - Source Saliency Heatmap -
    - x: Generated tokens, y: Attributed tokens -
    - - - -
▁Er ▁ist ▁Ingenieur. </s>
▁Ő 0.656 0.409 0.093 0.534
▁mérnök. 0.754 0.884 0.955 0.57
</s> 0.0 0.0 0.0 0.0
    -
    - -
    -
    -
    - -
    0th instance:
    - -
    -
    -
    - -
    -
    - Target Saliency Heatmap -
    - x: Generated tokens, y: Attributed tokens -
    - - - -
▁Er ▁ist ▁Ingenieur. </s>
▁Er 0.225 0.12 -0.005
▁ist 0.257 -0.006
▁Ingenieur. 0.624
</s>
    -
    - -
    -
    -
    - diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/schedulers/ddim.md b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/schedulers/ddim.md deleted file mode 100644 index c5b79cb95fc99d8c5788f629c0063f15c19b6c39..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/docs/source/en/api/schedulers/ddim.md +++ /dev/null @@ -1,82 +0,0 @@ - - -# DDIMScheduler - -[Denoising Diffusion Implicit Models](https://huggingface.co/papers/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon. - -The abstract from the paper is: - -*Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, -yet they require simulating a Markov chain for many steps to produce a sample. -To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models -with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. -We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. -We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off -computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.* - -The original codebase of this paper can be found at [ermongroup/ddim](https://github.com/ermongroup/ddim), and you can contact the author on [tsong.me](https://tsong.me/). - -## Tips - -The paper [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://huggingface.co/papers/2305.08891) claims that a mismatch between the training and inference settings leads to suboptimal inference generation results for Stable Diffusion. To fix this, the authors propose: - - - -🧪 This is an experimental feature! - - - -1. rescale the noise schedule to enforce zero terminal signal-to-noise ratio (SNR) - -```py -pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, rescale_betas_zero_snr=True) -``` - -2. train a model with `v_prediction` (add the following argument to the [train_text_to_image.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image.py) or [train_text_to_image_lora.py](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_lora.py) scripts) - -```bash ---prediction_type="v_prediction" -``` - -3. change the sampler to always start from the last timestep - -```py -pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config, timestep_spacing="trailing") -``` - -4. 
rescale classifier-free guidance to prevent over-exposure - -```py -image = pipeline(prompt, guidance_rescale=0.7).images[0] -``` - -For example: - -```py -from diffusers import DiffusionPipeline, DDIMScheduler - -pipe = DiffusionPipeline.from_pretrained("ptx0/pseudo-journey-v2", torch_dtype=torch.float16) -pipe.scheduler = DDIMScheduler.from_config( - pipe.scheduler.config, rescale_betas_zero_snr=True, timestep_spacing="trailing" -) -pipe.to("cuda") - -prompt = "A lion in galaxies, spirals, nebulae, stars, smoke, iridescent, intricate detail, octane render, 8k" -image = pipeline(prompt, guidance_rescale=0.7).images[0] -``` - -## DDIMScheduler -[[autodoc]] DDIMScheduler - -## DDIMSchedulerOutput -[[autodoc]] schedulers.scheduling_ddim.DDIMSchedulerOutput diff --git a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/deepfloyd_if/__init__.py b/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/deepfloyd_if/__init__.py deleted file mode 100644 index bb0acffc6fa7cead85f3b30c7ca7d2ba16748ab8..0000000000000000000000000000000000000000 --- a/spaces/pablodawson/ldm3d-inpainting/diffuserslocal/src/diffusers/pipelines/deepfloyd_if/__init__.py +++ /dev/null @@ -1,84 +0,0 @@ -from typing import TYPE_CHECKING - -from ...utils import ( - OptionalDependencyNotAvailable, - _LazyModule, - get_objects_from_module, - is_torch_available, - is_transformers_available, -) - - -_dummy_objects = {} -_import_structure = { - "timesteps": [ - "fast27_timesteps", - "smart100_timesteps", - "smart185_timesteps", - "smart27_timesteps", - "smart50_timesteps", - "super100_timesteps", - "super27_timesteps", - "super40_timesteps", - ] -} - -try: - if not (is_transformers_available() and is_torch_available()): - raise OptionalDependencyNotAvailable() -except OptionalDependencyNotAvailable: - from ...utils import dummy_torch_and_transformers_objects # noqa F403 - - _dummy_objects.update(get_objects_from_module(dummy_torch_and_transformers_objects)) -else: - _import_structure["pipeline_if"] = ["IFPipeline"] - _import_structure["pipeline_if_img2img"] = ["IFImg2ImgPipeline"] - _import_structure["pipeline_if_img2img_superresolution"] = ["IFImg2ImgSuperResolutionPipeline"] - _import_structure["pipeline_if_inpainting"] = ["IFInpaintingPipeline"] - _import_structure["pipeline_if_inpainting_superresolution"] = ["IFInpaintingSuperResolutionPipeline"] - _import_structure["pipeline_if_superresolution"] = ["IFSuperResolutionPipeline"] - _import_structure["pipeline_output"] = ["IFPipelineOutput"] - _import_structure["safety_checker"] = ["IFSafetyChecker"] - _import_structure["watermark"] = ["IFWatermarker"] - - -if TYPE_CHECKING: - try: - if not (is_transformers_available() and is_torch_available()): - raise OptionalDependencyNotAvailable() - - except OptionalDependencyNotAvailable: - from ...utils.dummy_torch_and_transformers_objects import * - else: - from .pipeline_if import IFPipeline - from .pipeline_if_img2img import IFImg2ImgPipeline - from .pipeline_if_img2img_superresolution import IFImg2ImgSuperResolutionPipeline - from .pipeline_if_inpainting import IFInpaintingPipeline - from .pipeline_if_inpainting_superresolution import IFInpaintingSuperResolutionPipeline - from .pipeline_if_superresolution import IFSuperResolutionPipeline - from .pipeline_output import IFPipelineOutput - from .safety_checker import IFSafetyChecker - from .timesteps import ( - fast27_timesteps, - smart27_timesteps, - smart50_timesteps, - smart100_timesteps, - smart185_timesteps, - super27_timesteps, - 
super40_timesteps, - super100_timesteps, - ) - from .watermark import IFWatermarker - -else: - import sys - - sys.modules[__name__] = _LazyModule( - __name__, - globals()["__file__"], - _import_structure, - module_spec=__spec__, - ) - - for name, value in _dummy_objects.items(): - setattr(sys.modules[__name__], name, value) diff --git a/spaces/padmanabhbosamia/Stable_Diffusion/README.md b/spaces/padmanabhbosamia/Stable_Diffusion/README.md deleted file mode 100644 index 63f1703d01a4167cd17cf01f17d787c3565aa37e..0000000000000000000000000000000000000000 --- a/spaces/padmanabhbosamia/Stable_Diffusion/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Stable Diffusion -emoji: 🐢 -colorFrom: blue -colorTo: pink -sdk: gradio -sdk_version: 3.50.0 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/parkyzh/bingo/src/lib/bots/bing/sr.ts b/spaces/parkyzh/bingo/src/lib/bots/bing/sr.ts deleted file mode 100644 index 7cae14da7362bd6cc1e234851c11ca67e5a99f0c..0000000000000000000000000000000000000000 --- a/spaces/parkyzh/bingo/src/lib/bots/bing/sr.ts +++ /dev/null @@ -1,106 +0,0 @@ -// @ts-ignore -const SpeechRecognitionPolyfill: typeof webkitSpeechRecognition = typeof window !== 'undefined' ? ( - // @ts-ignore - window.SpeechRecognition || - window.webkitSpeechRecognition || - // @ts-ignore - window.mozSpeechRecognition || - // @ts-ignore - window.msSpeechRecognition || - // @ts-ignore - window.oSpeechRecognition -) as typeof webkitSpeechRecognition : undefined - -type subscriber = (msg: string, command?: string) => void - -export class SR { - recognition?: SpeechRecognition - onchange?: subscriber - transcript: boolean = false - listening: boolean = false - private commandsRe?: RegExp - constructor(commands: string[]) { - this.recognition = SpeechRecognitionPolyfill ? 
new SpeechRecognitionPolyfill() : undefined - if (!this.recognition) { - return - } - this.configuration('zh-CN') - if (commands.length) { - this.commandsRe = new RegExp(`^(${commands.join('|')})。?$`) - } - this.recognition.onresult = this.speechRecognition - this.recognition.onerror = (err) => { - console.log('err', err.error) - this.stop() - } - this.recognition.onend = () => { - if (this.recognition && this.listening) { - this.recognition.start() - } - } - } - - speechRecognition = (event: SpeechRecognitionEvent) => { - if (!this.listening) return - for (var i = event.resultIndex; i < event.results.length; i++) { - let result = event.results[i] - if (result.isFinal) { - var alt = result[0] - const text = alt.transcript.trim() - if (this.commandsRe && this.commandsRe.test(text)) { - return this.onchange?.('', RegExp.$1) - } - if (!this.transcript) return - this.onchange?.(text) - } - } - } - - private configuration = async (lang: string = 'zh-CN') => { - return new Promise((resolve) => { - if (this.recognition) { - this.recognition.continuous = true - this.recognition.lang = lang - this.recognition.onstart = resolve - } - }) - } - - start = async () => { - if (this.recognition && !this.listening) { - await this.recognition.start() - this.transcript = true - this.listening = true - } - } - - stop = () => { - if (this.recognition) { - this.recognition.stop() - this.transcript = false - this.listening = false - } - } - - - pause = () => { - if (this.recognition) { - this.transcript = false - } - } - - resume = () => { - if (this.recognition) { - this.transcript = true - } - } - - abort = () => { - if (this.recognition && this.transcript) { - this.recognition.abort() - this.transcript = false - this.listening = false - } - } -} - diff --git a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/show.py b/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/show.py deleted file mode 100644 index 6cd8190a4cd69b6aa9164048593d6a9f47a7e82f..0000000000000000000000000000000000000000 --- a/spaces/paulengstler/interpretable-vertebral-fracture-diagnosis/netdissect/show.py +++ /dev/null @@ -1,144 +0,0 @@ -# show.py -# -# An abbreviated way to output simple HTML layout of text and images -# into a python notebook. -# -# - show a PIL image to show an inline HTML . -# - show an array of items to vertically stack them, centered in a block. -# - show an array of arrays to horizontally lay them out as inline blocks. -# - show an array of tuples to create a table. - -import PIL.Image, base64, io, IPython, types, sys -import html as html_module -from IPython.display import display - -g_buffer = None - -def blocks(obj, space=''): - return IPython.display.HTML(space.join(blocks_tags(obj))) - -def rows(obj, space=''): - return IPython.display.HTML(space.join(rows_tags(obj))) - -def rows_tags(obj): - if isinstance(obj, dict): - obj = obj.items() - results = [] - results.append('') - for row in obj: - results.append('') - for item in row: - results.append('') - results.append('') - results.append('
    ') - results.extend(blocks_tags(item)) - results.append('
    ') - return results - -def blocks_tags(obj): - results = [] - if hasattr(obj, '_repr_html_'): - results.append(obj._repr_html_()) - elif isinstance(obj, PIL.Image.Image): - results.append(pil_to_html(obj)) - elif isinstance(obj, (str, int, float)): - results.append('
    ') - results.append(html_module.escape(str(obj))) - results.append('
    ') - elif isinstance(obj, dict): - results.extend(blocks_tags([(k, v) for k, v in obj.items()])) - elif hasattr(obj, '__iter__'): - if hasattr(obj, 'tolist'): - # Handle numpy/pytorch tensors as lists. - try: - obj = obj.tolist() - except: - pass - blockstart, blockend, tstart, tend, rstart, rend, cstart, cend = [ - '
    ', - '
    ', - '', - '
    ', - '', - '', - '', - '', - ] - needs_end = False - table_mode = False - for i, line in enumerate(obj): - if i == 0: - needs_end = True - if isinstance(line, tuple): - table_mode = True - results.append(tstart) - else: - results.append(blockstart) - if table_mode: - results.append(rstart) - if not isinstance(line, str) and hasattr(line, '__iter__'): - for cell in line: - results.append(cstart) - results.extend(blocks_tags(cell)) - results.append(cend) - else: - results.append(cstart) - results.extend(blocks_tags(line)) - results.append(cend) - results.append(rend) - else: - results.extend(blocks_tags(line)) - if needs_end: - results.append(table_mode and tend or blockend) - return results - -def pil_to_b64(img, format='png'): - buffered = io.BytesIO() - img.save(buffered, format=format) - return base64.b64encode(buffered.getvalue()).decode('utf-8') - -def pil_to_url(img, format='png'): - return 'data:image/%s;base64,%s' % (format, pil_to_b64(img, format)) - -def pil_to_html(img, margin=1): - mattr = ' style="margin:%dpx"' % margin - return '' % (pil_to_url(img), mattr) - -def a(x, cols=None): - global g_buffer - if g_buffer is None: - g_buffer = [] - g_buffer.append(x) - if cols is not None and len(g_buffer) >= cols: - flush() - -def reset(): - global g_buffer - g_buffer = None - -def flush(*args, **kwargs): - global g_buffer - if g_buffer is not None: - x = g_buffer - g_buffer = None - display(blocks(x, *args, **kwargs)) - -def show(x=None, *args, **kwargs): - flush(*args, **kwargs) - if x is not None: - display(blocks(x, *args, **kwargs)) - -def html(obj, space=''): - return blocks(obj, space)._repr_html_() - -class CallableModule(types.ModuleType): - def __init__(self): - # or super().__init__(__name__) for Python 3 - types.ModuleType.__init__(self, __name__) - self.__dict__.update(sys.modules[__name__].__dict__) - def __call__(self, x=None, *args, **kwargs): - show(x, *args, **kwargs) - -sys.modules[__name__] = CallableModule() diff --git a/spaces/pixiou/bingo/src/components/tone-selector.tsx b/spaces/pixiou/bingo/src/components/tone-selector.tsx deleted file mode 100644 index 5c6e464c91f564b895acd121f0a4a79ed9c5c356..0000000000000000000000000000000000000000 --- a/spaces/pixiou/bingo/src/components/tone-selector.tsx +++ /dev/null @@ -1,43 +0,0 @@ -import React from 'react' -import { BingConversationStyle } from '@/lib/bots/bing/types' -import { cn } from '@/lib/utils' - -type ToneItem = { - type: BingConversationStyle, - name: string -} - -const ToneList: ToneItem[] = [ - { name: '有创造力', type: BingConversationStyle.Creative }, - { name: '更平衡', type: BingConversationStyle.Balanced }, - { name: '更精确', type: BingConversationStyle.Precise } -] - -interface ToneSelectorProps { - type: BingConversationStyle | '' - onChange?: (type: BingConversationStyle) => void -} - -export function ToneSelector({ type, onChange }: ToneSelectorProps) { - return ( -
    -
    - 选择对话样式 -
    -
    -
      - { - ToneList.map(tone => ( -
    • onChange?.(tone.type)}> - -
    • - )) - } -
    -
    -
    - ) -} diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/command_context.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/command_context.py deleted file mode 100644 index 139995ac3f109a82664e4913f7ebc32ecf7617e1..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/cli/command_context.py +++ /dev/null @@ -1,27 +0,0 @@ -from contextlib import ExitStack, contextmanager -from typing import ContextManager, Generator, TypeVar - -_T = TypeVar("_T", covariant=True) - - -class CommandContextMixIn: - def __init__(self) -> None: - super().__init__() - self._in_main_context = False - self._main_context = ExitStack() - - @contextmanager - def main_context(self) -> Generator[None, None, None]: - assert not self._in_main_context - - self._in_main_context = True - try: - with self._main_context: - yield - finally: - self._in_main_context = False - - def enter_context(self, context_provider: ContextManager[_T]) -> _T: - assert self._in_main_context - - return self._main_context.enter_context(context_provider) diff --git a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/_internal_utils.py b/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/_internal_utils.py deleted file mode 100644 index f2cf635e2937ee9b123a1498c5c5f723a6e20084..0000000000000000000000000000000000000000 --- a/spaces/pknez/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_vendor/requests/_internal_utils.py +++ /dev/null @@ -1,50 +0,0 @@ -""" -requests._internal_utils -~~~~~~~~~~~~~~ - -Provides utility functions that are consumed internally by Requests -which depend on extremely few external helpers (such as compat) -""" -import re - -from .compat import builtin_str - -_VALID_HEADER_NAME_RE_BYTE = re.compile(rb"^[^:\s][^:\r\n]*$") -_VALID_HEADER_NAME_RE_STR = re.compile(r"^[^:\s][^:\r\n]*$") -_VALID_HEADER_VALUE_RE_BYTE = re.compile(rb"^\S[^\r\n]*$|^$") -_VALID_HEADER_VALUE_RE_STR = re.compile(r"^\S[^\r\n]*$|^$") - -_HEADER_VALIDATORS_STR = (_VALID_HEADER_NAME_RE_STR, _VALID_HEADER_VALUE_RE_STR) -_HEADER_VALIDATORS_BYTE = (_VALID_HEADER_NAME_RE_BYTE, _VALID_HEADER_VALUE_RE_BYTE) -HEADER_VALIDATORS = { - bytes: _HEADER_VALIDATORS_BYTE, - str: _HEADER_VALIDATORS_STR, -} - - -def to_native_string(string, encoding="ascii"): - """Given a string object, regardless of type, returns a representation of - that string in the native string type, encoding and decoding where - necessary. This assumes ASCII unless told otherwise. - """ - if isinstance(string, builtin_str): - out = string - else: - out = string.decode(encoding) - - return out - - -def unicode_is_ascii(u_string): - """Determine if unicode string only contains ASCII characters. - - :param str u_string: unicode string to check. Must be unicode - and not Python 2 `str`. 
- :rtype: bool - """ - assert isinstance(u_string, str) - try: - u_string.encode("ascii") - return True - except UnicodeEncodeError: - return False diff --git a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/ksmedia.h b/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/ksmedia.h deleted file mode 100644 index f029b017f402efb6de349bcdd724593139f8d82e..0000000000000000000000000000000000000000 --- a/spaces/prerna9811/Chord/portaudio/src/hostapi/wasapi/mingw-include/ksmedia.h +++ /dev/null @@ -1,4610 +0,0 @@ -/** - * This file has no copyright assigned and is placed in the Public Domain. - * This file is part of the w64 mingw-runtime package. - * No warranty is given; refer to the file DISCLAIMER.PD within this package. - */ -#if !defined(_KS_) -#warning ks.h must be included before ksmedia.h -#include "ks.h" -#endif - -#if __GNUC__ >= 3 -#pragma GCC system_header -#endif - -#if !defined(_KSMEDIA_) -#define _KSMEDIA_ - -typedef struct { - KSPROPERTY Property; - KSMULTIPLE_ITEM MultipleItem; -} KSMULTIPLE_DATA_PROP,*PKSMULTIPLE_DATA_PROP; - -#define STATIC_KSMEDIUMSETID_MidiBus \ - 0x05908040L,0x3246,0x11D0,0xA5,0xD6,0x28,0xDB,0x04,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("05908040-3246-11D0-A5D6-28DB04C10000",KSMEDIUMSETID_MidiBus); -#define KSMEDIUMSETID_MidiBus DEFINE_GUIDNAMED(KSMEDIUMSETID_MidiBus) - -#define STATIC_KSMEDIUMSETID_VPBus \ - 0xA18C15ECL,0xCE43,0x11D0,0xAB,0xE7,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("A18C15EC-CE43-11D0-ABE7-00A0C9223196",KSMEDIUMSETID_VPBus); -#define KSMEDIUMSETID_VPBus DEFINE_GUIDNAMED(KSMEDIUMSETID_VPBus) - -#define STATIC_KSINTERFACESETID_Media \ - 0x3A13EB40L,0x30A7,0x11D0,0xA5,0xD6,0x28,0xDB,0x04,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("3A13EB40-30A7-11D0-A5D6-28DB04C10000",KSINTERFACESETID_Media); -#define KSINTERFACESETID_Media DEFINE_GUIDNAMED(KSINTERFACESETID_Media) - -typedef enum { - KSINTERFACE_MEDIA_MUSIC, - KSINTERFACE_MEDIA_WAVE_BUFFERED, - KSINTERFACE_MEDIA_WAVE_QUEUED -} KSINTERFACE_MEDIA; - -#ifndef INIT_USBAUDIO_MID -#define INIT_USBAUDIO_MID(guid,id) \ -{ \ - (guid)->Data1 = 0x4e1cecd2 + (USHORT)(id); \ - (guid)->Data2 = 0x1679; \ - (guid)->Data3 = 0x463b; \ - (guid)->Data4[0] = 0xa7; \ - (guid)->Data4[1] = 0x2f; \ - (guid)->Data4[2] = 0xa5; \ - (guid)->Data4[3] = 0xbf; \ - (guid)->Data4[4] = 0x64; \ - (guid)->Data4[5] = 0xc8; \ - (guid)->Data4[6] = 0x6e; \ - (guid)->Data4[7] = 0xba; \ -} -#define EXTRACT_USBAUDIO_MID(guid) \ - (USHORT)((guid)->Data1 - 0x4e1cecd2) -#define DEFINE_USBAUDIO_MID_GUID(id) \ - 0x4e1cecd2+(USHORT)(id),0x1679,0x463b,0xa7,0x2f,0xa5,0xbf,0x64,0xc8,0x6e,0xba -#define IS_COMPATIBLE_USBAUDIO_MID(guid) \ - (((guid)->Data1 >= 0x4e1cecd2) && \ - ((guid)->Data1 < 0x4e1cecd2 + 0xffff) && \ - ((guid)->Data2 == 0x1679) && \ - ((guid)->Data3 == 0x463b) && \ - ((guid)->Data4[0] == 0xa7) && \ - ((guid)->Data4[1] == 0x2f) && \ - ((guid)->Data4[2] == 0xa5) && \ - ((guid)->Data4[3] == 0xbf) && \ - ((guid)->Data4[4] == 0x64) && \ - ((guid)->Data4[5] == 0xc8) && \ - ((guid)->Data4[6] == 0x6e) && \ - ((guid)->Data4[7] == 0xba) ) -#endif /* INIT_USBAUDIO_MID */ - -#ifndef INIT_USBAUDIO_PID -#define INIT_USBAUDIO_PID(guid,id) \ -{ \ - (guid)->Data1 = 0xabcc5a5e + (USHORT)(id); \ - (guid)->Data2 = 0xc263; \ - (guid)->Data3 = 0x463b; \ - (guid)->Data4[0] = 0xa7; \ - (guid)->Data4[1] = 0x2f; \ - (guid)->Data4[2] = 0xa5; \ - (guid)->Data4[3] = 0xbf; \ - (guid)->Data4[4] = 0x64; \ - (guid)->Data4[5] = 0xc8; \ - (guid)->Data4[6] = 0x6e; \ - (guid)->Data4[7] = 0xba; \ -} -#define EXTRACT_USBAUDIO_PID(guid) \ 
- (USHORT)((guid)->Data1 - 0xabcc5a5e) -#define DEFINE_USBAUDIO_PID_GUID(id) \ - 0xabcc5a5e+(USHORT)(id),0xc263,0x463b,0xa7,0x2f,0xa5,0xbf,0x64,0xc8,0x6e,0xba -#define IS_COMPATIBLE_USBAUDIO_PID(guid) \ - (((guid)->Data1 >= 0xabcc5a5e) && \ - ((guid)->Data1 < 0xabcc5a5e + 0xffff) && \ - ((guid)->Data2 == 0xc263) && \ - ((guid)->Data3 == 0x463b) && \ - ((guid)->Data4[0] == 0xa7) && \ - ((guid)->Data4[1] == 0x2f) && \ - ((guid)->Data4[2] == 0xa5) && \ - ((guid)->Data4[3] == 0xbf) && \ - ((guid)->Data4[4] == 0x64) && \ - ((guid)->Data4[5] == 0xc8) && \ - ((guid)->Data4[6] == 0x6e) && \ - ((guid)->Data4[7] == 0xba) ) -#endif /* INIT_USBAUDIO_PID */ - -#ifndef INIT_USBAUDIO_PRODUCT_NAME -#define INIT_USBAUDIO_PRODUCT_NAME(guid,vid,pid,strIndex) \ -{ \ - (guid)->Data1 = 0XFC575048 + (USHORT)(vid); \ - (guid)->Data2 = 0x2E08 + (USHORT)(pid); \ - (guid)->Data3 = 0x463B + (USHORT)(strIndex); \ - (guid)->Data4[0] = 0xA7; \ - (guid)->Data4[1] = 0x2F; \ - (guid)->Data4[2] = 0xA5; \ - (guid)->Data4[3] = 0xBF; \ - (guid)->Data4[4] = 0x64; \ - (guid)->Data4[5] = 0xC8; \ - (guid)->Data4[6] = 0x6E; \ - (guid)->Data4[7] = 0xBA; \ -} -#define DEFINE_USBAUDIO_PRODUCT_NAME(vid,pid,strIndex) \ - 0xFC575048+(USHORT)(vid),0x2E08+(USHORT)(pid),0x463B+(USHORT)(strIndex),0xA7,0x2F,0xA5,0xBF,0x64,0xC8,0x6E,0xBA -#endif /* INIT_USBAUDIO_PRODUCT_NAME */ - -#define STATIC_KSCOMPONENTID_USBAUDIO \ - 0x8F1275F0,0x26E9,0x4264,0xBA,0x4D,0x39,0xFF,0xF0,0x1D,0x94,0xAA -DEFINE_GUIDSTRUCT("8F1275F0-26E9-4264-BA4D-39FFF01D94AA",KSCOMPONENTID_USBAUDIO); -#define KSCOMPONENTID_USBAUDIO DEFINE_GUIDNAMED(KSCOMPONENTID_USBAUDIO) - -#define INIT_USB_TERMINAL(guid,id) \ -{ \ - (guid)->Data1 = 0xDFF219E0 + (USHORT)(id); \ - (guid)->Data2 = 0xF70F; \ - (guid)->Data3 = 0x11D0; \ - (guid)->Data4[0] = 0xb9; \ - (guid)->Data4[1] = 0x17; \ - (guid)->Data4[2] = 0x00; \ - (guid)->Data4[3] = 0xa0; \ - (guid)->Data4[4] = 0xc9; \ - (guid)->Data4[5] = 0x22; \ - (guid)->Data4[6] = 0x31; \ - (guid)->Data4[7] = 0x96; \ -} -#define EXTRACT_USB_TERMINAL(guid) \ - (USHORT)((guid)->Data1 - 0xDFF219E0) -#define DEFINE_USB_TERMINAL_GUID(id) \ - 0xDFF219E0+(USHORT)(id),0xF70F,0x11D0,0xB9,0x17,0x00,0xA0,0xC9,0x22,0x31,0x96 - -#define STATIC_KSNODETYPE_MICROPHONE \ - DEFINE_USB_TERMINAL_GUID(0x0201) -DEFINE_GUIDSTRUCT("DFF21BE1-F70F-11D0-B917-00A0C9223196",KSNODETYPE_MICROPHONE); -#define KSNODETYPE_MICROPHONE DEFINE_GUIDNAMED(KSNODETYPE_MICROPHONE) - -#define STATIC_KSNODETYPE_DESKTOP_MICROPHONE \ - DEFINE_USB_TERMINAL_GUID(0x0202) -DEFINE_GUIDSTRUCT("DFF21BE2-F70F-11D0-B917-00A0C9223196",KSNODETYPE_DESKTOP_MICROPHONE); -#define KSNODETYPE_DESKTOP_MICROPHONE DEFINE_GUIDNAMED(KSNODETYPE_DESKTOP_MICROPHONE) - -#define STATIC_KSNODETYPE_PERSONAL_MICROPHONE \ - DEFINE_USB_TERMINAL_GUID(0x0203) -DEFINE_GUIDSTRUCT("DFF21BE3-F70F-11D0-B917-00A0C9223196",KSNODETYPE_PERSONAL_MICROPHONE); -#define KSNODETYPE_PERSONAL_MICROPHONE DEFINE_GUIDNAMED(KSNODETYPE_PERSONAL_MICROPHONE) - -#define STATIC_KSNODETYPE_OMNI_DIRECTIONAL_MICROPHONE \ - DEFINE_USB_TERMINAL_GUID(0x0204) -DEFINE_GUIDSTRUCT("DFF21BE4-F70F-11D0-B917-00A0C9223196",KSNODETYPE_OMNI_DIRECTIONAL_MICROPHONE); -#define KSNODETYPE_OMNI_DIRECTIONAL_MICROPHONE DEFINE_GUIDNAMED(KSNODETYPE_OMNI_DIRECTIONAL_MICROPHONE) - -#define STATIC_KSNODETYPE_MICROPHONE_ARRAY \ - DEFINE_USB_TERMINAL_GUID(0x0205) -DEFINE_GUIDSTRUCT("DFF21BE5-F70F-11D0-B917-00A0C9223196",KSNODETYPE_MICROPHONE_ARRAY); -#define KSNODETYPE_MICROPHONE_ARRAY DEFINE_GUIDNAMED(KSNODETYPE_MICROPHONE_ARRAY) - -#define 
STATIC_KSNODETYPE_PROCESSING_MICROPHONE_ARRAY \ - DEFINE_USB_TERMINAL_GUID(0x0206) -DEFINE_GUIDSTRUCT("DFF21BE6-F70F-11D0-B917-00A0C9223196",KSNODETYPE_PROCESSING_MICROPHONE_ARRAY); -#define KSNODETYPE_PROCESSING_MICROPHONE_ARRAY DEFINE_GUIDNAMED(KSNODETYPE_PROCESSING_MICROPHONE_ARRAY) - -#define STATIC_KSCATEGORY_MICROPHONE_ARRAY_PROCESSOR \ - 0x830a44f2,0xa32d,0x476b,0xbe,0x97,0x42,0x84,0x56,0x73,0xb3,0x5a -DEFINE_GUIDSTRUCT("830a44f2-a32d-476b-be97-42845673b35a",KSCATEGORY_MICROPHONE_ARRAY_PROCESSOR); -#define KSCATEGORY_MICROPHONE_ARRAY_PROCESSOR DEFINE_GUIDNAMED(KSCATEGORY_MICROPHONE_ARRAY_PROCESSOR) - -#define STATIC_KSNODETYPE_SPEAKER \ - DEFINE_USB_TERMINAL_GUID(0x0301) -DEFINE_GUIDSTRUCT("DFF21CE1-F70F-11D0-B917-00A0C9223196",KSNODETYPE_SPEAKER); -#define KSNODETYPE_SPEAKER DEFINE_GUIDNAMED(KSNODETYPE_SPEAKER) - -#define STATIC_KSNODETYPE_HEADPHONES \ - DEFINE_USB_TERMINAL_GUID(0x0302) -DEFINE_GUIDSTRUCT("DFF21CE2-F70F-11D0-B917-00A0C9223196",KSNODETYPE_HEADPHONES); -#define KSNODETYPE_HEADPHONES DEFINE_GUIDNAMED(KSNODETYPE_HEADPHONES) - -#define STATIC_KSNODETYPE_HEAD_MOUNTED_DISPLAY_AUDIO \ - DEFINE_USB_TERMINAL_GUID(0x0303) -DEFINE_GUIDSTRUCT("DFF21CE3-F70F-11D0-B917-00A0C9223196",KSNODETYPE_HEAD_MOUNTED_DISPLAY_AUDIO); -#define KSNODETYPE_HEAD_MOUNTED_DISPLAY_AUDIO DEFINE_GUIDNAMED(KSNODETYPE_HEAD_MOUNTED_DISPLAY_AUDIO) - -#define STATIC_KSNODETYPE_DESKTOP_SPEAKER \ - DEFINE_USB_TERMINAL_GUID(0x0304) -DEFINE_GUIDSTRUCT("DFF21CE4-F70F-11D0-B917-00A0C9223196",KSNODETYPE_DESKTOP_SPEAKER); -#define KSNODETYPE_DESKTOP_SPEAKER DEFINE_GUIDNAMED(KSNODETYPE_DESKTOP_SPEAKER) - -#define STATIC_KSNODETYPE_ROOM_SPEAKER \ - DEFINE_USB_TERMINAL_GUID(0x0305) -DEFINE_GUIDSTRUCT("DFF21CE5-F70F-11D0-B917-00A0C9223196",KSNODETYPE_ROOM_SPEAKER); -#define KSNODETYPE_ROOM_SPEAKER DEFINE_GUIDNAMED(KSNODETYPE_ROOM_SPEAKER) - -#define STATIC_KSNODETYPE_COMMUNICATION_SPEAKER \ - DEFINE_USB_TERMINAL_GUID(0x0306) -DEFINE_GUIDSTRUCT("DFF21CE6-F70F-11D0-B917-00A0C9223196",KSNODETYPE_COMMUNICATION_SPEAKER); -#define KSNODETYPE_COMMUNICATION_SPEAKER DEFINE_GUIDNAMED(KSNODETYPE_COMMUNICATION_SPEAKER) - -#define STATIC_KSNODETYPE_LOW_FREQUENCY_EFFECTS_SPEAKER \ - DEFINE_USB_TERMINAL_GUID(0x0307) -DEFINE_GUIDSTRUCT("DFF21CE7-F70F-11D0-B917-00A0C9223196",KSNODETYPE_LOW_FREQUENCY_EFFECTS_SPEAKER); -#define KSNODETYPE_LOW_FREQUENCY_EFFECTS_SPEAKER DEFINE_GUIDNAMED(KSNODETYPE_LOW_FREQUENCY_EFFECTS_SPEAKER) - -#define STATIC_KSNODETYPE_HANDSET \ - DEFINE_USB_TERMINAL_GUID(0x0401) -DEFINE_GUIDSTRUCT("DFF21DE1-F70F-11D0-B917-00A0C9223196",KSNODETYPE_HANDSET); -#define KSNODETYPE_HANDSET DEFINE_GUIDNAMED(KSNODETYPE_HANDSET) - -#define STATIC_KSNODETYPE_HEADSET \ - DEFINE_USB_TERMINAL_GUID(0x0402) -DEFINE_GUIDSTRUCT("DFF21DE2-F70F-11D0-B917-00A0C9223196",KSNODETYPE_HEADSET); -#define KSNODETYPE_HEADSET DEFINE_GUIDNAMED(KSNODETYPE_HEADSET) - -#define STATIC_KSNODETYPE_SPEAKERPHONE_NO_ECHO_REDUCTION \ - DEFINE_USB_TERMINAL_GUID(0x0403) -DEFINE_GUIDSTRUCT("DFF21DE3-F70F-11D0-B917-00A0C9223196",KSNODETYPE_SPEAKERPHONE_NO_ECHO_REDUCTION); -#define KSNODETYPE_SPEAKERPHONE_NO_ECHO_REDUCTION DEFINE_GUIDNAMED(KSNODETYPE_SPEAKERPHONE_NO_ECHO_REDUCTION) - -#define STATIC_KSNODETYPE_ECHO_SUPPRESSING_SPEAKERPHONE \ - DEFINE_USB_TERMINAL_GUID(0x0404) -DEFINE_GUIDSTRUCT("DFF21DE4-F70F-11D0-B917-00A0C9223196",KSNODETYPE_ECHO_SUPPRESSING_SPEAKERPHONE); -#define KSNODETYPE_ECHO_SUPPRESSING_SPEAKERPHONE DEFINE_GUIDNAMED(KSNODETYPE_ECHO_SUPPRESSING_SPEAKERPHONE) - -#define STATIC_KSNODETYPE_ECHO_CANCELING_SPEAKERPHONE \ - 
DEFINE_USB_TERMINAL_GUID(0x0405) -DEFINE_GUIDSTRUCT("DFF21DE5-F70F-11D0-B917-00A0C9223196",KSNODETYPE_ECHO_CANCELING_SPEAKERPHONE); -#define KSNODETYPE_ECHO_CANCELING_SPEAKERPHONE DEFINE_GUIDNAMED(KSNODETYPE_ECHO_CANCELING_SPEAKERPHONE) - -#define STATIC_KSNODETYPE_PHONE_LINE \ - DEFINE_USB_TERMINAL_GUID(0x0501) -DEFINE_GUIDSTRUCT("DFF21EE1-F70F-11D0-B917-00A0C9223196",KSNODETYPE_PHONE_LINE); -#define KSNODETYPE_PHONE_LINE DEFINE_GUIDNAMED(KSNODETYPE_PHONE_LINE) - -#define STATIC_KSNODETYPE_TELEPHONE \ - DEFINE_USB_TERMINAL_GUID(0x0502) -DEFINE_GUIDSTRUCT("DFF21EE2-F70F-11D0-B917-00A0C9223196",KSNODETYPE_TELEPHONE); -#define KSNODETYPE_TELEPHONE DEFINE_GUIDNAMED(KSNODETYPE_TELEPHONE) - -#define STATIC_KSNODETYPE_DOWN_LINE_PHONE \ - DEFINE_USB_TERMINAL_GUID(0x0503) -DEFINE_GUIDSTRUCT("DFF21EE3-F70F-11D0-B917-00A0C9223196",KSNODETYPE_DOWN_LINE_PHONE); -#define KSNODETYPE_DOWN_LINE_PHONE DEFINE_GUIDNAMED(KSNODETYPE_DOWN_LINE_PHONE) - -#define STATIC_KSNODETYPE_ANALOG_CONNECTOR \ - DEFINE_USB_TERMINAL_GUID(0x601) -DEFINE_GUIDSTRUCT("DFF21FE1-F70F-11D0-B917-00A0C9223196",KSNODETYPE_ANALOG_CONNECTOR); -#define KSNODETYPE_ANALOG_CONNECTOR DEFINE_GUIDNAMED(KSNODETYPE_ANALOG_CONNECTOR) - -#define STATIC_KSNODETYPE_DIGITAL_AUDIO_INTERFACE \ - DEFINE_USB_TERMINAL_GUID(0x0602) -DEFINE_GUIDSTRUCT("DFF21FE2-F70F-11D0-B917-00A0C9223196",KSNODETYPE_DIGITAL_AUDIO_INTERFACE); -#define KSNODETYPE_DIGITAL_AUDIO_INTERFACE DEFINE_GUIDNAMED(KSNODETYPE_DIGITAL_AUDIO_INTERFACE) - -#define STATIC_KSNODETYPE_LINE_CONNECTOR \ - DEFINE_USB_TERMINAL_GUID(0x0603) -DEFINE_GUIDSTRUCT("DFF21FE3-F70F-11D0-B917-00A0C9223196",KSNODETYPE_LINE_CONNECTOR); -#define KSNODETYPE_LINE_CONNECTOR DEFINE_GUIDNAMED(KSNODETYPE_LINE_CONNECTOR) - -#define STATIC_KSNODETYPE_LEGACY_AUDIO_CONNECTOR \ - DEFINE_USB_TERMINAL_GUID(0x0604) -DEFINE_GUIDSTRUCT("DFF21FE4-F70F-11D0-B917-00A0C9223196",KSNODETYPE_LEGACY_AUDIO_CONNECTOR); -#define KSNODETYPE_LEGACY_AUDIO_CONNECTOR DEFINE_GUIDNAMED(KSNODETYPE_LEGACY_AUDIO_CONNECTOR) - -#define STATIC_KSNODETYPE_SPDIF_INTERFACE \ - DEFINE_USB_TERMINAL_GUID(0x0605) -DEFINE_GUIDSTRUCT("DFF21FE5-F70F-11D0-B917-00A0C9223196",KSNODETYPE_SPDIF_INTERFACE); -#define KSNODETYPE_SPDIF_INTERFACE DEFINE_GUIDNAMED(KSNODETYPE_SPDIF_INTERFACE) - -#define STATIC_KSNODETYPE_1394_DA_STREAM \ - DEFINE_USB_TERMINAL_GUID(0x0606) -DEFINE_GUIDSTRUCT("DFF21FE6-F70F-11D0-B917-00A0C9223196",KSNODETYPE_1394_DA_STREAM); -#define KSNODETYPE_1394_DA_STREAM DEFINE_GUIDNAMED(KSNODETYPE_1394_DA_STREAM) - -#define STATIC_KSNODETYPE_1394_DV_STREAM_SOUNDTRACK \ - DEFINE_USB_TERMINAL_GUID(0x0607) -DEFINE_GUIDSTRUCT("DFF21FE7-F70F-11D0-B917-00A0C9223196",KSNODETYPE_1394_DV_STREAM_SOUNDTRACK); -#define KSNODETYPE_1394_DV_STREAM_SOUNDTRACK DEFINE_GUIDNAMED(KSNODETYPE_1394_DV_STREAM_SOUNDTRACK) - -#define STATIC_KSNODETYPE_LEVEL_CALIBRATION_NOISE_SOURCE \ - DEFINE_USB_TERMINAL_GUID(0x0701) -DEFINE_GUIDSTRUCT("DFF220E1-F70F-11D0-B917-00A0C9223196",KSNODETYPE_LEVEL_CALIBRATION_NOISE_SOURCE); -#define KSNODETYPE_LEVEL_CALIBRATION_NOISE_SOURCE DEFINE_GUIDNAMED(KSNODETYPE_LEVEL_CALIBRATION_NOISE_SOURCE) - -#define STATIC_KSNODETYPE_EQUALIZATION_NOISE \ - DEFINE_USB_TERMINAL_GUID(0x0702) -DEFINE_GUIDSTRUCT("DFF220E2-F70F-11D0-B917-00A0C9223196",KSNODETYPE_EQUALIZATION_NOISE); -#define KSNODETYPE_EQUALIZATION_NOISE DEFINE_GUIDNAMED(KSNODETYPE_EQUALIZATION_NOISE) - -#define STATIC_KSNODETYPE_CD_PLAYER \ - DEFINE_USB_TERMINAL_GUID(0x0703) -DEFINE_GUIDSTRUCT("DFF220E3-F70F-11D0-B917-00A0C9223196",KSNODETYPE_CD_PLAYER); -#define KSNODETYPE_CD_PLAYER 
DEFINE_GUIDNAMED(KSNODETYPE_CD_PLAYER) - -#define STATIC_KSNODETYPE_DAT_IO_DIGITAL_AUDIO_TAPE \ - DEFINE_USB_TERMINAL_GUID(0x0704) -DEFINE_GUIDSTRUCT("DFF220E4-F70F-11D0-B917-00A0C9223196",KSNODETYPE_DAT_IO_DIGITAL_AUDIO_TAPE); -#define KSNODETYPE_DAT_IO_DIGITAL_AUDIO_TAPE DEFINE_GUIDNAMED(KSNODETYPE_DAT_IO_DIGITAL_AUDIO_TAPE) - -#define STATIC_KSNODETYPE_DCC_IO_DIGITAL_COMPACT_CASSETTE \ - DEFINE_USB_TERMINAL_GUID(0x0705) -DEFINE_GUIDSTRUCT("DFF220E5-F70F-11D0-B917-00A0C9223196",KSNODETYPE_DCC_IO_DIGITAL_COMPACT_CASSETTE); -#define KSNODETYPE_DCC_IO_DIGITAL_COMPACT_CASSETTE DEFINE_GUIDNAMED(KSNODETYPE_DCC_IO_DIGITAL_COMPACT_CASSETTE) - -#define STATIC_KSNODETYPE_MINIDISK \ - DEFINE_USB_TERMINAL_GUID(0x0706) -DEFINE_GUIDSTRUCT("DFF220E6-F70F-11D0-B917-00A0C9223196",KSNODETYPE_MINIDISK); -#define KSNODETYPE_MINIDISK DEFINE_GUIDNAMED(KSNODETYPE_MINIDISK) - -#define STATIC_KSNODETYPE_ANALOG_TAPE \ - DEFINE_USB_TERMINAL_GUID(0x0707) -DEFINE_GUIDSTRUCT("DFF220E7-F70F-11D0-B917-00A0C9223196",KSNODETYPE_ANALOG_TAPE); -#define KSNODETYPE_ANALOG_TAPE DEFINE_GUIDNAMED(KSNODETYPE_ANALOG_TAPE) - -#define STATIC_KSNODETYPE_PHONOGRAPH \ - DEFINE_USB_TERMINAL_GUID(0x0708) -DEFINE_GUIDSTRUCT("DFF220E8-F70F-11D0-B917-00A0C9223196",KSNODETYPE_PHONOGRAPH); -#define KSNODETYPE_PHONOGRAPH DEFINE_GUIDNAMED(KSNODETYPE_PHONOGRAPH) - -#define STATIC_KSNODETYPE_VCR_AUDIO \ - DEFINE_USB_TERMINAL_GUID(0x0708) -DEFINE_GUIDSTRUCT("DFF220E9-F70F-11D0-B917-00A0C9223196",KSNODETYPE_VCR_AUDIO); -#define KSNODETYPE_VCR_AUDIO DEFINE_GUIDNAMED(KSNODETYPE_VCR_AUDIO) - -#define STATIC_KSNODETYPE_VIDEO_DISC_AUDIO \ - DEFINE_USB_TERMINAL_GUID(0x070A) -DEFINE_GUIDSTRUCT("DFF220EA-F70F-11D0-B917-00A0C9223196",KSNODETYPE_VIDEO_DISC_AUDIO); -#define KSNODETYPE_VIDEO_DISC_AUDIO DEFINE_GUIDNAMED(KSNODETYPE_VIDEO_DISC_AUDIO) - -#define STATIC_KSNODETYPE_DVD_AUDIO \ - DEFINE_USB_TERMINAL_GUID(0x070B) -DEFINE_GUIDSTRUCT("DFF220EB-F70F-11D0-B917-00A0C9223196",KSNODETYPE_DVD_AUDIO); -#define KSNODETYPE_DVD_AUDIO DEFINE_GUIDNAMED(KSNODETYPE_DVD_AUDIO) - -#define STATIC_KSNODETYPE_TV_TUNER_AUDIO \ - DEFINE_USB_TERMINAL_GUID(0x070C) -DEFINE_GUIDSTRUCT("DFF220EC-F70F-11D0-B917-00A0C9223196",KSNODETYPE_TV_TUNER_AUDIO); -#define KSNODETYPE_TV_TUNER_AUDIO DEFINE_GUIDNAMED(KSNODETYPE_TV_TUNER_AUDIO) - -#define STATIC_KSNODETYPE_SATELLITE_RECEIVER_AUDIO \ - DEFINE_USB_TERMINAL_GUID(0x070D) -DEFINE_GUIDSTRUCT("DFF220ED-F70F-11D0-B917-00A0C9223196",KSNODETYPE_SATELLITE_RECEIVER_AUDIO); -#define KSNODETYPE_SATELLITE_RECEIVER_AUDIO DEFINE_GUIDNAMED(KSNODETYPE_SATELLITE_RECEIVER_AUDIO) - -#define STATIC_KSNODETYPE_CABLE_TUNER_AUDIO \ - DEFINE_USB_TERMINAL_GUID(0x070E) -DEFINE_GUIDSTRUCT("DFF220EE-F70F-11D0-B917-00A0C9223196",KSNODETYPE_CABLE_TUNER_AUDIO); -#define KSNODETYPE_CABLE_TUNER_AUDIO DEFINE_GUIDNAMED(KSNODETYPE_CABLE_TUNER_AUDIO) - -#define STATIC_KSNODETYPE_DSS_AUDIO \ - DEFINE_USB_TERMINAL_GUID(0x070F) -DEFINE_GUIDSTRUCT("DFF220EF-F70F-11D0-B917-00A0C9223196",KSNODETYPE_DSS_AUDIO); -#define KSNODETYPE_DSS_AUDIO DEFINE_GUIDNAMED(KSNODETYPE_DSS_AUDIO) - -#define STATIC_KSNODETYPE_RADIO_RECEIVER \ - DEFINE_USB_TERMINAL_GUID(0x0710) -DEFINE_GUIDSTRUCT("DFF220F0-F70F-11D0-B917-00A0C9223196",KSNODETYPE_RADIO_RECEIVER); -#define KSNODETYPE_RADIO_RECEIVER DEFINE_GUIDNAMED(KSNODETYPE_RADIO_RECEIVER) - -#define STATIC_KSNODETYPE_RADIO_TRANSMITTER \ - DEFINE_USB_TERMINAL_GUID(0x0711) -DEFINE_GUIDSTRUCT("DFF220F1-F70F-11D0-B917-00A0C9223196",KSNODETYPE_RADIO_TRANSMITTER); -#define KSNODETYPE_RADIO_TRANSMITTER DEFINE_GUIDNAMED(KSNODETYPE_RADIO_TRANSMITTER) 
- -#define STATIC_KSNODETYPE_MULTITRACK_RECORDER \ - DEFINE_USB_TERMINAL_GUID(0x0712) -DEFINE_GUIDSTRUCT("DFF220F2-F70F-11D0-B917-00A0C9223196",KSNODETYPE_MULTITRACK_RECORDER); -#define KSNODETYPE_MULTITRACK_RECORDER DEFINE_GUIDNAMED(KSNODETYPE_MULTITRACK_RECORDER) - -#define STATIC_KSNODETYPE_SYNTHESIZER \ - DEFINE_USB_TERMINAL_GUID(0x0713) -DEFINE_GUIDSTRUCT("DFF220F3-F70F-11D0-B917-00A0C9223196",KSNODETYPE_SYNTHESIZER); -#define KSNODETYPE_SYNTHESIZER DEFINE_GUIDNAMED(KSNODETYPE_SYNTHESIZER) - -#define STATIC_KSNODETYPE_SWSYNTH \ - 0x423274A0L,0x8B81,0x11D1,0xA0,0x50,0x00,0x00,0xF8,0x00,0x47,0x88 -DEFINE_GUIDSTRUCT("423274A0-8B81-11D1-A050-0000F8004788",KSNODETYPE_SWSYNTH); -#define KSNODETYPE_SWSYNTH DEFINE_GUIDNAMED(KSNODETYPE_SWSYNTH) - -#define STATIC_KSNODETYPE_SWMIDI \ - 0xCB9BEFA0L,0xA251,0x11D1,0xA0,0x50,0x00,0x00,0xF8,0x00,0x47,0x88 -DEFINE_GUIDSTRUCT("CB9BEFA0-A251-11D1-A050-0000F8004788",KSNODETYPE_SWMIDI); -#define KSNODETYPE_SWMIDI DEFINE_GUIDNAMED(KSNODETYPE_SWMIDI) - -#define STATIC_KSNODETYPE_DRM_DESCRAMBLE \ - 0xFFBB6E3FL,0xCCFE,0x4D84,0x90,0xD9,0x42,0x14,0x18,0xB0,0x3A,0x8E -DEFINE_GUIDSTRUCT("FFBB6E3F-CCFE-4D84-90D9-421418B03A8E",KSNODETYPE_DRM_DESCRAMBLE); -#define KSNODETYPE_DRM_DESCRAMBLE DEFINE_GUIDNAMED(KSNODETYPE_DRM_DESCRAMBLE) - -#define STATIC_KSCATEGORY_AUDIO \ - 0x6994AD04L,0x93EF,0x11D0,0xA3,0xCC,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("6994AD04-93EF-11D0-A3CC-00A0C9223196",KSCATEGORY_AUDIO); -#define KSCATEGORY_AUDIO DEFINE_GUIDNAMED(KSCATEGORY_AUDIO) - -#define STATIC_KSCATEGORY_VIDEO \ - 0x6994AD05L,0x93EF,0x11D0,0xA3,0xCC,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("6994AD05-93EF-11D0-A3CC-00A0C9223196",KSCATEGORY_VIDEO); -#define KSCATEGORY_VIDEO DEFINE_GUIDNAMED(KSCATEGORY_VIDEO) - -/* Added for Vista and later */ -#define STATIC_KSCATEGORY_REALTIME \ - 0xEB115FFCL, 0x10C8, 0x4964, 0x83, 0x1D, 0x6D, 0xCB, 0x02, 0xE6, 0xF2, 0x3F -DEFINE_GUIDSTRUCT("EB115FFC-10C8-4964-831D-6DCB02E6F23F", KSCATEGORY_REALTIME); -#define KSCATEGORY_REALTIME DEFINE_GUIDNAMED(KSCATEGORY_REALTIME) - -#define STATIC_KSCATEGORY_TEXT \ - 0x6994AD06L,0x93EF,0x11D0,0xA3,0xCC,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("6994AD06-93EF-11D0-A3CC-00A0C9223196",KSCATEGORY_TEXT); -#define KSCATEGORY_TEXT DEFINE_GUIDNAMED(KSCATEGORY_TEXT) - -#define STATIC_KSCATEGORY_NETWORK \ - 0x67C9CC3CL,0x69C4,0x11D2,0x87,0x59,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("67C9CC3C-69C4-11D2-8759-00A0C9223196",KSCATEGORY_NETWORK); -#define KSCATEGORY_NETWORK DEFINE_GUIDNAMED(KSCATEGORY_NETWORK) - -#define STATIC_KSCATEGORY_TOPOLOGY \ - 0xDDA54A40L,0x1E4C,0x11D1,0xA0,0x50,0x40,0x57,0x05,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("DDA54A40-1E4C-11D1-A050-405705C10000",KSCATEGORY_TOPOLOGY); -#define KSCATEGORY_TOPOLOGY DEFINE_GUIDNAMED(KSCATEGORY_TOPOLOGY) - -#define STATIC_KSCATEGORY_VIRTUAL \ - 0x3503EAC4L,0x1F26,0x11D1,0x8A,0xB0,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("3503EAC4-1F26-11D1-8AB0-00A0C9223196",KSCATEGORY_VIRTUAL); -#define KSCATEGORY_VIRTUAL DEFINE_GUIDNAMED(KSCATEGORY_VIRTUAL) - -#define STATIC_KSCATEGORY_ACOUSTIC_ECHO_CANCEL \ - 0xBF963D80L,0xC559,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("BF963D80-C559-11D0-8A2B-00A0C9255AC1",KSCATEGORY_ACOUSTIC_ECHO_CANCEL); -#define KSCATEGORY_ACOUSTIC_ECHO_CANCEL DEFINE_GUIDNAMED(KSCATEGORY_ACOUSTIC_ECHO_CANCEL) - -#define STATIC_KSCATEGORY_SYSAUDIO \ - 0xA7C7A5B1L,0x5AF3,0x11D1,0x9C,0xED,0x00,0xA0,0x24,0xBF,0x04,0x07 -DEFINE_GUIDSTRUCT("A7C7A5B1-5AF3-11D1-9CED-00A024BF0407",KSCATEGORY_SYSAUDIO); 
-#define KSCATEGORY_SYSAUDIO DEFINE_GUIDNAMED(KSCATEGORY_SYSAUDIO) - -#define STATIC_KSCATEGORY_WDMAUD \ - 0x3E227E76L,0x690D,0x11D2,0x81,0x61,0x00,0x00,0xF8,0x77,0x5B,0xF1 -DEFINE_GUIDSTRUCT("3E227E76-690D-11D2-8161-0000F8775BF1",KSCATEGORY_WDMAUD); -#define KSCATEGORY_WDMAUD DEFINE_GUIDNAMED(KSCATEGORY_WDMAUD) - -#define STATIC_KSCATEGORY_AUDIO_GFX \ - 0x9BAF9572L,0x340C,0x11D3,0xAB,0xDC,0x00,0xA0,0xC9,0x0A,0xB1,0x6F -DEFINE_GUIDSTRUCT("9BAF9572-340C-11D3-ABDC-00A0C90AB16F",KSCATEGORY_AUDIO_GFX); -#define KSCATEGORY_AUDIO_GFX DEFINE_GUIDNAMED(KSCATEGORY_AUDIO_GFX) - -#define STATIC_KSCATEGORY_AUDIO_SPLITTER \ - 0x9EA331FAL,0xB91B,0x45F8,0x92,0x85,0xBD,0x2B,0xC7,0x7A,0xFC,0xDE -DEFINE_GUIDSTRUCT("9EA331FA-B91B-45F8-9285-BD2BC77AFCDE",KSCATEGORY_AUDIO_SPLITTER); -#define KSCATEGORY_AUDIO_SPLITTER DEFINE_GUIDNAMED(KSCATEGORY_AUDIO_SPLITTER) - -#define STATIC_KSCATEGORY_SYNTHESIZER STATIC_KSNODETYPE_SYNTHESIZER -#define KSCATEGORY_SYNTHESIZER KSNODETYPE_SYNTHESIZER - -#define STATIC_KSCATEGORY_DRM_DESCRAMBLE STATIC_KSNODETYPE_DRM_DESCRAMBLE -#define KSCATEGORY_DRM_DESCRAMBLE KSNODETYPE_DRM_DESCRAMBLE - -#define STATIC_KSCATEGORY_AUDIO_DEVICE \ - 0xFBF6F530L,0x07B9,0x11D2,0xA7,0x1E,0x00,0x00,0xF8,0x00,0x47,0x88 -DEFINE_GUIDSTRUCT("FBF6F530-07B9-11D2-A71E-0000F8004788",KSCATEGORY_AUDIO_DEVICE); -#define KSCATEGORY_AUDIO_DEVICE DEFINE_GUIDNAMED(KSCATEGORY_AUDIO_DEVICE) - -#define STATIC_KSCATEGORY_PREFERRED_WAVEOUT_DEVICE \ - 0xD6C5066EL,0x72C1,0x11D2,0x97,0x55,0x00,0x00,0xF8,0x00,0x47,0x88 -DEFINE_GUIDSTRUCT("D6C5066E-72C1-11D2-9755-0000F8004788",KSCATEGORY_PREFERRED_WAVEOUT_DEVICE); -#define KSCATEGORY_PREFERRED_WAVEOUT_DEVICE DEFINE_GUIDNAMED(KSCATEGORY_PREFERRED_WAVEOUT_DEVICE) - -#define STATIC_KSCATEGORY_PREFERRED_WAVEIN_DEVICE \ - 0xD6C50671L,0x72C1,0x11D2,0x97,0x55,0x00,0x00,0xF8,0x00,0x47,0x88 -DEFINE_GUIDSTRUCT("D6C50671-72C1-11D2-9755-0000F8004788",KSCATEGORY_PREFERRED_WAVEIN_DEVICE); -#define KSCATEGORY_PREFERRED_WAVEIN_DEVICE DEFINE_GUIDNAMED(KSCATEGORY_PREFERRED_WAVEIN_DEVICE) - -#define STATIC_KSCATEGORY_PREFERRED_MIDIOUT_DEVICE \ - 0xD6C50674L,0x72C1,0x11D2,0x97,0x55,0x00,0x00,0xF8,0x00,0x47,0x88 -DEFINE_GUIDSTRUCT("D6C50674-72C1-11D2-9755-0000F8004788",KSCATEGORY_PREFERRED_MIDIOUT_DEVICE); -#define KSCATEGORY_PREFERRED_MIDIOUT_DEVICE DEFINE_GUIDNAMED(KSCATEGORY_PREFERRED_MIDIOUT_DEVICE) - -#define STATIC_KSCATEGORY_WDMAUD_USE_PIN_NAME \ - 0x47A4FA20L,0xA251,0x11D1,0xA0,0x50,0x00,0x00,0xF8,0x00,0x47,0x88 -DEFINE_GUIDSTRUCT("47A4FA20-A251-11D1-A050-0000F8004788",KSCATEGORY_WDMAUD_USE_PIN_NAME); -#define KSCATEGORY_WDMAUD_USE_PIN_NAME DEFINE_GUIDNAMED(KSCATEGORY_WDMAUD_USE_PIN_NAME) - -#define STATIC_KSCATEGORY_ESCALANTE_PLATFORM_DRIVER \ - 0x74f3aea8L,0x9768,0x11d1,0x8e,0x07,0x00,0xa0,0xc9,0x5e,0xc2,0x2e -DEFINE_GUIDSTRUCT("74f3aea8-9768-11d1-8e07-00a0c95ec22e",KSCATEGORY_ESCALANTE_PLATFORM_DRIVER); -#define KSCATEGORY_ESCALANTE_PLATFORM_DRIVER DEFINE_GUIDNAMED(KSCATEGORY_ESCALANTE_PLATFORM_DRIVER) - -#define STATIC_KSDATAFORMAT_TYPE_VIDEO \ - 0x73646976L,0x0000,0x0010,0x80,0x00,0x00,0xaa,0x00,0x38,0x9b,0x71 -DEFINE_GUIDSTRUCT("73646976-0000-0010-8000-00aa00389b71",KSDATAFORMAT_TYPE_VIDEO); -#define KSDATAFORMAT_TYPE_VIDEO DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_VIDEO) - -#define STATIC_KSDATAFORMAT_TYPE_AUDIO \ - 0x73647561L,0x0000,0x0010,0x80,0x00,0x00,0xaa,0x00,0x38,0x9b,0x71 -DEFINE_GUIDSTRUCT("73647561-0000-0010-8000-00aa00389b71",KSDATAFORMAT_TYPE_AUDIO); -#define KSDATAFORMAT_TYPE_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_AUDIO) - -#define STATIC_KSDATAFORMAT_TYPE_TEXT \ - 
0x73747874L,0x0000,0x0010,0x80,0x00,0x00,0xaa,0x00,0x38,0x9b,0x71 -DEFINE_GUIDSTRUCT("73747874-0000-0010-8000-00aa00389b71",KSDATAFORMAT_TYPE_TEXT); -#define KSDATAFORMAT_TYPE_TEXT DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_TEXT) - -#if !defined(DEFINE_WAVEFORMATEX_GUID) -#define DEFINE_WAVEFORMATEX_GUID(x) \ - (USHORT)(x),0x0000,0x0010,0x80,0x00,0x00,0xaa,0x00,0x38,0x9b,0x71 -#endif - -#define STATIC_KSDATAFORMAT_SUBTYPE_WAVEFORMATEX \ - 0x00000000L,0x0000,0x0010,0x80,0x00,0x00,0xaa,0x00,0x38,0x9b,0x71 -DEFINE_GUIDSTRUCT("00000000-0000-0010-8000-00aa00389b71",KSDATAFORMAT_SUBTYPE_WAVEFORMATEX); -#define KSDATAFORMAT_SUBTYPE_WAVEFORMATEX DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_WAVEFORMATEX) - -#define INIT_WAVEFORMATEX_GUID(Guid,x) \ -{ \ - *(Guid) = KSDATAFORMAT_SUBTYPE_WAVEFORMATEX; \ - (Guid)->Data1 = (USHORT)(x); \ -} - -#define EXTRACT_WAVEFORMATEX_ID(Guid) \ - (USHORT)((Guid)->Data1) - -#define IS_VALID_WAVEFORMATEX_GUID(Guid) \ - (!memcmp(((PUSHORT)&KSDATAFORMAT_SUBTYPE_WAVEFORMATEX) + 1, ((PUSHORT)(Guid)) + 1,sizeof(GUID) - sizeof(USHORT))) - -#ifndef INIT_MMREG_MID -#define INIT_MMREG_MID(guid,id) \ -{ \ - (guid)->Data1 = 0xd5a47fa7 + (USHORT)(id); \ - (guid)->Data2 = 0x6d98; \ - (guid)->Data3 = 0x11d1; \ - (guid)->Data4[0] = 0xa2; \ - (guid)->Data4[1] = 0x1a; \ - (guid)->Data4[2] = 0x00; \ - (guid)->Data4[3] = 0xa0; \ - (guid)->Data4[4] = 0xc9; \ - (guid)->Data4[5] = 0x22; \ - (guid)->Data4[6] = 0x31; \ - (guid)->Data4[7] = 0x96; \ -} -#define EXTRACT_MMREG_MID(guid) \ - (USHORT)((guid)->Data1 - 0xd5a47fa7) -#define DEFINE_MMREG_MID_GUID(id) \ - 0xd5a47fa7+(USHORT)(id),0x6d98,0x11d1,0xa2,0x1a,0x00,0xa0,0xc9,0x22,0x31,0x96 - -#define IS_COMPATIBLE_MMREG_MID(guid) \ - (((guid)->Data1 >= 0xd5a47fa7) && \ - ((guid)->Data1 < 0xd5a47fa7 + 0xffff) && \ - ((guid)->Data2 == 0x6d98) && \ - ((guid)->Data3 == 0x11d1) && \ - ((guid)->Data4[0] == 0xa2) && \ - ((guid)->Data4[1] == 0x1a) && \ - ((guid)->Data4[2] == 0x00) && \ - ((guid)->Data4[3] == 0xa0) && \ - ((guid)->Data4[4] == 0xc9) && \ - ((guid)->Data4[5] == 0x22) && \ - ((guid)->Data4[6] == 0x31) && \ - ((guid)->Data4[7] == 0x96) ) -#endif /* INIT_MMREG_MID */ - -#ifndef INIT_MMREG_PID -#define INIT_MMREG_PID(guid,id) \ -{ \ - (guid)->Data1 = 0xe36dc2ac + (USHORT)(id); \ - (guid)->Data2 = 0x6d9a; \ - (guid)->Data3 = 0x11d1; \ - (guid)->Data4[0] = 0xa2; \ - (guid)->Data4[1] = 0x1a; \ - (guid)->Data4[2] = 0x00; \ - (guid)->Data4[3] = 0xa0; \ - (guid)->Data4[4] = 0xc9; \ - (guid)->Data4[5] = 0x22; \ - (guid)->Data4[6] = 0x31; \ - (guid)->Data4[7] = 0x96; \ -} -#define EXTRACT_MMREG_PID(guid) \ - (USHORT)((guid)->Data1 - 0xe36dc2ac) -#define DEFINE_MMREG_PID_GUID(id) \ - 0xe36dc2ac+(USHORT)(id),0x6d9a,0x11d1,0xa2,0x1a,0x00,0xa0,0xc9,0x22,0x31,0x96 - -#define IS_COMPATIBLE_MMREG_PID(guid) \ - (((guid)->Data1 >= 0xe36dc2ac) && \ - ((guid)->Data1 < 0xe36dc2ac + 0xffff) && \ - ((guid)->Data2 == 0x6d9a) && \ - ((guid)->Data3 == 0x11d1) && \ - ((guid)->Data4[0] == 0xa2) && \ - ((guid)->Data4[1] == 0x1a) && \ - ((guid)->Data4[2] == 0x00) && \ - ((guid)->Data4[3] == 0xa0) && \ - ((guid)->Data4[4] == 0xc9) && \ - ((guid)->Data4[5] == 0x22) && \ - ((guid)->Data4[6] == 0x31) && \ - ((guid)->Data4[7] == 0x96) ) -#endif /* INIT_MMREG_PID */ - -#define STATIC_KSDATAFORMAT_SUBTYPE_ANALOG \ - 0x6dba3190L,0x67bd,0x11cf,0xa0,0xf7,0x00,0x20,0xaf,0xd1,0x56,0xe4 -DEFINE_GUIDSTRUCT("6dba3190-67bd-11cf-a0f7-0020afd156e4",KSDATAFORMAT_SUBTYPE_ANALOG); -#define KSDATAFORMAT_SUBTYPE_ANALOG DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_ANALOG) - -#define STATIC_KSDATAFORMAT_SUBTYPE_PCM 
\ - DEFINE_WAVEFORMATEX_GUID(WAVE_FORMAT_PCM) -DEFINE_GUIDSTRUCT("00000001-0000-0010-8000-00aa00389b71",KSDATAFORMAT_SUBTYPE_PCM); -#define KSDATAFORMAT_SUBTYPE_PCM DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_PCM) - -#ifdef _INC_MMREG -#define STATIC_KSDATAFORMAT_SUBTYPE_IEEE_FLOAT \ - DEFINE_WAVEFORMATEX_GUID(WAVE_FORMAT_IEEE_FLOAT) -DEFINE_GUIDSTRUCT("00000003-0000-0010-8000-00aa00389b71",KSDATAFORMAT_SUBTYPE_IEEE_FLOAT); -#define KSDATAFORMAT_SUBTYPE_IEEE_FLOAT DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_IEEE_FLOAT) - -#define STATIC_KSDATAFORMAT_SUBTYPE_DRM \ - DEFINE_WAVEFORMATEX_GUID(WAVE_FORMAT_DRM) -DEFINE_GUIDSTRUCT("00000009-0000-0010-8000-00aa00389b71",KSDATAFORMAT_SUBTYPE_DRM); -#define KSDATAFORMAT_SUBTYPE_DRM DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_DRM) - -#define STATIC_KSDATAFORMAT_SUBTYPE_ALAW \ - DEFINE_WAVEFORMATEX_GUID(WAVE_FORMAT_ALAW) -DEFINE_GUIDSTRUCT("00000006-0000-0010-8000-00aa00389b71",KSDATAFORMAT_SUBTYPE_ALAW); -#define KSDATAFORMAT_SUBTYPE_ALAW DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_ALAW) - -#define STATIC_KSDATAFORMAT_SUBTYPE_MULAW \ - DEFINE_WAVEFORMATEX_GUID(WAVE_FORMAT_MULAW) -DEFINE_GUIDSTRUCT("00000007-0000-0010-8000-00aa00389b71",KSDATAFORMAT_SUBTYPE_MULAW); -#define KSDATAFORMAT_SUBTYPE_MULAW DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_MULAW) - -#define STATIC_KSDATAFORMAT_SUBTYPE_ADPCM \ - DEFINE_WAVEFORMATEX_GUID(WAVE_FORMAT_ADPCM) -DEFINE_GUIDSTRUCT("00000002-0000-0010-8000-00aa00389b71",KSDATAFORMAT_SUBTYPE_ADPCM); -#define KSDATAFORMAT_SUBTYPE_ADPCM DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_ADPCM) - -#define STATIC_KSDATAFORMAT_SUBTYPE_MPEG \ - DEFINE_WAVEFORMATEX_GUID(WAVE_FORMAT_MPEG) -DEFINE_GUIDSTRUCT("00000050-0000-0010-8000-00aa00389b71",KSDATAFORMAT_SUBTYPE_MPEG); -#define KSDATAFORMAT_SUBTYPE_MPEG DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_MPEG) -#endif /* _INC_MMREG */ - -#define STATIC_KSDATAFORMAT_SPECIFIER_VC_ID \ - 0xAD98D184L,0xAAC3,0x11D0,0xA4,0x1C,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("AD98D184-AAC3-11D0-A41C-00A0C9223196",KSDATAFORMAT_SPECIFIER_VC_ID); -#define KSDATAFORMAT_SPECIFIER_VC_ID DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_VC_ID) - -#define STATIC_KSDATAFORMAT_SPECIFIER_WAVEFORMATEX \ - 0x05589f81L,0xc356,0x11ce,0xbf,0x01,0x00,0xaa,0x00,0x55,0x59,0x5a -DEFINE_GUIDSTRUCT("05589f81-c356-11ce-bf01-00aa0055595a",KSDATAFORMAT_SPECIFIER_WAVEFORMATEX); -#define KSDATAFORMAT_SPECIFIER_WAVEFORMATEX DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_WAVEFORMATEX) - -#define STATIC_KSDATAFORMAT_SPECIFIER_DSOUND \ - 0x518590a2L,0xa184,0x11d0,0x85,0x22,0x00,0xc0,0x4f,0xd9,0xba,0xf3 -DEFINE_GUIDSTRUCT("518590a2-a184-11d0-8522-00c04fd9baf3",KSDATAFORMAT_SPECIFIER_DSOUND); -#define KSDATAFORMAT_SPECIFIER_DSOUND DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_DSOUND) - -#if defined(_INC_MMSYSTEM) || defined(_INC_MMREG) -#if !defined(PACK_PRAGMAS_NOT_SUPPORTED) -#include <pshpack1.h> -#endif -typedef struct { - KSDATAFORMAT DataFormat; - WAVEFORMATEX WaveFormatEx; -} KSDATAFORMAT_WAVEFORMATEX,*PKSDATAFORMAT_WAVEFORMATEX; - -#ifndef _WAVEFORMATEXTENSIBLE_ -#define _WAVEFORMATEXTENSIBLE_ -typedef struct { - WAVEFORMATEX Format; - union { - WORD wValidBitsPerSample; - WORD wSamplesPerBlock; - WORD wReserved; - } Samples; - DWORD dwChannelMask; - - GUID SubFormat; -} WAVEFORMATEXTENSIBLE,*PWAVEFORMATEXTENSIBLE; -#endif /* _WAVEFORMATEXTENSIBLE_ */ - -#if !defined(WAVE_FORMAT_EXTENSIBLE) -#define WAVE_FORMAT_EXTENSIBLE 0xFFFE -#endif - -typedef struct { - ULONG Flags; - ULONG Control; - WAVEFORMATEX WaveFormatEx; -} KSDSOUND_BUFFERDESC,*PKSDSOUND_BUFFERDESC; - -typedef struct { - KSDATAFORMAT 
DataFormat; - KSDSOUND_BUFFERDESC BufferDesc; -} KSDATAFORMAT_DSOUND,*PKSDATAFORMAT_DSOUND; - -#if !defined(PACK_PRAGMAS_NOT_SUPPORTED) -#include <poppack.h> -#endif -#endif /* defined(_INC_MMSYSTEM) || defined(_INC_MMREG) */ - -#define KSDSOUND_BUFFER_PRIMARY 0x00000001 -#define KSDSOUND_BUFFER_STATIC 0x00000002 -#define KSDSOUND_BUFFER_LOCHARDWARE 0x00000004 -#define KSDSOUND_BUFFER_LOCSOFTWARE 0x00000008 - -#define KSDSOUND_BUFFER_CTRL_3D 0x00000001 -#define KSDSOUND_BUFFER_CTRL_FREQUENCY 0x00000002 -#define KSDSOUND_BUFFER_CTRL_PAN 0x00000004 -#define KSDSOUND_BUFFER_CTRL_VOLUME 0x00000008 -#define KSDSOUND_BUFFER_CTRL_POSITIONNOTIFY 0x00000010 - -typedef struct { - DWORDLONG PlayOffset; - DWORDLONG WriteOffset; -} KSAUDIO_POSITION,*PKSAUDIO_POSITION; - -typedef struct _DS3DVECTOR { - __MINGW_EXTENSION union { - FLOAT x; - FLOAT dvX; - }; - __MINGW_EXTENSION union { - FLOAT y; - FLOAT dvY; - }; - __MINGW_EXTENSION union { - FLOAT z; - FLOAT dvZ; - }; -} DS3DVECTOR,*PDS3DVECTOR; - -#define STATIC_KSPROPSETID_DirectSound3DListener \ - 0x437b3414L,0xd060,0x11d0,0x85,0x83,0x00,0xc0,0x4f,0xd9,0xba,0xf3 -DEFINE_GUIDSTRUCT("437b3414-d060-11d0-8583-00c04fd9baf3",KSPROPSETID_DirectSound3DListener); -#define KSPROPSETID_DirectSound3DListener DEFINE_GUIDNAMED(KSPROPSETID_DirectSound3DListener) - -typedef enum { - KSPROPERTY_DIRECTSOUND3DLISTENER_ALL, - KSPROPERTY_DIRECTSOUND3DLISTENER_POSITION, - KSPROPERTY_DIRECTSOUND3DLISTENER_VELOCITY, - KSPROPERTY_DIRECTSOUND3DLISTENER_ORIENTATION, - KSPROPERTY_DIRECTSOUND3DLISTENER_DISTANCEFACTOR, - KSPROPERTY_DIRECTSOUND3DLISTENER_ROLLOFFFACTOR, - KSPROPERTY_DIRECTSOUND3DLISTENER_DOPPLERFACTOR, - KSPROPERTY_DIRECTSOUND3DLISTENER_BATCH, - KSPROPERTY_DIRECTSOUND3DLISTENER_ALLOCATION -} KSPROPERTY_DIRECTSOUND3DLISTENER; - -typedef struct { - DS3DVECTOR Position; - DS3DVECTOR Velocity; - DS3DVECTOR OrientFront; - DS3DVECTOR OrientTop; - FLOAT DistanceFactor; - FLOAT RolloffFactor; - FLOAT DopplerFactor; -} KSDS3D_LISTENER_ALL,*PKSDS3D_LISTENER_ALL; - -typedef struct { - DS3DVECTOR Front; - DS3DVECTOR Top; -} KSDS3D_LISTENER_ORIENTATION,*PKSDS3D_LISTENER_ORIENTATION; - -#define STATIC_KSPROPSETID_DirectSound3DBuffer \ - 0x437b3411L,0xd060,0x11d0,0x85,0x83,0x00,0xc0,0x4f,0xd9,0xba,0xf3 -DEFINE_GUIDSTRUCT("437b3411-d060-11d0-8583-00c04fd9baf3",KSPROPSETID_DirectSound3DBuffer); -#define KSPROPSETID_DirectSound3DBuffer DEFINE_GUIDNAMED(KSPROPSETID_DirectSound3DBuffer) - -typedef enum { - KSPROPERTY_DIRECTSOUND3DBUFFER_ALL, - KSPROPERTY_DIRECTSOUND3DBUFFER_POSITION, - KSPROPERTY_DIRECTSOUND3DBUFFER_VELOCITY, - KSPROPERTY_DIRECTSOUND3DBUFFER_CONEANGLES, - KSPROPERTY_DIRECTSOUND3DBUFFER_CONEORIENTATION, - KSPROPERTY_DIRECTSOUND3DBUFFER_CONEOUTSIDEVOLUME, - KSPROPERTY_DIRECTSOUND3DBUFFER_MINDISTANCE, - KSPROPERTY_DIRECTSOUND3DBUFFER_MAXDISTANCE, - KSPROPERTY_DIRECTSOUND3DBUFFER_MODE -} KSPROPERTY_DIRECTSOUND3DBUFFER; - -typedef struct { - DS3DVECTOR Position; - DS3DVECTOR Velocity; - ULONG InsideConeAngle; - ULONG OutsideConeAngle; - DS3DVECTOR ConeOrientation; - LONG ConeOutsideVolume; - FLOAT MinDistance; - FLOAT MaxDistance; - ULONG Mode; -} KSDS3D_BUFFER_ALL,*PKSDS3D_BUFFER_ALL; - -typedef struct { - ULONG InsideConeAngle; - ULONG OutsideConeAngle; -} KSDS3D_BUFFER_CONE_ANGLES,*PKSDS3D_BUFFER_CONE_ANGLES; - -#define KSAUDIO_STEREO_SPEAKER_GEOMETRY_HEADPHONE (-1) -#define KSAUDIO_STEREO_SPEAKER_GEOMETRY_MIN 5 -#define KSAUDIO_STEREO_SPEAKER_GEOMETRY_NARROW 10 -#define KSAUDIO_STEREO_SPEAKER_GEOMETRY_WIDE 20 -#define KSAUDIO_STEREO_SPEAKER_GEOMETRY_MAX 180 - -#define 
KSDSOUND_3D_MODE_NORMAL 0x00000000 -#define KSDSOUND_3D_MODE_HEADRELATIVE 0x00000001 -#define KSDSOUND_3D_MODE_DISABLE 0x00000002 - -#define KSDSOUND_BUFFER_CTRL_HRTF_3D 0x40000000 - -typedef struct { - ULONG Size; - ULONG Enabled; - WINBOOL SwapChannels; - WINBOOL ZeroAzimuth; - WINBOOL CrossFadeOutput; - ULONG FilterSize; -} KSDS3D_HRTF_PARAMS_MSG,*PKSDS3D_HRTF_PARAMS_MSG; - -typedef enum { - FULL_FILTER, - LIGHT_FILTER, - KSDS3D_FILTER_QUALITY_COUNT -} KSDS3D_HRTF_FILTER_QUALITY; - -typedef struct { - ULONG Size; - KSDS3D_HRTF_FILTER_QUALITY Quality; - FLOAT SampleRate; - ULONG MaxFilterSize; - ULONG FilterTransientMuteLength; - ULONG FilterOverlapBufferLength; - ULONG OutputOverlapBufferLength; - ULONG Reserved; -} KSDS3D_HRTF_INIT_MSG,*PKSDS3D_HRTF_INIT_MSG; - -typedef enum { - FLOAT_COEFF, - SHORT_COEFF, - KSDS3D_COEFF_COUNT -} KSDS3D_HRTF_COEFF_FORMAT; - -typedef enum { - DIRECT_FORM, - CASCADE_FORM, - KSDS3D_FILTER_METHOD_COUNT -} KSDS3D_HRTF_FILTER_METHOD; - -typedef enum { - DS3D_HRTF_VERSION_1 -} KSDS3D_HRTF_FILTER_VERSION; - -typedef struct { - KSDS3D_HRTF_FILTER_METHOD FilterMethod; - KSDS3D_HRTF_COEFF_FORMAT CoeffFormat; - KSDS3D_HRTF_FILTER_VERSION Version; - ULONG Reserved; -} KSDS3D_HRTF_FILTER_FORMAT_MSG,*PKSDS3D_HRTF_FILTER_FORMAT_MSG; - -#define STATIC_KSPROPSETID_Hrtf3d \ - 0xb66decb0L,0xa083,0x11d0,0x85,0x1e,0x00,0xc0,0x4f,0xd9,0xba,0xf3 -DEFINE_GUIDSTRUCT("b66decb0-a083-11d0-851e-00c04fd9baf3",KSPROPSETID_Hrtf3d); -#define KSPROPSETID_Hrtf3d DEFINE_GUIDNAMED(KSPROPSETID_Hrtf3d) - -typedef enum { - KSPROPERTY_HRTF3D_PARAMS = 0, - KSPROPERTY_HRTF3D_INITIALIZE, - KSPROPERTY_HRTF3D_FILTER_FORMAT -} KSPROPERTY_HRTF3D; - -typedef struct { - LONG Channel; - FLOAT VolSmoothScale; - FLOAT TotalDryAttenuation; - FLOAT TotalWetAttenuation; - LONG SmoothFrequency; - LONG Delay; -} KSDS3D_ITD_PARAMS,*PKSDS3D_ITD_PARAMS; - -typedef struct { - ULONG Enabled; - KSDS3D_ITD_PARAMS LeftParams; - KSDS3D_ITD_PARAMS RightParams; - ULONG Reserved; -} KSDS3D_ITD_PARAMS_MSG,*PKSDS3D_ITD_PARAMS_MSG; - -#define STATIC_KSPROPSETID_Itd3d \ - 0x6429f090L,0x9fd9,0x11d0,0xa7,0x5b,0x00,0xa0,0xc9,0x03,0x65,0xe3 -DEFINE_GUIDSTRUCT("6429f090-9fd9-11d0-a75b-00a0c90365e3",KSPROPSETID_Itd3d); -#define KSPROPSETID_Itd3d DEFINE_GUIDNAMED(KSPROPSETID_Itd3d) - -typedef enum { - KSPROPERTY_ITD3D_PARAMS = 0 -} KSPROPERTY_ITD3D; - -typedef struct { - KSDATARANGE DataRange; - ULONG MaximumChannels; - ULONG MinimumBitsPerSample; - ULONG MaximumBitsPerSample; - ULONG MinimumSampleFrequency; - ULONG MaximumSampleFrequency; -} KSDATARANGE_AUDIO,*PKSDATARANGE_AUDIO; - -#define STATIC_KSDATAFORMAT_SUBTYPE_RIFF \ - 0x4995DAEEL,0x9EE6,0x11D0,0xA4,0x0E,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("4995DAEE-9EE6-11D0-A40E-00A0C9223196",KSDATAFORMAT_SUBTYPE_RIFF); -#define KSDATAFORMAT_SUBTYPE_RIFF DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_RIFF) - -#define STATIC_KSDATAFORMAT_SUBTYPE_RIFFWAVE \ - 0xe436eb8bL,0x524f,0x11ce,0x9f,0x53,0x00,0x20,0xaf,0x0b,0xa7,0x70 -DEFINE_GUIDSTRUCT("e436eb8b-524f-11ce-9f53-0020af0ba770",KSDATAFORMAT_SUBTYPE_RIFFWAVE); -#define KSDATAFORMAT_SUBTYPE_RIFFWAVE DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_RIFFWAVE) - -#define STATIC_KSPROPSETID_Bibliographic \ - 0x07BA150EL,0xE2B1,0x11D0,0xAC,0x17,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("07BA150E-E2B1-11D0-AC17-00A0C9223196",KSPROPSETID_Bibliographic); -#define KSPROPSETID_Bibliographic DEFINE_GUIDNAMED(KSPROPSETID_Bibliographic) - -typedef enum { - KSPROPERTY_BIBLIOGRAPHIC_LEADER = 'RDL ', - KSPROPERTY_BIBLIOGRAPHIC_LCCN = '010 ', - 
KSPROPERTY_BIBLIOGRAPHIC_ISBN = '020 ', - KSPROPERTY_BIBLIOGRAPHIC_ISSN = '220 ', - KSPROPERTY_BIBLIOGRAPHIC_CATALOGINGSOURCE = '040 ', - KSPROPERTY_BIBLIOGRAPHIC_MAINPERSONALNAME = '001 ', - KSPROPERTY_BIBLIOGRAPHIC_MAINCORPORATEBODY = '011 ', - KSPROPERTY_BIBLIOGRAPHIC_MAINMEETINGNAME = '111 ', - KSPROPERTY_BIBLIOGRAPHIC_MAINUNIFORMTITLE = '031 ', - KSPROPERTY_BIBLIOGRAPHIC_UNIFORMTITLE = '042 ', - KSPROPERTY_BIBLIOGRAPHIC_TITLESTATEMENT = '542 ', - KSPROPERTY_BIBLIOGRAPHIC_VARYINGFORMTITLE = '642 ', - KSPROPERTY_BIBLIOGRAPHIC_PUBLICATION = '062 ', - KSPROPERTY_BIBLIOGRAPHIC_PHYSICALDESCRIPTION = '003 ', - KSPROPERTY_BIBLIOGRAPHIC_ADDEDENTRYTITLE = '044 ', - KSPROPERTY_BIBLIOGRAPHIC_SERIESSTATEMENT = '094 ', - KSPROPERTY_BIBLIOGRAPHIC_GENERALNOTE = '005 ', - KSPROPERTY_BIBLIOGRAPHIC_BIBLIOGRAPHYNOTE = '405 ', - KSPROPERTY_BIBLIOGRAPHIC_CONTENTSNOTE = '505 ', - KSPROPERTY_BIBLIOGRAPHIC_CREATIONCREDIT = '805 ', - KSPROPERTY_BIBLIOGRAPHIC_CITATION = '015 ', - KSPROPERTY_BIBLIOGRAPHIC_PARTICIPANT = '115 ', - KSPROPERTY_BIBLIOGRAPHIC_SUMMARY = '025 ', - KSPROPERTY_BIBLIOGRAPHIC_TARGETAUDIENCE = '125 ', - KSPROPERTY_BIBLIOGRAPHIC_ADDEDFORMAVAILABLE = '035 ', - KSPROPERTY_BIBLIOGRAPHIC_SYSTEMDETAILS = '835 ', - KSPROPERTY_BIBLIOGRAPHIC_AWARDS = '685 ', - KSPROPERTY_BIBLIOGRAPHIC_ADDEDENTRYPERSONALNAME = '006 ', - KSPROPERTY_BIBLIOGRAPHIC_ADDEDENTRYTOPICALTERM = '056 ', - KSPROPERTY_BIBLIOGRAPHIC_ADDEDENTRYGEOGRAPHIC = '156 ', - KSPROPERTY_BIBLIOGRAPHIC_INDEXTERMGENRE = '556 ', - KSPROPERTY_BIBLIOGRAPHIC_INDEXTERMCURRICULUM = '856 ', - KSPROPERTY_BIBLIOGRAPHIC_ADDEDENTRYUNIFORMTITLE = '037 ', - KSPROPERTY_BIBLIOGRAPHIC_ADDEDENTRYRELATED = '047 ', - KSPROPERTY_BIBLIOGRAPHIC_SERIESSTATEMENTPERSONALNAME = '008 ', - KSPROPERTY_BIBLIOGRAPHIC_SERIESSTATEMENTUNIFORMTITLE = '038 ' -} KSPROPERTY_BIBLIOGRAPHIC; - -#define STATIC_KSPROPSETID_TopologyNode \ - 0x45FFAAA1L,0x6E1B,0x11D0,0xBC,0xF2,0x44,0x45,0x53,0x54,0x00,0x00 -DEFINE_GUIDSTRUCT("45FFAAA1-6E1B-11D0-BCF2-444553540000",KSPROPSETID_TopologyNode); -#define KSPROPSETID_TopologyNode DEFINE_GUIDNAMED(KSPROPSETID_TopologyNode) - -typedef enum { - KSPROPERTY_TOPOLOGYNODE_ENABLE = 1, - KSPROPERTY_TOPOLOGYNODE_RESET -} KSPROPERTY_TOPOLOGYNODE; - -#define STATIC_KSPROPSETID_RtAudio \ - 0xa855a48c,0x2f78,0x4729,0x90,0x51,0x19,0x68,0x74,0x6b,0x9e,0xef -DEFINE_GUIDSTRUCT("A855A48C-2F78-4729-9051-1968746B9EEF",KSPROPSETID_RtAudio); -#define KSPROPSETID_RtAudio DEFINE_GUIDNAMED(KSPROPSETID_RtAudio) - -typedef enum { - KSPROPERTY_RTAUDIO_GETPOSITIONFUNCTION, - /* Added for Vista and later */ - KSPROPERTY_RTAUDIO_BUFFER, - KSPROPERTY_RTAUDIO_HWLATENCY, - KSPROPERTY_RTAUDIO_POSITIONREGISTER, - KSPROPERTY_RTAUDIO_CLOCKREGISTER, - KSPROPERTY_RTAUDIO_BUFFER_WITH_NOTIFICATION, - KSPROPERTY_RTAUDIO_REGISTER_NOTIFICATION_EVENT, - KSPROPERTY_RTAUDIO_UNREGISTER_NOTIFICATION_EVENT -} KSPROPERTY_RTAUDIO; - -#define STATIC_KSPROPSETID_DrmAudioStream \ - 0x2f2c8ddd,0x4198,0x4fac,0xba,0x29,0x61,0xbb,0x5,0xb7,0xde,0x6 -DEFINE_GUIDSTRUCT("2F2C8DDD-4198-4fac-BA29-61BB05B7DE06",KSPROPSETID_DrmAudioStream); -#define KSPROPSETID_DrmAudioStream DEFINE_GUIDNAMED(KSPROPSETID_DrmAudioStream) - -typedef enum { - KSPROPERTY_DRMAUDIOSTREAM_CONTENTID -} KSPROPERTY_DRMAUDIOSTREAM; - -#define STATIC_KSPROPSETID_Audio \ - 0x45FFAAA0L,0x6E1B,0x11D0,0xBC,0xF2,0x44,0x45,0x53,0x54,0x00,0x00 -DEFINE_GUIDSTRUCT("45FFAAA0-6E1B-11D0-BCF2-444553540000",KSPROPSETID_Audio); -#define KSPROPSETID_Audio DEFINE_GUIDNAMED(KSPROPSETID_Audio) - -typedef enum { - KSPROPERTY_AUDIO_LATENCY = 1, - 
KSPROPERTY_AUDIO_COPY_PROTECTION, - KSPROPERTY_AUDIO_CHANNEL_CONFIG, - KSPROPERTY_AUDIO_VOLUMELEVEL, - KSPROPERTY_AUDIO_POSITION, - KSPROPERTY_AUDIO_DYNAMIC_RANGE, - KSPROPERTY_AUDIO_QUALITY, - KSPROPERTY_AUDIO_SAMPLING_RATE, - KSPROPERTY_AUDIO_DYNAMIC_SAMPLING_RATE, - KSPROPERTY_AUDIO_MIX_LEVEL_TABLE, - KSPROPERTY_AUDIO_MIX_LEVEL_CAPS, - KSPROPERTY_AUDIO_MUX_SOURCE, - KSPROPERTY_AUDIO_MUTE, - KSPROPERTY_AUDIO_BASS, - KSPROPERTY_AUDIO_MID, - KSPROPERTY_AUDIO_TREBLE, - KSPROPERTY_AUDIO_BASS_BOOST, - KSPROPERTY_AUDIO_EQ_LEVEL, - KSPROPERTY_AUDIO_NUM_EQ_BANDS, - KSPROPERTY_AUDIO_EQ_BANDS, - KSPROPERTY_AUDIO_AGC, - KSPROPERTY_AUDIO_DELAY, - KSPROPERTY_AUDIO_LOUDNESS, - KSPROPERTY_AUDIO_WIDE_MODE, - KSPROPERTY_AUDIO_WIDENESS, - KSPROPERTY_AUDIO_REVERB_LEVEL, - KSPROPERTY_AUDIO_CHORUS_LEVEL, - KSPROPERTY_AUDIO_DEV_SPECIFIC, - KSPROPERTY_AUDIO_DEMUX_DEST, - KSPROPERTY_AUDIO_STEREO_ENHANCE, - KSPROPERTY_AUDIO_MANUFACTURE_GUID, - KSPROPERTY_AUDIO_PRODUCT_GUID, - KSPROPERTY_AUDIO_CPU_RESOURCES, - KSPROPERTY_AUDIO_STEREO_SPEAKER_GEOMETRY, - KSPROPERTY_AUDIO_SURROUND_ENCODE, - KSPROPERTY_AUDIO_3D_INTERFACE, - KSPROPERTY_AUDIO_PEAKMETER, - KSPROPERTY_AUDIO_ALGORITHM_INSTANCE, - KSPROPERTY_AUDIO_FILTER_STATE, - KSPROPERTY_AUDIO_PREFERRED_STATUS -} KSPROPERTY_AUDIO; - -#define KSAUDIO_QUALITY_WORST 0x0 -#define KSAUDIO_QUALITY_PC 0x1 -#define KSAUDIO_QUALITY_BASIC 0x2 -#define KSAUDIO_QUALITY_ADVANCED 0x3 - -#define KSAUDIO_CPU_RESOURCES_NOT_HOST_CPU 0x00000000 -#define KSAUDIO_CPU_RESOURCES_HOST_CPU 0x7FFFFFFF - -typedef struct { - WINBOOL fCopyrighted; - WINBOOL fOriginal; -} KSAUDIO_COPY_PROTECTION,*PKSAUDIO_COPY_PROTECTION; - -typedef struct { - LONG ActiveSpeakerPositions; -} KSAUDIO_CHANNEL_CONFIG,*PKSAUDIO_CHANNEL_CONFIG; - -#define SPEAKER_FRONT_LEFT 0x1 -#define SPEAKER_FRONT_RIGHT 0x2 -#define SPEAKER_FRONT_CENTER 0x4 -#define SPEAKER_LOW_FREQUENCY 0x8 -#define SPEAKER_BACK_LEFT 0x10 -#define SPEAKER_BACK_RIGHT 0x20 -#define SPEAKER_FRONT_LEFT_OF_CENTER 0x40 -#define SPEAKER_FRONT_RIGHT_OF_CENTER 0x80 -#define SPEAKER_BACK_CENTER 0x100 -#define SPEAKER_SIDE_LEFT 0x200 -#define SPEAKER_SIDE_RIGHT 0x400 -#define SPEAKER_TOP_CENTER 0x800 -#define SPEAKER_TOP_FRONT_LEFT 0x1000 -#define SPEAKER_TOP_FRONT_CENTER 0x2000 -#define SPEAKER_TOP_FRONT_RIGHT 0x4000 -#define SPEAKER_TOP_BACK_LEFT 0x8000 -#define SPEAKER_TOP_BACK_CENTER 0x10000 -#define SPEAKER_TOP_BACK_RIGHT 0x20000 - -#define SPEAKER_RESERVED 0x7FFC0000 - -#define SPEAKER_ALL 0x80000000 - -#define KSAUDIO_SPEAKER_DIRECTOUT 0 -#define KSAUDIO_SPEAKER_MONO (SPEAKER_FRONT_CENTER) -#define KSAUDIO_SPEAKER_STEREO (SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT) -#define KSAUDIO_SPEAKER_QUAD (SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT | \ - SPEAKER_BACK_LEFT | SPEAKER_BACK_RIGHT) -#define KSAUDIO_SPEAKER_SURROUND (SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT | \ - SPEAKER_FRONT_CENTER | SPEAKER_BACK_CENTER) -#define KSAUDIO_SPEAKER_5POINT1 (SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT | \ - SPEAKER_FRONT_CENTER | SPEAKER_LOW_FREQUENCY | \ - SPEAKER_BACK_LEFT | SPEAKER_BACK_RIGHT) -#define KSAUDIO_SPEAKER_7POINT1 (SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT | \ - SPEAKER_FRONT_CENTER | SPEAKER_LOW_FREQUENCY | \ - SPEAKER_BACK_LEFT | SPEAKER_BACK_RIGHT | \ - SPEAKER_FRONT_LEFT_OF_CENTER | SPEAKER_FRONT_RIGHT_OF_CENTER) -#define KSAUDIO_SPEAKER_5POINT1_SURROUND (SPEAKER_FRONT_LEFT | SPEAKER_FRONT_RIGHT | \ - SPEAKER_FRONT_CENTER | SPEAKER_LOW_FREQUENCY | \ - SPEAKER_SIDE_LEFT | SPEAKER_SIDE_RIGHT) -#define KSAUDIO_SPEAKER_7POINT1_SURROUND (SPEAKER_FRONT_LEFT | 
SPEAKER_FRONT_RIGHT | \ - SPEAKER_FRONT_CENTER | SPEAKER_LOW_FREQUENCY | \ - SPEAKER_BACK_LEFT | SPEAKER_BACK_RIGHT | \ - SPEAKER_SIDE_LEFT | SPEAKER_SIDE_RIGHT) - -#define KSAUDIO_SPEAKER_5POINT1_BACK KSAUDIO_SPEAKER_5POINT1 -#define KSAUDIO_SPEAKER_7POINT1_WIDE KSAUDIO_SPEAKER_7POINT1 - -#define KSAUDIO_SPEAKER_GROUND_FRONT_LEFT SPEAKER_FRONT_LEFT -#define KSAUDIO_SPEAKER_GROUND_FRONT_CENTER SPEAKER_FRONT_CENTER -#define KSAUDIO_SPEAKER_GROUND_FRONT_RIGHT SPEAKER_FRONT_RIGHT -#define KSAUDIO_SPEAKER_GROUND_REAR_LEFT SPEAKER_BACK_LEFT -#define KSAUDIO_SPEAKER_GROUND_REAR_RIGHT SPEAKER_BACK_RIGHT -#define KSAUDIO_SPEAKER_TOP_MIDDLE SPEAKER_TOP_CENTER -#define KSAUDIO_SPEAKER_SUPER_WOOFER SPEAKER_LOW_FREQUENCY - -typedef struct { - ULONG QuietCompression; - ULONG LoudCompression; -} KSAUDIO_DYNAMIC_RANGE,*PKSAUDIO_DYNAMIC_RANGE; - -typedef struct { - WINBOOL Mute; - LONG Level; -} KSAUDIO_MIXLEVEL,*PKSAUDIO_MIXLEVEL; - -typedef struct { - WINBOOL Mute; - LONG Minimum; - LONG Maximum; - LONG Reset; -} KSAUDIO_MIX_CAPS,*PKSAUDIO_MIX_CAPS; - -typedef struct { - ULONG InputChannels; - ULONG OutputChannels; - KSAUDIO_MIX_CAPS Capabilities[1]; -} KSAUDIO_MIXCAP_TABLE,*PKSAUDIO_MIXCAP_TABLE; - -typedef enum { - SE_TECH_NONE, - SE_TECH_ANALOG_DEVICES_PHAT, - SE_TECH_CREATIVE, - SE_TECH_NATIONAL_SEMI, - SE_TECH_YAMAHA_YMERSION, - SE_TECH_BBE, - SE_TECH_CRYSTAL_SEMI, - SE_TECH_QSOUND_QXPANDER, - SE_TECH_SPATIALIZER, - SE_TECH_SRS, - SE_TECH_PLATFORM_TECH, - SE_TECH_AKM, - SE_TECH_AUREAL, - SE_TECH_AZTECH, - SE_TECH_BINAURA, - SE_TECH_ESS_TECH, - SE_TECH_HARMAN_VMAX, - SE_TECH_NVIDEA, - SE_TECH_PHILIPS_INCREDIBLE, - SE_TECH_TEXAS_INST, - SE_TECH_VLSI_TECH -} SE_TECHNIQUE; - -typedef struct { - SE_TECHNIQUE Technique; - ULONG Center; - ULONG Depth; - ULONG Reserved; -} KSAUDIO_STEREO_ENHANCE,*PKSAUDIO_STEREO_ENHANCE; - -typedef enum { - KSPROPERTY_SYSAUDIO_NORMAL_DEFAULT = 0, - KSPROPERTY_SYSAUDIO_PLAYBACK_DEFAULT, - KSPROPERTY_SYSAUDIO_RECORD_DEFAULT, - KSPROPERTY_SYSAUDIO_MIDI_DEFAULT, - KSPROPERTY_SYSAUDIO_MIXER_DEFAULT -} KSPROPERTY_SYSAUDIO_DEFAULT_TYPE; - -typedef struct { - WINBOOL Enable; - KSPROPERTY_SYSAUDIO_DEFAULT_TYPE DeviceType; - ULONG Flags; - ULONG Reserved; -} KSAUDIO_PREFERRED_STATUS,*PKSAUDIO_PREFERRED_STATUS; - -#define STATIC_KSNODETYPE_DAC \ - 0x507AE360L,0xC554,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("507AE360-C554-11D0-8A2B-00A0C9255AC1",KSNODETYPE_DAC); -#define KSNODETYPE_DAC DEFINE_GUIDNAMED(KSNODETYPE_DAC) - -#define STATIC_KSNODETYPE_ADC \ - 0x4D837FE0L,0xC555,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("4D837FE0-C555-11D0-8A2B-00A0C9255AC1",KSNODETYPE_ADC); -#define KSNODETYPE_ADC DEFINE_GUIDNAMED(KSNODETYPE_ADC) - -#define STATIC_KSNODETYPE_SRC \ - 0x9DB7B9E0L,0xC555,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("9DB7B9E0-C555-11D0-8A2B-00A0C9255AC1",KSNODETYPE_SRC); -#define KSNODETYPE_SRC DEFINE_GUIDNAMED(KSNODETYPE_SRC) - -#define STATIC_KSNODETYPE_SUPERMIX \ - 0xE573ADC0L,0xC555,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("E573ADC0-C555-11D0-8A2B-00A0C9255AC1",KSNODETYPE_SUPERMIX); -#define KSNODETYPE_SUPERMIX DEFINE_GUIDNAMED(KSNODETYPE_SUPERMIX) - -#define STATIC_KSNODETYPE_MUX \ - 0x2CEAF780L,0xC556,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("2CEAF780-C556-11D0-8A2B-00A0C9255AC1",KSNODETYPE_MUX); -#define KSNODETYPE_MUX DEFINE_GUIDNAMED(KSNODETYPE_MUX) - -#define STATIC_KSNODETYPE_DEMUX \ - 
0xC0EB67D4L,0xE807,0x11D0,0x95,0x8A,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("C0EB67D4-E807-11D0-958A-00C04FB925D3",KSNODETYPE_DEMUX); -#define KSNODETYPE_DEMUX DEFINE_GUIDNAMED(KSNODETYPE_DEMUX) - -#define STATIC_KSNODETYPE_SUM \ - 0xDA441A60L,0xC556,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("DA441A60-C556-11D0-8A2B-00A0C9255AC1",KSNODETYPE_SUM); -#define KSNODETYPE_SUM DEFINE_GUIDNAMED(KSNODETYPE_SUM) - -#define STATIC_KSNODETYPE_MUTE \ - 0x02B223C0L,0xC557,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("02B223C0-C557-11D0-8A2B-00A0C9255AC1",KSNODETYPE_MUTE); -#define KSNODETYPE_MUTE DEFINE_GUIDNAMED(KSNODETYPE_MUTE) - -#define STATIC_KSNODETYPE_VOLUME \ - 0x3A5ACC00L,0xC557,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("3A5ACC00-C557-11D0-8A2B-00A0C9255AC1",KSNODETYPE_VOLUME); -#define KSNODETYPE_VOLUME DEFINE_GUIDNAMED(KSNODETYPE_VOLUME) - -#define STATIC_KSNODETYPE_TONE \ - 0x7607E580L,0xC557,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("7607E580-C557-11D0-8A2B-00A0C9255AC1",KSNODETYPE_TONE); -#define KSNODETYPE_TONE DEFINE_GUIDNAMED(KSNODETYPE_TONE) - -#define STATIC_KSNODETYPE_EQUALIZER \ - 0x9D41B4A0L,0xC557,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("9D41B4A0-C557-11D0-8A2B-00A0C9255AC1",KSNODETYPE_EQUALIZER); -#define KSNODETYPE_EQUALIZER DEFINE_GUIDNAMED(KSNODETYPE_EQUALIZER) - -#define STATIC_KSNODETYPE_AGC \ - 0xE88C9BA0L,0xC557,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("E88C9BA0-C557-11D0-8A2B-00A0C9255AC1",KSNODETYPE_AGC); -#define KSNODETYPE_AGC DEFINE_GUIDNAMED(KSNODETYPE_AGC) - -#define STATIC_KSNODETYPE_NOISE_SUPPRESS \ - 0xe07f903f,0x62fd,0x4e60,0x8c,0xdd,0xde,0xa7,0x23,0x66,0x65,0xb5 -DEFINE_GUIDSTRUCT("E07F903F-62FD-4e60-8CDD-DEA7236665B5",KSNODETYPE_NOISE_SUPPRESS); -#define KSNODETYPE_NOISE_SUPPRESS DEFINE_GUIDNAMED(KSNODETYPE_NOISE_SUPPRESS) - -#define STATIC_KSNODETYPE_DELAY \ - 0x144981E0L,0xC558,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("144981E0-C558-11D0-8A2B-00A0C9255AC1",KSNODETYPE_DELAY); -#define KSNODETYPE_DELAY DEFINE_GUIDNAMED(KSNODETYPE_DELAY) - -#define STATIC_KSNODETYPE_LOUDNESS \ - 0x41887440L,0xC558,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("41887440-C558-11D0-8A2B-00A0C9255AC1",KSNODETYPE_LOUDNESS); -#define KSNODETYPE_LOUDNESS DEFINE_GUIDNAMED(KSNODETYPE_LOUDNESS) - -#define STATIC_KSNODETYPE_PROLOGIC_DECODER \ - 0x831C2C80L,0xC558,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("831C2C80-C558-11D0-8A2B-00A0C9255AC1",KSNODETYPE_PROLOGIC_DECODER); -#define KSNODETYPE_PROLOGIC_DECODER DEFINE_GUIDNAMED(KSNODETYPE_PROLOGIC_DECODER) - -#define STATIC_KSNODETYPE_STEREO_WIDE \ - 0xA9E69800L,0xC558,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("A9E69800-C558-11D0-8A2B-00A0C9255AC1",KSNODETYPE_STEREO_WIDE); -#define KSNODETYPE_STEREO_WIDE DEFINE_GUIDNAMED(KSNODETYPE_STEREO_WIDE) - -#define STATIC_KSNODETYPE_STEREO_ENHANCE \ - 0xAF6878ACL,0xE83F,0x11D0,0x95,0x8A,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("AF6878AC-E83F-11D0-958A-00C04FB925D3",KSNODETYPE_STEREO_ENHANCE); -#define KSNODETYPE_STEREO_ENHANCE DEFINE_GUIDNAMED(KSNODETYPE_STEREO_ENHANCE) - -#define STATIC_KSNODETYPE_REVERB \ - 0xEF0328E0L,0xC558,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("EF0328E0-C558-11D0-8A2B-00A0C9255AC1",KSNODETYPE_REVERB); -#define KSNODETYPE_REVERB DEFINE_GUIDNAMED(KSNODETYPE_REVERB) - -#define STATIC_KSNODETYPE_CHORUS \ - 
0x20173F20L,0xC559,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("20173F20-C559-11D0-8A2B-00A0C9255AC1",KSNODETYPE_CHORUS); -#define KSNODETYPE_CHORUS DEFINE_GUIDNAMED(KSNODETYPE_CHORUS) - -#define STATIC_KSNODETYPE_3D_EFFECTS \ - 0x55515860L,0xC559,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("55515860-C559-11D0-8A2B-00A0C9255AC1",KSNODETYPE_3D_EFFECTS); -#define KSNODETYPE_3D_EFFECTS DEFINE_GUIDNAMED(KSNODETYPE_3D_EFFECTS) - -#define STATIC_KSNODETYPE_ACOUSTIC_ECHO_CANCEL STATIC_KSCATEGORY_ACOUSTIC_ECHO_CANCEL -#define KSNODETYPE_ACOUSTIC_ECHO_CANCEL KSCATEGORY_ACOUSTIC_ECHO_CANCEL - -#define STATIC_KSALGORITHMINSTANCE_SYSTEM_ACOUSTIC_ECHO_CANCEL \ - 0x1c22c56dL,0x9879,0x4f5b,0xa3,0x89,0x27,0x99,0x6d,0xdc,0x28,0x10 -DEFINE_GUIDSTRUCT("1C22C56D-9879-4f5b-A389-27996DDC2810",KSALGORITHMINSTANCE_SYSTEM_ACOUSTIC_ECHO_CANCEL); -#define KSALGORITHMINSTANCE_SYSTEM_ACOUSTIC_ECHO_CANCEL DEFINE_GUIDNAMED(KSALGORITHMINSTANCE_SYSTEM_ACOUSTIC_ECHO_CANCEL) - -#define STATIC_KSALGORITHMINSTANCE_SYSTEM_NOISE_SUPPRESS \ - 0x5ab0882eL,0x7274,0x4516,0x87,0x7d,0x4e,0xee,0x99,0xba,0x4f,0xd0 -DEFINE_GUIDSTRUCT("5AB0882E-7274-4516-877D-4EEE99BA4FD0",KSALGORITHMINSTANCE_SYSTEM_NOISE_SUPPRESS); -#define KSALGORITHMINSTANCE_SYSTEM_NOISE_SUPPRESS DEFINE_GUIDNAMED(KSALGORITHMINSTANCE_SYSTEM_NOISE_SUPPRESS) - -#define STATIC_KSALGORITHMINSTANCE_SYSTEM_AGC \ - 0x950e55b9L,0x877c,0x4c67,0xbe,0x8,0xe4,0x7b,0x56,0x11,0x13,0xa -DEFINE_GUIDSTRUCT("950E55B9-877C-4c67-BE08-E47B5611130A",KSALGORITHMINSTANCE_SYSTEM_AGC); -#define KSALGORITHMINSTANCE_SYSTEM_AGC DEFINE_GUIDNAMED(KSALGORITHMINSTANCE_SYSTEM_AGC) - -#define STATIC_KSALGORITHMINSTANCE_SYSTEM_MICROPHONE_ARRAY_PROCESSOR \ - 0xB6F5A0A0L,0x9E61,0x4F8C,0x91,0xE3,0x76,0xCF,0xF,0x3C,0x47,0x1F -DEFINE_GUIDSTRUCT("B6F5A0A0-9E61-4f8c-91E3-76CF0F3C471F",KSALGORITHMINSTANCE_SYSTEM_MICROPHONE_ARRAY_PROCESSOR); -#define KSALGORITHMINSTANCE_SYSTEM_MICROPHONE_ARRAY_PROCESSOR DEFINE_GUIDNAMED(KSALGORITHMINSTANCE_SYSTEM_MICROPHONE_ARRAY_PROCESSOR) - -#define STATIC_KSNODETYPE_MICROPHONE_ARRAY_PROCESSOR STATIC_KSCATEGORY_MICROPHONE_ARRAY_PROCESSOR -#define KSNODETYPE_MICROPHONE_ARRAY_PROCESSOR KSCATEGORY_MICROPHONE_ARRAY_PROCESSOR - -#define STATIC_KSNODETYPE_DEV_SPECIFIC \ - 0x941C7AC0L,0xC559,0x11D0,0x8A,0x2B,0x00,0xA0,0xC9,0x25,0x5A,0xC1 -DEFINE_GUIDSTRUCT("941C7AC0-C559-11D0-8A2B-00A0C9255AC1",KSNODETYPE_DEV_SPECIFIC); -#define KSNODETYPE_DEV_SPECIFIC DEFINE_GUIDNAMED(KSNODETYPE_DEV_SPECIFIC) - -#define STATIC_KSNODETYPE_PROLOGIC_ENCODER \ - 0x8074C5B2L,0x3C66,0x11D2,0xB4,0x5A,0x30,0x78,0x30,0x2C,0x20,0x30 -DEFINE_GUIDSTRUCT("8074C5B2-3C66-11D2-B45A-3078302C2030",KSNODETYPE_PROLOGIC_ENCODER); -#define KSNODETYPE_PROLOGIC_ENCODER DEFINE_GUIDNAMED(KSNODETYPE_PROLOGIC_ENCODER) -#define KSNODETYPE_SURROUND_ENCODER KSNODETYPE_PROLOGIC_ENCODER - -#define STATIC_KSNODETYPE_PEAKMETER \ - 0xa085651eL,0x5f0d,0x4b36,0xa8,0x69,0xd1,0x95,0xd6,0xab,0x4b,0x9e -DEFINE_GUIDSTRUCT("A085651E-5F0D-4b36-A869-D195D6AB4B9E",KSNODETYPE_PEAKMETER); -#define KSNODETYPE_PEAKMETER DEFINE_GUIDNAMED(KSNODETYPE_PEAKMETER) - -#define STATIC_KSAUDFNAME_BASS \ - 0x185FEDE0L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDE0-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_BASS); -#define KSAUDFNAME_BASS DEFINE_GUIDNAMED(KSAUDFNAME_BASS) - -#define STATIC_KSAUDFNAME_TREBLE \ - 0x185FEDE1L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDE1-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_TREBLE); -#define KSAUDFNAME_TREBLE 
DEFINE_GUIDNAMED(KSAUDFNAME_TREBLE) - -#define STATIC_KSAUDFNAME_3D_STEREO \ - 0x185FEDE2L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDE2-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_3D_STEREO); -#define KSAUDFNAME_3D_STEREO DEFINE_GUIDNAMED(KSAUDFNAME_3D_STEREO) - -#define STATIC_KSAUDFNAME_MASTER_VOLUME \ - 0x185FEDE3L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDE3-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_MASTER_VOLUME); -#define KSAUDFNAME_MASTER_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_MASTER_VOLUME) - -#define STATIC_KSAUDFNAME_MASTER_MUTE \ - 0x185FEDE4L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDE4-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_MASTER_MUTE); -#define KSAUDFNAME_MASTER_MUTE DEFINE_GUIDNAMED(KSAUDFNAME_MASTER_MUTE) - -#define STATIC_KSAUDFNAME_WAVE_VOLUME \ - 0x185FEDE5L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDE5-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_WAVE_VOLUME); -#define KSAUDFNAME_WAVE_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_WAVE_VOLUME) - -#define STATIC_KSAUDFNAME_WAVE_MUTE \ - 0x185FEDE6L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDE6-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_WAVE_MUTE); -#define KSAUDFNAME_WAVE_MUTE DEFINE_GUIDNAMED(KSAUDFNAME_WAVE_MUTE) - -#define STATIC_KSAUDFNAME_MIDI_VOLUME \ - 0x185FEDE7L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDE7-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_MIDI_VOLUME); -#define KSAUDFNAME_MIDI_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_MIDI_VOLUME) - -#define STATIC_KSAUDFNAME_MIDI_MUTE \ - 0x185FEDE8L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDE8-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_MIDI_MUTE); -#define KSAUDFNAME_MIDI_MUTE DEFINE_GUIDNAMED(KSAUDFNAME_MIDI_MUTE) - -#define STATIC_KSAUDFNAME_CD_VOLUME \ - 0x185FEDE9L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDE9-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_CD_VOLUME); -#define KSAUDFNAME_CD_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_CD_VOLUME) - -#define STATIC_KSAUDFNAME_CD_MUTE \ - 0x185FEDEAL,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDEA-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_CD_MUTE); -#define KSAUDFNAME_CD_MUTE DEFINE_GUIDNAMED(KSAUDFNAME_CD_MUTE) - -#define STATIC_KSAUDFNAME_LINE_VOLUME \ - 0x185FEDEBL,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDEB-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_LINE_VOLUME); -#define KSAUDFNAME_LINE_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_LINE_VOLUME) - -#define STATIC_KSAUDFNAME_LINE_MUTE \ - 0x185FEDECL,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDEC-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_LINE_MUTE); -#define KSAUDFNAME_LINE_MUTE DEFINE_GUIDNAMED(KSAUDFNAME_LINE_MUTE) - -#define STATIC_KSAUDFNAME_MIC_VOLUME \ - 0x185FEDEDL,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDED-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_MIC_VOLUME); -#define KSAUDFNAME_MIC_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_MIC_VOLUME) - -#define STATIC_KSAUDFNAME_MIC_MUTE \ - 0x185FEDEEL,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDEE-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_MIC_MUTE); -#define KSAUDFNAME_MIC_MUTE DEFINE_GUIDNAMED(KSAUDFNAME_MIC_MUTE) - -#define STATIC_KSAUDFNAME_RECORDING_SOURCE \ - 
0x185FEDEFL,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDEF-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_RECORDING_SOURCE); -#define KSAUDFNAME_RECORDING_SOURCE DEFINE_GUIDNAMED(KSAUDFNAME_RECORDING_SOURCE) - -#define STATIC_KSAUDFNAME_PC_SPEAKER_VOLUME \ - 0x185FEDF0L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDF0-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_PC_SPEAKER_VOLUME); -#define KSAUDFNAME_PC_SPEAKER_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_PC_SPEAKER_VOLUME) - -#define STATIC_KSAUDFNAME_PC_SPEAKER_MUTE \ - 0x185FEDF1L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDF1-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_PC_SPEAKER_MUTE); -#define KSAUDFNAME_PC_SPEAKER_MUTE DEFINE_GUIDNAMED(KSAUDFNAME_PC_SPEAKER_MUTE) - -#define STATIC_KSAUDFNAME_MIDI_IN_VOLUME \ - 0x185FEDF2L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDF2-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_MIDI_IN_VOLUME); -#define KSAUDFNAME_MIDI_IN_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_MIDI_IN_VOLUME) - -#define STATIC_KSAUDFNAME_CD_IN_VOLUME \ - 0x185FEDF3L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDF3-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_CD_IN_VOLUME); -#define KSAUDFNAME_CD_IN_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_CD_IN_VOLUME) - -#define STATIC_KSAUDFNAME_LINE_IN_VOLUME \ - 0x185FEDF4L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDF4-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_LINE_IN_VOLUME); -#define KSAUDFNAME_LINE_IN_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_LINE_IN_VOLUME) - -#define STATIC_KSAUDFNAME_MIC_IN_VOLUME \ - 0x185FEDF5L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDF5-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_MIC_IN_VOLUME); -#define KSAUDFNAME_MIC_IN_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_MIC_IN_VOLUME) - -#define STATIC_KSAUDFNAME_WAVE_IN_VOLUME \ - 0x185FEDF6L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDF6-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_WAVE_IN_VOLUME); -#define KSAUDFNAME_WAVE_IN_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_WAVE_IN_VOLUME) - -#define STATIC_KSAUDFNAME_VOLUME_CONTROL \ - 0x185FEDF7L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDF7-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_VOLUME_CONTROL); -#define KSAUDFNAME_VOLUME_CONTROL DEFINE_GUIDNAMED(KSAUDFNAME_VOLUME_CONTROL) - -#define STATIC_KSAUDFNAME_MIDI \ - 0x185FEDF8L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDF8-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_MIDI); -#define KSAUDFNAME_MIDI DEFINE_GUIDNAMED(KSAUDFNAME_MIDI) - -#define STATIC_KSAUDFNAME_LINE_IN \ - 0x185FEDF9L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDF9-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_LINE_IN); -#define KSAUDFNAME_LINE_IN DEFINE_GUIDNAMED(KSAUDFNAME_LINE_IN) - -#define STATIC_KSAUDFNAME_RECORDING_CONTROL \ - 0x185FEDFAL,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDFA-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_RECORDING_CONTROL); -#define KSAUDFNAME_RECORDING_CONTROL DEFINE_GUIDNAMED(KSAUDFNAME_RECORDING_CONTROL) - -#define STATIC_KSAUDFNAME_CD_AUDIO \ - 0x185FEDFBL,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDFB-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_CD_AUDIO); -#define KSAUDFNAME_CD_AUDIO DEFINE_GUIDNAMED(KSAUDFNAME_CD_AUDIO) - -#define STATIC_KSAUDFNAME_AUX_VOLUME \ - 
0x185FEDFCL,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDFC-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_AUX_VOLUME); -#define KSAUDFNAME_AUX_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_AUX_VOLUME) - -#define STATIC_KSAUDFNAME_AUX_MUTE \ - 0x185FEDFDL,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDFD-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_AUX_MUTE); -#define KSAUDFNAME_AUX_MUTE DEFINE_GUIDNAMED(KSAUDFNAME_AUX_MUTE) - -#define STATIC_KSAUDFNAME_AUX \ - 0x185FEDFEL,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDFE-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_AUX); -#define KSAUDFNAME_AUX DEFINE_GUIDNAMED(KSAUDFNAME_AUX) - -#define STATIC_KSAUDFNAME_PC_SPEAKER \ - 0x185FEDFFL,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEDFF-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_PC_SPEAKER); -#define KSAUDFNAME_PC_SPEAKER DEFINE_GUIDNAMED(KSAUDFNAME_PC_SPEAKER) - -#define STATIC_KSAUDFNAME_WAVE_OUT_MIX \ - 0x185FEE00L,0x9905,0x11D1,0x95,0xA9,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("185FEE00-9905-11D1-95A9-00C04FB925D3",KSAUDFNAME_WAVE_OUT_MIX); -#define KSAUDFNAME_WAVE_OUT_MIX DEFINE_GUIDNAMED(KSAUDFNAME_WAVE_OUT_MIX) - -#define STATIC_KSAUDFNAME_MONO_OUT \ - 0xf9b41dc3L,0x96e2,0x11d2,0xac,0x4c,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("F9B41DC3-96E2-11d2-AC4C-00C04F8EFB68",KSAUDFNAME_MONO_OUT); -#define KSAUDFNAME_MONO_OUT DEFINE_GUIDNAMED(KSAUDFNAME_MONO_OUT) - -#define STATIC_KSAUDFNAME_STEREO_MIX \ - 0xdff077L,0x96e3,0x11d2,0xac,0x4c,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("00DFF077-96E3-11d2-AC4C-00C04F8EFB68",KSAUDFNAME_STEREO_MIX); -#define KSAUDFNAME_STEREO_MIX DEFINE_GUIDNAMED(KSAUDFNAME_STEREO_MIX) - -#define STATIC_KSAUDFNAME_MONO_MIX \ - 0xdff078L,0x96e3,0x11d2,0xac,0x4c,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("00DFF078-96E3-11d2-AC4C-00C04F8EFB68",KSAUDFNAME_MONO_MIX); -#define KSAUDFNAME_MONO_MIX DEFINE_GUIDNAMED(KSAUDFNAME_MONO_MIX) - -#define STATIC_KSAUDFNAME_MONO_OUT_VOLUME \ - 0x1ad247ebL,0x96e3,0x11d2,0xac,0x4c,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("1AD247EB-96E3-11d2-AC4C-00C04F8EFB68",KSAUDFNAME_MONO_OUT_VOLUME); -#define KSAUDFNAME_MONO_OUT_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_MONO_OUT_VOLUME) - -#define STATIC_KSAUDFNAME_MONO_OUT_MUTE \ - 0x1ad247ecL,0x96e3,0x11d2,0xac,0x4c,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("1AD247EC-96E3-11d2-AC4C-00C04F8EFB68",KSAUDFNAME_MONO_OUT_MUTE); -#define KSAUDFNAME_MONO_OUT_MUTE DEFINE_GUIDNAMED(KSAUDFNAME_MONO_OUT_MUTE) - -#define STATIC_KSAUDFNAME_STEREO_MIX_VOLUME \ - 0x1ad247edL,0x96e3,0x11d2,0xac,0x4c,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("1AD247ED-96E3-11d2-AC4C-00C04F8EFB68",KSAUDFNAME_STEREO_MIX_VOLUME); -#define KSAUDFNAME_STEREO_MIX_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_STEREO_MIX_VOLUME) - -#define STATIC_KSAUDFNAME_STEREO_MIX_MUTE \ - 0x22b0eafdL,0x96e3,0x11d2,0xac,0x4c,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("22B0EAFD-96E3-11d2-AC4C-00C04F8EFB68",KSAUDFNAME_STEREO_MIX_MUTE); -#define KSAUDFNAME_STEREO_MIX_MUTE DEFINE_GUIDNAMED(KSAUDFNAME_STEREO_MIX_MUTE) - -#define STATIC_KSAUDFNAME_MONO_MIX_VOLUME \ - 0x22b0eafeL,0x96e3,0x11d2,0xac,0x4c,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("22B0EAFE-96E3-11d2-AC4C-00C04F8EFB68",KSAUDFNAME_MONO_MIX_VOLUME); -#define KSAUDFNAME_MONO_MIX_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_MONO_MIX_VOLUME) - -#define STATIC_KSAUDFNAME_MONO_MIX_MUTE \ - 0x2bc31d69L,0x96e3,0x11d2,0xac,0x4c,0x0,0xc0,0x4f,0x8e,0xfb,0x68 
-DEFINE_GUIDSTRUCT("2BC31D69-96E3-11d2-AC4C-00C04F8EFB68",KSAUDFNAME_MONO_MIX_MUTE); -#define KSAUDFNAME_MONO_MIX_MUTE DEFINE_GUIDNAMED(KSAUDFNAME_MONO_MIX_MUTE) - -#define STATIC_KSAUDFNAME_MICROPHONE_BOOST \ - 0x2bc31d6aL,0x96e3,0x11d2,0xac,0x4c,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("2BC31D6A-96E3-11d2-AC4C-00C04F8EFB68",KSAUDFNAME_MICROPHONE_BOOST); -#define KSAUDFNAME_MICROPHONE_BOOST DEFINE_GUIDNAMED(KSAUDFNAME_MICROPHONE_BOOST) - -#define STATIC_KSAUDFNAME_ALTERNATE_MICROPHONE \ - 0x2bc31d6bL,0x96e3,0x11d2,0xac,0x4c,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("2BC31D6B-96E3-11d2-AC4C-00C04F8EFB68",KSAUDFNAME_ALTERNATE_MICROPHONE); -#define KSAUDFNAME_ALTERNATE_MICROPHONE DEFINE_GUIDNAMED(KSAUDFNAME_ALTERNATE_MICROPHONE) - -#define STATIC_KSAUDFNAME_3D_DEPTH \ - 0x63ff5747L,0x991f,0x11d2,0xac,0x4d,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("63FF5747-991F-11d2-AC4D-00C04F8EFB68",KSAUDFNAME_3D_DEPTH); -#define KSAUDFNAME_3D_DEPTH DEFINE_GUIDNAMED(KSAUDFNAME_3D_DEPTH) - -#define STATIC_KSAUDFNAME_3D_CENTER \ - 0x9f0670b4L,0x991f,0x11d2,0xac,0x4d,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("9F0670B4-991F-11d2-AC4D-00C04F8EFB68",KSAUDFNAME_3D_CENTER); -#define KSAUDFNAME_3D_CENTER DEFINE_GUIDNAMED(KSAUDFNAME_3D_CENTER) - -#define STATIC_KSAUDFNAME_VIDEO_VOLUME \ - 0x9b46e708L,0x992a,0x11d2,0xac,0x4d,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("9B46E708-992A-11d2-AC4D-00C04F8EFB68",KSAUDFNAME_VIDEO_VOLUME); -#define KSAUDFNAME_VIDEO_VOLUME DEFINE_GUIDNAMED(KSAUDFNAME_VIDEO_VOLUME) - -#define STATIC_KSAUDFNAME_VIDEO_MUTE \ - 0x9b46e709L,0x992a,0x11d2,0xac,0x4d,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("9B46E709-992A-11d2-AC4D-00C04F8EFB68",KSAUDFNAME_VIDEO_MUTE); -#define KSAUDFNAME_VIDEO_MUTE DEFINE_GUIDNAMED(KSAUDFNAME_VIDEO_MUTE) - -#define STATIC_KSAUDFNAME_VIDEO \ - 0x915daec4L,0xa434,0x11d2,0xac,0x52,0x0,0xc0,0x4f,0x8e,0xfb,0x68 -DEFINE_GUIDSTRUCT("915DAEC4-A434-11d2-AC52-00C04F8EFB68",KSAUDFNAME_VIDEO); -#define KSAUDFNAME_VIDEO DEFINE_GUIDNAMED(KSAUDFNAME_VIDEO) - -#define STATIC_KSAUDFNAME_PEAKMETER \ - 0x57e24340L,0xfc5b,0x4612,0xa5,0x62,0x72,0xb1,0x1a,0x29,0xdf,0xae -DEFINE_GUIDSTRUCT("57E24340-FC5B-4612-A562-72B11A29DFAE",KSAUDFNAME_PEAKMETER); -#define KSAUDFNAME_PEAKMETER DEFINE_GUIDNAMED(KSAUDFNAME_PEAKMETER) - -#define KSNODEPIN_STANDARD_IN 1 -#define KSNODEPIN_STANDARD_OUT 0 - -#define KSNODEPIN_SUM_MUX_IN 1 -#define KSNODEPIN_SUM_MUX_OUT 0 - -#define KSNODEPIN_DEMUX_IN 0 -#define KSNODEPIN_DEMUX_OUT 1 - -#define KSNODEPIN_AEC_RENDER_IN 1 -#define KSNODEPIN_AEC_RENDER_OUT 0 -#define KSNODEPIN_AEC_CAPTURE_IN 2 -#define KSNODEPIN_AEC_CAPTURE_OUT 3 - -#define STATIC_KSMETHODSETID_Wavetable \ - 0xDCEF31EBL,0xD907,0x11D0,0x95,0x83,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("DCEF31EB-D907-11D0-9583-00C04FB925D3",KSMETHODSETID_Wavetable); -#define KSMETHODSETID_Wavetable DEFINE_GUIDNAMED(KSMETHODSETID_Wavetable) - -typedef enum { - KSMETHOD_WAVETABLE_WAVE_ALLOC, - KSMETHOD_WAVETABLE_WAVE_FREE, - KSMETHOD_WAVETABLE_WAVE_FIND, - KSMETHOD_WAVETABLE_WAVE_WRITE -} KSMETHOD_WAVETABLE; - -typedef struct { - KSIDENTIFIER Identifier; - ULONG Size; - WINBOOL Looped; - ULONG LoopPoint; - WINBOOL InROM; - KSDATAFORMAT Format; -} KSWAVETABLE_WAVE_DESC,*PKSWAVETABLE_WAVE_DESC; - -#define STATIC_KSPROPSETID_Acoustic_Echo_Cancel \ - 0xd7a4af8bL,0x3dc1,0x4902,0x91,0xea,0x8a,0x15,0xc9,0x0e,0x05,0xb2 -DEFINE_GUIDSTRUCT("D7A4AF8B-3DC1-4902-91EA-8A15C90E05B2",KSPROPSETID_Acoustic_Echo_Cancel); -#define KSPROPSETID_Acoustic_Echo_Cancel 
DEFINE_GUIDNAMED(KSPROPSETID_Acoustic_Echo_Cancel) - -typedef enum { - KSPROPERTY_AEC_NOISE_FILL_ENABLE = 0, - KSPROPERTY_AEC_STATUS, - KSPROPERTY_AEC_MODE -} KSPROPERTY_AEC; - -#define AEC_STATUS_FD_HISTORY_UNINITIALIZED 0x0 -#define AEC_STATUS_FD_HISTORY_CONTINUOUSLY_CONVERGED 0x1 -#define AEC_STATUS_FD_HISTORY_PREVIOUSLY_DIVERGED 0x2 -#define AEC_STATUS_FD_CURRENTLY_CONVERGED 0x8 - -#define AEC_MODE_PASS_THROUGH 0x0 -#define AEC_MODE_HALF_DUPLEX 0x1 -#define AEC_MODE_FULL_DUPLEX 0x2 - -#define STATIC_KSPROPSETID_Wave \ - 0x924e54b0L,0x630f,0x11cf,0xad,0xa7,0x08,0x00,0x3e,0x30,0x49,0x4a -DEFINE_GUIDSTRUCT("924e54b0-630f-11cf-ada7-08003e30494a",KSPROPSETID_Wave); -#define KSPROPSETID_Wave DEFINE_GUIDNAMED(KSPROPSETID_Wave) - -typedef enum { - KSPROPERTY_WAVE_COMPATIBLE_CAPABILITIES, - KSPROPERTY_WAVE_INPUT_CAPABILITIES, - KSPROPERTY_WAVE_OUTPUT_CAPABILITIES, - KSPROPERTY_WAVE_BUFFER, - KSPROPERTY_WAVE_FREQUENCY, - KSPROPERTY_WAVE_VOLUME, - KSPROPERTY_WAVE_PAN -} KSPROPERTY_WAVE; - -typedef struct { - ULONG ulDeviceType; -} KSWAVE_COMPATCAPS,*PKSWAVE_COMPATCAPS; - -#define KSWAVE_COMPATCAPS_INPUT 0x00000000 -#define KSWAVE_COMPATCAPS_OUTPUT 0x00000001 - -typedef struct { - ULONG MaximumChannelsPerConnection; - ULONG MinimumBitsPerSample; - ULONG MaximumBitsPerSample; - ULONG MinimumSampleFrequency; - ULONG MaximumSampleFrequency; - ULONG TotalConnections; - ULONG ActiveConnections; -} KSWAVE_INPUT_CAPABILITIES,*PKSWAVE_INPUT_CAPABILITIES; - -typedef struct { - ULONG MaximumChannelsPerConnection; - ULONG MinimumBitsPerSample; - ULONG MaximumBitsPerSample; - ULONG MinimumSampleFrequency; - ULONG MaximumSampleFrequency; - ULONG TotalConnections; - ULONG StaticConnections; - ULONG StreamingConnections; - ULONG ActiveConnections; - ULONG ActiveStaticConnections; - ULONG ActiveStreamingConnections; - ULONG Total3DConnections; - ULONG Static3DConnections; - ULONG Streaming3DConnections; - ULONG Active3DConnections; - ULONG ActiveStatic3DConnections; - ULONG ActiveStreaming3DConnections; - ULONG TotalSampleMemory; - ULONG FreeSampleMemory; - ULONG LargestFreeContiguousSampleMemory; -} KSWAVE_OUTPUT_CAPABILITIES,*PKSWAVE_OUTPUT_CAPABILITIES; - -typedef struct { - LONG LeftAttenuation; - LONG RightAttenuation; -} KSWAVE_VOLUME,*PKSWAVE_VOLUME; - -#define KSWAVE_BUFFER_ATTRIBUTEF_LOOPING 0x00000001 -#define KSWAVE_BUFFER_ATTRIBUTEF_STATIC 0x00000002 - -typedef struct { - ULONG Attributes; - ULONG BufferSize; - PVOID BufferAddress; -} KSWAVE_BUFFER,*PKSWAVE_BUFFER; - -#define STATIC_KSMUSIC_TECHNOLOGY_PORT \ - 0x86C92E60L,0x62E8,0x11CF,0xA5,0xD6,0x28,0xDB,0x04,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("86C92E60-62E8-11CF-A5D6-28DB04C10000",KSMUSIC_TECHNOLOGY_PORT); -#define KSMUSIC_TECHNOLOGY_PORT DEFINE_GUIDNAMED(KSMUSIC_TECHNOLOGY_PORT) - -#define STATIC_KSMUSIC_TECHNOLOGY_SQSYNTH \ - 0x0ECF4380L,0x62E9,0x11CF,0xA5,0xD6,0x28,0xDB,0x04,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("0ECF4380-62E9-11CF-A5D6-28DB04C10000",KSMUSIC_TECHNOLOGY_SQSYNTH); -#define KSMUSIC_TECHNOLOGY_SQSYNTH DEFINE_GUIDNAMED(KSMUSIC_TECHNOLOGY_SQSYNTH) - -#define STATIC_KSMUSIC_TECHNOLOGY_FMSYNTH \ - 0x252C5C80L,0x62E9,0x11CF,0xA5,0xD6,0x28,0xDB,0x04,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("252C5C80-62E9-11CF-A5D6-28DB04C10000",KSMUSIC_TECHNOLOGY_FMSYNTH); -#define KSMUSIC_TECHNOLOGY_FMSYNTH DEFINE_GUIDNAMED(KSMUSIC_TECHNOLOGY_FMSYNTH) - -#define STATIC_KSMUSIC_TECHNOLOGY_WAVETABLE \ - 0x394EC7C0L,0x62E9,0x11CF,0xA5,0xD6,0x28,0xDB,0x04,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("394EC7C0-62E9-11CF-A5D6-28DB04C10000",KSMUSIC_TECHNOLOGY_WAVETABLE); -#define 
KSMUSIC_TECHNOLOGY_WAVETABLE DEFINE_GUIDNAMED(KSMUSIC_TECHNOLOGY_WAVETABLE) - -#define STATIC_KSMUSIC_TECHNOLOGY_SWSYNTH \ - 0x37407736L,0x3620,0x11D1,0x85,0xD3,0x00,0x00,0xF8,0x75,0x43,0x80 -DEFINE_GUIDSTRUCT("37407736-3620-11D1-85D3-0000F8754380",KSMUSIC_TECHNOLOGY_SWSYNTH); -#define KSMUSIC_TECHNOLOGY_SWSYNTH DEFINE_GUIDNAMED(KSMUSIC_TECHNOLOGY_SWSYNTH) - -#define STATIC_KSPROPSETID_WaveTable \ - 0x8539E660L,0x62E9,0x11CF,0xA5,0xD6,0x28,0xDB,0x04,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("8539E660-62E9-11CF-A5D6-28DB04C10000",KSPROPSETID_WaveTable); -#define KSPROPSETID_WaveTable DEFINE_GUIDNAMED(KSPROPSETID_WaveTable) - -typedef enum { - KSPROPERTY_WAVETABLE_LOAD_SAMPLE, - KSPROPERTY_WAVETABLE_UNLOAD_SAMPLE, - KSPROPERTY_WAVETABLE_MEMORY, - KSPROPERTY_WAVETABLE_VERSION -} KSPROPERTY_WAVETABLE; - -typedef struct { - KSDATARANGE DataRange; - GUID Technology; - ULONG Channels; - ULONG Notes; - ULONG ChannelMask; -} KSDATARANGE_MUSIC,*PKSDATARANGE_MUSIC; - -#define STATIC_KSEVENTSETID_Cyclic \ - 0x142C1AC0L,0x072A,0x11D0,0xA5,0xD6,0x28,0xDB,0x04,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("142C1AC0-072A-11D0-A5D6-28DB04C10000",KSEVENTSETID_Cyclic); -#define KSEVENTSETID_Cyclic DEFINE_GUIDNAMED(KSEVENTSETID_Cyclic) - -typedef enum { - KSEVENT_CYCLIC_TIME_INTERVAL -} KSEVENT_CYCLIC_TIME; - -#define STATIC_KSPROPSETID_Cyclic \ - 0x3FFEAEA0L,0x2BEE,0x11CF,0xA5,0xD6,0x28,0xDB,0x04,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("3FFEAEA0-2BEE-11CF-A5D6-28DB04C10000",KSPROPSETID_Cyclic); -#define KSPROPSETID_Cyclic DEFINE_GUIDNAMED(KSPROPSETID_Cyclic) - -typedef enum { - KSPROPERTY_CYCLIC_POSITION -} KSPROPERTY_CYCLIC; - -#define STATIC_KSEVENTSETID_AudioControlChange \ - 0xE85E9698L,0xFA2F,0x11D1,0x95,0xBD,0x00,0xC0,0x4F,0xB9,0x25,0xD3 -DEFINE_GUIDSTRUCT("E85E9698-FA2F-11D1-95BD-00C04FB925D3",KSEVENTSETID_AudioControlChange); -#define KSEVENTSETID_AudioControlChange DEFINE_GUIDNAMED(KSEVENTSETID_AudioControlChange) - -typedef enum { - KSEVENT_CONTROL_CHANGE -} KSEVENT_AUDIO_CONTROL_CHANGE; - -#define STATIC_KSEVENTSETID_LoopedStreaming \ - 0x4682B940L,0xC6EF,0x11D0,0x96,0xD8,0x00,0xAA,0x00,0x51,0xE5,0x1D -DEFINE_GUIDSTRUCT("4682B940-C6EF-11D0-96D8-00AA0051E51D",KSEVENTSETID_LoopedStreaming); -#define KSEVENTSETID_LoopedStreaming DEFINE_GUIDNAMED(KSEVENTSETID_LoopedStreaming) - -typedef enum { - KSEVENT_LOOPEDSTREAMING_POSITION -} KSEVENT_LOOPEDSTREAMING; - -typedef struct { - KSEVENTDATA KsEventData; - DWORDLONG Position; -} LOOPEDSTREAMING_POSITION_EVENT_DATA,*PLOOPEDSTREAMING_POSITION_EVENT_DATA; - -#define STATIC_KSPROPSETID_Sysaudio \ - 0xCBE3FAA0L,0xCC75,0x11D0,0xB4,0x65,0x00,0x00,0x1A,0x18,0x18,0xE6 -DEFINE_GUIDSTRUCT("CBE3FAA0-CC75-11D0-B465-00001A1818E6",KSPROPSETID_Sysaudio); -#define KSPROPSETID_Sysaudio DEFINE_GUIDNAMED(KSPROPSETID_Sysaudio) - -typedef enum { - KSPROPERTY_SYSAUDIO_DEVICE_COUNT = 1, - KSPROPERTY_SYSAUDIO_DEVICE_FRIENDLY_NAME = 2, - KSPROPERTY_SYSAUDIO_DEVICE_INSTANCE = 3, - KSPROPERTY_SYSAUDIO_DEVICE_INTERFACE_NAME = 4, - KSPROPERTY_SYSAUDIO_SELECT_GRAPH = 5, - KSPROPERTY_SYSAUDIO_CREATE_VIRTUAL_SOURCE = 6, - KSPROPERTY_SYSAUDIO_DEVICE_DEFAULT = 7, - KSPROPERTY_SYSAUDIO_INSTANCE_INFO = 14, - KSPROPERTY_SYSAUDIO_COMPONENT_ID = 16 -} KSPROPERTY_SYSAUDIO; - -typedef struct { - KSPROPERTY Property; - GUID PinCategory; - GUID PinName; -} SYSAUDIO_CREATE_VIRTUAL_SOURCE,*PSYSAUDIO_CREATE_VIRTUAL_SOURCE; - -typedef struct { - KSPROPERTY Property; - ULONG PinId; - ULONG NodeId; - ULONG Flags; - ULONG Reserved; -} SYSAUDIO_SELECT_GRAPH,*PSYSAUDIO_SELECT_GRAPH; - -typedef struct { - KSPROPERTY Property; - 
ULONG Flags; - ULONG DeviceNumber; -} SYSAUDIO_INSTANCE_INFO,*PSYSAUDIO_INSTANCE_INFO; - -#define SYSAUDIO_FLAGS_DONT_COMBINE_PINS 0x00000001 - -#define STATIC_KSPROPSETID_Sysaudio_Pin \ - 0xA3A53220L,0xC6E4,0x11D0,0xB4,0x65,0x00,0x00,0x1A,0x18,0x18,0xE6 -DEFINE_GUIDSTRUCT("A3A53220-C6E4-11D0-B465-00001A1818E6",KSPROPSETID_Sysaudio_Pin); -#define KSPROPSETID_Sysaudio_Pin DEFINE_GUIDNAMED(KSPROPSETID_Sysaudio_Pin) - -typedef enum { - KSPROPERTY_SYSAUDIO_ATTACH_VIRTUAL_SOURCE = 1 -} KSPROPERTY_SYSAUDIO_PIN; - -typedef struct { - KSPROPERTY Property; - ULONG MixerPinId; - ULONG Reserved; -} SYSAUDIO_ATTACH_VIRTUAL_SOURCE,*PSYSAUDIO_ATTACH_VIRTUAL_SOURCE; - -typedef struct { - KSPROPERTY Property; - ULONG NodeId; - ULONG Reserved; -} KSNODEPROPERTY,*PKSNODEPROPERTY; - -typedef struct { - KSNODEPROPERTY NodeProperty; - LONG Channel; - ULONG Reserved; -} KSNODEPROPERTY_AUDIO_CHANNEL,*PKSNODEPROPERTY_AUDIO_CHANNEL; - -typedef struct { - KSNODEPROPERTY NodeProperty; - ULONG DevSpecificId; - ULONG DeviceInfo; - ULONG Length; -} KSNODEPROPERTY_AUDIO_DEV_SPECIFIC,*PKSNODEPROPERTY_AUDIO_DEV_SPECIFIC; - -typedef struct { - KSNODEPROPERTY NodeProperty; - PVOID ListenerId; -#ifndef _WIN64 - ULONG Reserved; -#endif -} KSNODEPROPERTY_AUDIO_3D_LISTENER,*PKSNODEPROPERTY_AUDIO_3D_LISTENER; - -typedef struct { - KSNODEPROPERTY NodeProperty; - PVOID AppContext; - ULONG Length; -#ifndef _WIN64 - ULONG Reserved; -#endif -} KSNODEPROPERTY_AUDIO_PROPERTY,*PKSNODEPROPERTY_AUDIO_PROPERTY; - -#define STATIC_KSPROPSETID_AudioGfx \ - 0x79a9312eL,0x59ae,0x43b0,0xa3,0x50,0x8b,0x5,0x28,0x4c,0xab,0x24 -DEFINE_GUIDSTRUCT("79A9312E-59AE-43b0-A350-8B05284CAB24",KSPROPSETID_AudioGfx); -#define KSPROPSETID_AudioGfx DEFINE_GUIDNAMED(KSPROPSETID_AudioGfx) - -typedef enum { - KSPROPERTY_AUDIOGFX_RENDERTARGETDEVICEID, - KSPROPERTY_AUDIOGFX_CAPTURETARGETDEVICEID -} KSPROPERTY_AUDIOGFX; - -#define STATIC_KSPROPSETID_Linear \ - 0x5A2FFE80L,0x16B9,0x11D0,0xA5,0xD6,0x28,0xDB,0x04,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("5A2FFE80-16B9-11D0-A5D6-28DB04C10000",KSPROPSETID_Linear); -#define KSPROPSETID_Linear DEFINE_GUIDNAMED(KSPROPSETID_Linear) - -typedef enum { - KSPROPERTY_LINEAR_POSITION -} KSPROPERTY_LINEAR; - -#define STATIC_KSDATAFORMAT_TYPE_MUSIC \ - 0xE725D360L,0x62CC,0x11CF,0xA5,0xD6,0x28,0xDB,0x04,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("E725D360-62CC-11CF-A5D6-28DB04C10000",KSDATAFORMAT_TYPE_MUSIC); -#define KSDATAFORMAT_TYPE_MUSIC DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_MUSIC) - -#define STATIC_KSDATAFORMAT_TYPE_MIDI \ - 0x7364696DL,0x0000,0x0010,0x80,0x00,0x00,0xaa,0x00,0x38,0x9b,0x71 -DEFINE_GUIDSTRUCT("7364696D-0000-0010-8000-00aa00389b71",KSDATAFORMAT_TYPE_MIDI); -#define KSDATAFORMAT_TYPE_MIDI DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_MIDI) - -#define STATIC_KSDATAFORMAT_SUBTYPE_MIDI \ - 0x1D262760L,0xE957,0x11CF,0xA5,0xD6,0x28,0xDB,0x04,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("1D262760-E957-11CF-A5D6-28DB04C10000",KSDATAFORMAT_SUBTYPE_MIDI); -#define KSDATAFORMAT_SUBTYPE_MIDI DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_MIDI) - -#define STATIC_KSDATAFORMAT_SUBTYPE_MIDI_BUS \ - 0x2CA15FA0L,0x6CFE,0x11CF,0xA5,0xD6,0x28,0xDB,0x04,0xC1,0x00,0x00 -DEFINE_GUIDSTRUCT("2CA15FA0-6CFE-11CF-A5D6-28DB04C10000",KSDATAFORMAT_SUBTYPE_MIDI_BUS); -#define KSDATAFORMAT_SUBTYPE_MIDI_BUS DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_MIDI_BUS) - -#define STATIC_KSDATAFORMAT_SUBTYPE_RIFFMIDI \ - 0x4995DAF0L,0x9EE6,0x11D0,0xA4,0x0E,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("4995DAF0-9EE6-11D0-A40E-00A0C9223196",KSDATAFORMAT_SUBTYPE_RIFFMIDI); -#define KSDATAFORMAT_SUBTYPE_RIFFMIDI 
DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_RIFFMIDI) - -typedef struct { - ULONG TimeDeltaMs; - - ULONG ByteCount; -} KSMUSICFORMAT,*PKSMUSICFORMAT; - -#define STATIC_KSDATAFORMAT_TYPE_STANDARD_ELEMENTARY_STREAM \ - 0x36523b11L,0x8ee5,0x11d1,0x8c,0xa3,0x00,0x60,0xb0,0x57,0x66,0x4a -DEFINE_GUIDSTRUCT("36523B11-8EE5-11d1-8CA3-0060B057664A",KSDATAFORMAT_TYPE_STANDARD_ELEMENTARY_STREAM); -#define KSDATAFORMAT_TYPE_STANDARD_ELEMENTARY_STREAM DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_STANDARD_ELEMENTARY_STREAM) - -#define STATIC_KSDATAFORMAT_TYPE_STANDARD_PES_PACKET \ - 0x36523b12L,0x8ee5,0x11d1,0x8c,0xa3,0x00,0x60,0xb0,0x57,0x66,0x4a -DEFINE_GUIDSTRUCT("36523B12-8EE5-11d1-8CA3-0060B057664A",KSDATAFORMAT_TYPE_STANDARD_PES_PACKET); -#define KSDATAFORMAT_TYPE_STANDARD_PES_PACKET DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_STANDARD_PES_PACKET) - -#define STATIC_KSDATAFORMAT_TYPE_STANDARD_PACK_HEADER \ - 0x36523b13L,0x8ee5,0x11d1,0x8c,0xa3,0x00,0x60,0xb0,0x57,0x66,0x4a -DEFINE_GUIDSTRUCT("36523B13-8EE5-11d1-8CA3-0060B057664A",KSDATAFORMAT_TYPE_STANDARD_PACK_HEADER); -#define KSDATAFORMAT_TYPE_STANDARD_PACK_HEADER DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_STANDARD_PACK_HEADER) - -#define STATIC_KSDATAFORMAT_SUBTYPE_STANDARD_MPEG1_VIDEO \ - 0x36523b21L,0x8ee5,0x11d1,0x8c,0xa3,0x00,0x60,0xb0,0x57,0x66,0x4a -DEFINE_GUIDSTRUCT("36523B21-8EE5-11d1-8CA3-0060B057664A",KSDATAFORMAT_SUBTYPE_STANDARD_MPEG1_VIDEO); -#define KSDATAFORMAT_SUBTYPE_STANDARD_MPEG1_VIDEO DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_STANDARD_MPEG1_VIDEO) - -#define STATIC_KSDATAFORMAT_SUBTYPE_STANDARD_MPEG1_AUDIO \ - 0x36523b22L,0x8ee5,0x11d1,0x8c,0xa3,0x00,0x60,0xb0,0x57,0x66,0x4a -DEFINE_GUIDSTRUCT("36523B22-8EE5-11d1-8CA3-0060B057664A",KSDATAFORMAT_SUBTYPE_STANDARD_MPEG1_AUDIO); -#define KSDATAFORMAT_SUBTYPE_STANDARD_MPEG1_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_STANDARD_MPEG1_AUDIO) - -#define STATIC_KSDATAFORMAT_SUBTYPE_STANDARD_MPEG2_VIDEO \ - 0x36523b23L,0x8ee5,0x11d1,0x8c,0xa3,0x00,0x60,0xb0,0x57,0x66,0x4a -DEFINE_GUIDSTRUCT("36523B23-8EE5-11d1-8CA3-0060B057664A",KSDATAFORMAT_SUBTYPE_STANDARD_MPEG2_VIDEO); -#define KSDATAFORMAT_SUBTYPE_STANDARD_MPEG2_VIDEO DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_STANDARD_MPEG2_VIDEO) - -#define STATIC_KSDATAFORMAT_SUBTYPE_STANDARD_MPEG2_AUDIO \ - 0x36523b24L,0x8ee5,0x11d1,0x8c,0xa3,0x00,0x60,0xb0,0x57,0x66,0x4a -DEFINE_GUIDSTRUCT("36523B24-8EE5-11d1-8CA3-0060B057664A",KSDATAFORMAT_SUBTYPE_STANDARD_MPEG2_AUDIO); -#define KSDATAFORMAT_SUBTYPE_STANDARD_MPEG2_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_STANDARD_MPEG2_AUDIO) - -#define STATIC_KSDATAFORMAT_SUBTYPE_STANDARD_AC3_AUDIO \ - 0x36523b25L,0x8ee5,0x11d1,0x8c,0xa3,0x00,0x60,0xb0,0x57,0x66,0x4a -DEFINE_GUIDSTRUCT("36523B25-8EE5-11d1-8CA3-0060B057664A",KSDATAFORMAT_SUBTYPE_STANDARD_AC3_AUDIO); -#define KSDATAFORMAT_SUBTYPE_STANDARD_AC3_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_STANDARD_AC3_AUDIO) - -#define STATIC_KSDATAFORMAT_SPECIFIER_DIALECT_MPEG1_VIDEO \ - 0x36523b31L,0x8ee5,0x11d1,0x8c,0xa3,0x00,0x60,0xb0,0x57,0x66,0x4a -DEFINE_GUIDSTRUCT("36523B31-8EE5-11d1-8CA3-0060B057664A",KSDATAFORMAT_SPECIFIER_DIALECT_MPEG1_VIDEO); -#define KSDATAFORMAT_SPECIFIER_DIALECT_MPEG1_VIDEO DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_DIALECT_MPEG1_VIDEO) - -#define STATIC_KSDATAFORMAT_SPECIFIER_DIALECT_MPEG1_AUDIO \ - 0x36523b32L,0x8ee5,0x11d1,0x8c,0xa3,0x00,0x60,0xb0,0x57,0x66,0x4a -DEFINE_GUIDSTRUCT("36523B32-8EE5-11d1-8CA3-0060B057664A",KSDATAFORMAT_SPECIFIER_DIALECT_MPEG1_AUDIO); -#define KSDATAFORMAT_SPECIFIER_DIALECT_MPEG1_AUDIO 
DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_DIALECT_MPEG1_AUDIO) - -#define STATIC_KSDATAFORMAT_SPECIFIER_DIALECT_MPEG2_VIDEO \ - 0x36523b33L,0x8ee5,0x11d1,0x8c,0xa3,0x00,0x60,0xb0,0x57,0x66,0x4a -DEFINE_GUIDSTRUCT("36523B33-8EE5-11d1-8CA3-0060B057664A",KSDATAFORMAT_SPECIFIER_DIALECT_MPEG2_VIDEO); -#define KSDATAFORMAT_SPECIFIER_DIALECT_MPEG2_VIDEO DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_DIALECT_MPEG2_VIDEO) - -#define STATIC_KSDATAFORMAT_SPECIFIER_DIALECT_MPEG2_AUDIO \ - 0x36523b34L,0x8ee5,0x11d1,0x8c,0xa3,0x00,0x60,0xb0,0x57,0x66,0x4a -DEFINE_GUIDSTRUCT("36523B34-8EE5-11d1-8CA3-0060B057664A",KSDATAFORMAT_SPECIFIER_DIALECT_MPEG2_AUDIO); -#define KSDATAFORMAT_SPECIFIER_DIALECT_MPEG2_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_DIALECT_MPEG2_AUDIO) - -#define STATIC_KSDATAFORMAT_SPECIFIER_DIALECT_AC3_AUDIO \ - 0x36523b35L,0x8ee5,0x11d1,0x8c,0xa3,0x00,0x60,0xb0,0x57,0x66,0x4a -DEFINE_GUIDSTRUCT("36523B35-8EE5-11d1-8CA3-0060B057664A",KSDATAFORMAT_SPECIFIER_DIALECT_AC3_AUDIO); -#define KSDATAFORMAT_SPECIFIER_DIALECT_AC3_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_DIALECT_AC3_AUDIO) - -#define STATIC_KSDATAFORMAT_SUBTYPE_DSS_VIDEO \ - 0xa0af4f81L,0xe163,0x11d0,0xba,0xd9,0x00,0x60,0x97,0x44,0x11,0x1a -DEFINE_GUIDSTRUCT("a0af4f81-e163-11d0-bad9-00609744111a",KSDATAFORMAT_SUBTYPE_DSS_VIDEO); -#define KSDATAFORMAT_SUBTYPE_DSS_VIDEO DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_DSS_VIDEO) - -#define STATIC_KSDATAFORMAT_SUBTYPE_DSS_AUDIO \ - 0xa0af4f82L,0xe163,0x11d0,0xba,0xd9,0x00,0x60,0x97,0x44,0x11,0x1a -DEFINE_GUIDSTRUCT("a0af4f82-e163-11d0-bad9-00609744111a",KSDATAFORMAT_SUBTYPE_DSS_AUDIO); -#define KSDATAFORMAT_SUBTYPE_DSS_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_DSS_AUDIO) - -#define STATIC_KSDATAFORMAT_SUBTYPE_MPEG1Packet \ - 0xe436eb80,0x524f,0x11ce,0x9f,0x53,0x00,0x20,0xaf,0x0b,0xa7,0x70 -DEFINE_GUIDSTRUCT("e436eb80-524f-11ce-9F53-0020af0ba770",KSDATAFORMAT_SUBTYPE_MPEG1Packet); -#define KSDATAFORMAT_SUBTYPE_MPEG1Packet DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_MPEG1Packet) - -#define STATIC_KSDATAFORMAT_SUBTYPE_MPEG1Payload \ - 0xe436eb81,0x524f,0x11ce,0x9f,0x53,0x00,0x20,0xaf,0x0b,0xa7,0x70 -DEFINE_GUIDSTRUCT("e436eb81-524f-11ce-9F53-0020af0ba770",KSDATAFORMAT_SUBTYPE_MPEG1Payload); -#define KSDATAFORMAT_SUBTYPE_MPEG1Payload DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_MPEG1Payload) - -#define STATIC_KSDATAFORMAT_SUBTYPE_MPEG1Video \ - 0xe436eb86,0x524f,0x11ce,0x9f,0x53,0x00,0x20,0xaf,0x0b,0xa7,0x70 -DEFINE_GUIDSTRUCT("e436eb86-524f-11ce-9f53-0020af0ba770",KSDATAFORMAT_SUBTYPE_MPEG1Video); -#define KSDATAFORMAT_SUBTYPE_MPEG1Video DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_MPEG1Video) - -#define STATIC_KSDATAFORMAT_SPECIFIER_MPEG1_VIDEO \ - 0x05589f82L,0xc356,0x11ce,0xbf,0x01,0x00,0xaa,0x00,0x55,0x59,0x5a -DEFINE_GUIDSTRUCT("05589f82-c356-11ce-bf01-00aa0055595a",KSDATAFORMAT_SPECIFIER_MPEG1_VIDEO); -#define KSDATAFORMAT_SPECIFIER_MPEG1_VIDEO DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_MPEG1_VIDEO) - -#define STATIC_KSDATAFORMAT_TYPE_MPEG2_PES \ - 0xe06d8020L,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d8020-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_TYPE_MPEG2_PES); -#define KSDATAFORMAT_TYPE_MPEG2_PES DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_MPEG2_PES) - -#define STATIC_KSDATAFORMAT_TYPE_MPEG2_PROGRAM \ - 0xe06d8022L,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d8022-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_TYPE_MPEG2_PROGRAM); -#define KSDATAFORMAT_TYPE_MPEG2_PROGRAM DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_MPEG2_PROGRAM) - -#define 
STATIC_KSDATAFORMAT_TYPE_MPEG2_TRANSPORT \ - 0xe06d8023L,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d8023-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_TYPE_MPEG2_TRANSPORT); -#define KSDATAFORMAT_TYPE_MPEG2_TRANSPORT DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_MPEG2_TRANSPORT) - -#define STATIC_KSDATAFORMAT_SUBTYPE_MPEG2_VIDEO \ - 0xe06d8026L,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d8026-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_SUBTYPE_MPEG2_VIDEO); -#define KSDATAFORMAT_SUBTYPE_MPEG2_VIDEO DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_MPEG2_VIDEO) - -#define STATIC_KSDATAFORMAT_SPECIFIER_MPEG2_VIDEO \ - 0xe06d80e3L,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d80e3-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_SPECIFIER_MPEG2_VIDEO); -#define KSDATAFORMAT_SPECIFIER_MPEG2_VIDEO DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_MPEG2_VIDEO) - -#define STATIC_KSPROPSETID_Mpeg2Vid \ - 0xC8E11B60L,0x0CC9,0x11D0,0xBD,0x69,0x00,0x35,0x05,0xC1,0x03,0xA9 -DEFINE_GUIDSTRUCT("C8E11B60-0CC9-11D0-BD69-003505C103A9",KSPROPSETID_Mpeg2Vid); -#define KSPROPSETID_Mpeg2Vid DEFINE_GUIDNAMED(KSPROPSETID_Mpeg2Vid) - -typedef enum { - KSPROPERTY_MPEG2VID_MODES, - KSPROPERTY_MPEG2VID_CUR_MODE, - KSPROPERTY_MPEG2VID_4_3_RECT, - KSPROPERTY_MPEG2VID_16_9_RECT, - KSPROPERTY_MPEG2VID_16_9_PANSCAN -} KSPROPERTY_MPEG2VID; - -#define KSMPEGVIDMODE_PANSCAN 0x0001 -#define KSMPEGVIDMODE_LTRBOX 0x0002 -#define KSMPEGVIDMODE_SCALE 0x0004 - -typedef struct _KSMPEGVID_RECT { - ULONG StartX; - ULONG StartY; - ULONG EndX; - ULONG EndY; -} KSMPEGVID_RECT,*PKSMPEGVID_RECT; - -#define STATIC_KSDATAFORMAT_SUBTYPE_MPEG2_AUDIO \ - 0xe06d802bL,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d802b-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_SUBTYPE_MPEG2_AUDIO); -#define KSDATAFORMAT_SUBTYPE_MPEG2_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_MPEG2_AUDIO) - -#define STATIC_KSDATAFORMAT_SPECIFIER_MPEG2_AUDIO \ - 0xe06d80e5L,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d80e5-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_SPECIFIER_MPEG2_AUDIO); -#define KSDATAFORMAT_SPECIFIER_MPEG2_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_MPEG2_AUDIO) - -#define STATIC_KSDATAFORMAT_SUBTYPE_LPCM_AUDIO \ - 0xe06d8032L,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d8032-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_SUBTYPE_LPCM_AUDIO); -#define KSDATAFORMAT_SUBTYPE_LPCM_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_LPCM_AUDIO) - -#define STATIC_KSDATAFORMAT_SPECIFIER_LPCM_AUDIO \ - 0xe06d80e6L,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d80e6-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_SPECIFIER_LPCM_AUDIO); -#define KSDATAFORMAT_SPECIFIER_LPCM_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_LPCM_AUDIO) - -#define STATIC_KSDATAFORMAT_SUBTYPE_AC3_AUDIO \ - 0xe06d802cL,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d802c-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_SUBTYPE_AC3_AUDIO); -#define KSDATAFORMAT_SUBTYPE_AC3_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_AC3_AUDIO) - -#define STATIC_KSDATAFORMAT_SPECIFIER_AC3_AUDIO \ - 0xe06d80e4L,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d80e4-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_SPECIFIER_AC3_AUDIO); -#define KSDATAFORMAT_SPECIFIER_AC3_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_AC3_AUDIO) - -#define STATIC_KSPROPSETID_AC3 \ - 
0xBFABE720L,0x6E1F,0x11D0,0xBC,0xF2,0x44,0x45,0x53,0x54,0x00,0x00 -DEFINE_GUIDSTRUCT("BFABE720-6E1F-11D0-BCF2-444553540000",KSPROPSETID_AC3); -#define KSPROPSETID_AC3 DEFINE_GUIDNAMED(KSPROPSETID_AC3) - -typedef enum { - KSPROPERTY_AC3_ERROR_CONCEALMENT = 1, - KSPROPERTY_AC3_ALTERNATE_AUDIO, - KSPROPERTY_AC3_DOWNMIX, - KSPROPERTY_AC3_BIT_STREAM_MODE, - KSPROPERTY_AC3_DIALOGUE_LEVEL, - KSPROPERTY_AC3_LANGUAGE_CODE, - KSPROPERTY_AC3_ROOM_TYPE -} KSPROPERTY_AC3; - -typedef struct { - WINBOOL fRepeatPreviousBlock; - WINBOOL fErrorInCurrentBlock; -} KSAC3_ERROR_CONCEALMENT,*PKSAC3_ERROR_CONCEALMENT; - -typedef struct { - WINBOOL fStereo; - ULONG DualMode; -} KSAC3_ALTERNATE_AUDIO,*PKSAC3_ALTERNATE_AUDIO; - -#define KSAC3_ALTERNATE_AUDIO_1 1 -#define KSAC3_ALTERNATE_AUDIO_2 2 -#define KSAC3_ALTERNATE_AUDIO_BOTH 3 - -typedef struct { - WINBOOL fDownMix; - WINBOOL fDolbySurround; -} KSAC3_DOWNMIX,*PKSAC3_DOWNMIX; - -typedef struct { - LONG BitStreamMode; -} KSAC3_BIT_STREAM_MODE,*PKSAC3_BIT_STREAM_MODE; - -#define KSAC3_SERVICE_MAIN_AUDIO 0 -#define KSAC3_SERVICE_NO_DIALOG 1 -#define KSAC3_SERVICE_VISUALLY_IMPAIRED 2 -#define KSAC3_SERVICE_HEARING_IMPAIRED 3 -#define KSAC3_SERVICE_DIALOG_ONLY 4 -#define KSAC3_SERVICE_COMMENTARY 5 -#define KSAC3_SERVICE_EMERGENCY_FLASH 6 -#define KSAC3_SERVICE_VOICE_OVER 7 - -typedef struct { - ULONG DialogueLevel; -} KSAC3_DIALOGUE_LEVEL,*PKSAC3_DIALOGUE_LEVEL; - -typedef struct { - WINBOOL fLargeRoom; -} KSAC3_ROOM_TYPE,*PKSAC3_ROOM_TYPE; - -#define STATIC_KSDATAFORMAT_SUBTYPE_DTS_AUDIO \ - 0xe06d8033L,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d8033-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_SUBTYPE_DTS_AUDIO); -#define KSDATAFORMAT_SUBTYPE_DTS_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_DTS_AUDIO) - -#define STATIC_KSDATAFORMAT_SUBTYPE_SDDS_AUDIO \ - 0xe06d8034L,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d8034-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_SUBTYPE_SDDS_AUDIO); -#define KSDATAFORMAT_SUBTYPE_SDDS_AUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_SDDS_AUDIO) - -#define STATIC_KSPROPSETID_AudioDecoderOut \ - 0x6ca6e020L,0x43bd,0x11d0,0xbd,0x6a,0x00,0x35,0x05,0xc1,0x03,0xa9 -DEFINE_GUIDSTRUCT("6ca6e020-43bd-11d0-bd6a-003505c103a9",KSPROPSETID_AudioDecoderOut); -#define KSPROPSETID_AudioDecoderOut DEFINE_GUIDNAMED(KSPROPSETID_AudioDecoderOut) - -typedef enum { - KSPROPERTY_AUDDECOUT_MODES, - KSPROPERTY_AUDDECOUT_CUR_MODE -} KSPROPERTY_AUDDECOUT; - -#define KSAUDDECOUTMODE_STEREO_ANALOG 0x0001 -#define KSAUDDECOUTMODE_PCM_51 0x0002 -#define KSAUDDECOUTMODE_SPDIFF 0x0004 - -#define STATIC_KSDATAFORMAT_SUBTYPE_SUBPICTURE \ - 0xe06d802dL,0xdb46,0x11cf,0xb4,0xd1,0x00,0x80,0x5f,0x6c,0xbb,0xea -DEFINE_GUIDSTRUCT("e06d802d-db46-11cf-b4d1-00805f6cbbea",KSDATAFORMAT_SUBTYPE_SUBPICTURE); -#define KSDATAFORMAT_SUBTYPE_SUBPICTURE DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_SUBPICTURE) - -#define STATIC_KSPROPSETID_DvdSubPic \ - 0xac390460L,0x43af,0x11d0,0xbd,0x6a,0x00,0x35,0x05,0xc1,0x03,0xa9 -DEFINE_GUIDSTRUCT("ac390460-43af-11d0-bd6a-003505c103a9",KSPROPSETID_DvdSubPic); -#define KSPROPSETID_DvdSubPic DEFINE_GUIDNAMED(KSPROPSETID_DvdSubPic) - -typedef enum { - KSPROPERTY_DVDSUBPIC_PALETTE, - KSPROPERTY_DVDSUBPIC_HLI, - KSPROPERTY_DVDSUBPIC_COMPOSIT_ON -} KSPROPERTY_DVDSUBPIC; - -typedef struct _KS_DVD_YCrCb { - UCHAR Reserved; - UCHAR Y; - UCHAR Cr; - UCHAR Cb; -} KS_DVD_YCrCb,*PKS_DVD_YCrCb; - -typedef struct _KS_DVD_YUV { - UCHAR Reserved; - UCHAR Y; - UCHAR V; - UCHAR U; -} KS_DVD_YUV,*PKS_DVD_YUV; - -typedef 
struct _KSPROPERTY_SPPAL { - KS_DVD_YUV sppal[16]; -} KSPROPERTY_SPPAL,*PKSPROPERTY_SPPAL; - -typedef struct _KS_COLCON { - UCHAR emph1col:4; - UCHAR emph2col:4; - UCHAR backcol:4; - UCHAR patcol:4; - UCHAR emph1con:4; - UCHAR emph2con:4; - UCHAR backcon:4; - UCHAR patcon:4; -} KS_COLCON,*PKS_COLCON; - -typedef struct _KSPROPERTY_SPHLI { - USHORT HLISS; - USHORT Reserved; - ULONG StartPTM; - ULONG EndPTM; - USHORT StartX; - USHORT StartY; - USHORT StopX; - USHORT StopY; - KS_COLCON ColCon; -} KSPROPERTY_SPHLI,*PKSPROPERTY_SPHLI; - -typedef WINBOOL KSPROPERTY_COMPOSIT_ON,*PKSPROPERTY_COMPOSIT_ON; - -#define STATIC_KSPROPSETID_CopyProt \ - 0x0E8A0A40L,0x6AEF,0x11D0,0x9E,0xD0,0x00,0xA0,0x24,0xCA,0x19,0xB3 -DEFINE_GUIDSTRUCT("0E8A0A40-6AEF-11D0-9ED0-00A024CA19B3",KSPROPSETID_CopyProt); -#define KSPROPSETID_CopyProt DEFINE_GUIDNAMED(KSPROPSETID_CopyProt) - -typedef enum { - KSPROPERTY_DVDCOPY_CHLG_KEY = 0x01, - KSPROPERTY_DVDCOPY_DVD_KEY1, - KSPROPERTY_DVDCOPY_DEC_KEY2, - KSPROPERTY_DVDCOPY_TITLE_KEY, - KSPROPERTY_COPY_MACROVISION, - KSPROPERTY_DVDCOPY_REGION, - KSPROPERTY_DVDCOPY_SET_COPY_STATE, - KSPROPERTY_DVDCOPY_DISC_KEY = 0x80 -} KSPROPERTY_COPYPROT; - -typedef struct _KS_DVDCOPY_CHLGKEY { - BYTE ChlgKey[10]; - BYTE Reserved[2]; -} KS_DVDCOPY_CHLGKEY,*PKS_DVDCOPY_CHLGKEY; - -typedef struct _KS_DVDCOPY_BUSKEY { - BYTE BusKey[5]; - BYTE Reserved[1]; -} KS_DVDCOPY_BUSKEY,*PKS_DVDCOPY_BUSKEY; - -typedef struct _KS_DVDCOPY_DISCKEY { - BYTE DiscKey[2048]; -} KS_DVDCOPY_DISCKEY,*PKS_DVDCOPY_DISCKEY; - -typedef struct _KS_DVDCOPY_REGION { - UCHAR Reserved; - UCHAR RegionData; - UCHAR Reserved2[2]; -} KS_DVDCOPY_REGION,*PKS_DVDCOPY_REGION; - -typedef struct _KS_DVDCOPY_TITLEKEY { - ULONG KeyFlags; - ULONG ReservedNT[2]; - UCHAR TitleKey[6]; - UCHAR Reserved[2]; -} KS_DVDCOPY_TITLEKEY,*PKS_DVDCOPY_TITLEKEY; - -typedef struct _KS_COPY_MACROVISION { - ULONG MACROVISIONLevel; -} KS_COPY_MACROVISION,*PKS_COPY_MACROVISION; - -typedef struct _KS_DVDCOPY_SET_COPY_STATE { - ULONG DVDCopyState; -} KS_DVDCOPY_SET_COPY_STATE,*PKS_DVDCOPY_SET_COPY_STATE; - -typedef enum { - KS_DVDCOPYSTATE_INITIALIZE, - KS_DVDCOPYSTATE_INITIALIZE_TITLE, - KS_DVDCOPYSTATE_AUTHENTICATION_NOT_REQUIRED, - KS_DVDCOPYSTATE_AUTHENTICATION_REQUIRED, - KS_DVDCOPYSTATE_DONE -} KS_DVDCOPYSTATE; - -typedef enum { - KS_MACROVISION_DISABLED, - KS_MACROVISION_LEVEL1, - KS_MACROVISION_LEVEL2, - KS_MACROVISION_LEVEL3 -} KS_COPY_MACROVISION_LEVEL,*PKS_COPY_MACROVISION_LEVEL; - -#define KS_DVD_CGMS_RESERVED_MASK 0x00000078 - -#define KS_DVD_CGMS_COPY_PROTECT_MASK 0x00000018 -#define KS_DVD_CGMS_COPY_PERMITTED 0x00000000 -#define KS_DVD_CGMS_COPY_ONCE 0x00000010 -#define KS_DVD_CGMS_NO_COPY 0x00000018 - -#define KS_DVD_COPYRIGHT_MASK 0x00000040 -#define KS_DVD_NOT_COPYRIGHTED 0x00000000 -#define KS_DVD_COPYRIGHTED 0x00000040 - -#define KS_DVD_SECTOR_PROTECT_MASK 0x00000020 -#define KS_DVD_SECTOR_NOT_PROTECTED 0x00000000 -#define KS_DVD_SECTOR_PROTECTED 0x00000020 - -#define STATIC_KSCATEGORY_TVTUNER \ - 0xa799a800L,0xa46d,0x11d0,0xa1,0x8c,0x00,0xa0,0x24,0x01,0xdc,0xd4 -DEFINE_GUIDSTRUCT("a799a800-a46d-11d0-a18c-00a02401dcd4",KSCATEGORY_TVTUNER); -#define KSCATEGORY_TVTUNER DEFINE_GUIDNAMED(KSCATEGORY_TVTUNER) - -#define STATIC_KSCATEGORY_CROSSBAR \ - 0xa799a801L,0xa46d,0x11d0,0xa1,0x8c,0x00,0xa0,0x24,0x01,0xdc,0xd4 -DEFINE_GUIDSTRUCT("a799a801-a46d-11d0-a18c-00a02401dcd4",KSCATEGORY_CROSSBAR); -#define KSCATEGORY_CROSSBAR DEFINE_GUIDNAMED(KSCATEGORY_CROSSBAR) - -#define STATIC_KSCATEGORY_TVAUDIO \ - 
0xa799a802L,0xa46d,0x11d0,0xa1,0x8c,0x00,0xa0,0x24,0x01,0xdc,0xd4 -DEFINE_GUIDSTRUCT("a799a802-a46d-11d0-a18c-00a02401dcd4",KSCATEGORY_TVAUDIO); -#define KSCATEGORY_TVAUDIO DEFINE_GUIDNAMED(KSCATEGORY_TVAUDIO) - -#define STATIC_KSCATEGORY_VPMUX \ - 0xa799a803L,0xa46d,0x11d0,0xa1,0x8c,0x00,0xa0,0x24,0x01,0xdc,0xd4 -DEFINE_GUIDSTRUCT("a799a803-a46d-11d0-a18c-00a02401dcd4",KSCATEGORY_VPMUX); -#define KSCATEGORY_VPMUX DEFINE_GUIDNAMED(KSCATEGORY_VPMUX) - -#define STATIC_KSCATEGORY_VBICODEC \ - 0x07dad660L,0x22f1,0x11d1,0xa9,0xf4,0x00,0xc0,0x4f,0xbb,0xde,0x8f -DEFINE_GUIDSTRUCT("07dad660-22f1-11d1-a9f4-00c04fbbde8f",KSCATEGORY_VBICODEC); -#define KSCATEGORY_VBICODEC DEFINE_GUIDNAMED(KSCATEGORY_VBICODEC) - -#define STATIC_KSDATAFORMAT_SUBTYPE_VPVideo \ - 0x5a9b6a40L,0x1a22,0x11d1,0xba,0xd9,0x0,0x60,0x97,0x44,0x11,0x1a -DEFINE_GUIDSTRUCT("5a9b6a40-1a22-11d1-bad9-00609744111a",KSDATAFORMAT_SUBTYPE_VPVideo); -#define KSDATAFORMAT_SUBTYPE_VPVideo DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_VPVideo) - -#define STATIC_KSDATAFORMAT_SUBTYPE_VPVBI \ - 0x5a9b6a41L,0x1a22,0x11d1,0xba,0xd9,0x0,0x60,0x97,0x44,0x11,0x1a -DEFINE_GUIDSTRUCT("5a9b6a41-1a22-11d1-bad9-00609744111a",KSDATAFORMAT_SUBTYPE_VPVBI); -#define KSDATAFORMAT_SUBTYPE_VPVBI DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_VPVBI) - -#define STATIC_KSDATAFORMAT_SPECIFIER_VIDEOINFO \ - 0x05589f80L,0xc356,0x11ce,0xbf,0x01,0x00,0xaa,0x00,0x55,0x59,0x5a -DEFINE_GUIDSTRUCT("05589f80-c356-11ce-bf01-00aa0055595a",KSDATAFORMAT_SPECIFIER_VIDEOINFO); -#define KSDATAFORMAT_SPECIFIER_VIDEOINFO DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_VIDEOINFO) - -#define STATIC_KSDATAFORMAT_SPECIFIER_VIDEOINFO2 \ - 0xf72a76A0L,0xeb0a,0x11d0,0xac,0xe4,0x00,0x00,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("f72a76A0-eb0a-11d0-ace4-0000c0cc16ba",KSDATAFORMAT_SPECIFIER_VIDEOINFO2); -#define KSDATAFORMAT_SPECIFIER_VIDEOINFO2 DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_VIDEOINFO2) - -#define STATIC_KSDATAFORMAT_TYPE_ANALOGVIDEO \ - 0x0482dde1L,0x7817,0x11cf,0x8a,0x03,0x00,0xaa,0x00,0x6e,0xcb,0x65 -DEFINE_GUIDSTRUCT("0482dde1-7817-11cf-8a03-00aa006ecb65",KSDATAFORMAT_TYPE_ANALOGVIDEO); -#define KSDATAFORMAT_TYPE_ANALOGVIDEO DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_ANALOGVIDEO) - -#define STATIC_KSDATAFORMAT_SPECIFIER_ANALOGVIDEO \ - 0x0482dde0L,0x7817,0x11cf,0x8a,0x03,0x00,0xaa,0x00,0x6e,0xcb,0x65 -DEFINE_GUIDSTRUCT("0482dde0-7817-11cf-8a03-00aa006ecb65",KSDATAFORMAT_SPECIFIER_ANALOGVIDEO); -#define KSDATAFORMAT_SPECIFIER_ANALOGVIDEO DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_ANALOGVIDEO) - -#define STATIC_KSDATAFORMAT_TYPE_ANALOGAUDIO \ - 0x0482dee1L,0x7817,0x11cf,0x8a,0x03,0x00,0xaa,0x00,0x6e,0xcb,0x65 -DEFINE_GUIDSTRUCT("0482DEE1-7817-11cf-8a03-00aa006ecb65",KSDATAFORMAT_TYPE_ANALOGAUDIO); -#define KSDATAFORMAT_TYPE_ANALOGAUDIO DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_ANALOGAUDIO) - -#define STATIC_KSDATAFORMAT_SPECIFIER_VBI \ - 0xf72a76e0L,0xeb0a,0x11d0,0xac,0xe4,0x00,0x00,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("f72a76e0-eb0a-11d0-ace4-0000c0cc16ba",KSDATAFORMAT_SPECIFIER_VBI); -#define KSDATAFORMAT_SPECIFIER_VBI DEFINE_GUIDNAMED(KSDATAFORMAT_SPECIFIER_VBI) - -#define STATIC_KSDATAFORMAT_TYPE_VBI \ - 0xf72a76e1L,0xeb0a,0x11d0,0xac,0xe4,0x00,0x00,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("f72a76e1-eb0a-11d0-ace4-0000c0cc16ba",KSDATAFORMAT_TYPE_VBI); -#define KSDATAFORMAT_TYPE_VBI DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_VBI) - -#define STATIC_KSDATAFORMAT_SUBTYPE_RAW8 \ - 0xca20d9a0,0x3e3e,0x11d1,0x9b,0xf9,0x0,0xc0,0x4f,0xbb,0xde,0xbf -DEFINE_GUIDSTRUCT("ca20d9a0-3e3e-11d1-9bf9-00c04fbbdebf",KSDATAFORMAT_SUBTYPE_RAW8); 
-#define KSDATAFORMAT_SUBTYPE_RAW8 DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_RAW8) - -#define STATIC_KSDATAFORMAT_SUBTYPE_CC \ - 0x33214cc1,0x11f,0x11d2,0xb4,0xb1,0x0,0xa0,0xd1,0x2,0xcf,0xbe -DEFINE_GUIDSTRUCT("33214CC1-011F-11D2-B4B1-00A0D102CFBE",KSDATAFORMAT_SUBTYPE_CC); -#define KSDATAFORMAT_SUBTYPE_CC DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_CC) - -#define STATIC_KSDATAFORMAT_SUBTYPE_NABTS \ - 0xf72a76e2L,0xeb0a,0x11d0,0xac,0xe4,0x00,0x00,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("f72a76e2-eb0a-11d0-ace4-0000c0cc16ba",KSDATAFORMAT_SUBTYPE_NABTS); -#define KSDATAFORMAT_SUBTYPE_NABTS DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_NABTS) - -#define STATIC_KSDATAFORMAT_SUBTYPE_TELETEXT \ - 0xf72a76e3L,0xeb0a,0x11d0,0xac,0xe4,0x00,0x00,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("f72a76e3-eb0a-11d0-ace4-0000c0cc16ba",KSDATAFORMAT_SUBTYPE_TELETEXT); -#define KSDATAFORMAT_SUBTYPE_TELETEXT DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_TELETEXT) - -#define KS_BI_RGB 0L -#define KS_BI_RLE8 1L -#define KS_BI_RLE4 2L -#define KS_BI_BITFIELDS 3L - -typedef struct tagKS_RGBQUAD { - BYTE rgbBlue; - BYTE rgbGreen; - BYTE rgbRed; - BYTE rgbReserved; -} KS_RGBQUAD,*PKS_RGBQUAD; - -#define KS_iPALETTE_COLORS 256 -#define KS_iEGA_COLORS 16 -#define KS_iMASK_COLORS 3 -#define KS_iTRUECOLOR 16 -#define KS_iRED 0 -#define KS_iGREEN 1 -#define KS_iBLUE 2 -#define KS_iPALETTE 8 -#define KS_iMAXBITS 8 -#define KS_SIZE_EGA_PALETTE (KS_iEGA_COLORS *sizeof(KS_RGBQUAD)) -#define KS_SIZE_PALETTE (KS_iPALETTE_COLORS *sizeof(KS_RGBQUAD)) - -typedef struct tagKS_BITMAPINFOHEADER { - DWORD biSize; - LONG biWidth; - LONG biHeight; - WORD biPlanes; - WORD biBitCount; - DWORD biCompression; - DWORD biSizeImage; - LONG biXPelsPerMeter; - LONG biYPelsPerMeter; - DWORD biClrUsed; - DWORD biClrImportant; -} KS_BITMAPINFOHEADER,*PKS_BITMAPINFOHEADER; - -typedef struct tag_KS_TRUECOLORINFO { - DWORD dwBitMasks[KS_iMASK_COLORS]; - KS_RGBQUAD bmiColors[KS_iPALETTE_COLORS]; -} KS_TRUECOLORINFO,*PKS_TRUECOLORINFO; - -#define KS_WIDTHBYTES(bits) ((DWORD)(((bits)+31) & (~31)) / 8) -#define KS_DIBWIDTHBYTES(bi) (DWORD)KS_WIDTHBYTES((DWORD)(bi).biWidth *(DWORD)(bi).biBitCount) -#define KS__DIBSIZE(bi) (KS_DIBWIDTHBYTES(bi) *(DWORD)(bi).biHeight) -#define KS_DIBSIZE(bi) ((bi).biHeight < 0 ? 
(-1)*(KS__DIBSIZE(bi)) : KS__DIBSIZE(bi)) - -typedef LONGLONG REFERENCE_TIME; - -typedef struct tagKS_VIDEOINFOHEADER { - RECT rcSource; - RECT rcTarget; - DWORD dwBitRate; - DWORD dwBitErrorRate; - REFERENCE_TIME AvgTimePerFrame; - KS_BITMAPINFOHEADER bmiHeader; -} KS_VIDEOINFOHEADER,*PKS_VIDEOINFOHEADER; - -typedef struct tagKS_VIDEOINFO { - RECT rcSource; - RECT rcTarget; - DWORD dwBitRate; - DWORD dwBitErrorRate; - REFERENCE_TIME AvgTimePerFrame; - KS_BITMAPINFOHEADER bmiHeader; - __MINGW_EXTENSION union { - KS_RGBQUAD bmiColors[KS_iPALETTE_COLORS]; - DWORD dwBitMasks[KS_iMASK_COLORS]; - KS_TRUECOLORINFO TrueColorInfo; - }; -} KS_VIDEOINFO,*PKS_VIDEOINFO; - -#define KS_SIZE_MASKS (KS_iMASK_COLORS *sizeof(DWORD)) -#define KS_SIZE_PREHEADER (FIELD_OFFSET(KS_VIDEOINFOHEADER,bmiHeader)) - -#define KS_SIZE_VIDEOHEADER(pbmi) ((pbmi)->bmiHeader.biSize + KS_SIZE_PREHEADER) - -typedef struct tagKS_VBIINFOHEADER { - ULONG StartLine; - ULONG EndLine; - ULONG SamplingFrequency; - ULONG MinLineStartTime; - ULONG MaxLineStartTime; - ULONG ActualLineStartTime; - ULONG ActualLineEndTime; - ULONG VideoStandard; - ULONG SamplesPerLine; - ULONG StrideInBytes; - ULONG BufferSize; -} KS_VBIINFOHEADER,*PKS_VBIINFOHEADER; - -#define KS_VBIDATARATE_NABTS (5727272L) -#define KS_VBIDATARATE_CC (503493L) -#define KS_VBISAMPLINGRATE_4X_NABTS ((long)(4*KS_VBIDATARATE_NABTS)) -#define KS_VBISAMPLINGRATE_47X_NABTS ((long)(27000000)) -#define KS_VBISAMPLINGRATE_5X_NABTS ((long)(5*KS_VBIDATARATE_NABTS)) - -#define KS_47NABTS_SCALER (KS_VBISAMPLINGRATE_47X_NABTS/(double)KS_VBIDATARATE_NABTS) - -typedef struct tagKS_AnalogVideoInfo { - RECT rcSource; - RECT rcTarget; - DWORD dwActiveWidth; - DWORD dwActiveHeight; - REFERENCE_TIME AvgTimePerFrame; -} KS_ANALOGVIDEOINFO,*PKS_ANALOGVIDEOINFO; - -#define KS_TVTUNER_CHANGE_BEGIN_TUNE 0x0001L -#define KS_TVTUNER_CHANGE_END_TUNE 0x0002L - -typedef struct tagKS_TVTUNER_CHANGE_INFO { - DWORD dwFlags; - DWORD dwCountryCode; - DWORD dwAnalogVideoStandard; - DWORD dwChannel; -} KS_TVTUNER_CHANGE_INFO,*PKS_TVTUNER_CHANGE_INFO; - -typedef enum { - KS_MPEG2Level_Low, - KS_MPEG2Level_Main, - KS_MPEG2Level_High1440, - KS_MPEG2Level_High -} KS_MPEG2Level; - -typedef enum { - KS_MPEG2Profile_Simple, - KS_MPEG2Profile_Main, - KS_MPEG2Profile_SNRScalable, - KS_MPEG2Profile_SpatiallyScalable, - KS_MPEG2Profile_High -} KS_MPEG2Profile; - -#define KS_INTERLACE_IsInterlaced 0x00000001 -#define KS_INTERLACE_1FieldPerSample 0x00000002 -#define KS_INTERLACE_Field1First 0x00000004 -#define KS_INTERLACE_UNUSED 0x00000008 -#define KS_INTERLACE_FieldPatternMask 0x00000030 -#define KS_INTERLACE_FieldPatField1Only 0x00000000 -#define KS_INTERLACE_FieldPatField2Only 0x00000010 -#define KS_INTERLACE_FieldPatBothRegular 0x00000020 -#define KS_INTERLACE_FieldPatBothIrregular 0x00000030 -#define KS_INTERLACE_DisplayModeMask 0x000000c0 -#define KS_INTERLACE_DisplayModeBobOnly 0x00000000 -#define KS_INTERLACE_DisplayModeWeaveOnly 0x00000040 -#define KS_INTERLACE_DisplayModeBobOrWeave 0x00000080 - -#define KS_MPEG2_DoPanScan 0x00000001 -#define KS_MPEG2_DVDLine21Field1 0x00000002 -#define KS_MPEG2_DVDLine21Field2 0x00000004 -#define KS_MPEG2_SourceIsLetterboxed 0x00000008 -#define KS_MPEG2_FilmCameraMode 0x00000010 -#define KS_MPEG2_LetterboxAnalogOut 0x00000020 -#define KS_MPEG2_DSS_UserData 0x00000040 -#define KS_MPEG2_DVB_UserData 0x00000080 -#define KS_MPEG2_27MhzTimebase 0x00000100 - -typedef struct tagKS_VIDEOINFOHEADER2 { - RECT rcSource; - RECT rcTarget; - DWORD dwBitRate; - DWORD dwBitErrorRate; - 
REFERENCE_TIME AvgTimePerFrame; - DWORD dwInterlaceFlags; - DWORD dwCopyProtectFlags; - DWORD dwPictAspectRatioX; - DWORD dwPictAspectRatioY; - DWORD dwReserved1; - DWORD dwReserved2; - KS_BITMAPINFOHEADER bmiHeader; -} KS_VIDEOINFOHEADER2,*PKS_VIDEOINFOHEADER2; - -typedef struct tagKS_MPEG1VIDEOINFO { - KS_VIDEOINFOHEADER hdr; - DWORD dwStartTimeCode; - DWORD cbSequenceHeader; - BYTE bSequenceHeader[1]; -} KS_MPEG1VIDEOINFO,*PKS_MPEG1VIDEOINFO; - -#define KS_MAX_SIZE_MPEG1_SEQUENCE_INFO 140 -#define KS_SIZE_MPEG1VIDEOINFO(pv) (FIELD_OFFSET(KS_MPEG1VIDEOINFO,bSequenceHeader[0]) + (pv)->cbSequenceHeader) -#define KS_MPEG1_SEQUENCE_INFO(pv) ((const BYTE *)(pv)->bSequenceHeader) - -typedef struct tagKS_MPEGVIDEOINFO2 { - KS_VIDEOINFOHEADER2 hdr; - DWORD dwStartTimeCode; - DWORD cbSequenceHeader; - DWORD dwProfile; - DWORD dwLevel; - DWORD dwFlags; - DWORD bSequenceHeader[1]; -} KS_MPEGVIDEOINFO2,*PKS_MPEGVIDEOINFO2; - -#define KS_SIZE_MPEGVIDEOINFO2(pv) (FIELD_OFFSET(KS_MPEGVIDEOINFO2,bSequenceHeader[0]) + (pv)->cbSequenceHeader) -#define KS_MPEG1_SEQUENCE_INFO(pv) ((const BYTE *)(pv)->bSequenceHeader) - -#define KS_MPEGAUDIOINFO_27MhzTimebase 0x00000001 - -typedef struct tagKS_MPEAUDIOINFO { - DWORD dwFlags; - DWORD dwReserved1; - DWORD dwReserved2; - DWORD dwReserved3; -} KS_MPEGAUDIOINFO,*PKS_MPEGAUDIOINFO; - -typedef struct tagKS_DATAFORMAT_VIDEOINFOHEADER { - KSDATAFORMAT DataFormat; - KS_VIDEOINFOHEADER VideoInfoHeader; -} KS_DATAFORMAT_VIDEOINFOHEADER,*PKS_DATAFORMAT_VIDEOINFOHEADER; - -typedef struct tagKS_DATAFORMAT_VIDEOINFOHEADER2 { - KSDATAFORMAT DataFormat; - KS_VIDEOINFOHEADER2 VideoInfoHeader2; -} KS_DATAFORMAT_VIDEOINFOHEADER2,*PKS_DATAFORMAT_VIDEOINFOHEADER2; - -typedef struct tagKS_DATAFORMAT_VIDEOINFO_PALETTE { - KSDATAFORMAT DataFormat; - KS_VIDEOINFO VideoInfo; -} KS_DATAFORMAT_VIDEOINFO_PALETTE,*PKS_DATAFORMAT_VIDEOINFO_PALETTE; - -typedef struct tagKS_DATAFORMAT_VBIINFOHEADER { - KSDATAFORMAT DataFormat; - KS_VBIINFOHEADER VBIInfoHeader; -} KS_DATAFORMAT_VBIINFOHEADER,*PKS_DATAFORMAT_VBIINFOHEADER; - -typedef struct _KS_VIDEO_STREAM_CONFIG_CAPS { - GUID guid; - ULONG VideoStandard; - SIZE InputSize; - SIZE MinCroppingSize; - SIZE MaxCroppingSize; - int CropGranularityX; - int CropGranularityY; - int CropAlignX; - int CropAlignY; - SIZE MinOutputSize; - SIZE MaxOutputSize; - int OutputGranularityX; - int OutputGranularityY; - int StretchTapsX; - int StretchTapsY; - int ShrinkTapsX; - int ShrinkTapsY; - LONGLONG MinFrameInterval; - LONGLONG MaxFrameInterval; - LONG MinBitsPerSecond; - LONG MaxBitsPerSecond; -} KS_VIDEO_STREAM_CONFIG_CAPS,*PKS_VIDEO_STREAM_CONFIG_CAPS; - -typedef struct tagKS_DATARANGE_VIDEO { - KSDATARANGE DataRange; - WINBOOL bFixedSizeSamples; - WINBOOL bTemporalCompression; - DWORD StreamDescriptionFlags; - DWORD MemoryAllocationFlags; - KS_VIDEO_STREAM_CONFIG_CAPS ConfigCaps; - KS_VIDEOINFOHEADER VideoInfoHeader; -} KS_DATARANGE_VIDEO,*PKS_DATARANGE_VIDEO; - -typedef struct tagKS_DATARANGE_VIDEO2 { - KSDATARANGE DataRange; - WINBOOL bFixedSizeSamples; - WINBOOL bTemporalCompression; - DWORD StreamDescriptionFlags; - DWORD MemoryAllocationFlags; - KS_VIDEO_STREAM_CONFIG_CAPS ConfigCaps; - KS_VIDEOINFOHEADER2 VideoInfoHeader; -} KS_DATARANGE_VIDEO2,*PKS_DATARANGE_VIDEO2; - -typedef struct tagKS_DATARANGE_MPEG1_VIDEO { - KSDATARANGE DataRange; - WINBOOL bFixedSizeSamples; - WINBOOL bTemporalCompression; - DWORD StreamDescriptionFlags; - DWORD MemoryAllocationFlags; - KS_VIDEO_STREAM_CONFIG_CAPS ConfigCaps; - KS_MPEG1VIDEOINFO VideoInfoHeader; -} 
KS_DATARANGE_MPEG1_VIDEO,*PKS_DATARANGE_MPEG1_VIDEO; - -typedef struct tagKS_DATARANGE_MPEG2_VIDEO { - KSDATARANGE DataRange; - WINBOOL bFixedSizeSamples; - WINBOOL bTemporalCompression; - DWORD StreamDescriptionFlags; - DWORD MemoryAllocationFlags; - KS_VIDEO_STREAM_CONFIG_CAPS ConfigCaps; - KS_MPEGVIDEOINFO2 VideoInfoHeader; -} KS_DATARANGE_MPEG2_VIDEO,*PKS_DATARANGE_MPEG2_VIDEO; - -typedef struct tagKS_DATARANGE_VIDEO_PALETTE { - KSDATARANGE DataRange; - WINBOOL bFixedSizeSamples; - WINBOOL bTemporalCompression; - DWORD StreamDescriptionFlags; - DWORD MemoryAllocationFlags; - KS_VIDEO_STREAM_CONFIG_CAPS ConfigCaps; - KS_VIDEOINFO VideoInfo; -} KS_DATARANGE_VIDEO_PALETTE,*PKS_DATARANGE_VIDEO_PALETTE; - -typedef struct tagKS_DATARANGE_VIDEO_VBI { - KSDATARANGE DataRange; - WINBOOL bFixedSizeSamples; - WINBOOL bTemporalCompression; - DWORD StreamDescriptionFlags; - DWORD MemoryAllocationFlags; - KS_VIDEO_STREAM_CONFIG_CAPS ConfigCaps; - KS_VBIINFOHEADER VBIInfoHeader; -} KS_DATARANGE_VIDEO_VBI,*PKS_DATARANGE_VIDEO_VBI; - -typedef struct tagKS_DATARANGE_ANALOGVIDEO { - KSDATARANGE DataRange; - KS_ANALOGVIDEOINFO AnalogVideoInfo; -} KS_DATARANGE_ANALOGVIDEO,*PKS_DATARANGE_ANALOGVIDEO; - -#define KS_VIDEOSTREAM_PREVIEW 0x0001 -#define KS_VIDEOSTREAM_CAPTURE 0x0002 -#define KS_VIDEOSTREAM_VBI 0x0010 -#define KS_VIDEOSTREAM_NABTS 0x0020 -#define KS_VIDEOSTREAM_CC 0x0100 -#define KS_VIDEOSTREAM_EDS 0x0200 -#define KS_VIDEOSTREAM_TELETEXT 0x0400 -#define KS_VIDEOSTREAM_STILL 0x1000 -#define KS_VIDEOSTREAM_IS_VPE 0x8000 - -#define KS_VIDEO_ALLOC_VPE_SYSTEM 0x0001 -#define KS_VIDEO_ALLOC_VPE_DISPLAY 0x0002 -#define KS_VIDEO_ALLOC_VPE_AGP 0x0004 - -#define STATIC_KSPROPSETID_VBICAP_PROPERTIES \ - 0xf162c607,0x7b35,0x496f,0xad,0x7f,0x2d,0xca,0x3b,0x46,0xb7,0x18 -DEFINE_GUIDSTRUCT("F162C607-7B35-496f-AD7F-2DCA3B46B718",KSPROPSETID_VBICAP_PROPERTIES); -#define KSPROPSETID_VBICAP_PROPERTIES DEFINE_GUIDNAMED(KSPROPSETID_VBICAP_PROPERTIES) - -typedef enum { - KSPROPERTY_VBICAP_PROPERTIES_PROTECTION = 0x01 -} KSPROPERTY_VBICAP; - -typedef struct _VBICAP_PROPERTIES_PROTECTION_S { - KSPROPERTY Property; - ULONG StreamIndex; - ULONG Status; -} VBICAP_PROPERTIES_PROTECTION_S,*PVBICAP_PROPERTIES_PROTECTION_S; - -#define KS_VBICAP_PROTECTION_MV_PRESENT 0x0001L -#define KS_VBICAP_PROTECTION_MV_HARDWARE 0x0002L -#define KS_VBICAP_PROTECTION_MV_DETECTED 0x0004L - -#define KS_NABTS_GROUPID_ORIGINAL_CONTENT_BASE 0x800 -#define KS_NABTS_GROUPID_ORIGINAL_CONTENT_ADVERTISER_BASE 0x810 - -#define KS_NABTS_GROUPID_PRODUCTION_COMPANY_CONTENT_BASE 0x820 -#define KS_NABTS_GROUPID_PRODUCTION_COMPANY_ADVERTISER_BASE 0x830 - -#define KS_NABTS_GROUPID_SYNDICATED_SHOW_CONTENT_BASE 0x840 -#define KS_NABTS_GROUPID_SYNDICATED_SHOW_ADVERTISER_BASE 0x850 - -#define KS_NABTS_GROUPID_NETWORK_WIDE_CONTENT_BASE 0x860 -#define KS_NABTS_GROUPID_NETWORK_WIDE_ADVERTISER_BASE 0x870 - -#define KS_NABTS_GROUPID_TELEVISION_STATION_CONTENT_BASE 0x880 -#define KS_NABTS_GROUPID_TELEVISION_STATION_ADVERTISER_BASE 0x890 - -#define KS_NABTS_GROUPID_LOCAL_CABLE_SYSTEM_CONTENT_BASE 0x8A0 -#define KS_NABTS_GROUPID_LOCAL_CABLE_SYSTEM_ADVERTISER_BASE 0x8B0 - -#define KS_NABTS_GROUPID_MICROSOFT_RESERVED_TEST_DATA_BASE 0x8F0 - -#define STATIC_KSDATAFORMAT_TYPE_NABTS \ - 0xe757bca0,0x39ac,0x11d1,0xa9,0xf5,0x0,0xc0,0x4f,0xbb,0xde,0x8f -DEFINE_GUIDSTRUCT("E757BCA0-39AC-11d1-A9F5-00C04FBBDE8F",KSDATAFORMAT_TYPE_NABTS); -#define KSDATAFORMAT_TYPE_NABTS DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_NABTS) - -#define STATIC_KSDATAFORMAT_SUBTYPE_NABTS_FEC \ - 
0xe757bca1,0x39ac,0x11d1,0xa9,0xf5,0x0,0xc0,0x4f,0xbb,0xde,0x8f -DEFINE_GUIDSTRUCT("E757BCA1-39AC-11d1-A9F5-00C04FBBDE8F",KSDATAFORMAT_SUBTYPE_NABTS_FEC); -#define KSDATAFORMAT_SUBTYPE_NABTS_FEC DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_NABTS_FEC) - -#define MAX_NABTS_VBI_LINES_PER_FIELD 11 -#define NABTS_LINES_PER_BUNDLE 16 -#define NABTS_PAYLOAD_PER_LINE 28 -#define NABTS_BYTES_PER_LINE 36 - -typedef struct _NABTSFEC_BUFFER { - ULONG dataSize; - USHORT groupID; - USHORT Reserved; - UCHAR data[NABTS_LINES_PER_BUNDLE *NABTS_PAYLOAD_PER_LINE]; -} NABTSFEC_BUFFER,*PNABTSFEC_BUFFER; - -#define STATIC_KSPROPSETID_VBICodecFiltering \ - 0xcafeb0caL,0x8715,0x11d0,0xbd,0x6a,0x00,0x35,0xc0,0xed,0xba,0xbe -DEFINE_GUIDSTRUCT("cafeb0ca-8715-11d0-bd6a-0035c0edbabe",KSPROPSETID_VBICodecFiltering); -#define KSPROPSETID_VBICodecFiltering DEFINE_GUIDNAMED(KSPROPSETID_VBICodecFiltering) - -typedef enum { - KSPROPERTY_VBICODECFILTERING_SCANLINES_REQUESTED_BIT_ARRAY = 0x01, - KSPROPERTY_VBICODECFILTERING_SCANLINES_DISCOVERED_BIT_ARRAY, - KSPROPERTY_VBICODECFILTERING_SUBSTREAMS_REQUESTED_BIT_ARRAY, - KSPROPERTY_VBICODECFILTERING_SUBSTREAMS_DISCOVERED_BIT_ARRAY, - KSPROPERTY_VBICODECFILTERING_STATISTICS -} KSPROPERTY_VBICODECFILTERING; - -typedef struct _VBICODECFILTERING_SCANLINES { - DWORD DwordBitArray[32]; -} VBICODECFILTERING_SCANLINES,*PVBICODECFILTERING_SCANLINES; - -typedef struct _VBICODECFILTERING_NABTS_SUBSTREAMS { - DWORD SubstreamMask[128]; -} VBICODECFILTERING_NABTS_SUBSTREAMS,*PVBICODECFILTERING_NABTS_SUBSTREAMS; - -typedef struct _VBICODECFILTERING_CC_SUBSTREAMS { - DWORD SubstreamMask; -} VBICODECFILTERING_CC_SUBSTREAMS,*PVBICODECFILTERING_CC_SUBSTREAMS; - -#define KS_CC_SUBSTREAM_ODD 0x0001L -#define KS_CC_SUBSTREAM_EVEN 0x0002L - -#define KS_CC_SUBSTREAM_FIELD1_MASK 0x00F0L -#define KS_CC_SUBSTREAM_SERVICE_CC1 0x0010L -#define KS_CC_SUBSTREAM_SERVICE_CC2 0x0020L -#define KS_CC_SUBSTREAM_SERVICE_T1 0x0040L -#define KS_CC_SUBSTREAM_SERVICE_T2 0x0080L - -#define KS_CC_SUBSTREAM_FIELD2_MASK 0x1F00L -#define KS_CC_SUBSTREAM_SERVICE_CC3 0x0100L -#define KS_CC_SUBSTREAM_SERVICE_CC4 0x0200L -#define KS_CC_SUBSTREAM_SERVICE_T3 0x0400L -#define KS_CC_SUBSTREAM_SERVICE_T4 0x0800L -#define KS_CC_SUBSTREAM_SERVICE_XDS 0x1000L - -#define CC_MAX_HW_DECODE_LINES 12 -typedef struct _CC_BYTE_PAIR { - BYTE Decoded[2]; - USHORT Reserved; -} CC_BYTE_PAIR,*PCC_BYTE_PAIR; - -typedef struct _CC_HW_FIELD { - VBICODECFILTERING_SCANLINES ScanlinesRequested; - ULONG fieldFlags; - LONGLONG PictureNumber; - CC_BYTE_PAIR Lines[CC_MAX_HW_DECODE_LINES]; -} CC_HW_FIELD,*PCC_HW_FIELD; - -#ifndef PACK_PRAGMAS_NOT_SUPPORTED -#include <pshpack1.h> -#endif -typedef struct _NABTS_BUFFER_LINE { - BYTE Confidence; - BYTE Bytes[NABTS_BYTES_PER_LINE]; -} NABTS_BUFFER_LINE,*PNABTS_BUFFER_LINE; - -#define NABTS_BUFFER_PICTURENUMBER_SUPPORT 1 -typedef struct _NABTS_BUFFER { - VBICODECFILTERING_SCANLINES ScanlinesRequested; - LONGLONG PictureNumber; - NABTS_BUFFER_LINE NabtsLines[MAX_NABTS_VBI_LINES_PER_FIELD]; -} NABTS_BUFFER,*PNABTS_BUFFER; -#ifndef PACK_PRAGMAS_NOT_SUPPORTED -#include <poppack.h> -#endif - -#define WST_TVTUNER_CHANGE_BEGIN_TUNE 0x1000L -#define WST_TVTUNER_CHANGE_END_TUNE 0x2000L - -#define MAX_WST_VBI_LINES_PER_FIELD 17 -#define WST_BYTES_PER_LINE 42 - -typedef struct _WST_BUFFER_LINE { - BYTE Confidence; - BYTE Bytes[WST_BYTES_PER_LINE]; -} WST_BUFFER_LINE,*PWST_BUFFER_LINE; - -typedef struct _WST_BUFFER { - VBICODECFILTERING_SCANLINES ScanlinesRequested; - WST_BUFFER_LINE WstLines[MAX_WST_VBI_LINES_PER_FIELD]; -} WST_BUFFER,*PWST_BUFFER; - 
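The VBICODECFILTERING_SCANLINES structure defined just above is a 1024-bit mask (32 DWORDs) in which bit N selects video scan line N for the *_SCANLINES_REQUESTED/_DISCOVERED properties. A minimal sketch of how a client marks one line as requested, assuming the usual bit-N-equals-line-N convention (the helper name and bounds check are illustrative, not part of the header):

/* Illustrative sketch only: set the bit for one scan line in the
   requested-lines mask (32 lines per DWORD, 32 DWORDs = 1024 lines). */
static void RequestVbiScanline(VBICODECFILTERING_SCANLINES *pMask, ULONG Line)
{
    if (Line < 32 * 32)                                /* DwordBitArray has 32 DWORDs */
        pMask->DwordBitArray[Line / 32] |= 1UL << (Line % 32);
}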
-typedef struct _VBICODECFILTERING_STATISTICS_COMMON { - DWORD InputSRBsProcessed; - DWORD OutputSRBsProcessed; - DWORD SRBsIgnored; - DWORD InputSRBsMissing; - DWORD OutputSRBsMissing; - DWORD OutputFailures; - DWORD InternalErrors; - DWORD ExternalErrors; - DWORD InputDiscontinuities; - DWORD DSPFailures; - DWORD TvTunerChanges; - DWORD VBIHeaderChanges; - DWORD LineConfidenceAvg; - DWORD BytesOutput; -} VBICODECFILTERING_STATISTICS_COMMON,*PVBICODECFILTERING_STATISTICS_COMMON; - -typedef struct _VBICODECFILTERING_STATISTICS_COMMON_PIN { - DWORD SRBsProcessed; - DWORD SRBsIgnored; - DWORD SRBsMissing; - DWORD InternalErrors; - DWORD ExternalErrors; - DWORD Discontinuities; - DWORD LineConfidenceAvg; - DWORD BytesOutput; -} VBICODECFILTERING_STATISTICS_COMMON_PIN,*PVBICODECFILTERING_STATISTICS_COMMON_PIN; - -typedef struct _VBICODECFILTERING_STATISTICS_NABTS { - VBICODECFILTERING_STATISTICS_COMMON Common; - DWORD FECBundleBadLines; - DWORD FECQueueOverflows; - DWORD FECCorrectedLines; - DWORD FECUncorrectableLines; - DWORD BundlesProcessed; - DWORD BundlesSent2IP; - DWORD FilteredLines; -} VBICODECFILTERING_STATISTICS_NABTS,*PVBICODECFILTERING_STATISTICS_NABTS; - -typedef struct _VBICODECFILTERING_STATISTICS_NABTS_PIN { - VBICODECFILTERING_STATISTICS_COMMON_PIN Common; -} VBICODECFILTERING_STATISTICS_NABTS_PIN,*PVBICODECFILTERING_STATISTICS_NABTS_PIN; - -typedef struct _VBICODECFILTERING_STATISTICS_CC { - VBICODECFILTERING_STATISTICS_COMMON Common; -} VBICODECFILTERING_STATISTICS_CC,*PVBICODECFILTERING_STATISTICS_CC; - -typedef struct _VBICODECFILTERING_STATISTICS_CC_PIN { - VBICODECFILTERING_STATISTICS_COMMON_PIN Common; -} VBICODECFILTERING_STATISTICS_CC_PIN,*PVBICODECFILTERING_STATISTICS_CC_PIN; - -typedef struct _VBICODECFILTERING_STATISTICS_TELETEXT { - VBICODECFILTERING_STATISTICS_COMMON Common; -} VBICODECFILTERING_STATISTICS_TELETEXT,*PVBICODECFILTERING_STATISTICS_TELETEXT; - -typedef struct _VBICODECFILTERING_STATISTICS_TELETEXT_PIN { - VBICODECFILTERING_STATISTICS_COMMON_PIN Common; -} VBICODECFILTERING_STATISTICS_TELETEXT_PIN,*PVBICODECFILTERING_STATISTICS_TELETEXT_PIN; - -typedef struct { - KSPROPERTY Property; - VBICODECFILTERING_SCANLINES Scanlines; -} KSPROPERTY_VBICODECFILTERING_SCANLINES_S,*PKSPROPERTY_VBICODECFILTERING_SCANLINES_S; - -typedef struct { - KSPROPERTY Property; - VBICODECFILTERING_NABTS_SUBSTREAMS Substreams; -} KSPROPERTY_VBICODECFILTERING_NABTS_SUBSTREAMS_S,*PKSPROPERTY_VBICODECFILTERING_NABTS_SUBSTREAMS_S; - -typedef struct { - KSPROPERTY Property; - VBICODECFILTERING_CC_SUBSTREAMS Substreams; -} KSPROPERTY_VBICODECFILTERING_CC_SUBSTREAMS_S,*PKSPROPERTY_VBICODECFILTERING_CC_SUBSTREAMS_S; - -typedef struct { - KSPROPERTY Property; - VBICODECFILTERING_STATISTICS_COMMON Statistics; -} KSPROPERTY_VBICODECFILTERING_STATISTICS_COMMON_S,*PKSPROPERTY_VBICODECFILTERING_STATISTICS_COMMON_S; - -typedef struct { - KSPROPERTY Property; - VBICODECFILTERING_STATISTICS_COMMON_PIN Statistics; -} KSPROPERTY_VBICODECFILTERING_STATISTICS_COMMON_PIN_S,*PKSPROPERTY_VBICODECFILTERING_STATISTICS_COMMON_PIN_S; - -typedef struct { - KSPROPERTY Property; - VBICODECFILTERING_STATISTICS_NABTS Statistics; -} KSPROPERTY_VBICODECFILTERING_STATISTICS_NABTS_S,*PKSPROPERTY_VBICODECFILTERING_STATISTICS_NABTS_S; - -typedef struct { - KSPROPERTY Property; - VBICODECFILTERING_STATISTICS_NABTS_PIN Statistics; -} KSPROPERTY_VBICODECFILTERING_STATISTICS_NABTS_PIN_S,*PKSPROPERTY_VBICODECFILTERING_STATISTICS_NABTS_PIN_S; - -typedef struct { - KSPROPERTY Property; - VBICODECFILTERING_STATISTICS_CC 
Statistics; -} KSPROPERTY_VBICODECFILTERING_STATISTICS_CC_S,*PKSPROPERTY_VBICODECFILTERING_STATISTICS_CC_S; - -typedef struct { - KSPROPERTY Property; - VBICODECFILTERING_STATISTICS_CC_PIN Statistics; -} KSPROPERTY_VBICODECFILTERING_STATISTICS_CC_PIN_S,*PKSPROPERTY_VBICODECFILTERING_STATISTICS_CC_PIN_S; - -#define STATIC_PINNAME_VIDEO_CAPTURE \ - 0xfb6c4281,0x353,0x11d1,0x90,0x5f,0x0,0x0,0xc0,0xcc,0x16,0xba -#define STATIC_PINNAME_CAPTURE STATIC_PINNAME_VIDEO_CAPTURE -DEFINE_GUIDSTRUCT("FB6C4281-0353-11d1-905F-0000C0CC16BA",PINNAME_VIDEO_CAPTURE); -#define PINNAME_VIDEO_CAPTURE DEFINE_GUIDNAMED(PINNAME_VIDEO_CAPTURE) -#define PINNAME_CAPTURE PINNAME_VIDEO_CAPTURE - -#define STATIC_PINNAME_VIDEO_CC_CAPTURE \ - 0x1aad8061,0x12d,0x11d2,0xb4,0xb1,0x0,0xa0,0xd1,0x2,0xcf,0xbe -#define STATIC_PINNAME_CC_CAPTURE STATIC_PINNAME_VIDEO_CC_CAPTURE -DEFINE_GUIDSTRUCT("1AAD8061-012D-11d2-B4B1-00A0D102CFBE",PINNAME_VIDEO_CC_CAPTURE); -#define PINNAME_VIDEO_CC_CAPTURE DEFINE_GUIDNAMED(PINNAME_VIDEO_CC_CAPTURE) - -#define STATIC_PINNAME_VIDEO_NABTS_CAPTURE \ - 0x29703660,0x498a,0x11d2,0xb4,0xb1,0x0,0xa0,0xd1,0x2,0xcf,0xbe -#define STATIC_PINNAME_NABTS_CAPTURE STATIC_PINNAME_VIDEO_NABTS_CAPTURE -DEFINE_GUIDSTRUCT("29703660-498A-11d2-B4B1-00A0D102CFBE",PINNAME_VIDEO_NABTS_CAPTURE); -#define PINNAME_VIDEO_NABTS_CAPTURE DEFINE_GUIDNAMED(PINNAME_VIDEO_NABTS_CAPTURE) - -#define STATIC_PINNAME_VIDEO_PREVIEW \ - 0xfb6c4282,0x353,0x11d1,0x90,0x5f,0x0,0x0,0xc0,0xcc,0x16,0xba -#define STATIC_PINNAME_PREVIEW STATIC_PINNAME_VIDEO_PREVIEW -DEFINE_GUIDSTRUCT("FB6C4282-0353-11d1-905F-0000C0CC16BA",PINNAME_VIDEO_PREVIEW); -#define PINNAME_VIDEO_PREVIEW DEFINE_GUIDNAMED(PINNAME_VIDEO_PREVIEW) -#define PINNAME_PREVIEW PINNAME_VIDEO_PREVIEW - -#define STATIC_PINNAME_VIDEO_ANALOGVIDEOIN \ - 0xfb6c4283,0x353,0x11d1,0x90,0x5f,0x0,0x0,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("FB6C4283-0353-11d1-905F-0000C0CC16BA",PINNAME_VIDEO_ANALOGVIDEOIN); -#define PINNAME_VIDEO_ANALOGVIDEOIN DEFINE_GUIDNAMED(PINNAME_VIDEO_ANALOGVIDEOIN) - -#define STATIC_PINNAME_VIDEO_VBI \ - 0xfb6c4284,0x353,0x11d1,0x90,0x5f,0x0,0x0,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("FB6C4284-0353-11d1-905F-0000C0CC16BA",PINNAME_VIDEO_VBI); -#define PINNAME_VIDEO_VBI DEFINE_GUIDNAMED(PINNAME_VIDEO_VBI) - -#define STATIC_PINNAME_VIDEO_VIDEOPORT \ - 0xfb6c4285,0x353,0x11d1,0x90,0x5f,0x0,0x0,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("FB6C4285-0353-11d1-905F-0000C0CC16BA",PINNAME_VIDEO_VIDEOPORT); -#define PINNAME_VIDEO_VIDEOPORT DEFINE_GUIDNAMED(PINNAME_VIDEO_VIDEOPORT) - -#define STATIC_PINNAME_VIDEO_NABTS \ - 0xfb6c4286,0x353,0x11d1,0x90,0x5f,0x0,0x0,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("FB6C4286-0353-11d1-905F-0000C0CC16BA",PINNAME_VIDEO_NABTS); -#define PINNAME_VIDEO_NABTS DEFINE_GUIDNAMED(PINNAME_VIDEO_NABTS) - -#define STATIC_PINNAME_VIDEO_EDS \ - 0xfb6c4287,0x353,0x11d1,0x90,0x5f,0x0,0x0,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("FB6C4287-0353-11d1-905F-0000C0CC16BA",PINNAME_VIDEO_EDS); -#define PINNAME_VIDEO_EDS DEFINE_GUIDNAMED(PINNAME_VIDEO_EDS) - -#define STATIC_PINNAME_VIDEO_TELETEXT \ - 0xfb6c4288,0x353,0x11d1,0x90,0x5f,0x0,0x0,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("FB6C4288-0353-11d1-905F-0000C0CC16BA",PINNAME_VIDEO_TELETEXT); -#define PINNAME_VIDEO_TELETEXT DEFINE_GUIDNAMED(PINNAME_VIDEO_TELETEXT) - -#define STATIC_PINNAME_VIDEO_CC \ - 0xfb6c4289,0x353,0x11d1,0x90,0x5f,0x0,0x0,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("FB6C4289-0353-11d1-905F-0000C0CC16BA",PINNAME_VIDEO_CC); -#define PINNAME_VIDEO_CC DEFINE_GUIDNAMED(PINNAME_VIDEO_CC) - -#define 
STATIC_PINNAME_VIDEO_STILL \ - 0xfb6c428A,0x353,0x11d1,0x90,0x5f,0x0,0x0,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("FB6C428A-0353-11d1-905F-0000C0CC16BA",PINNAME_VIDEO_STILL); -#define PINNAME_VIDEO_STILL DEFINE_GUIDNAMED(PINNAME_VIDEO_STILL) - -#define STATIC_PINNAME_VIDEO_TIMECODE \ - 0xfb6c428B,0x353,0x11d1,0x90,0x5f,0x0,0x0,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("FB6C428B-0353-11d1-905F-0000C0CC16BA",PINNAME_VIDEO_TIMECODE); -#define PINNAME_VIDEO_TIMECODE DEFINE_GUIDNAMED(PINNAME_VIDEO_TIMECODE) - -#define STATIC_PINNAME_VIDEO_VIDEOPORT_VBI \ - 0xfb6c428C,0x353,0x11d1,0x90,0x5f,0x0,0x0,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("FB6C428C-0353-11d1-905F-0000C0CC16BA",PINNAME_VIDEO_VIDEOPORT_VBI); -#define PINNAME_VIDEO_VIDEOPORT_VBI DEFINE_GUIDNAMED(PINNAME_VIDEO_VIDEOPORT_VBI) - -#define KS_VIDEO_FLAG_FRAME 0x0000L -#define KS_VIDEO_FLAG_FIELD1 0x0001L -#define KS_VIDEO_FLAG_FIELD2 0x0002L - -#define KS_VIDEO_FLAG_I_FRAME 0x0000L -#define KS_VIDEO_FLAG_P_FRAME 0x0010L -#define KS_VIDEO_FLAG_B_FRAME 0x0020L - -typedef struct tagKS_FRAME_INFO { - ULONG ExtendedHeaderSize; - DWORD dwFrameFlags; - LONGLONG PictureNumber; - LONGLONG DropCount; - HANDLE hDirectDraw; - HANDLE hSurfaceHandle; - RECT DirectDrawRect; - - DWORD Reserved1; - DWORD Reserved2; - DWORD Reserved3; - DWORD Reserved4; -} KS_FRAME_INFO,*PKS_FRAME_INFO; - -#define KS_VBI_FLAG_FIELD1 0x0001L -#define KS_VBI_FLAG_FIELD2 0x0002L - -#define KS_VBI_FLAG_MV_PRESENT 0x0100L -#define KS_VBI_FLAG_MV_HARDWARE 0x0200L -#define KS_VBI_FLAG_MV_DETECTED 0x0400L - -#define KS_VBI_FLAG_TVTUNER_CHANGE 0x0010L -#define KS_VBI_FLAG_VBIINFOHEADER_CHANGE 0x0020L - -typedef struct tagKS_VBI_FRAME_INFO { - ULONG ExtendedHeaderSize; - DWORD dwFrameFlags; - LONGLONG PictureNumber; - LONGLONG DropCount; - DWORD dwSamplingFrequency; - KS_TVTUNER_CHANGE_INFO TvTunerChangeInfo; - KS_VBIINFOHEADER VBIInfoHeader; -} KS_VBI_FRAME_INFO,*PKS_VBI_FRAME_INFO; - -typedef enum -{ - KS_AnalogVideo_None = 0x00000000, - KS_AnalogVideo_NTSC_M = 0x00000001, - KS_AnalogVideo_NTSC_M_J = 0x00000002, - KS_AnalogVideo_NTSC_433 = 0x00000004, - KS_AnalogVideo_PAL_B = 0x00000010, - KS_AnalogVideo_PAL_D = 0x00000020, - KS_AnalogVideo_PAL_G = 0x00000040, - KS_AnalogVideo_PAL_H = 0x00000080, - KS_AnalogVideo_PAL_I = 0x00000100, - KS_AnalogVideo_PAL_M = 0x00000200, - KS_AnalogVideo_PAL_N = 0x00000400, - KS_AnalogVideo_PAL_60 = 0x00000800, - KS_AnalogVideo_SECAM_B = 0x00001000, - KS_AnalogVideo_SECAM_D = 0x00002000, - KS_AnalogVideo_SECAM_G = 0x00004000, - KS_AnalogVideo_SECAM_H = 0x00008000, - KS_AnalogVideo_SECAM_K = 0x00010000, - KS_AnalogVideo_SECAM_K1 = 0x00020000, - KS_AnalogVideo_SECAM_L = 0x00040000, - KS_AnalogVideo_SECAM_L1 = 0x00080000, - KS_AnalogVideo_PAL_N_COMBO = 0x00100000 -} KS_AnalogVideoStandard; - -#define KS_AnalogVideo_NTSC_Mask 0x00000007 -#define KS_AnalogVideo_PAL_Mask 0x00100FF0 -#define KS_AnalogVideo_SECAM_Mask 0x000FF000 - -#define STATIC_PROPSETID_ALLOCATOR_CONTROL \ - 0x53171960,0x148e,0x11d2,0x99,0x79,0x0,0x0,0xc0,0xcc,0x16,0xba -DEFINE_GUIDSTRUCT("53171960-148E-11d2-9979-0000C0CC16BA",PROPSETID_ALLOCATOR_CONTROL); -#define PROPSETID_ALLOCATOR_CONTROL DEFINE_GUIDNAMED(PROPSETID_ALLOCATOR_CONTROL) - -typedef enum { - KSPROPERTY_ALLOCATOR_CONTROL_HONOR_COUNT, - KSPROPERTY_ALLOCATOR_CONTROL_SURFACE_SIZE, - KSPROPERTY_ALLOCATOR_CONTROL_CAPTURE_CAPS, - KSPROPERTY_ALLOCATOR_CONTROL_CAPTURE_INTERLEAVE -} KSPROPERTY_ALLOCATOR_CONTROL; - -typedef struct { - ULONG CX; - ULONG CY; -} 
KSPROPERTY_ALLOCATOR_CONTROL_SURFACE_SIZE_S,*PKSPROPERTY_ALLOCATOR_CONTROL_SURFACE_SIZE_S; - -typedef struct { - ULONG InterleavedCapSupported; -} KSPROPERTY_ALLOCATOR_CONTROL_CAPTURE_CAPS_S,*PKSPROPERTY_ALLOCATOR_CONTROL_CAPTURE_CAPS_S; - -typedef struct { - ULONG InterleavedCapPossible; -} KSPROPERTY_ALLOCATOR_CONTROL_CAPTURE_INTERLEAVE_S,*PKSPROPERTY_ALLOCATOR_CONTROL_CAPTURE_INTERLEAVE_S; - -#define STATIC_PROPSETID_VIDCAP_VIDEOPROCAMP \ - 0xC6E13360L,0x30AC,0x11d0,0xa1,0x8c,0x00,0xA0,0xC9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("C6E13360-30AC-11d0-A18C-00A0C9118956",PROPSETID_VIDCAP_VIDEOPROCAMP); -#define PROPSETID_VIDCAP_VIDEOPROCAMP DEFINE_GUIDNAMED(PROPSETID_VIDCAP_VIDEOPROCAMP) - -typedef enum { - KSPROPERTY_VIDEOPROCAMP_BRIGHTNESS, - KSPROPERTY_VIDEOPROCAMP_CONTRAST, - KSPROPERTY_VIDEOPROCAMP_HUE, - KSPROPERTY_VIDEOPROCAMP_SATURATION, - KSPROPERTY_VIDEOPROCAMP_SHARPNESS, - KSPROPERTY_VIDEOPROCAMP_GAMMA, - KSPROPERTY_VIDEOPROCAMP_COLORENABLE, - KSPROPERTY_VIDEOPROCAMP_WHITEBALANCE, - KSPROPERTY_VIDEOPROCAMP_BACKLIGHT_COMPENSATION, - KSPROPERTY_VIDEOPROCAMP_GAIN, - KSPROPERTY_VIDEOPROCAMP_DIGITAL_MULTIPLIER, - KSPROPERTY_VIDEOPROCAMP_DIGITAL_MULTIPLIER_LIMIT, - KSPROPERTY_VIDEOPROCAMP_WHITEBALANCE_COMPONENT, - KSPROPERTY_VIDEOPROCAMP_POWERLINE_FREQUENCY -} KSPROPERTY_VIDCAP_VIDEOPROCAMP; - -typedef struct { - KSPROPERTY Property; - LONG Value; - ULONG Flags; - ULONG Capabilities; -} KSPROPERTY_VIDEOPROCAMP_S,*PKSPROPERTY_VIDEOPROCAMP_S; - -typedef struct { - KSP_NODE NodeProperty; - LONG Value; - ULONG Flags; - ULONG Capabilities; -} KSPROPERTY_VIDEOPROCAMP_NODE_S,*PKSPROPERTY_VIDEOPROCAMP_NODE_S; - -typedef struct { - KSPROPERTY Property; - LONG Value1; - ULONG Flags; - ULONG Capabilities; - LONG Value2; -} KSPROPERTY_VIDEOPROCAMP_S2,*PKSPROPERTY_VIDEOPROCAMP_S2; - -typedef struct { - KSP_NODE NodeProperty; - LONG Value1; - ULONG Flags; - ULONG Capabilities; - LONG Value2; -} KSPROPERTY_VIDEOPROCAMP_NODE_S2,*PKSPROPERTY_VIDEOPROCAMP_NODE_S2; - -#define KSPROPERTY_VIDEOPROCAMP_FLAGS_AUTO 0X0001L -#define KSPROPERTY_VIDEOPROCAMP_FLAGS_MANUAL 0X0002L - -#define STATIC_PROPSETID_VIDCAP_SELECTOR \ - 0x1ABDAECA,0x68B6,0x4F83,0x93,0x71,0xB4,0x13,0x90,0x7C,0x7B,0x9F -DEFINE_GUIDSTRUCT("1ABDAECA-68B6-4F83-9371-B413907C7B9F",PROPSETID_VIDCAP_SELECTOR); -#define PROPSETID_VIDCAP_SELECTOR DEFINE_GUIDNAMED(PROPSETID_VIDCAP_SELECTOR) - -typedef enum { - KSPROPERTY_SELECTOR_SOURCE_NODE_ID, - KSPROPERTY_SELECTOR_NUM_SOURCES -} KSPROPERTY_VIDCAP_SELECTOR,*PKSPROPERTY_VIDCAP_SELECTOR; - -typedef struct { - KSPROPERTY Property; - LONG Value; - ULONG Flags; - ULONG Capabilities; -} KSPROPERTY_SELECTOR_S,*PKSPROPERTY_SELECTOR_S; - -typedef struct { - KSP_NODE NodeProperty; - LONG Value; - ULONG Flags; - ULONG Capabilities; -} KSPROPERTY_SELECTOR_NODE_S,*PKSPROPERTY_SELECTOR_NODE_S; - -#define STATIC_PROPSETID_TUNER \ - 0x6a2e0605L,0x28e4,0x11d0,0xa1,0x8c,0x00,0xa0,0xc9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("6a2e0605-28e4-11d0-a18c-00a0c9118956",PROPSETID_TUNER); -#define PROPSETID_TUNER DEFINE_GUIDNAMED(PROPSETID_TUNER) - -typedef enum { - KSPROPERTY_TUNER_CAPS, - KSPROPERTY_TUNER_MODE_CAPS, - KSPROPERTY_TUNER_MODE, - KSPROPERTY_TUNER_STANDARD, - KSPROPERTY_TUNER_FREQUENCY, - KSPROPERTY_TUNER_INPUT, - KSPROPERTY_TUNER_STATUS, - KSPROPERTY_TUNER_IF_MEDIUM -} KSPROPERTY_TUNER; - -typedef enum { - KSPROPERTY_TUNER_MODE_TV = 0X0001, - KSPROPERTY_TUNER_MODE_FM_RADIO = 0X0002, - KSPROPERTY_TUNER_MODE_AM_RADIO = 0X0004, - KSPROPERTY_TUNER_MODE_DSS = 0X0008, - KSPROPERTY_TUNER_MODE_ATSC = 0X0010 -} 
KSPROPERTY_TUNER_MODES; - -typedef enum { - KS_TUNER_TUNING_EXACT = 1, - KS_TUNER_TUNING_FINE, - KS_TUNER_TUNING_COARSE -} KS_TUNER_TUNING_FLAGS; - -typedef enum { - KS_TUNER_STRATEGY_PLL = 0X01, - KS_TUNER_STRATEGY_SIGNAL_STRENGTH = 0X02, - KS_TUNER_STRATEGY_DRIVER_TUNES = 0X04 -} KS_TUNER_STRATEGY; - -typedef struct { - KSPROPERTY Property; - ULONG ModesSupported; - KSPIN_MEDIUM VideoMedium; - KSPIN_MEDIUM TVAudioMedium; - KSPIN_MEDIUM RadioAudioMedium; -} KSPROPERTY_TUNER_CAPS_S,*PKSPROPERTY_TUNER_CAPS_S; - -typedef struct { - KSPROPERTY Property; - KSPIN_MEDIUM IFMedium; -} KSPROPERTY_TUNER_IF_MEDIUM_S,*PKSPROPERTY_TUNER_IF_MEDIUM_S; - -typedef struct { - KSPROPERTY Property; - ULONG Mode; - ULONG StandardsSupported; - ULONG MinFrequency; - ULONG MaxFrequency; - ULONG TuningGranularity; - ULONG NumberOfInputs; - ULONG SettlingTime; - ULONG Strategy; -} KSPROPERTY_TUNER_MODE_CAPS_S,*PKSPROPERTY_TUNER_MODE_CAPS_S; - -typedef struct { - KSPROPERTY Property; - ULONG Mode; -} KSPROPERTY_TUNER_MODE_S,*PKSPROPERTY_TUNER_MODE_S; - -typedef struct { - KSPROPERTY Property; - ULONG Frequency; - ULONG LastFrequency; - ULONG TuningFlags; - ULONG VideoSubChannel; - ULONG AudioSubChannel; - ULONG Channel; - ULONG Country; -} KSPROPERTY_TUNER_FREQUENCY_S,*PKSPROPERTY_TUNER_FREQUENCY_S; - -typedef struct { - KSPROPERTY Property; - ULONG Standard; -} KSPROPERTY_TUNER_STANDARD_S,*PKSPROPERTY_TUNER_STANDARD_S; - -typedef struct { - KSPROPERTY Property; - ULONG InputIndex; -} KSPROPERTY_TUNER_INPUT_S,*PKSPROPERTY_TUNER_INPUT_S; - -typedef struct { - KSPROPERTY Property; - ULONG CurrentFrequency; - ULONG PLLOffset; - ULONG SignalStrength; - ULONG Busy; -} KSPROPERTY_TUNER_STATUS_S,*PKSPROPERTY_TUNER_STATUS_S; - -#define STATIC_EVENTSETID_TUNER \ - 0x6a2e0606L,0x28e4,0x11d0,0xa1,0x8c,0x00,0xa0,0xc9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("6a2e0606-28e4-11d0-a18c-00a0c9118956",EVENTSETID_TUNER); -#define EVENTSETID_TUNER DEFINE_GUIDNAMED(EVENTSETID_TUNER) - -typedef enum { - KSEVENT_TUNER_CHANGED -} KSEVENT_TUNER; - -#define STATIC_KSNODETYPE_VIDEO_STREAMING \ - 0xDFF229E1L,0xF70F,0x11D0,0xB9,0x17,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("DFF229E1-F70F-11D0-B917-00A0C9223196",KSNODETYPE_VIDEO_STREAMING); -#define KSNODETYPE_VIDEO_STREAMING DEFINE_GUIDNAMED(KSNODETYPE_VIDEO_STREAMING) - -#define STATIC_KSNODETYPE_VIDEO_INPUT_TERMINAL \ - 0xDFF229E2L,0xF70F,0x11D0,0xB9,0x17,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("DFF229E2-F70F-11D0-B917-00A0C9223196",KSNODETYPE_VIDEO_INPUT_TERMINAL); -#define KSNODETYPE_VIDEO_INPUT_TERMINAL DEFINE_GUIDNAMED(KSNODETYPE_VIDEO_INPUT_TERMINAL) - -#define STATIC_KSNODETYPE_VIDEO_OUTPUT_TERMINAL \ - 0xDFF229E3L,0xF70F,0x11D0,0xB9,0x17,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("DFF229E3-F70F-11D0-B917-00A0C9223196",KSNODETYPE_VIDEO_OUTPUT_TERMINAL); -#define KSNODETYPE_VIDEO_OUTPUT_TERMINAL DEFINE_GUIDNAMED(KSNODETYPE_VIDEO_OUTPUT_TERMINAL) - -#define STATIC_KSNODETYPE_VIDEO_SELECTOR \ - 0xDFF229E4L,0xF70F,0x11D0,0xB9,0x17,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("DFF229E4-F70F-11D0-B917-00A0C9223196",KSNODETYPE_VIDEO_SELECTOR); -#define KSNODETYPE_VIDEO_SELECTOR DEFINE_GUIDNAMED(KSNODETYPE_VIDEO_SELECTOR) - -#define STATIC_KSNODETYPE_VIDEO_PROCESSING \ - 0xDFF229E5L,0xF70F,0x11D0,0xB9,0x17,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("DFF229E5-F70F-11D0-B917-00A0C9223196",KSNODETYPE_VIDEO_PROCESSING); -#define KSNODETYPE_VIDEO_PROCESSING DEFINE_GUIDNAMED(KSNODETYPE_VIDEO_PROCESSING) - -#define STATIC_KSNODETYPE_VIDEO_CAMERA_TERMINAL \ - 
0xDFF229E6L,0xF70F,0x11D0,0xB9,0x17,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("DFF229E6-F70F-11D0-B917-00A0C9223196",KSNODETYPE_VIDEO_CAMERA_TERMINAL); -#define KSNODETYPE_VIDEO_CAMERA_TERMINAL DEFINE_GUIDNAMED(KSNODETYPE_VIDEO_CAMERA_TERMINAL) - -#define STATIC_KSNODETYPE_VIDEO_INPUT_MTT \ - 0xDFF229E7L,0xF70F,0x11D0,0xB9,0x17,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("DFF229E7-F70F-11D0-B917-00A0C9223196",KSNODETYPE_VIDEO_INPUT_MTT); -#define KSNODETYPE_VIDEO_INPUT_MTT DEFINE_GUIDNAMED(KSNODETYPE_VIDEO_INPUT_MTT) - -#define STATIC_KSNODETYPE_VIDEO_OUTPUT_MTT \ - 0xDFF229E8L,0xF70F,0x11D0,0xB9,0x17,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("DFF229E8-F70F-11D0-B917-00A0C9223196",KSNODETYPE_VIDEO_OUTPUT_MTT); -#define KSNODETYPE_VIDEO_OUTPUT_MTT DEFINE_GUIDNAMED(KSNODETYPE_VIDEO_OUTPUT_MTT) - -#define STATIC_PROPSETID_VIDCAP_VIDEOENCODER \ - 0x6a2e0610L,0x28e4,0x11d0,0xa1,0x8c,0x00,0xa0,0xc9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("6a2e0610-28e4-11d0-a18c-00a0c9118956",PROPSETID_VIDCAP_VIDEOENCODER); -#define PROPSETID_VIDCAP_VIDEOENCODER DEFINE_GUIDNAMED(PROPSETID_VIDCAP_VIDEOENCODER) - -typedef enum { - KSPROPERTY_VIDEOENCODER_CAPS, - KSPROPERTY_VIDEOENCODER_STANDARD, - KSPROPERTY_VIDEOENCODER_COPYPROTECTION, - KSPROPERTY_VIDEOENCODER_CC_ENABLE -} KSPROPERTY_VIDCAP_VIDEOENCODER; - -typedef struct { - KSPROPERTY Property; - LONG Value; - ULONG Flags; - ULONG Capabilities; -} KSPROPERTY_VIDEOENCODER_S,*PKSPROPERTY_VIDEOENCODER_S; - -#define STATIC_PROPSETID_VIDCAP_VIDEODECODER \ - 0xC6E13350L,0x30AC,0x11d0,0xA1,0x8C,0x00,0xA0,0xC9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("C6E13350-30AC-11d0-A18C-00A0C9118956",PROPSETID_VIDCAP_VIDEODECODER); -#define PROPSETID_VIDCAP_VIDEODECODER DEFINE_GUIDNAMED(PROPSETID_VIDCAP_VIDEODECODER) - -typedef enum { - KSPROPERTY_VIDEODECODER_CAPS, - KSPROPERTY_VIDEODECODER_STANDARD, - KSPROPERTY_VIDEODECODER_STATUS, - KSPROPERTY_VIDEODECODER_OUTPUT_ENABLE, - KSPROPERTY_VIDEODECODER_VCR_TIMING -} KSPROPERTY_VIDCAP_VIDEODECODER; - -typedef enum { - KS_VIDEODECODER_FLAGS_CAN_DISABLE_OUTPUT = 0X0001, - KS_VIDEODECODER_FLAGS_CAN_USE_VCR_LOCKING = 0X0002, - KS_VIDEODECODER_FLAGS_CAN_INDICATE_LOCKED = 0X0004 -} KS_VIDEODECODER_FLAGS; - -typedef struct { - KSPROPERTY Property; - ULONG StandardsSupported; - ULONG Capabilities; - ULONG SettlingTime; - ULONG HSyncPerVSync; -} KSPROPERTY_VIDEODECODER_CAPS_S,*PKSPROPERTY_VIDEODECODER_CAPS_S; - -typedef struct { - KSPROPERTY Property; - ULONG NumberOfLines; - ULONG SignalLocked; -} KSPROPERTY_VIDEODECODER_STATUS_S,*PKSPROPERTY_VIDEODECODER_STATUS_S; - -typedef struct { - KSPROPERTY Property; - ULONG Value; -} KSPROPERTY_VIDEODECODER_S,*PKSPROPERTY_VIDEODECODER_S; - -#define STATIC_EVENTSETID_VIDEODECODER \ - 0x6a2e0621L,0x28e4,0x11d0,0xa1,0x8c,0x00,0xa0,0xc9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("6a2e0621-28e4-11d0-a18c-00a0c9118956",EVENTSETID_VIDEODECODER); -#define EVENTSETID_VIDEODECODER DEFINE_GUIDNAMED(EVENTSETID_VIDEODECODER) - -typedef enum { - KSEVENT_VIDEODECODER_CHANGED -} KSEVENT_VIDEODECODER; - -#define STATIC_PROPSETID_VIDCAP_CAMERACONTROL \ - 0xC6E13370L,0x30AC,0x11d0,0xa1,0x8C,0x00,0xA0,0xC9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("C6E13370-30AC-11d0-A18C-00A0C9118956",PROPSETID_VIDCAP_CAMERACONTROL); -#define PROPSETID_VIDCAP_CAMERACONTROL DEFINE_GUIDNAMED(PROPSETID_VIDCAP_CAMERACONTROL) - -typedef enum { - KSPROPERTY_CAMERACONTROL_PAN, - KSPROPERTY_CAMERACONTROL_TILT, - KSPROPERTY_CAMERACONTROL_ROLL, - KSPROPERTY_CAMERACONTROL_ZOOM, - KSPROPERTY_CAMERACONTROL_EXPOSURE, - KSPROPERTY_CAMERACONTROL_IRIS, - 
KSPROPERTY_CAMERACONTROL_FOCUS, - KSPROPERTY_CAMERACONTROL_SCANMODE, - KSPROPERTY_CAMERACONTROL_PRIVACY, - KSPROPERTY_CAMERACONTROL_PANTILT, - KSPROPERTY_CAMERACONTROL_PAN_RELATIVE, - KSPROPERTY_CAMERACONTROL_TILT_RELATIVE, - KSPROPERTY_CAMERACONTROL_ROLL_RELATIVE, - KSPROPERTY_CAMERACONTROL_ZOOM_RELATIVE, - KSPROPERTY_CAMERACONTROL_EXPOSURE_RELATIVE, - KSPROPERTY_CAMERACONTROL_IRIS_RELATIVE, - KSPROPERTY_CAMERACONTROL_FOCUS_RELATIVE, - KSPROPERTY_CAMERACONTROL_PANTILT_RELATIVE, - KSPROPERTY_CAMERACONTROL_FOCAL_LENGTH -} KSPROPERTY_VIDCAP_CAMERACONTROL; - -typedef struct { - KSPROPERTY Property; - LONG Value; - ULONG Flags; - ULONG Capabilities; -} KSPROPERTY_CAMERACONTROL_S,*PKSPROPERTY_CAMERACONTROL_S; - -typedef struct { - KSP_NODE NodeProperty; - LONG Value; - ULONG Flags; - ULONG Capabilities; -} KSPROPERTY_CAMERACONTROL_NODE_S,PKSPROPERTY_CAMERACONTROL_NODE_S; - -typedef struct { - KSPROPERTY Property; - LONG Value1; - ULONG Flags; - ULONG Capabilities; - LONG Value2; -} KSPROPERTY_CAMERACONTROL_S2,*PKSPROPERTY_CAMERACONTROL_S2; - -typedef struct { - KSP_NODE NodeProperty; - LONG Value1; - ULONG Flags; - ULONG Capabilities; - LONG Value2; -} KSPROPERTY_CAMERACONTROL_NODE_S2,*PKSPROPERTY_CAMERACONTROL_NODE_S2; - -typedef struct { - KSPROPERTY Property; - LONG lOcularFocalLength; - LONG lObjectiveFocalLengthMin; - LONG lObjectiveFocalLengthMax; -} KSPROPERTY_CAMERACONTROL_FOCAL_LENGTH_S,*PKSPROPERTY_CAMERACONTROL_FOCAL_LENGTH_S; - -typedef struct { - KSNODEPROPERTY NodeProperty; - LONG lOcularFocalLength; - LONG lObjectiveFocalLengthMin; - LONG lObjectiveFocalLengthMax; -} KSPROPERTY_CAMERACONTROL_NODE_FOCAL_LENGTH_S,*PKSPROPERTY_CAMERACONTROL_NODE_FOCAL_LENGTH_S; - -#define KSPROPERTY_CAMERACONTROL_FLAGS_AUTO 0X0001L -#define KSPROPERTY_CAMERACONTROL_FLAGS_MANUAL 0X0002L - -#define KSPROPERTY_CAMERACONTROL_FLAGS_ABSOLUTE 0X0000L -#define KSPROPERTY_CAMERACONTROL_FLAGS_RELATIVE 0X0010L - -#ifndef __EDevCtrl__ -#define __EDevCtrl__ - -#define STATIC_PROPSETID_EXT_DEVICE \ - 0xB5730A90L,0x1A2C,0x11cf,0x8c,0x23,0x00,0xAA,0x00,0x6B,0x68,0x14 -DEFINE_GUIDSTRUCT("B5730A90-1A2C-11cf-8C23-00AA006B6814",PROPSETID_EXT_DEVICE); -#define PROPSETID_EXT_DEVICE DEFINE_GUIDNAMED(PROPSETID_EXT_DEVICE) - -typedef enum { - KSPROPERTY_EXTDEVICE_ID, - KSPROPERTY_EXTDEVICE_VERSION, - KSPROPERTY_EXTDEVICE_POWER_STATE, - KSPROPERTY_EXTDEVICE_PORT, - KSPROPERTY_EXTDEVICE_CAPABILITIES -} KSPROPERTY_EXTDEVICE; - -typedef struct tagDEVCAPS{ - LONG CanRecord; - LONG CanRecordStrobe; - LONG HasAudio; - LONG HasVideo; - LONG UsesFiles; - LONG CanSave; - LONG DeviceType; - LONG TCRead; - LONG TCWrite; - LONG CTLRead; - LONG IndexRead; - LONG Preroll; - LONG Postroll; - LONG SyncAcc; - LONG NormRate; - LONG CanPreview; - LONG CanMonitorSrc; - LONG CanTest; - LONG VideoIn; - LONG AudioIn; - LONG Calibrate; - LONG SeekType; - LONG SimulatedHardware; -} DEVCAPS,*PDEVCAPS; - -typedef struct { - KSPROPERTY Property; - union { - DEVCAPS Capabilities; - ULONG DevPort; - ULONG PowerState; - WCHAR pawchString[MAX_PATH]; - DWORD NodeUniqueID[2]; - } u; -} KSPROPERTY_EXTDEVICE_S,*PKSPROPERTY_EXTDEVICE_S; - -#define STATIC_PROPSETID_EXT_TRANSPORT \ - 0xA03CD5F0L,0x3045,0x11cf,0x8c,0x44,0x00,0xAA,0x00,0x6B,0x68,0x14 -DEFINE_GUIDSTRUCT("A03CD5F0-3045-11cf-8C44-00AA006B6814",PROPSETID_EXT_TRANSPORT); -#define PROPSETID_EXT_TRANSPORT DEFINE_GUIDNAMED(PROPSETID_EXT_TRANSPORT) - -typedef enum { - KSPROPERTY_EXTXPORT_CAPABILITIES, - KSPROPERTY_EXTXPORT_INPUT_SIGNAL_MODE, - KSPROPERTY_EXTXPORT_OUTPUT_SIGNAL_MODE, - 
KSPROPERTY_EXTXPORT_LOAD_MEDIUM, - KSPROPERTY_EXTXPORT_MEDIUM_INFO, - KSPROPERTY_EXTXPORT_STATE, - KSPROPERTY_EXTXPORT_STATE_NOTIFY, - KSPROPERTY_EXTXPORT_TIMECODE_SEARCH, - KSPROPERTY_EXTXPORT_ATN_SEARCH, - KSPROPERTY_EXTXPORT_RTC_SEARCH, - KSPROPERTY_RAW_AVC_CMD -} KSPROPERTY_EXTXPORT; - -typedef struct tagTRANSPORTSTATUS { - LONG Mode; - LONG LastError; - LONG RecordInhibit; - LONG ServoLock; - LONG MediaPresent; - LONG MediaLength; - LONG MediaSize; - LONG MediaTrackCount; - LONG MediaTrackLength; - LONG MediaTrackSide; - LONG MediaType; - LONG LinkMode; - LONG NotifyOn; -} TRANSPORTSTATUS,*PTRANSPORTSTATUS; - -typedef struct tagTRANSPORTBASICPARMS { - LONG TimeFormat; - LONG TimeReference; - LONG Superimpose; - LONG EndStopAction; - LONG RecordFormat; - LONG StepFrames; - LONG SetpField; - LONG Preroll; - LONG RecPreroll; - LONG Postroll; - LONG EditDelay; - LONG PlayTCDelay; - LONG RecTCDelay; - LONG EditField; - LONG FrameServo; - LONG ColorFrameServo; - LONG ServoRef; - LONG WarnGenlock; - LONG SetTracking; - TCHAR VolumeName[40]; - LONG Ballistic[20]; - LONG Speed; - LONG CounterFormat; - LONG TunerChannel; - LONG TunerNumber; - LONG TimerEvent; - LONG TimerStartDay; - LONG TimerStartTime; - LONG TimerStopDay; - LONG TimerStopTime; -} TRANSPORTBASICPARMS,*PTRANSPORTBASICPARMS; - -typedef struct tagTRANSPORTVIDEOPARMS { - LONG OutputMode; - LONG Input; -} TRANSPORTVIDEOPARMS,*PTRANSPORTVIDEOPARMS; - -typedef struct tagTRANSPORTAUDIOPARMS { - LONG EnableOutput; - LONG EnableRecord; - LONG EnableSelsync; - LONG Input; - LONG MonitorSource; -} TRANSPORTAUDIOPARMS,*PTRANSPORTAUDIOPARMS; - -typedef struct { - WINBOOL MediaPresent; - ULONG MediaType; - WINBOOL RecordInhibit; -} MEDIUM_INFO,*PMEDIUM_INFO; - -typedef struct { - ULONG Mode; - ULONG State; -} TRANSPORT_STATE,*PTRANSPORT_STATE; - -typedef struct { - KSPROPERTY Property; - union { - ULONG Capabilities; - ULONG SignalMode; - ULONG LoadMedium; - MEDIUM_INFO MediumInfo; - TRANSPORT_STATE XPrtState; - struct { - BYTE frame; - BYTE second; - BYTE minute; - BYTE hour; - } Timecode; - DWORD dwTimecode; - DWORD dwAbsTrackNumber; - struct { - ULONG PayloadSize; - BYTE Payload[512]; - } RawAVC; - } u; -} KSPROPERTY_EXTXPORT_S,*PKSPROPERTY_EXTXPORT_S; - -typedef struct { - KSP_NODE NodeProperty; - union { - ULONG Capabilities; - ULONG SignalMode; - ULONG LoadMedium; - MEDIUM_INFO MediumInfo; - TRANSPORT_STATE XPrtState; - struct { - BYTE frame; - BYTE second; - BYTE minute; - BYTE hour; - } Timecode; - DWORD dwTimecode; - DWORD dwAbsTrackNumber; - struct { - ULONG PayloadSize; - BYTE Payload[512]; - } RawAVC; - } u; -} KSPROPERTY_EXTXPORT_NODE_S,*PKSPROPERTY_EXTXPORT_NODE_S; - -#define STATIC_PROPSETID_TIMECODE_READER \ - 0x9B496CE1L,0x811B,0x11cf,0x8C,0x77,0x00,0xAA,0x00,0x6B,0x68,0x14 -DEFINE_GUIDSTRUCT("9B496CE1-811B-11cf-8C77-00AA006B6814",PROPSETID_TIMECODE_READER); -#define PROPSETID_TIMECODE_READER DEFINE_GUIDNAMED(PROPSETID_TIMECODE_READER) - -typedef enum { - KSPROPERTY_TIMECODE_READER, - KSPROPERTY_ATN_READER, - KSPROPERTY_RTC_READER -} KSPROPERTY_TIMECODE; - -#ifndef TIMECODE_DEFINED -#define TIMECODE_DEFINED -typedef union _timecode { - struct { - WORD wFrameRate; - WORD wFrameFract; - DWORD dwFrames; - }; - DWORDLONG qw; -} TIMECODE; -typedef TIMECODE *PTIMECODE; - -typedef struct tagTIMECODE_SAMPLE { - LONGLONG qwTick; - TIMECODE timecode; - DWORD dwUser; - DWORD dwFlags; -} TIMECODE_SAMPLE; - -typedef TIMECODE_SAMPLE *PTIMECODE_SAMPLE; -#endif /* TIMECODE_DEFINED */ - -typedef struct { - KSPROPERTY Property; - 
TIMECODE_SAMPLE TimecodeSamp; -} KSPROPERTY_TIMECODE_S,*PKSPROPERTY_TIMECODE_S; - -typedef struct { - KSP_NODE NodeProperty; - TIMECODE_SAMPLE TimecodeSamp; -} KSPROPERTY_TIMECODE_NODE_S,*PKSPROPERTY_TIMECODE_NODE_S; - -#define STATIC_KSEVENTSETID_EXTDEV_Command \ - 0x109c7988L,0xb3cb,0x11d2,0xb4,0x8e,0x00,0x60,0x97,0xb3,0x39,0x1b -DEFINE_GUIDSTRUCT("109c7988-b3cb-11d2-b48e-006097b3391b",KSEVENTSETID_EXTDEV_Command); -#define KSEVENTSETID_EXTDEV_Command DEFINE_GUIDNAMED(KSEVENTSETID_EXTDEV_Command) - -typedef enum { - KSEVENT_EXTDEV_COMMAND_NOTIFY_INTERIM_READY, - KSEVENT_EXTDEV_COMMAND_CONTROL_INTERIM_READY, - KSEVENT_EXTDEV_COMMAND_BUSRESET, - KSEVENT_EXTDEV_TIMECODE_UPDATE, - KSEVENT_EXTDEV_OPERATION_MODE_UPDATE, - KSEVENT_EXTDEV_TRANSPORT_STATE_UPDATE, - KSEVENT_EXTDEV_NOTIFY_REMOVAL, - KSEVENT_EXTDEV_NOTIFY_MEDIUM_CHANGE -} KSEVENT_DEVCMD; -#endif /* __EDevCtrl__ */ - -#define STATIC_PROPSETID_VIDCAP_CROSSBAR \ - 0x6a2e0640L,0x28e4,0x11d0,0xa1,0x8c,0x00,0xa0,0xc9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("6a2e0640-28e4-11d0-a18c-00a0c9118956",PROPSETID_VIDCAP_CROSSBAR); -#define PROPSETID_VIDCAP_CROSSBAR DEFINE_GUIDNAMED(PROPSETID_VIDCAP_CROSSBAR) - -typedef enum { - KSPROPERTY_CROSSBAR_CAPS, - KSPROPERTY_CROSSBAR_PININFO, - KSPROPERTY_CROSSBAR_CAN_ROUTE, - KSPROPERTY_CROSSBAR_ROUTE -} KSPROPERTY_VIDCAP_CROSSBAR; - -typedef struct { - KSPROPERTY Property; - ULONG NumberOfInputs; - ULONG NumberOfOutputs; -} KSPROPERTY_CROSSBAR_CAPS_S,*PKSPROPERTY_CROSSBAR_CAPS_S; - -typedef struct { - KSPROPERTY Property; - KSPIN_DATAFLOW Direction; - ULONG Index; - ULONG PinType; - ULONG RelatedPinIndex; - KSPIN_MEDIUM Medium; -} KSPROPERTY_CROSSBAR_PININFO_S,*PKSPROPERTY_CROSSBAR_PININFO_S; - -typedef struct { - KSPROPERTY Property; - ULONG IndexInputPin; - ULONG IndexOutputPin; - ULONG CanRoute; -} KSPROPERTY_CROSSBAR_ROUTE_S,*PKSPROPERTY_CROSSBAR_ROUTE_S; - -#define STATIC_EVENTSETID_CROSSBAR \ - 0x6a2e0641L,0x28e4,0x11d0,0xa1,0x8c,0x00,0xa0,0xc9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("6a2e0641-28e4-11d0-a18c-00a0c9118956",EVENTSETID_CROSSBAR); -#define EVENTSETID_CROSSBAR DEFINE_GUIDNAMED(EVENTSETID_CROSSBAR) - -typedef enum { - KSEVENT_CROSSBAR_CHANGED -} KSEVENT_CROSSBAR; - -typedef enum { - KS_PhysConn_Video_Tuner = 1, - KS_PhysConn_Video_Composite, - KS_PhysConn_Video_SVideo, - KS_PhysConn_Video_RGB, - KS_PhysConn_Video_YRYBY, - KS_PhysConn_Video_SerialDigital, - KS_PhysConn_Video_ParallelDigital, - KS_PhysConn_Video_SCSI, - KS_PhysConn_Video_AUX, - KS_PhysConn_Video_1394, - KS_PhysConn_Video_USB, - KS_PhysConn_Video_VideoDecoder, - KS_PhysConn_Video_VideoEncoder, - KS_PhysConn_Video_SCART, - KS_PhysConn_Audio_Tuner = 4096, - KS_PhysConn_Audio_Line, - KS_PhysConn_Audio_Mic, - KS_PhysConn_Audio_AESDigital, - KS_PhysConn_Audio_SPDIFDigital, - KS_PhysConn_Audio_SCSI, - KS_PhysConn_Audio_AUX, - KS_PhysConn_Audio_1394, - KS_PhysConn_Audio_USB, - KS_PhysConn_Audio_AudioDecoder -} KS_PhysicalConnectorType; - -#define STATIC_PROPSETID_VIDCAP_TVAUDIO \ - 0x6a2e0650L,0x28e4,0x11d0,0xa1,0x8c,0x00,0xa0,0xc9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("6a2e0650-28e4-11d0-a18c-00a0c9118956",PROPSETID_VIDCAP_TVAUDIO); -#define PROPSETID_VIDCAP_TVAUDIO DEFINE_GUIDNAMED(PROPSETID_VIDCAP_TVAUDIO) - -typedef enum { - KSPROPERTY_TVAUDIO_CAPS, - KSPROPERTY_TVAUDIO_MODE, - KSPROPERTY_TVAUDIO_CURRENTLY_AVAILABLE_MODES -} KSPROPERTY_VIDCAP_TVAUDIO; - -#define KS_TVAUDIO_MODE_MONO 0x0001 -#define KS_TVAUDIO_MODE_STEREO 0x0002 -#define KS_TVAUDIO_MODE_LANG_A 0x0010 -#define KS_TVAUDIO_MODE_LANG_B 0x0020 -#define KS_TVAUDIO_MODE_LANG_C 0x0040 
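The KS_TVAUDIO_MODE_* values above are individual bit flags, so a KSPROPERTY_TVAUDIO_CURRENTLY_AVAILABLE_MODES query plausibly returns a mask that carries several modes at once. A minimal Python sketch of decoding such a mask; the constants are copied from the defines above, while decode_tvaudio_modes is a hypothetical helper name used only for illustration:

KS_TVAUDIO_MODE_MONO   = 0x0001
KS_TVAUDIO_MODE_STEREO = 0x0002
KS_TVAUDIO_MODE_LANG_A = 0x0010
KS_TVAUDIO_MODE_LANG_B = 0x0020
KS_TVAUDIO_MODE_LANG_C = 0x0040

def decode_tvaudio_modes(mask: int) -> list:
    # Map each flag to a readable label and keep the ones present in the mask.
    names = {
        KS_TVAUDIO_MODE_MONO: "mono",
        KS_TVAUDIO_MODE_STEREO: "stereo",
        KS_TVAUDIO_MODE_LANG_A: "language A",
        KS_TVAUDIO_MODE_LANG_B: "language B",
        KS_TVAUDIO_MODE_LANG_C: "language C",
    }
    return [label for bit, label in names.items() if mask & bit]

decode_tvaudio_modes(0x0003)  # -> ["mono", "stereo"]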
- -typedef struct { - KSPROPERTY Property; - ULONG Capabilities; - KSPIN_MEDIUM InputMedium; - KSPIN_MEDIUM OutputMedium; -} KSPROPERTY_TVAUDIO_CAPS_S,*PKSPROPERTY_TVAUDIO_CAPS_S; - -typedef struct { - KSPROPERTY Property; - ULONG Mode; -} KSPROPERTY_TVAUDIO_S,*PKSPROPERTY_TVAUDIO_S; - -#define STATIC_KSEVENTSETID_VIDCAP_TVAUDIO \ - 0x6a2e0651L,0x28e4,0x11d0,0xa1,0x8c,0x00,0xa0,0xc9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("6a2e0651-28e4-11d0-a18c-00a0c9118956",KSEVENTSETID_VIDCAP_TVAUDIO); -#define KSEVENTSETID_VIDCAP_TVAUDIO DEFINE_GUIDNAMED(KSEVENTSETID_VIDCAP_TVAUDIO) - -typedef enum { - KSEVENT_TVAUDIO_CHANGED -} KSEVENT_TVAUDIO; - -#define STATIC_PROPSETID_VIDCAP_VIDEOCOMPRESSION \ - 0xC6E13343L,0x30AC,0x11d0,0xA1,0x8C,0x00,0xA0,0xC9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("C6E13343-30AC-11d0-A18C-00A0C9118956",PROPSETID_VIDCAP_VIDEOCOMPRESSION); -#define PROPSETID_VIDCAP_VIDEOCOMPRESSION DEFINE_GUIDNAMED(PROPSETID_VIDCAP_VIDEOCOMPRESSION) - -typedef enum { - KSPROPERTY_VIDEOCOMPRESSION_GETINFO, - KSPROPERTY_VIDEOCOMPRESSION_KEYFRAME_RATE, - KSPROPERTY_VIDEOCOMPRESSION_PFRAMES_PER_KEYFRAME, - KSPROPERTY_VIDEOCOMPRESSION_QUALITY, - KSPROPERTY_VIDEOCOMPRESSION_OVERRIDE_KEYFRAME, - KSPROPERTY_VIDEOCOMPRESSION_OVERRIDE_FRAME_SIZE, - KSPROPERTY_VIDEOCOMPRESSION_WINDOWSIZE -} KSPROPERTY_VIDCAP_VIDEOCOMPRESSION; - -typedef enum { - KS_CompressionCaps_CanQuality = 1, - KS_CompressionCaps_CanCrunch = 2, - KS_CompressionCaps_CanKeyFrame = 4, - KS_CompressionCaps_CanBFrame = 8, - KS_CompressionCaps_CanWindow = 0x10 -} KS_CompressionCaps; - -typedef enum { - KS_StreamingHint_FrameInterval = 0x0100, - KS_StreamingHint_KeyFrameRate = 0x0200, - KS_StreamingHint_PFrameRate = 0x0400, - KS_StreamingHint_CompQuality = 0x0800, - KS_StreamingHint_CompWindowSize = 0x1000 -} KS_VideoStreamingHints; - -typedef struct { - KSPROPERTY Property; - ULONG StreamIndex; - LONG DefaultKeyFrameRate; - LONG DefaultPFrameRate; - LONG DefaultQuality; - LONG NumberOfQualitySettings; - LONG Capabilities; -} KSPROPERTY_VIDEOCOMPRESSION_GETINFO_S,*PKSPROPERTY_VIDEOCOMPRESSION_GETINFO_S; - -typedef struct { - KSPROPERTY Property; - ULONG StreamIndex; - LONG Value; -} KSPROPERTY_VIDEOCOMPRESSION_S,*PKSPROPERTY_VIDEOCOMPRESSION_S; - -typedef struct { - KSPROPERTY Property; - ULONG StreamIndex; - LONG Value; - ULONG Flags; -} KSPROPERTY_VIDEOCOMPRESSION_S1,*PKSPROPERTY_VIDEOCOMPRESSION_S1; - -#define STATIC_KSDATAFORMAT_SUBTYPE_OVERLAY \ - 0xe436eb7fL,0x524f,0x11ce,0x9f,0x53,0x00,0x20,0xaf,0x0b,0xa7,0x70 -DEFINE_GUIDSTRUCT("e436eb7f-524f-11ce-9f53-0020af0ba770",KSDATAFORMAT_SUBTYPE_OVERLAY); -#define KSDATAFORMAT_SUBTYPE_OVERLAY DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_OVERLAY) - -#define STATIC_KSPROPSETID_OverlayUpdate \ - 0x490EA5CFL,0x7681,0x11D1,0xA2,0x1C,0x00,0xA0,0xC9,0x22,0x31,0x96 -DEFINE_GUIDSTRUCT("490EA5CF-7681-11D1-A21C-00A0C9223196",KSPROPSETID_OverlayUpdate); -#define KSPROPSETID_OverlayUpdate DEFINE_GUIDNAMED(KSPROPSETID_OverlayUpdate) - -typedef enum { - KSPROPERTY_OVERLAYUPDATE_INTERESTS, - KSPROPERTY_OVERLAYUPDATE_CLIPLIST = 0x1, - KSPROPERTY_OVERLAYUPDATE_PALETTE = 0x2, - KSPROPERTY_OVERLAYUPDATE_COLORKEY = 0x4, - KSPROPERTY_OVERLAYUPDATE_VIDEOPOSITION = 0x8, - KSPROPERTY_OVERLAYUPDATE_DISPLAYCHANGE = 0x10, - KSPROPERTY_OVERLAYUPDATE_COLORREF = 0x10000000 -} KSPROPERTY_OVERLAYUPDATE; - -typedef struct { - ULONG PelsWidth; - ULONG PelsHeight; - ULONG BitsPerPel; - WCHAR DeviceID[1]; -} KSDISPLAYCHANGE,*PKSDISPLAYCHANGE; - -#define DEFINE_KSPROPERTY_ITEM_OVERLAYUPDATE_INTERESTS(Handler) \ - DEFINE_KSPROPERTY_ITEM( \ - 
KSPROPERTY_OVERLAYUPDATE_INTERESTS, \ - (Handler), \ - sizeof(KSPROPERTY), \ - sizeof(ULONG), \ - NULL, NULL, 0, NULL, NULL, 0) - -#define DEFINE_KSPROPERTY_ITEM_OVERLAYUPDATE_PALETTE(Handler) \ - DEFINE_KSPROPERTY_ITEM( \ - KSPROPERTY_OVERLAYUPDATE_PALETTE, \ - NULL, \ - sizeof(KSPROPERTY), \ - 0, \ - (Handler), \ - NULL, 0, NULL, NULL, 0) - -#define DEFINE_KSPROPERTY_ITEM_OVERLAYUPDATE_COLORKEY(Handler) \ - DEFINE_KSPROPERTY_ITEM( \ - KSPROPERTY_OVERLAYUPDATE_COLORKEY, \ - NULL, \ - sizeof(KSPROPERTY), \ - sizeof(COLORKEY), \ - (Handler), \ - NULL, 0, NULL, NULL, 0) - -#define DEFINE_KSPROPERTY_ITEM_OVERLAYUPDATE_CLIPLIST(Handler) \ - DEFINE_KSPROPERTY_ITEM( \ - KSPROPERTY_OVERLAYUPDATE_CLIPLIST, \ - NULL, \ - sizeof(KSPROPERTY), \ - 2 *sizeof(RECT) + sizeof(RGNDATAHEADER),\ - (Handler), \ - NULL, 0, NULL, NULL, 0) - -#define DEFINE_KSPROPERTY_ITEM_OVERLAYUPDATE_VIDEOPOSITION(Handler) \ - DEFINE_KSPROPERTY_ITEM( \ - KSPROPERTY_OVERLAYUPDATE_VIDEOPOSITION, \ - NULL, \ - sizeof(KSPROPERTY), \ - 2 *sizeof(RECT), \ - (Handler), \ - NULL, 0, NULL, NULL, 0) - -#define DEFINE_KSPROPERTY_ITEM_OVERLAYUPDATE_DISPLAYCHANGE(Handler) \ - DEFINE_KSPROPERTY_ITEM( \ - KSPROPERTY_OVERLAYUPDATE_DISPLAYCHANGE, \ - NULL, \ - sizeof(KSPROPERTY), \ - sizeof(KSDISPLAYCHANGE), \ - (Handler), \ - NULL, 0, NULL, NULL, 0) - -#define DEFINE_KSPROPERTY_ITEM_OVERLAYUPDATE_COLORREF(Handler) \ - DEFINE_KSPROPERTY_ITEM( \ - KSPROPERTY_OVERLAYUPDATE_COLORREF, \ - (Handler), \ - sizeof(KSPROPERTY), \ - sizeof(COLORREF), \ - NULL, \ - NULL, 0, NULL, NULL, 0) - -#define STATIC_PROPSETID_VIDCAP_VIDEOCONTROL \ - 0x6a2e0670L,0x28e4,0x11d0,0xa1,0x8c,0x00,0xa0,0xc9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("6a2e0670-28e4-11d0-a18c-00a0c9118956",PROPSETID_VIDCAP_VIDEOCONTROL); -#define PROPSETID_VIDCAP_VIDEOCONTROL DEFINE_GUIDNAMED(PROPSETID_VIDCAP_VIDEOCONTROL) - -typedef enum { - KSPROPERTY_VIDEOCONTROL_CAPS, - KSPROPERTY_VIDEOCONTROL_ACTUAL_FRAME_RATE, - KSPROPERTY_VIDEOCONTROL_FRAME_RATES, - KSPROPERTY_VIDEOCONTROL_MODE -} KSPROPERTY_VIDCAP_VIDEOCONTROL; - -typedef enum { - KS_VideoControlFlag_FlipHorizontal = 0x0001, - KS_VideoControlFlag_FlipVertical = 0x0002, - KS_Obsolete_VideoControlFlag_ExternalTriggerEnable = 0x0010, - KS_Obsolete_VideoControlFlag_Trigger = 0x0020, - KS_VideoControlFlag_ExternalTriggerEnable = 0x0004, - KS_VideoControlFlag_Trigger = 0x0008 -} KS_VideoControlFlags; - -typedef struct { - KSPROPERTY Property; - ULONG StreamIndex; - ULONG VideoControlCaps; -} KSPROPERTY_VIDEOCONTROL_CAPS_S,*PKSPROPERTY_VIDEOCONTROL_CAPS_S; - -typedef struct { - KSPROPERTY Property; - ULONG StreamIndex; - LONG Mode; -} KSPROPERTY_VIDEOCONTROL_MODE_S,*PKSPROPERTY_VIDEOCONTROL_MODE_S; - -typedef struct { - KSPROPERTY Property; - ULONG StreamIndex; - ULONG RangeIndex; - SIZE Dimensions; - LONGLONG CurrentActualFrameRate; - LONGLONG CurrentMaxAvailableFrameRate; -} KSPROPERTY_VIDEOCONTROL_ACTUAL_FRAME_RATE_S,*PKSPROPERTY_VIDEOCONTROL_ACTUAL_FRAME_RATE_S; - -typedef struct { - KSPROPERTY Property; - ULONG StreamIndex; - ULONG RangeIndex; - SIZE Dimensions; -} KSPROPERTY_VIDEOCONTROL_FRAME_RATES_S,*PKSPROPERTY_VIDEOCONTROL_FRAME_RATES_S; - -#define STATIC_PROPSETID_VIDCAP_DROPPEDFRAMES \ - 0xC6E13344L,0x30AC,0x11d0,0xa1,0x8c,0x00,0xa0,0xc9,0x11,0x89,0x56 -DEFINE_GUIDSTRUCT("C6E13344-30AC-11d0-A18C-00A0C9118956",PROPSETID_VIDCAP_DROPPEDFRAMES); -#define PROPSETID_VIDCAP_DROPPEDFRAMES DEFINE_GUIDNAMED(PROPSETID_VIDCAP_DROPPEDFRAMES) - -typedef enum { - KSPROPERTY_DROPPEDFRAMES_CURRENT -} KSPROPERTY_VIDCAP_DROPPEDFRAMES; - -typedef struct { 
- KSPROPERTY Property; - LONGLONG PictureNumber; - LONGLONG DropCount; - ULONG AverageFrameSize; -} KSPROPERTY_DROPPEDFRAMES_CURRENT_S,*PKSPROPERTY_DROPPEDFRAMES_CURRENT_S; - -#define STATIC_KSPROPSETID_VPConfig \ - 0xbc29a660L,0x30e3,0x11d0,0x9e,0x69,0x00,0xc0,0x4f,0xd7,0xc1,0x5b -DEFINE_GUIDSTRUCT("bc29a660-30e3-11d0-9e69-00c04fd7c15b",KSPROPSETID_VPConfig); -#define KSPROPSETID_VPConfig DEFINE_GUIDNAMED(KSPROPSETID_VPConfig) - -#define STATIC_KSPROPSETID_VPVBIConfig \ - 0xec529b00L,0x1a1f,0x11d1,0xba,0xd9,0x0,0x60,0x97,0x44,0x11,0x1a -DEFINE_GUIDSTRUCT("ec529b00-1a1f-11d1-bad9-00609744111a",KSPROPSETID_VPVBIConfig); -#define KSPROPSETID_VPVBIConfig DEFINE_GUIDNAMED(KSPROPSETID_VPVBIConfig) - -typedef enum { - KSPROPERTY_VPCONFIG_NUMCONNECTINFO, - KSPROPERTY_VPCONFIG_GETCONNECTINFO, - KSPROPERTY_VPCONFIG_SETCONNECTINFO, - KSPROPERTY_VPCONFIG_VPDATAINFO, - KSPROPERTY_VPCONFIG_MAXPIXELRATE, - KSPROPERTY_VPCONFIG_INFORMVPINPUT, - KSPROPERTY_VPCONFIG_NUMVIDEOFORMAT, - KSPROPERTY_VPCONFIG_GETVIDEOFORMAT, - KSPROPERTY_VPCONFIG_SETVIDEOFORMAT, - KSPROPERTY_VPCONFIG_INVERTPOLARITY, - KSPROPERTY_VPCONFIG_DECIMATIONCAPABILITY, - KSPROPERTY_VPCONFIG_SCALEFACTOR, - KSPROPERTY_VPCONFIG_DDRAWHANDLE, - KSPROPERTY_VPCONFIG_VIDEOPORTID, - KSPROPERTY_VPCONFIG_DDRAWSURFACEHANDLE, - KSPROPERTY_VPCONFIG_SURFACEPARAMS -} KSPROPERTY_VPCONFIG; - -#define STATIC_CLSID_KsIBasicAudioInterfaceHandler \ - 0xb9f8ac3e,0x0f71,0x11d2,0xb7,0x2c,0x00,0xc0,0x4f,0xb6,0xbd,0x3d -DEFINE_GUIDSTRUCT("b9f8ac3e-0f71-11d2-b72c-00c04fb6bd3d",CLSID_KsIBasicAudioInterfaceHandler); -#define CLSID_KsIBasicAudioInterfaceHandler DEFINE_GUIDNAMED(CLSID_KsIBasicAudioInterfaceHandler) - -#ifdef __IVPType__ -typedef struct { - AMVPSIZE Size; - DWORD MaxPixelsPerSecond; - DWORD Reserved; -} KSVPMAXPIXELRATE,*PKSVPMAXPIXELRATE; - -typedef struct { - KSPROPERTY Property; - AMVPSIZE Size; -} KSVPSIZE_PROP,*PKSVPSIZE_PROP; - -typedef struct { - DWORD dwPitch; - DWORD dwXOrigin; - DWORD dwYOrigin; -} KSVPSURFACEPARAMS,*PKSVPSURFACEPARAMS; -#else /* __IVPType__ */ - -#ifndef __DDRAW_INCLUDED__ -#define DDPF_FOURCC 0x00000004l - -typedef struct _DDPIXELFORMAT -{ - DWORD dwSize; - DWORD dwFlags; - DWORD dwFourCC; - __MINGW_EXTENSION union - { - DWORD dwRGBBitCount; - DWORD dwYUVBitCount; - DWORD dwZBufferBitDepth; - DWORD dwAlphaBitDepth; - }; - __MINGW_EXTENSION union - { - DWORD dwRBitMask; - DWORD dwYBitMask; - }; - __MINGW_EXTENSION union - { - DWORD dwGBitMask; - DWORD dwUBitMask; - }; - __MINGW_EXTENSION union - { - DWORD dwBBitMask; - DWORD dwVBitMask; - }; - __MINGW_EXTENSION union - { - DWORD dwRGBAlphaBitMask; - DWORD dwYUVAlphaBitMask; - DWORD dwRGBZBitMask; - DWORD dwYUVZBitMask; - }; -} DDPIXELFORMAT,*LPDDPIXELFORMAT; -#endif /* __DDRAW_INCLUDED__ */ - -#ifndef __DVP_INCLUDED__ -typedef struct _DDVIDEOPORTCONNECT { - DWORD dwSize; - DWORD dwPortWidth; - GUID guidTypeID; - DWORD dwFlags; - ULONG_PTR dwReserved1; -} DDVIDEOPORTCONNECT,*LPDDVIDEOPORTCONNECT; - -#define DDVPTYPE_E_HREFH_VREFH \ - 0x54F39980L,0xDA60,0x11CF,0x9B,0x06,0x00,0xA0,0xC9,0x03,0xA3,0xB8 - -#define DDVPTYPE_E_HREFL_VREFL \ - 0xE09C77E0L,0xDA60,0x11CF,0x9B,0x06,0x00,0xA0,0xC9,0x03,0xA3,0xB8 -#endif /* __DVP_INCLUDED__ */ - -typedef enum -{ - KS_PixAspectRatio_NTSC4x3, - KS_PixAspectRatio_NTSC16x9, - KS_PixAspectRatio_PAL4x3, - KS_PixAspectRatio_PAL16x9 -} KS_AMPixAspectRatio; - -typedef enum -{ - KS_AMVP_DO_NOT_CARE, - KS_AMVP_BEST_BANDWIDTH, - KS_AMVP_INPUT_SAME_AS_OUTPUT -} KS_AMVP_SELECTFORMATBY; - -typedef enum -{ - KS_AMVP_MODE_WEAVE, - 
KS_AMVP_MODE_BOBINTERLEAVED, - KS_AMVP_MODE_BOBNONINTERLEAVED, - KS_AMVP_MODE_SKIPEVEN, - KS_AMVP_MODE_SKIPODD -} KS_AMVP_MODE; - -typedef struct tagKS_AMVPDIMINFO -{ - DWORD dwFieldWidth; - DWORD dwFieldHeight; - DWORD dwVBIWidth; - DWORD dwVBIHeight; - RECT rcValidRegion; -} KS_AMVPDIMINFO,*PKS_AMVPDIMINFO; - -typedef struct tagKS_AMVPDATAINFO -{ - DWORD dwSize; - DWORD dwMicrosecondsPerField; - KS_AMVPDIMINFO amvpDimInfo; - DWORD dwPictAspectRatioX; - DWORD dwPictAspectRatioY; - WINBOOL bEnableDoubleClock; - WINBOOL bEnableVACT; - WINBOOL bDataIsInterlaced; - LONG lHalfLinesOdd; - WINBOOL bFieldPolarityInverted; - DWORD dwNumLinesInVREF; - LONG lHalfLinesEven; - DWORD dwReserved1; -} KS_AMVPDATAINFO,*PKS_AMVPDATAINFO; - -typedef struct tagKS_AMVPSIZE -{ - DWORD dwWidth; - DWORD dwHeight; -} KS_AMVPSIZE,*PKS_AMVPSIZE; - -typedef struct { - KS_AMVPSIZE Size; - DWORD MaxPixelsPerSecond; - DWORD Reserved; -} KSVPMAXPIXELRATE,*PKSVPMAXPIXELRATE; - -typedef struct { - KSPROPERTY Property; - KS_AMVPSIZE Size; -} KSVPSIZE_PROP,*PKSVPSIZE_PROP; - -typedef struct { - DWORD dwPitch; - DWORD dwXOrigin; - DWORD dwYOrigin; -} KSVPSURFACEPARAMS,*PKSVPSURFACEPARAMS; -#endif /* __IVPType__ */ - -#define STATIC_KSEVENTSETID_VPNotify \ - 0x20c5598eL,0xd3c8,0x11d0,0x8d,0xfc,0x00,0xc0,0x4f,0xd7,0xc0,0x8b -DEFINE_GUIDSTRUCT("20c5598e-d3c8-11d0-8dfc-00c04fd7c08b",KSEVENTSETID_VPNotify); -#define KSEVENTSETID_VPNotify DEFINE_GUIDNAMED(KSEVENTSETID_VPNotify) - -typedef enum { - KSEVENT_VPNOTIFY_FORMATCHANGE -} KSEVENT_VPNOTIFY; - -#define STATIC_KSEVENTSETID_VIDCAPTOSTI \ - 0xdb47de20,0xf628,0x11d1,0xba,0x41,0x0,0xa0,0xc9,0xd,0x2b,0x5 -DEFINE_GUIDSTRUCT("DB47DE20-F628-11d1-BA41-00A0C90D2B05",KSEVENTSETID_VIDCAPTOSTI); -#define KSEVENTSETID_VIDCAPNotify DEFINE_GUIDNAMED(KSEVENTSETID_VIDCAPTOSTI) - -typedef enum { - KSEVENT_VIDCAPTOSTI_EXT_TRIGGER, - KSEVENT_VIDCAP_AUTO_UPDATE, - KSEVENT_VIDCAP_SEARCH -} KSEVENT_VIDCAPTOSTI; - -typedef enum { - KSPROPERTY_EXTENSION_UNIT_INFO, - KSPROPERTY_EXTENSION_UNIT_CONTROL, - KSPROPERTY_EXTENSION_UNIT_PASS_THROUGH = 0xffff -} KSPROPERTY_EXTENSION_UNIT,*PKSPROPERTY_EXTENSION_UNIT; - -#define STATIC_KSEVENTSETID_VPVBINotify \ - 0xec529b01L,0x1a1f,0x11d1,0xba,0xd9,0x0,0x60,0x97,0x44,0x11,0x1a -DEFINE_GUIDSTRUCT("ec529b01-1a1f-11d1-bad9-00609744111a",KSEVENTSETID_VPVBINotify); -#define KSEVENTSETID_VPVBINotify DEFINE_GUIDNAMED(KSEVENTSETID_VPVBINotify) - -typedef enum { - KSEVENT_VPVBINOTIFY_FORMATCHANGE -} KSEVENT_VPVBINOTIFY; - -#define STATIC_KSDATAFORMAT_TYPE_AUXLine21Data \ - 0x670aea80L,0x3a82,0x11d0,0xb7,0x9b,0x00,0xaa,0x00,0x37,0x67,0xa7 -DEFINE_GUIDSTRUCT("670aea80-3a82-11d0-b79b-00aa003767a7",KSDATAFORMAT_TYPE_AUXLine21Data); -#define KSDATAFORMAT_TYPE_AUXLine21Data DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_AUXLine21Data) - -#define STATIC_KSDATAFORMAT_SUBTYPE_Line21_BytePair \ - 0x6e8d4a22L,0x310c,0x11d0,0xb7,0x9a,0x00,0xaa,0x00,0x37,0x67,0xa7 -DEFINE_GUIDSTRUCT("6e8d4a22-310c-11d0-b79a-00aa003767a7",KSDATAFORMAT_SUBTYPE_Line21_BytePair); -#define KSDATAFORMAT_SUBTYPE_Line21_BytePair DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_Line21_BytePair) - -#define STATIC_KSDATAFORMAT_SUBTYPE_Line21_GOPPacket \ - 0x6e8d4a23L,0x310c,0x11d0,0xb7,0x9a,0x00,0xaa,0x00,0x37,0x67,0xa7 -DEFINE_GUIDSTRUCT("6e8d4a23-310c-11d0-b79a-00aa003767a7",KSDATAFORMAT_SUBTYPE_Line21_GOPPacket); -#define KSDATAFORMAT_SUBTYPE_Line21_GOPPacket DEFINE_GUIDNAMED(KSDATAFORMAT_SUBTYPE_Line21_GOPPacket) - -typedef struct _KSGOP_USERDATA { - ULONG sc; - ULONG reserved1; - BYTE cFields; - CHAR l21Data[3]; -} 
KSGOP_USERDATA,*PKSGOP_USERDATA; - -#define STATIC_KSDATAFORMAT_TYPE_DVD_ENCRYPTED_PACK \ - 0xed0b916a,0x044d,0x11d1,0xaa,0x78,0x00,0xc0,0x4f,0xc3,0x1d,0x60 -DEFINE_GUIDSTRUCT("ed0b916a-044d-11d1-aa78-00c04fc31d60",KSDATAFORMAT_TYPE_DVD_ENCRYPTED_PACK); -#define KSDATAFORMAT_TYPE_DVD_ENCRYPTED_PACK DEFINE_GUIDNAMED(KSDATAFORMAT_TYPE_DVD_ENCRYPTED_PACK) - -#define KS_AM_UseNewCSSKey 0x1 - -#define STATIC_KSPROPSETID_TSRateChange \ - 0xa503c5c0,0x1d1d,0x11d1,0xad,0x80,0x44,0x45,0x53,0x54,0x0,0x0 -DEFINE_GUIDSTRUCT("A503C5C0-1D1D-11D1-AD80-444553540000",KSPROPSETID_TSRateChange); -#define KSPROPSETID_TSRateChange DEFINE_GUIDNAMED(KSPROPSETID_TSRateChange) - -typedef enum { - KS_AM_RATE_SimpleRateChange = 1, - KS_AM_RATE_ExactRateChange = 2, - KS_AM_RATE_MaxFullDataRate = 3, - KS_AM_RATE_Step = 4 -} KS_AM_PROPERTY_TS_RATE_CHANGE; - -typedef struct { - REFERENCE_TIME StartTime; - LONG Rate; -} KS_AM_SimpleRateChange,*PKS_AM_SimpleRateChange; - -typedef struct { - REFERENCE_TIME OutputZeroTime; - LONG Rate; -} KS_AM_ExactRateChange,*PKS_AM_ExactRateChange; - -typedef LONG KS_AM_MaxFullDataRate; -typedef DWORD KS_AM_Step; - -#define STATIC_KSCATEGORY_ENCODER \ - 0x19689bf6,0xc384,0x48fd,0xad,0x51,0x90,0xe5,0x8c,0x79,0xf7,0xb -DEFINE_GUIDSTRUCT("19689BF6-C384-48fd-AD51-90E58C79F70B",KSCATEGORY_ENCODER); -#define KSCATEGORY_ENCODER DEFINE_GUIDNAMED(KSCATEGORY_ENCODER) - -#define STATIC_KSCATEGORY_MULTIPLEXER \ - 0x7a5de1d3,0x1a1,0x452c,0xb4,0x81,0x4f,0xa2,0xb9,0x62,0x71,0xe8 -DEFINE_GUIDSTRUCT("7A5DE1D3-01A1-452c-B481-4FA2B96271E8",KSCATEGORY_MULTIPLEXER); -#define KSCATEGORY_MULTIPLEXER DEFINE_GUIDNAMED(KSCATEGORY_MULTIPLEXER) - -#ifndef __ENCODER_API_GUIDS__ -#define __ENCODER_API_GUIDS__ - -#define STATIC_ENCAPIPARAM_BITRATE \ - 0x49cc4c43,0xca83,0x4ad4,0xa9,0xaf,0xf3,0x69,0x6a,0xf6,0x66,0xdf -DEFINE_GUIDSTRUCT("49CC4C43-CA83-4ad4-A9AF-F3696AF666DF",ENCAPIPARAM_BITRATE); -#define ENCAPIPARAM_BITRATE DEFINE_GUIDNAMED(ENCAPIPARAM_BITRATE) - -#define STATIC_ENCAPIPARAM_PEAK_BITRATE \ - 0x703f16a9,0x3d48,0x44a1,0xb0,0x77,0x1,0x8d,0xff,0x91,0x5d,0x19 -DEFINE_GUIDSTRUCT("703F16A9-3D48-44a1-B077-018DFF915D19",ENCAPIPARAM_PEAK_BITRATE); -#define ENCAPIPARAM_PEAK_BITRATE DEFINE_GUIDNAMED(ENCAPIPARAM_PEAK_BITRATE) - -#define STATIC_ENCAPIPARAM_BITRATE_MODE \ - 0xee5fb25c,0xc713,0x40d1,0x9d,0x58,0xc0,0xd7,0x24,0x1e,0x25,0xf -DEFINE_GUIDSTRUCT("EE5FB25C-C713-40d1-9D58-C0D7241E250F",ENCAPIPARAM_BITRATE_MODE); -#define ENCAPIPARAM_BITRATE_MODE DEFINE_GUIDNAMED(ENCAPIPARAM_BITRATE_MODE) - -#define STATIC_CODECAPI_CHANGELISTS \ - 0x62b12acf,0xf6b0,0x47d9,0x94,0x56,0x96,0xf2,0x2c,0x4e,0x0b,0x9d -DEFINE_GUIDSTRUCT("62B12ACF-F6B0-47D9-9456-96F22C4E0B9D",CODECAPI_CHANGELISTS); -#define CODECAPI_CHANGELISTS DEFINE_GUIDNAMED(CODECAPI_CHANGELISTS) - -#define STATIC_CODECAPI_VIDEO_ENCODER \ - 0x7112e8e1,0x3d03,0x47ef,0x8e,0x60,0x03,0xf1,0xcf,0x53,0x73,0x01 -DEFINE_GUIDSTRUCT("7112E8E1-3D03-47EF-8E60-03F1CF537301",CODECAPI_VIDEO_ENCODER); -#define CODECAPI_VIDEO_ENCODER DEFINE_GUIDNAMED(CODECAPI_VIDEO_ENCODER) - -#define STATIC_CODECAPI_AUDIO_ENCODER \ - 0xb9d19a3e,0xf897,0x429c,0xbc,0x46,0x81,0x38,0xb7,0x27,0x2b,0x2d -DEFINE_GUIDSTRUCT("B9D19A3E-F897-429C-BC46-8138B7272B2D",CODECAPI_AUDIO_ENCODER); -#define CODECAPI_AUDIO_ENCODER DEFINE_GUIDNAMED(CODECAPI_AUDIO_ENCODER) - -#define STATIC_CODECAPI_SETALLDEFAULTS \ - 0x6c5e6a7c,0xacf8,0x4f55,0xa9,0x99,0x1a,0x62,0x81,0x09,0x05,0x1b -DEFINE_GUIDSTRUCT("6C5E6A7C-ACF8-4F55-A999-1A628109051B",CODECAPI_SETALLDEFAULTS); -#define CODECAPI_SETALLDEFAULTS 
DEFINE_GUIDNAMED(CODECAPI_SETALLDEFAULTS) - -#define STATIC_CODECAPI_ALLSETTINGS \ - 0x6a577e92,0x83e1,0x4113,0xad,0xc2,0x4f,0xce,0xc3,0x2f,0x83,0xa1 -DEFINE_GUIDSTRUCT("6A577E92-83E1-4113-ADC2-4FCEC32F83A1",CODECAPI_ALLSETTINGS); -#define CODECAPI_ALLSETTINGS DEFINE_GUIDNAMED(CODECAPI_ALLSETTINGS) - -#define STATIC_CODECAPI_SUPPORTSEVENTS \ - 0x0581af97,0x7693,0x4dbd,0x9d,0xca,0x3f,0x9e,0xbd,0x65,0x85,0xa1 -DEFINE_GUIDSTRUCT("0581AF97-7693-4DBD-9DCA-3F9EBD6585A1",CODECAPI_SUPPORTSEVENTS); -#define CODECAPI_SUPPORTSEVENTS DEFINE_GUIDNAMED(CODECAPI_SUPPORTSEVENTS) - -#define STATIC_CODECAPI_CURRENTCHANGELIST \ - 0x1cb14e83,0x7d72,0x4657,0x83,0xfd,0x47,0xa2,0xc5,0xb9,0xd1,0x3d -DEFINE_GUIDSTRUCT("1CB14E83-7D72-4657-83FD-47A2C5B9D13D",CODECAPI_CURRENTCHANGELIST); -#define CODECAPI_CURRENTCHANGELIST DEFINE_GUIDNAMED(CODECAPI_CURRENTCHANGELIST) -#endif /* __ENCODER_API_GUIDS__ */ - -#ifndef __ENCODER_API_DEFINES__ -#define __ENCODER_API_DEFINES__ -typedef enum { - ConstantBitRate = 0, - VariableBitRateAverage, - VariableBitRatePeak -} VIDEOENCODER_BITRATE_MODE; -#endif /* __ENCODER_API_DEFINES__ */ - -#define STATIC_KSPROPSETID_Jack\ - 0x4509f757, 0x2d46, 0x4637, 0x8e, 0x62, 0xce, 0x7d, 0xb9, 0x44, 0xf5, 0x7b -DEFINE_GUIDSTRUCT("4509F757-2D46-4637-8E62-CE7DB944F57B", KSPROPSETID_Jack); -#define KSPROPSETID_Jack DEFINE_GUIDNAMED(KSPROPSETID_Jack) - -typedef enum { - KSPROPERTY_JACK_DESCRIPTION = 1, - KSPROPERTY_JACK_DESCRIPTION2, - KSPROPERTY_JACK_SINK_INFO -} KSPROPERTY_JACK; - -typedef enum -{ - eConnTypeUnknown, - eConnType3Point5mm, - eConnTypeQuarter, - eConnTypeAtapiInternal, - eConnTypeRCA, - eConnTypeOptical, - eConnTypeOtherDigital, - eConnTypeOtherAnalog, - eConnTypeMultichannelAnalogDIN, - eConnTypeXlrProfessional, - eConnTypeRJ11Modem, - eConnTypeCombination -} EPcxConnectionType; - -typedef enum -{ - eGeoLocRear = 0x1, - eGeoLocFront, - eGeoLocLeft, - eGeoLocRight, - eGeoLocTop, - eGeoLocBottom, - eGeoLocRearPanel, - eGeoLocRiser, - eGeoLocInsideMobileLid, - eGeoLocDrivebay, - eGeoLocHDMI, - eGeoLocOutsideMobileLid, - eGeoLocATAPI, - eGeoLocReserved5, - eGeoLocReserved6, - EPcxGeoLocation_enum_count -} EPcxGeoLocation; - -typedef enum -{ - eGenLocPrimaryBox = 0, - eGenLocInternal, - eGenLocSeparate, - eGenLocOther, - EPcxGenLocation_enum_count -} EPcxGenLocation; - -typedef enum -{ - ePortConnJack = 0, - ePortConnIntegratedDevice, - ePortConnBothIntegratedAndJack, - ePortConnUnknown -} EPxcPortConnection; - -typedef struct -{ - DWORD ChannelMapping; - COLORREF Color; - EPcxConnectionType ConnectionType; - EPcxGeoLocation GeoLocation; - EPcxGenLocation GenLocation; - EPxcPortConnection PortConnection; - BOOL IsConnected; -} KSJACK_DESCRIPTION, *PKSJACK_DESCRIPTION; - -typedef enum -{ - KSJACK_SINK_CONNECTIONTYPE_HDMI = 0, - KSJACK_SINK_CONNECTIONTYPE_DISPLAYPORT, -} KSJACK_SINK_CONNECTIONTYPE; - -#define MAX_SINK_DESCRIPTION_NAME_LENGTH 32 -typedef struct _tagKSJACK_SINK_INFORMATION -{ - KSJACK_SINK_CONNECTIONTYPE ConnType; - WORD ManufacturerId; - WORD ProductId; - WORD AudioLatency; - BOOL HDCPCapable; - BOOL AICapable; - UCHAR SinkDescriptionLength; - WCHAR SinkDescription[MAX_SINK_DESCRIPTION_NAME_LENGTH]; - LUID PortId; -} KSJACK_SINK_INFORMATION, *PKSJACK_SINK_INFORMATION; - -#define JACKDESC2_PRESENCE_DETECT_CAPABILITY 0x00000001 -#define JACKDESC2_DYNAMIC_FORMAT_CHANGE_CAPABILITY 0x00000002 - -typedef struct _tagKSJACK_DESCRIPTION2 -{ - DWORD DeviceStateInfo; - DWORD JackCapabilities; -} KSJACK_DESCRIPTION2, *PKSJACK_DESCRIPTION2; - -/* Additional structs for Windows Vista 
and later */ -typedef struct _tagKSRTAUDIO_BUFFER_PROPERTY { - KSPROPERTY Property; - PVOID BaseAddress; - ULONG RequestedBufferSize; -} KSRTAUDIO_BUFFER_PROPERTY, *PKSRTAUDIO_BUFFER_PROPERTY; - -typedef struct _tagKSRTAUDIO_BUFFER_PROPERTY_WITH_NOTIFICATION { - KSPROPERTY Property; - PVOID BaseAddress; - ULONG RequestedBufferSize; - ULONG NotificationCount; -} KSRTAUDIO_BUFFER_PROPERTY_WITH_NOTIFICATION, *PKSRTAUDIO_BUFFER_PROPERTY_WITH_NOTIFICATION; - -typedef struct _tagKSRTAUDIO_BUFFER { - PVOID BufferAddress; - ULONG ActualBufferSize; - BOOL CallMemoryBarrier; -} KSRTAUDIO_BUFFER, *PKSRTAUDIO_BUFFER; - -typedef struct _tagKSRTAUDIO_HWLATENCY { - ULONG FifoSize; - ULONG ChipsetDelay; - ULONG CodecDelay; -} KSRTAUDIO_HWLATENCY, *PKSRTAUDIO_HWLATENCY; - -typedef struct _tagKSRTAUDIO_HWREGISTER_PROPERTY { - KSPROPERTY Property; - PVOID BaseAddress; -} KSRTAUDIO_HWREGISTER_PROPERTY, *PKSRTAUDIO_HWREGISTER_PROPERTY; - -typedef struct _tagKSRTAUDIO_HWREGISTER { - PVOID Register; - ULONG Width; - ULONGLONG Numerator; - ULONGLONG Denominator; - ULONG Accuracy; -} KSRTAUDIO_HWREGISTER, *PKSRTAUDIO_HWREGISTER; - -typedef struct _tagKSRTAUDIO_NOTIFICATION_EVENT_PROPERTY { - KSPROPERTY Property; - HANDLE NotificationEvent; -} KSRTAUDIO_NOTIFICATION_EVENT_PROPERTY, *PKSRTAUDIO_NOTIFICATION_EVENT_PROPERTY; - - -#endif /* _KSMEDIA_ */ - diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/aiohttp/web_protocol.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/aiohttp/web_protocol.py deleted file mode 100644 index 10a960801880ea378b2d41fb7482626e8aabe688..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/aiohttp/web_protocol.py +++ /dev/null @@ -1,679 +0,0 @@ -import asyncio -import asyncio.streams -import traceback -import warnings -from collections import deque -from contextlib import suppress -from html import escape as html_escape -from http import HTTPStatus -from logging import Logger -from typing import ( - TYPE_CHECKING, - Any, - Awaitable, - Callable, - Deque, - Optional, - Sequence, - Tuple, - Type, - Union, - cast, -) - -import attr -import yarl - -from .abc import AbstractAccessLogger, AbstractStreamWriter -from .base_protocol import BaseProtocol -from .helpers import ceil_timeout -from .http import ( - HttpProcessingError, - HttpRequestParser, - HttpVersion10, - RawRequestMessage, - StreamWriter, -) -from .log import access_logger, server_logger -from .streams import EMPTY_PAYLOAD, StreamReader -from .tcp_helpers import tcp_keepalive -from .web_exceptions import HTTPException -from .web_log import AccessLogger -from .web_request import BaseRequest -from .web_response import Response, StreamResponse - -__all__ = ("RequestHandler", "RequestPayloadError", "PayloadAccessError") - -if TYPE_CHECKING: # pragma: no cover - from .web_server import Server - - -_RequestFactory = Callable[ - [ - RawRequestMessage, - StreamReader, - "RequestHandler", - AbstractStreamWriter, - "asyncio.Task[None]", - ], - BaseRequest, -] - -_RequestHandler = Callable[[BaseRequest], Awaitable[StreamResponse]] - -ERROR = RawRequestMessage( - "UNKNOWN", - "/", - HttpVersion10, - {}, # type: ignore[arg-type] - {}, # type: ignore[arg-type] - True, - None, - False, - False, - yarl.URL("/"), -) - - -class RequestPayloadError(Exception): - """Payload parsing error.""" - - -class PayloadAccessError(Exception): - """Payload was accessed after response was sent.""" - - -@attr.s(auto_attribs=True, frozen=True, slots=True) -class 
_ErrInfo: - status: int - exc: BaseException - message: str - - -_MsgType = Tuple[Union[RawRequestMessage, _ErrInfo], StreamReader] - - -class RequestHandler(BaseProtocol): - """HTTP protocol implementation. - - RequestHandler handles incoming HTTP request. It reads request line, - request headers and request payload and calls handle_request() method. - By default it always returns with 404 response. - - RequestHandler handles errors in incoming request, like bad - status line, bad headers or incomplete payload. If any error occurs, - connection gets closed. - - keepalive_timeout -- number of seconds before closing - keep-alive connection - - tcp_keepalive -- TCP keep-alive is on, default is on - - debug -- enable debug mode - - logger -- custom logger object - - access_log_class -- custom class for access_logger - - access_log -- custom logging object - - access_log_format -- access log format string - - loop -- Optional event loop - - max_line_size -- Optional maximum header line size - - max_field_size -- Optional maximum header field size - - max_headers -- Optional maximum header size - - """ - - KEEPALIVE_RESCHEDULE_DELAY = 1 - - __slots__ = ( - "_request_count", - "_keepalive", - "_manager", - "_request_handler", - "_request_factory", - "_tcp_keepalive", - "_keepalive_time", - "_keepalive_handle", - "_keepalive_timeout", - "_lingering_time", - "_messages", - "_message_tail", - "_waiter", - "_task_handler", - "_upgrade", - "_payload_parser", - "_request_parser", - "_reading_paused", - "logger", - "debug", - "access_log", - "access_logger", - "_close", - "_force_close", - "_current_request", - ) - - def __init__( - self, - manager: "Server", - *, - loop: asyncio.AbstractEventLoop, - keepalive_timeout: float = 75.0, # NGINX default is 75 secs - tcp_keepalive: bool = True, - logger: Logger = server_logger, - access_log_class: Type[AbstractAccessLogger] = AccessLogger, - access_log: Logger = access_logger, - access_log_format: str = AccessLogger.LOG_FORMAT, - debug: bool = False, - max_line_size: int = 8190, - max_headers: int = 32768, - max_field_size: int = 8190, - lingering_time: float = 10.0, - read_bufsize: int = 2**16, - auto_decompress: bool = True, - ): - super().__init__(loop) - - self._request_count = 0 - self._keepalive = False - self._current_request: Optional[BaseRequest] = None - self._manager: Optional[Server] = manager - self._request_handler: Optional[_RequestHandler] = manager.request_handler - self._request_factory: Optional[_RequestFactory] = manager.request_factory - - self._tcp_keepalive = tcp_keepalive - # placeholder to be replaced on keepalive timeout setup - self._keepalive_time = 0.0 - self._keepalive_handle: Optional[asyncio.Handle] = None - self._keepalive_timeout = keepalive_timeout - self._lingering_time = float(lingering_time) - - self._messages: Deque[_MsgType] = deque() - self._message_tail = b"" - - self._waiter: Optional[asyncio.Future[None]] = None - self._task_handler: Optional[asyncio.Task[None]] = None - - self._upgrade = False - self._payload_parser: Any = None - self._request_parser: Optional[HttpRequestParser] = HttpRequestParser( - self, - loop, - read_bufsize, - max_line_size=max_line_size, - max_field_size=max_field_size, - max_headers=max_headers, - payload_exception=RequestPayloadError, - auto_decompress=auto_decompress, - ) - - self.logger = logger - self.debug = debug - self.access_log = access_log - if access_log: - self.access_logger: Optional[AbstractAccessLogger] = access_log_class( - access_log, access_log_format - ) - else: - 
self.access_logger = None - - self._close = False - self._force_close = False - - def __repr__(self) -> str: - return "<{} {}>".format( - self.__class__.__name__, - "connected" if self.transport is not None else "disconnected", - ) - - @property - def keepalive_timeout(self) -> float: - return self._keepalive_timeout - - async def shutdown(self, timeout: Optional[float] = 15.0) -> None: - """Do worker process exit preparations. - - We need to clean up everything and stop accepting requests. - It is especially important for keep-alive connections. - """ - self._force_close = True - - if self._keepalive_handle is not None: - self._keepalive_handle.cancel() - - if self._waiter: - self._waiter.cancel() - - # wait for handlers - with suppress(asyncio.CancelledError, asyncio.TimeoutError): - async with ceil_timeout(timeout): - if self._current_request is not None: - self._current_request._cancel(asyncio.CancelledError()) - - if self._task_handler is not None and not self._task_handler.done(): - await self._task_handler - - # force-close non-idle handler - if self._task_handler is not None: - self._task_handler.cancel() - - if self.transport is not None: - self.transport.close() - self.transport = None - - def connection_made(self, transport: asyncio.BaseTransport) -> None: - super().connection_made(transport) - - real_transport = cast(asyncio.Transport, transport) - if self._tcp_keepalive: - tcp_keepalive(real_transport) - - self._task_handler = self._loop.create_task(self.start()) - assert self._manager is not None - self._manager.connection_made(self, real_transport) - - def connection_lost(self, exc: Optional[BaseException]) -> None: - if self._manager is None: - return - self._manager.connection_lost(self, exc) - - super().connection_lost(exc) - - self._manager = None - self._force_close = True - self._request_factory = None - self._request_handler = None - self._request_parser = None - - if self._keepalive_handle is not None: - self._keepalive_handle.cancel() - - if self._current_request is not None: - if exc is None: - exc = ConnectionResetError("Connection lost") - self._current_request._cancel(exc) - - if self._waiter is not None: - self._waiter.cancel() - - self._task_handler = None - - if self._payload_parser is not None: - self._payload_parser.feed_eof() - self._payload_parser = None - - def set_parser(self, parser: Any) -> None: - # Actual type is WebReader - assert self._payload_parser is None - - self._payload_parser = parser - - if self._message_tail: - self._payload_parser.feed_data(self._message_tail) - self._message_tail = b"" - - def eof_received(self) -> None: - pass - - def data_received(self, data: bytes) -> None: - if self._force_close or self._close: - return - # parse http messages - messages: Sequence[_MsgType] - if self._payload_parser is None and not self._upgrade: - assert self._request_parser is not None - try: - messages, upgraded, tail = self._request_parser.feed_data(data) - except HttpProcessingError as exc: - messages = [ - (_ErrInfo(status=400, exc=exc, message=exc.message), EMPTY_PAYLOAD) - ] - upgraded = False - tail = b"" - - for msg, payload in messages or (): - self._request_count += 1 - self._messages.append((msg, payload)) - - waiter = self._waiter - if messages and waiter is not None and not waiter.done(): - # don't set result twice - waiter.set_result(None) - - self._upgrade = upgraded - if upgraded and tail: - self._message_tail = tail - - # no parser, just store - elif self._payload_parser is None and self._upgrade and data: - self._message_tail += 
data - - # feed payload - elif data: - eof, tail = self._payload_parser.feed_data(data) - if eof: - self.close() - - def keep_alive(self, val: bool) -> None: - """Set keep-alive connection mode. - - :param bool val: new state. - """ - self._keepalive = val - if self._keepalive_handle: - self._keepalive_handle.cancel() - self._keepalive_handle = None - - def close(self) -> None: - """Close connection. - - Stop accepting new pipelining messages and close - connection when handlers done processing messages. - """ - self._close = True - if self._waiter: - self._waiter.cancel() - - def force_close(self) -> None: - """Forcefully close connection.""" - self._force_close = True - if self._waiter: - self._waiter.cancel() - if self.transport is not None: - self.transport.close() - self.transport = None - - def log_access( - self, request: BaseRequest, response: StreamResponse, time: float - ) -> None: - if self.access_logger is not None: - self.access_logger.log(request, response, self._loop.time() - time) - - def log_debug(self, *args: Any, **kw: Any) -> None: - if self.debug: - self.logger.debug(*args, **kw) - - def log_exception(self, *args: Any, **kw: Any) -> None: - self.logger.exception(*args, **kw) - - def _process_keepalive(self) -> None: - if self._force_close or not self._keepalive: - return - - next = self._keepalive_time + self._keepalive_timeout - - # handler in idle state - if self._waiter: - if self._loop.time() > next: - self.force_close() - return - - # not all request handlers are done, - # reschedule itself to next second - self._keepalive_handle = self._loop.call_later( - self.KEEPALIVE_RESCHEDULE_DELAY, self._process_keepalive - ) - - async def _handle_request( - self, - request: BaseRequest, - start_time: float, - request_handler: Callable[[BaseRequest], Awaitable[StreamResponse]], - ) -> Tuple[StreamResponse, bool]: - assert self._request_handler is not None - try: - try: - self._current_request = request - resp = await request_handler(request) - finally: - self._current_request = None - except HTTPException as exc: - resp = exc - reset = await self.finish_response(request, resp, start_time) - except asyncio.CancelledError: - raise - except asyncio.TimeoutError as exc: - self.log_debug("Request handler timed out.", exc_info=exc) - resp = self.handle_error(request, 504) - reset = await self.finish_response(request, resp, start_time) - except Exception as exc: - resp = self.handle_error(request, 500, exc) - reset = await self.finish_response(request, resp, start_time) - else: - # Deprecation warning (See #2415) - if getattr(resp, "__http_exception__", False): - warnings.warn( - "returning HTTPException object is deprecated " - "(#2415) and will be removed, " - "please raise the exception instead", - DeprecationWarning, - ) - - reset = await self.finish_response(request, resp, start_time) - - return resp, reset - - async def start(self) -> None: - """Process incoming request. - - It reads request line, request headers and request payload, then - calls handle_request() method. Subclass has to override - handle_request(). start() handles various exceptions in request - or response handling. Connection is being closed always unless - keep_alive(True) specified. 
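In practice a RequestHandler is created per connection by aiohttp's low-level web.Server, which is handed to the event loop as a protocol factory; user code normally supplies only the request handler coroutine. A rough sketch using only the public low-level API (host and port values are arbitrary):

import asyncio
from aiohttp import web

async def handle(request: web.BaseRequest) -> web.StreamResponse:
    # Each parsed request is delivered here; RequestHandler drives the socket I/O.
    return web.Response(text="OK")

async def main() -> None:
    server = web.Server(handle)              # protocol factory producing RequestHandler objects
    runner = web.ServerRunner(server)
    await runner.setup()
    site = web.TCPSite(runner, "127.0.0.1", 8080)
    await site.start()
    await asyncio.Event().wait()             # keep serving until cancelled

# asyncio.run(main())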
- """ - loop = self._loop - handler = self._task_handler - assert handler is not None - manager = self._manager - assert manager is not None - keepalive_timeout = self._keepalive_timeout - resp = None - assert self._request_factory is not None - assert self._request_handler is not None - - while not self._force_close: - if not self._messages: - try: - # wait for next request - self._waiter = loop.create_future() - await self._waiter - except asyncio.CancelledError: - break - finally: - self._waiter = None - - message, payload = self._messages.popleft() - - start = loop.time() - - manager.requests_count += 1 - writer = StreamWriter(self, loop) - if isinstance(message, _ErrInfo): - # make request_factory work - request_handler = self._make_error_handler(message) - message = ERROR - else: - request_handler = self._request_handler - - request = self._request_factory(message, payload, self, writer, handler) - try: - # a new task is used for copy context vars (#3406) - task = self._loop.create_task( - self._handle_request(request, start, request_handler) - ) - try: - resp, reset = await task - except (asyncio.CancelledError, ConnectionError): - self.log_debug("Ignored premature client disconnection") - break - - # Drop the processed task from asyncio.Task.all_tasks() early - del task - if reset: - self.log_debug("Ignored premature client disconnection 2") - break - - # notify server about keep-alive - self._keepalive = bool(resp.keep_alive) - - # check payload - if not payload.is_eof(): - lingering_time = self._lingering_time - if not self._force_close and lingering_time: - self.log_debug( - "Start lingering close timer for %s sec.", lingering_time - ) - - now = loop.time() - end_t = now + lingering_time - - with suppress(asyncio.TimeoutError, asyncio.CancelledError): - while not payload.is_eof() and now < end_t: - async with ceil_timeout(end_t - now): - # read and ignore - await payload.readany() - now = loop.time() - - # if payload still uncompleted - if not payload.is_eof() and not self._force_close: - self.log_debug("Uncompleted request.") - self.close() - - payload.set_exception(PayloadAccessError()) - - except asyncio.CancelledError: - self.log_debug("Ignored premature client disconnection ") - break - except RuntimeError as exc: - if self.debug: - self.log_exception("Unhandled runtime exception", exc_info=exc) - self.force_close() - except Exception as exc: - self.log_exception("Unhandled exception", exc_info=exc) - self.force_close() - finally: - if self.transport is None and resp is not None: - self.log_debug("Ignored premature client disconnection.") - elif not self._force_close: - if self._keepalive and not self._close: - # start keep-alive timer - if keepalive_timeout is not None: - now = self._loop.time() - self._keepalive_time = now - if self._keepalive_handle is None: - self._keepalive_handle = loop.call_at( - now + keepalive_timeout, self._process_keepalive - ) - else: - break - - # remove handler, close transport if no handlers left - if not self._force_close: - self._task_handler = None - if self.transport is not None: - self.transport.close() - - async def finish_response( - self, request: BaseRequest, resp: StreamResponse, start_time: float - ) -> bool: - """Prepare the response and write_eof, then log access. - - This has to - be called within the context of any exception so the access logger - can get exception information. Returns True if the client disconnects - prematurely. 
- """ - if self._request_parser is not None: - self._request_parser.set_upgraded(False) - self._upgrade = False - if self._message_tail: - self._request_parser.feed_data(self._message_tail) - self._message_tail = b"" - try: - prepare_meth = resp.prepare - except AttributeError: - if resp is None: - raise RuntimeError("Missing return " "statement on request handler") - else: - raise RuntimeError( - "Web-handler should return " - "a response instance, " - "got {!r}".format(resp) - ) - try: - await prepare_meth(request) - await resp.write_eof() - except ConnectionError: - self.log_access(request, resp, start_time) - return True - else: - self.log_access(request, resp, start_time) - return False - - def handle_error( - self, - request: BaseRequest, - status: int = 500, - exc: Optional[BaseException] = None, - message: Optional[str] = None, - ) -> StreamResponse: - """Handle errors. - - Returns HTTP response with specific status code. Logs additional - information. It always closes current connection. - """ - self.log_exception("Error handling request", exc_info=exc) - - # some data already got sent, connection is broken - if request.writer.output_size > 0: - raise ConnectionError( - "Response is sent already, cannot send another response " - "with the error message" - ) - - ct = "text/plain" - if status == HTTPStatus.INTERNAL_SERVER_ERROR: - title = "{0.value} {0.phrase}".format(HTTPStatus.INTERNAL_SERVER_ERROR) - msg = HTTPStatus.INTERNAL_SERVER_ERROR.description - tb = None - if self.debug: - with suppress(Exception): - tb = traceback.format_exc() - - if "text/html" in request.headers.get("Accept", ""): - if tb: - tb = html_escape(tb) - msg = f"

<h2>Traceback:</h2>\n<pre>{tb}</pre>" - message = ( - "<html><head>" - "<title>{title}</title>" - "</head><body>\n<h1>{title}</h1>
    " - "\n{msg}\n\n" - ).format(title=title, msg=msg) - ct = "text/html" - else: - if tb: - msg = tb - message = title + "\n\n" + msg - - resp = Response(status=status, text=message, content_type=ct) - resp.force_close() - - return resp - - def _make_error_handler( - self, err_info: _ErrInfo - ) -> Callable[[BaseRequest], Awaitable[StreamResponse]]: - async def handler(request: BaseRequest) -> StreamResponse: - return self.handle_error( - request, err_info.status, err_info.exc, err_info.message - ) - - return handler diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fsspec/registry.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fsspec/registry.py deleted file mode 100644 index d1614f13097878a4cc6975c565507eccf51ad9d7..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/fsspec/registry.py +++ /dev/null @@ -1,298 +0,0 @@ -from __future__ import annotations - -import importlib -import types -import warnings - -__all__ = ["registry", "get_filesystem_class", "default"] - -# internal, mutable -_registry: dict[str, type] = {} - -# external, immutable -registry = types.MappingProxyType(_registry) -default = "file" - - -def register_implementation(name, cls, clobber=False, errtxt=None): - """Add implementation class to the registry - - Parameters - ---------- - name: str - Protocol name to associate with the class - cls: class or str - if a class: fsspec-compliant implementation class (normally inherits from - ``fsspec.AbstractFileSystem``, gets added straight to the registry. If a - str, the full path to an implementation class like package.module.class, - which gets added to known_implementations, - so the import is deferred until the filesystem is actually used. - clobber: bool (optional) - Whether to overwrite a protocol with the same name; if False, will raise - instead. - errtxt: str (optional) - If given, then a failure to import the given class will result in this - text being given. - """ - if isinstance(cls, str): - if name in known_implementations and clobber is False: - if cls != known_implementations[name]["class"]: - raise ValueError( - f"Name ({name}) already in the known_implementations and clobber " - f"is False" - ) - else: - known_implementations[name] = { - "class": cls, - "err": errtxt or f"{cls} import failed for protocol {name}", - } - - else: - if name in registry and clobber is False: - if _registry[name] is not cls: - raise ValueError( - f"Name ({name}) already in the registry and clobber is False" - ) - else: - _registry[name] = cls - - -# protocols mapped to the class which implements them. 
This dict can -# updated with register_implementation -known_implementations = { - "file": {"class": "fsspec.implementations.local.LocalFileSystem"}, - "local": {"class": "fsspec.implementations.local.LocalFileSystem"}, - "memory": {"class": "fsspec.implementations.memory.MemoryFileSystem"}, - "dropbox": { - "class": "dropboxdrivefs.DropboxDriveFileSystem", - "err": ( - 'DropboxFileSystem requires "dropboxdrivefs",' - '"requests" and "dropbox" to be installed' - ), - }, - "http": { - "class": "fsspec.implementations.http.HTTPFileSystem", - "err": 'HTTPFileSystem requires "requests" and "aiohttp" to be installed', - }, - "https": { - "class": "fsspec.implementations.http.HTTPFileSystem", - "err": 'HTTPFileSystem requires "requests" and "aiohttp" to be installed', - }, - "zip": {"class": "fsspec.implementations.zip.ZipFileSystem"}, - "tar": {"class": "fsspec.implementations.tar.TarFileSystem"}, - "gcs": { - "class": "gcsfs.GCSFileSystem", - "err": "Please install gcsfs to access Google Storage", - }, - "gs": { - "class": "gcsfs.GCSFileSystem", - "err": "Please install gcsfs to access Google Storage", - }, - "gdrive": { - "class": "gdrivefs.GoogleDriveFileSystem", - "err": "Please install gdrivefs for access to Google Drive", - }, - "sftp": { - "class": "fsspec.implementations.sftp.SFTPFileSystem", - "err": 'SFTPFileSystem requires "paramiko" to be installed', - }, - "ssh": { - "class": "fsspec.implementations.sftp.SFTPFileSystem", - "err": 'SFTPFileSystem requires "paramiko" to be installed', - }, - "ftp": {"class": "fsspec.implementations.ftp.FTPFileSystem"}, - "hdfs": { - "class": "fsspec.implementations.arrow.HadoopFileSystem", - "err": "pyarrow and local java libraries required for HDFS", - }, - "arrow_hdfs": { - "class": "fsspec.implementations.arrow.HadoopFileSystem", - "err": "pyarrow and local java libraries required for HDFS", - }, - "webhdfs": { - "class": "fsspec.implementations.webhdfs.WebHDFS", - "err": 'webHDFS access requires "requests" to be installed', - }, - "s3": {"class": "s3fs.S3FileSystem", "err": "Install s3fs to access S3"}, - "s3a": {"class": "s3fs.S3FileSystem", "err": "Install s3fs to access S3"}, - "wandb": {"class": "wandbfs.WandbFS", "err": "Install wandbfs to access wandb"}, - "oci": { - "class": "ocifs.OCIFileSystem", - "err": "Install ocifs to access OCI Object Storage", - }, - "ocilake": { - "class": "ocifs.OCIFileSystem", - "err": "Install ocifs to access OCI Data Lake", - }, - "asynclocal": { - "class": "morefs.asyn_local.AsyncLocalFileSystem", - "err": "Install 'morefs[asynclocalfs]' to use AsyncLocalFileSystem", - }, - "adl": { - "class": "adlfs.AzureDatalakeFileSystem", - "err": "Install adlfs to access Azure Datalake Gen1", - }, - "abfs": { - "class": "adlfs.AzureBlobFileSystem", - "err": "Install adlfs to access Azure Datalake Gen2 and Azure Blob Storage", - }, - "az": { - "class": "adlfs.AzureBlobFileSystem", - "err": "Install adlfs to access Azure Datalake Gen2 and Azure Blob Storage", - }, - "cached": {"class": "fsspec.implementations.cached.CachingFileSystem"}, - "blockcache": {"class": "fsspec.implementations.cached.CachingFileSystem"}, - "filecache": {"class": "fsspec.implementations.cached.WholeFileCacheFileSystem"}, - "simplecache": {"class": "fsspec.implementations.cached.SimpleCacheFileSystem"}, - "dask": { - "class": "fsspec.implementations.dask.DaskWorkerFileSystem", - "err": "Install dask distributed to access worker file system", - }, - "dbfs": { - "class": "fsspec.implementations.dbfs.DatabricksFileSystem", - "err": "Install the 
requests package to use the DatabricksFileSystem", - }, - "github": { - "class": "fsspec.implementations.github.GithubFileSystem", - "err": "Install the requests package to use the github FS", - }, - "git": { - "class": "fsspec.implementations.git.GitFileSystem", - "err": "Install pygit2 to browse local git repos", - }, - "smb": { - "class": "fsspec.implementations.smb.SMBFileSystem", - "err": 'SMB requires "smbprotocol" or "smbprotocol[kerberos]" installed', - }, - "jupyter": { - "class": "fsspec.implementations.jupyter.JupyterFileSystem", - "err": "Jupyter FS requires requests to be installed", - }, - "jlab": { - "class": "fsspec.implementations.jupyter.JupyterFileSystem", - "err": "Jupyter FS requires requests to be installed", - }, - "libarchive": { - "class": "fsspec.implementations.libarchive.LibArchiveFileSystem", - "err": "LibArchive requires to be installed", - }, - "reference": {"class": "fsspec.implementations.reference.ReferenceFileSystem"}, - "generic": {"class": "fsspec.generic.GenericFileSystem"}, - "oss": { - "class": "ossfs.OSSFileSystem", - "err": "Install ossfs to access Alibaba Object Storage System", - }, - "webdav": { - "class": "webdav4.fsspec.WebdavFileSystem", - "err": "Install webdav4 to access WebDAV", - }, - "dvc": { - "class": "dvc.api.DVCFileSystem", - "err": "Install dvc to access DVCFileSystem", - }, - "hf": { - "class": "huggingface_hub.HfFileSystem", - "err": "Install huggingface_hub to access HfFileSystem", - }, - "root": { - "class": "fsspec_xrootd.XRootDFileSystem", - "err": "Install fsspec-xrootd to access xrootd storage system." - + " Note: 'root' is the protocol name for xrootd storage systems," - + " not referring to root directories", - }, - "dir": {"class": "fsspec.implementations.dirfs.DirFileSystem"}, - "box": { - "class": "boxfs.BoxFileSystem", - "err": "Please install boxfs to access BoxFileSystem", - }, - "lakefs": { - "class": "lakefs_spec.LakeFSFileSystem", - "err": "Please install lakefs-spec to access LakeFSFileSystem", - }, -} - - -def get_filesystem_class(protocol): - """Fetch named protocol implementation from the registry - - The dict ``known_implementations`` maps protocol names to the locations - of classes implementing the corresponding file-system. When used for the - first time, appropriate imports will happen and the class will be placed in - the registry. All subsequent calls will fetch directly from the registry. - - Some protocol implementations require additional dependencies, and so the - import may fail. In this case, the string in the "err" field of the - ``known_implementations`` will be given as the error message. - """ - if not protocol: - protocol = default - - if protocol not in registry: - if protocol not in known_implementations: - raise ValueError(f"Protocol not known: {protocol}") - bit = known_implementations[protocol] - try: - register_implementation(protocol, _import_class(bit["class"])) - except ImportError as e: - raise ImportError(bit["err"]) from e - cls = registry[protocol] - if getattr(cls, "protocol", None) in ("abstract", None): - cls.protocol = protocol - - return cls - - -s3_msg = """Your installed version of s3fs is very old and known to cause -severe performance issues, see also https://github.com/dask/dask/issues/10276 - -To fix, you should specify a lower version bound on s3fs, or -update the current installation. 
-""" - - -def _import_class(cls, minv=None): - """Take a string FQP and return the imported class or identifier - - clas is of the form "package.module.klass" or "package.module:subobject.klass" - """ - if ":" in cls: - mod, name = cls.rsplit(":", 1) - s3 = mod == "s3fs" - mod = importlib.import_module(mod) - if s3 and mod.__version__.split(".") < ["0", "5"]: - warnings.warn(s3_msg) - for part in name.split("."): - mod = getattr(mod, part) - return mod - else: - mod, name = cls.rsplit(".", 1) - s3 = mod == "s3fs" - mod = importlib.import_module(mod) - if s3 and mod.__version__.split(".") < ["0", "5"]: - warnings.warn(s3_msg) - return getattr(mod, name) - - -def filesystem(protocol, **storage_options): - """Instantiate filesystems for given protocol and arguments - - ``storage_options`` are specific to the protocol being chosen, and are - passed directly to the class. - """ - if protocol == "arrow_hdfs": - warnings.warn( - "The 'arrow_hdfs' protocol has been deprecated and will be " - "removed in the future. Specify it as 'hdfs'.", - DeprecationWarning, - ) - - cls = get_filesystem_class(protocol) - return cls(**storage_options) - - -def available_protocols(): - """Return a list of the implemented protocols. - - Note that any given protocol may require extra packages to be importable. - """ - return list(known_implementations) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/__init__.py deleted file mode 100644 index 7a0ac3b65e7f2fcd73380ff49413885eb58e9267..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/components/__init__.py +++ /dev/null @@ -1,111 +0,0 @@ -from gradio.components.annotated_image import AnnotatedImage -from gradio.components.audio import Audio -from gradio.components.bar_plot import BarPlot -from gradio.components.base import ( - Component, - FormComponent, - StreamingInput, - StreamingOutput, - _Keywords, - component, - get_component_instance, -) -from gradio.components.button import Button -from gradio.components.chatbot import Chatbot -from gradio.components.checkbox import Checkbox -from gradio.components.checkboxgroup import CheckboxGroup -from gradio.components.clear_button import ClearButton -from gradio.components.code import Code -from gradio.components.color_picker import ColorPicker -from gradio.components.dataframe import Dataframe -from gradio.components.dataset import Dataset -from gradio.components.dropdown import Dropdown -from gradio.components.duplicate_button import DuplicateButton -from gradio.components.fallback import Fallback -from gradio.components.file import File -from gradio.components.file_explorer import FileExplorer -from gradio.components.gallery import Gallery -from gradio.components.highlighted_text import HighlightedText -from gradio.components.html import HTML -from gradio.components.image import Image -from gradio.components.json_component import JSON -from gradio.components.label import Label -from gradio.components.line_plot import LinePlot -from gradio.components.login_button import LoginButton -from gradio.components.logout_button import LogoutButton -from gradio.components.markdown import Markdown -from gradio.components.model3d import Model3D -from gradio.components.number import Number -from gradio.components.plot import Plot -from gradio.components.radio import Radio -from gradio.components.scatter_plot import ScatterPlot 
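# ---------------------------------------------------------------------------
# Illustrative sketch: how the registry helpers defined in the fsspec/registry.py
# chunk above are typically used. This assumes an installed fsspec exposing the
# same functions; the "memory" protocol is chosen because it requires no
# optional dependencies.
from fsspec.registry import available_protocols, filesystem, get_filesystem_class

# available_protocols() lists every key of known_implementations, even when the
# backing package (s3fs, gcsfs, adlfs, ...) is not installed.
assert "memory" in available_protocols()

# get_filesystem_class() imports the implementation lazily and caches it in the
# registry; unknown protocols raise ValueError, and missing dependencies raise
# ImportError carrying the "err" text from known_implementations.
MemoryFileSystem = get_filesystem_class("memory")

# filesystem() is the convenience wrapper: resolve the class, then instantiate
# it with protocol-specific storage_options.
fs = filesystem("memory")
fs.pipe("/demo/hello.txt", b"hello fsspec")        # write bytes
assert fs.cat("/demo/hello.txt") == b"hello fsspec"
# ---------------------------------------------------------------------------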
-from gradio.components.slider import Slider -from gradio.components.state import State -from gradio.components.textbox import Textbox -from gradio.components.upload_button import UploadButton -from gradio.components.video import Video -from gradio.layouts import Form - -Text = Textbox -DataFrame = Dataframe -Highlightedtext = HighlightedText -Annotatedimage = AnnotatedImage -Highlight = HighlightedText -Checkboxgroup = CheckboxGroup -Json = JSON - -__all__ = [ - "Audio", - "BarPlot", - "Button", - "Chatbot", - "ClearButton", - "Component", - "component", - "get_component_instance", - "_Keywords", - "Checkbox", - "CheckboxGroup", - "Code", - "ColorPicker", - "Dataframe", - "DataFrame", - "Dataset", - "DuplicateButton", - "Fallback", - "Form", - "FormComponent", - "Gallery", - "HTML", - "FileExplorer", - "Image", - "JSON", - "Json", - "Label", - "LinePlot", - "LoginButton", - "LogoutButton", - "Markdown", - "Textbox", - "Dropdown", - "Model3D", - "File", - "HighlightedText", - "AnnotatedImage", - "CheckboxGroup", - "Text", - "Highlightedtext", - "Annotatedimage", - "Highlight", - "Checkboxgroup", - "Number", - "Plot", - "Radio", - "ScatterPlot", - "Slider", - "State", - "UploadButton", - "Video", - "StreamingInput", - "StreamingOutput", -] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-23a8b23b.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-23a8b23b.css deleted file mode 100644 index ffc4aee2723b49fbc48ce76fc17f6fe0b75f1ff3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Example-23a8b23b.css +++ /dev/null @@ -1 +0,0 @@ -.overlay.svelte-1wkm2e0{position:absolute;background-color:#0006;width:100%;height:100%}.hidden.svelte-1wkm2e0{display:none}.load-wrap.svelte-1wkm2e0{display:flex;justify-content:center;align-items:center;height:100%}.loader.svelte-1wkm2e0{display:flex;position:relative;background-color:var(--border-color-accent-subdued);animation:svelte-1wkm2e0-shadowPulse 2s linear infinite;box-shadow:-24px 0 var(--border-color-accent-subdued),24px 0 var(--border-color-accent-subdued);margin:var(--spacing-md);border-radius:50%;width:10px;height:10px;scale:.5}@keyframes svelte-1wkm2e0-shadowPulse{33%{box-shadow:-24px 0 var(--border-color-accent-subdued),24px 0 #fff;background:#fff}66%{box-shadow:-24px 0 #fff,24px 0 #fff;background:var(--border-color-accent-subdued)}to{box-shadow:-24px 0 #fff,24px 0 var(--border-color-accent-subdued);background:#fff}}video.svelte-1wkm2e0{position:inherit;background-color:#000;width:var(--size-full);height:var(--size-full);object-fit:contain;border-radius:var(--radius-xl)}.container.svelte-1jmx6y1{flex:none;border:2px solid var(--border-color-primary);border-radius:var(--radius-lg);max-width:none}.container.svelte-1jmx6y1:hover,.container.selected.svelte-1jmx6y1{border-color:var(--border-color-accent)}.container.table.svelte-1jmx6y1{margin:0 auto;width:var(--size-20);height:var(--size-20);object-fit:cover}.container.gallery.svelte-1jmx6y1{height:var(--size-20);max-height:var(--size-20);object-fit:cover} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-76c3ee3f.css b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-76c3ee3f.css deleted file mode 100644 index 8853167b33fc5683d52480c72c2356484cc74f83..0000000000000000000000000000000000000000 --- 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/cdn/assets/Index-76c3ee3f.css +++ /dev/null @@ -1 +0,0 @@ -label.svelte-pjtc3.svelte-pjtc3:not(.container),label.svelte-pjtc3:not(.container)>input.svelte-pjtc3{height:100%;border:none}.container.svelte-pjtc3>input.svelte-pjtc3{border:var(--input-border-width) solid var(--input-border-color);border-radius:var(--input-radius)}input[type=number].svelte-pjtc3.svelte-pjtc3{display:block;position:relative;outline:none!important;box-shadow:var(--input-shadow);background:var(--input-background-fill);padding:var(--input-padding);width:100%;color:var(--body-text-color);font-size:var(--input-text-size);line-height:var(--line-sm)}input.svelte-pjtc3.svelte-pjtc3:disabled{-webkit-text-fill-color:var(--body-text-color);-webkit-opacity:1;opacity:1}input.svelte-pjtc3.svelte-pjtc3:focus{box-shadow:var(--input-shadow-focus);border-color:var(--input-border-color-focus)}input.svelte-pjtc3.svelte-pjtc3::placeholder{color:var(--input-placeholder-color)}input.svelte-pjtc3.svelte-pjtc3:out-of-range{border:var(--input-border-width) solid var(--error-border-color)} diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-5fd0a2c9.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-5fd0a2c9.js deleted file mode 100644 index 21d7f07be4fc4a2f849bbbb3f461846ee4175af1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/index-5fd0a2c9.js +++ /dev/null @@ -1,2 +0,0 @@ -import{E as u,L as v}from"./index-b5ab13e3.js";import{s as k,t,h as S,L as w,i as z,w as x,f as R,c as U,e as _,I as T,x as V}from"./Index-9bf8add7.js";import"./index-50ad4c77.js";import"./svelte/svelte.js";import"./Button-8eeccca1.js";import"./Index-c74a8b7c.js";import"./Copy-1b5c0932.js";import"./Download-696bd40c.js";import"./BlockLabel-e3970ebb.js";import"./Empty-eeaba2d1.js";import"./Example-e03fb3b4.js";const Y=94,g=1,C=95,Z=96,f=2,$=[9,10,11,12,13,32,133,160,5760,8192,8193,8194,8195,8196,8197,8198,8199,8200,8201,8202,8232,8233,8239,8287,12288],G=58,N=40,X=95,q=91,c=45,E=46,j=35,D=37;function p(e){return e>=65&&e<=90||e>=97&&e<=122||e>=161}function I(e){return e>=48&&e<=57}const B=new u((e,o)=>{for(let r=!1,a=0,O=0;;O++){let{next:l}=e;if(p(l)||l==c||l==X||r&&I(l))!r&&(l!=c||O>0)&&(r=!0),a===O&&l==c&&a++,e.advance();else{r&&e.acceptToken(l==N?C:a==2&&o.canShift(f)?f:Z);break}}}),A=new u(e=>{if($.includes(e.peek(-1))){let{next:o}=e;(p(o)||o==X||o==j||o==E||o==q||o==G||o==c)&&e.acceptToken(Y)}}),F=new u(e=>{if(!$.includes(e.peek(-1))){let{next:o}=e;if(o==D&&(e.advance(),e.acceptToken(g)),p(o)){do e.advance();while(p(e.next));e.acceptToken(g)}}}),L=k({"AtKeyword import charset namespace keyframes media supports":t.definitionKeyword,"from to selector":t.keyword,NamespaceName:t.namespace,KeyframeName:t.labelName,TagName:t.tagName,ClassName:t.className,PseudoClassName:t.constant(t.className),IdName:t.labelName,"FeatureName PropertyName":t.propertyName,AttributeName:t.attributeName,NumberLiteral:t.number,KeywordQuery:t.keyword,UnaryQueryOp:t.operatorKeyword,"CallTag ValueName":t.atom,VariableName:t.variableName,Callee:t.operatorKeyword,Unit:t.unit,"UniversalSelector NestingSelector":t.definitionOperator,MatchOp:t.compareOperator,"ChildOp SiblingOp, 
LogicOp":t.logicOperator,BinOp:t.arithmeticOperator,Important:t.modifier,Comment:t.blockComment,ParenthesizedContent:t.special(t.name),ColorLiteral:t.color,StringLiteral:t.string,":":t.punctuation,"PseudoOp #":t.derefOperator,"; ,":t.separator,"( )":t.paren,"[ ]":t.squareBracket,"{ }":t.brace}),K={__proto__:null,lang:32,"nth-child":32,"nth-last-child":32,"nth-of-type":32,"nth-last-of-type":32,dir:32,"host-context":32,url:60,"url-prefix":60,domain:60,regexp:60,selector:134},J={__proto__:null,"@import":114,"@media":138,"@charset":142,"@namespace":146,"@keyframes":152,"@supports":164},H={__proto__:null,not:128,only:128,from:158,to:160},M=v.deserialize({version:14,states:"7WQYQ[OOO#_Q[OOOOQP'#Cd'#CdOOQP'#Cc'#CcO#fQ[O'#CfO$YQXO'#CaO$aQ[O'#ChO$lQ[O'#DPO$qQ[O'#DTOOQP'#Ed'#EdO$vQdO'#DeO%bQ[O'#DrO$vQdO'#DtO%sQ[O'#DvO&OQ[O'#DyO&TQ[O'#EPO&cQ[O'#EROOQS'#Ec'#EcOOQS'#ET'#ETQYQ[OOO&jQXO'#CdO'_QWO'#DaO'dQWO'#EjO'oQ[O'#EjQOQWOOOOQP'#Cg'#CgOOQP,59Q,59QO#fQ[O,59QO'yQ[O'#EWO(eQWO,58{O(mQ[O,59SO$lQ[O,59kO$qQ[O,59oO'yQ[O,59sO'yQ[O,59uO'yQ[O,59vO(xQ[O'#D`OOQS,58{,58{OOQP'#Ck'#CkOOQO'#C}'#C}OOQP,59S,59SO)PQWO,59SO)UQWO,59SOOQP'#DR'#DROOQP,59k,59kOOQO'#DV'#DVO)ZQ`O,59oOOQS'#Cp'#CpO$vQdO'#CqO)cQvO'#CsO*pQtO,5:POOQO'#Cx'#CxO)UQWO'#CwO+UQWO'#CyOOQS'#Eg'#EgOOQO'#Dh'#DhO+ZQ[O'#DoO+iQWO'#EkO&TQ[O'#DmO+wQWO'#DpOOQO'#El'#ElO(hQWO,5:^O+|QpO,5:`OOQS'#Dx'#DxO,UQWO,5:bO,ZQ[O,5:bOOQO'#D{'#D{O,cQWO,5:eO,hQWO,5:kO,pQWO,5:mOOQS-E8R-E8RO$vQdO,59{O,xQ[O'#EYO-VQWO,5;UO-VQWO,5;UOOQP1G.l1G.lO-|QXO,5:rOOQO-E8U-E8UOOQS1G.g1G.gOOQP1G.n1G.nO)PQWO1G.nO)UQWO1G.nOOQP1G/V1G/VO.ZQ`O1G/ZO.tQXO1G/_O/[QXO1G/aO/rQXO1G/bO0YQWO,59zO0_Q[O'#DOO0fQdO'#CoOOQP1G/Z1G/ZO$vQdO1G/ZO0mQpO,59]OOQS,59_,59_O$vQdO,59aO0uQWO1G/kOOQS,59c,59cO0zQ!bO,59eO1SQWO'#DhO1_QWO,5:TO1dQWO,5:ZO&TQ[O,5:VO&TQ[O'#EZO1lQWO,5;VO1wQWO,5:XO'yQ[O,5:[OOQS1G/x1G/xOOQS1G/z1G/zOOQS1G/|1G/|O2YQWO1G/|O2_QdO'#D|OOQS1G0P1G0POOQS1G0V1G0VOOQS1G0X1G0XO2mQtO1G/gOOQO,5:t,5:tO3TQ[O,5:tOOQO-E8W-E8WO3bQWO1G0pOOQP7+$Y7+$YOOQP7+$u7+$uO$vQdO7+$uOOQS1G/f1G/fO3mQXO'#EiO3tQWO,59jO3yQtO'#EUO4nQdO'#EfO4xQWO,59ZO4}QpO7+$uOOQS1G.w1G.wOOQS1G.{1G.{OOQS7+%V7+%VO5VQWO1G/PO$vQdO1G/oOOQO1G/u1G/uOOQO1G/q1G/qO5[QWO,5:uOOQO-E8X-E8XO5jQXO1G/vOOQS7+%h7+%hO5qQYO'#CsO(hQWO'#E[O5yQdO,5:hOOQS,5:h,5:hO6XQtO'#EXO$vQdO'#EXO7VQdO7+%ROOQO7+%R7+%ROOQO1G0`1G0`O7jQpO<T![;'S%^;'S;=`%o<%lO%^^;TUoWOy%^z!Q%^!Q![;g![;'S%^;'S;=`%o<%lO%^^;nYoW#[UOy%^z!Q%^!Q![;g![!g%^!g!h<^!h#X%^#X#Y<^#Y;'S%^;'S;=`%o<%lO%^^[[oW#[UOy%^z!O%^!O!P;g!P!Q%^!Q![>T![!g%^!g!h<^!h#X%^#X#Y<^#Y;'S%^;'S;=`%o<%lO%^_?VSpVOy%^z;'S%^;'S;=`%o<%lO%^^?hWjSOy%^z!O%^!O!P;O!P!Q%^!Q![>T![;'S%^;'S;=`%o<%lO%^_@VU#XPOy%^z!Q%^!Q![;g![;'S%^;'S;=`%o<%lO%^~@nTjSOy%^z{@}{;'S%^;'S;=`%o<%lO%^~ASUoWOy@}yzAfz{Bm{;'S@};'S;=`Co<%lO@}~AiTOzAfz{Ax{;'SAf;'S;=`Bg<%lOAf~A{VOzAfz{Ax{!PAf!P!QBb!Q;'SAf;'S;=`Bg<%lOAf~BgOR~~BjP;=`<%lAf~BrWoWOy@}yzAfz{Bm{!P@}!P!QC[!Q;'S@};'S;=`Co<%lO@}~CcSoWR~Oy%^z;'S%^;'S;=`%o<%lO%^~CrP;=`<%l@}^Cz[#[UOy%^z!O%^!O!P;g!P!Q%^!Q![>T![!g%^!g!h<^!h#X%^#X#Y<^#Y;'S%^;'S;=`%o<%lO%^XDuU]POy%^z![%^![!]EX!];'S%^;'S;=`%o<%lO%^XE`S^PoWOy%^z;'S%^;'S;=`%o<%lO%^_EqS!WVOy%^z;'S%^;'S;=`%o<%lO%^YFSSzQOy%^z;'S%^;'S;=`%o<%lO%^XFeU|POy%^z!`%^!`!aFw!a;'S%^;'S;=`%o<%lO%^XGOS|PoWOy%^z;'S%^;'S;=`%o<%lO%^XG_WOy%^z!c%^!c!}Gw!}#T%^#T#oGw#o;'S%^;'S;=`%o<%lO%^XHO[!YPoWOy%^z}%^}!OGw!O!Q%^!Q![Gw![!c%^!c!}Gw!}#T%^#T#oGw#o;'S%^;'S;=`%o<%lO%^XHySxPOy%^z;'S%^;'S;=`%o<%lO%^^I[SvUOy%^z;'S%^;'S;=`%o<%lO%^XIkUOy%^z#b%^#b#cI}#c;'S%^;'S;=`%o<%lO%^XJSUoWOy%^z#W%^#W#XJf#X;'S%^;'S;=`%o<%lO%^XJmS!`PoWOy%^z;'S%^;'S;=`%o<%lO%^XJ|UOy%^z#f%^#f#gJf#g;'S%^;'S;=`%o<%lO%^XKeS!RPOy%^z;'S%^;'S;=`%o<%lO%^_KvS!QVOy%^z;'S%^;'S;=`%o<
%lO%^ZLXU!PPOy%^z!_%^!_!`6y!`;'S%^;'S;=`%o<%lO%^WLnP;=`<%l$}",tokenizers:[A,F,B,0,1,2,3],topRules:{StyleSheet:[0,4],Styles:[1,84]},specialized:[{term:95,get:e=>K[e]||-1},{term:56,get:e=>J[e]||-1},{term:96,get:e=>H[e]||-1}],tokenPrec:1123});let Q=null;function m(){if(!Q&&typeof document=="object"&&document.body){let{style:e}=document.body,o=[],r=new Set;for(let a in e)a!="cssText"&&a!="cssFloat"&&typeof e[a]=="string"&&(/[A-Z]/.test(a)&&(a=a.replace(/[A-Z]/g,O=>"-"+O.toLowerCase())),r.has(a)||(o.push(a),r.add(a)));Q=o.sort().map(a=>({type:"property",label:a}))}return Q||[]}const h=["active","after","any-link","autofill","backdrop","before","checked","cue","default","defined","disabled","empty","enabled","file-selector-button","first","first-child","first-letter","first-line","first-of-type","focus","focus-visible","focus-within","fullscreen","has","host","host-context","hover","in-range","indeterminate","invalid","is","lang","last-child","last-of-type","left","link","marker","modal","not","nth-child","nth-last-child","nth-last-of-type","nth-of-type","only-child","only-of-type","optional","out-of-range","part","placeholder","placeholder-shown","read-only","read-write","required","right","root","scope","selection","slotted","target","target-text","valid","visited","where"].map(e=>({type:"class",label:e})),b=["above","absolute","activeborder","additive","activecaption","after-white-space","ahead","alias","all","all-scroll","alphabetic","alternate","always","antialiased","appworkspace","asterisks","attr","auto","auto-flow","avoid","avoid-column","avoid-page","avoid-region","axis-pan","background","backwards","baseline","below","bidi-override","blink","block","block-axis","bold","bolder","border","border-box","both","bottom","break","break-all","break-word","bullets","button","button-bevel","buttonface","buttonhighlight","buttonshadow","buttontext","calc","capitalize","caps-lock-indicator","caption","captiontext","caret","cell","center","checkbox","circle","cjk-decimal","clear","clip","close-quote","col-resize","collapse","color","color-burn","color-dodge","column","column-reverse","compact","condensed","contain","content","contents","content-box","context-menu","continuous","copy","counter","counters","cover","crop","cross","crosshair","currentcolor","cursive","cyclic","darken","dashed","decimal","decimal-leading-zero","default","default-button","dense","destination-atop","destination-in","destination-out","destination-over","difference","disc","discard","disclosure-closed","disclosure-open","document","dot-dash","dot-dot-dash","dotted","double","down","e-resize","ease","ease-in","ease-in-out","ease-out","element","ellipse","ellipsis","embed","end","ethiopic-abegede-gez","ethiopic-halehame-aa-er","ethiopic-halehame-gez","ew-resize","exclusion","expanded","extends","extra-condensed","extra-expanded","fantasy","fast","fill","fill-box","fixed","flat","flex","flex-end","flex-start","footnotes","forwards","from","geometricPrecision","graytext","grid","groove","hand","hard-light","help","hidden","hide","higher","highlight","highlighttext","horizontal","hsl","hsla","hue","icon","ignore","inactiveborder","inactivecaption","inactivecaptiontext","infinite","infobackground","infotext","inherit","initial","inline","inline-axis","inline-block","inline-flex","inline-grid","inline-table","inset","inside","intrinsic","invert","italic","justify","keep-all","landscape","large","larger","left","level","lighter","lighten","line-through","linear","linear-gradient","lines","list-item","listbox","listitem","local","lo
gical","loud","lower","lower-hexadecimal","lower-latin","lower-norwegian","lowercase","ltr","luminosity","manipulation","match","matrix","matrix3d","medium","menu","menutext","message-box","middle","min-intrinsic","mix","monospace","move","multiple","multiple_mask_images","multiply","n-resize","narrower","ne-resize","nesw-resize","no-close-quote","no-drop","no-open-quote","no-repeat","none","normal","not-allowed","nowrap","ns-resize","numbers","numeric","nw-resize","nwse-resize","oblique","opacity","open-quote","optimizeLegibility","optimizeSpeed","outset","outside","outside-shape","overlay","overline","padding","padding-box","painted","page","paused","perspective","pinch-zoom","plus-darker","plus-lighter","pointer","polygon","portrait","pre","pre-line","pre-wrap","preserve-3d","progress","push-button","radial-gradient","radio","read-only","read-write","read-write-plaintext-only","rectangle","region","relative","repeat","repeating-linear-gradient","repeating-radial-gradient","repeat-x","repeat-y","reset","reverse","rgb","rgba","ridge","right","rotate","rotate3d","rotateX","rotateY","rotateZ","round","row","row-resize","row-reverse","rtl","run-in","running","s-resize","sans-serif","saturation","scale","scale3d","scaleX","scaleY","scaleZ","screen","scroll","scrollbar","scroll-position","se-resize","self-start","self-end","semi-condensed","semi-expanded","separate","serif","show","single","skew","skewX","skewY","skip-white-space","slide","slider-horizontal","slider-vertical","sliderthumb-horizontal","sliderthumb-vertical","slow","small","small-caps","small-caption","smaller","soft-light","solid","source-atop","source-in","source-out","source-over","space","space-around","space-between","space-evenly","spell-out","square","start","static","status-bar","stretch","stroke","stroke-box","sub","subpixel-antialiased","svg_masks","super","sw-resize","symbolic","symbols","system-ui","table","table-caption","table-cell","table-column","table-column-group","table-footer-group","table-header-group","table-row","table-row-group","text","text-bottom","text-top","textarea","textfield","thick","thin","threeddarkshadow","threedface","threedhighlight","threedlightshadow","threedshadow","to","top","transform","translate","translate3d","translateX","translateY","translateZ","transparent","ultra-condensed","ultra-expanded","underline","unidirectional-pan","unset","up","upper-latin","uppercase","url","var","vertical","vertical-text","view-box","visible","visibleFill","visiblePainted","visibleStroke","visual","w-resize","wait","wave","wider","window","windowframe","windowtext","words","wrap","wrap-reverse","x-large","x-small","xor","xx-large","xx-small"].map(e=>({type:"keyword",label:e})).concat(["aliceblue","antiquewhite","aqua","aquamarine","azure","beige","bisque","black","blanchedalmond","blue","blueviolet","brown","burlywood","cadetblue","chartreuse","chocolate","coral","cornflowerblue","cornsilk","crimson","cyan","darkblue","darkcyan","darkgoldenrod","darkgray","darkgreen","darkkhaki","darkmagenta","darkolivegreen","darkorange","darkorchid","darkred","darksalmon","darkseagreen","darkslateblue","darkslategray","darkturquoise","darkviolet","deeppink","deepskyblue","dimgray","dodgerblue","firebrick","floralwhite","forestgreen","fuchsia","gainsboro","ghostwhite","gold","goldenrod","gray","grey","green","greenyellow","honeydew","hotpink","indianred","indigo","ivory","khaki","lavender","lavenderblush","lawngreen","lemonchiffon","lightblue","lightcoral","lightcyan","lightgoldenrodyellow","lightgray","lightgreen","lig
htpink","lightsalmon","lightseagreen","lightskyblue","lightslategray","lightsteelblue","lightyellow","lime","limegreen","linen","magenta","maroon","mediumaquamarine","mediumblue","mediumorchid","mediumpurple","mediumseagreen","mediumslateblue","mediumspringgreen","mediumturquoise","mediumvioletred","midnightblue","mintcream","mistyrose","moccasin","navajowhite","navy","oldlace","olive","olivedrab","orange","orangered","orchid","palegoldenrod","palegreen","paleturquoise","palevioletred","papayawhip","peachpuff","peru","pink","plum","powderblue","purple","rebeccapurple","red","rosybrown","royalblue","saddlebrown","salmon","sandybrown","seagreen","seashell","sienna","silver","skyblue","slateblue","slategray","snow","springgreen","steelblue","tan","teal","thistle","tomato","turquoise","violet","wheat","white","whitesmoke","yellow","yellowgreen"].map(e=>({type:"constant",label:e}))),ee=["a","abbr","address","article","aside","b","bdi","bdo","blockquote","body","br","button","canvas","caption","cite","code","col","colgroup","dd","del","details","dfn","dialog","div","dl","dt","em","figcaption","figure","footer","form","header","hgroup","h1","h2","h3","h4","h5","h6","hr","html","i","iframe","img","input","ins","kbd","label","legend","li","main","meter","nav","ol","output","p","pre","ruby","section","select","small","source","span","strong","sub","summary","sup","table","tbody","td","template","textarea","tfoot","th","thead","tr","u","ul"].map(e=>({type:"type",label:e})),n=/^(\w[\w-]*|-\w[\w-]*|)$/,ae=/^-(-[\w-]*)?$/;function Oe(e,o){var r;if((e.name=="("||e.type.isError)&&(e=e.parent||e),e.name!="ArgList")return!1;let a=(r=e.parent)===null||r===void 0?void 0:r.firstChild;return a?.name!="Callee"?!1:o.sliceString(a.from,a.to)=="var"}const y=new V,te=["Declaration"];function W(e,o){if(o.to-o.from>4096){let r=y.get(o);if(r)return r;let a=[],O=new Set,l=o.cursor(T.IncludeAnonymous);if(l.firstChild())do for(let i of W(e,l.node))O.has(i.label)||(O.add(i.label),a.push(i));while(l.nextSibling());return y.set(o,a),a}else{let r=[],a=new Set;return o.cursor().iterate(O=>{var l;if(O.name=="VariableName"&&O.matchContext(te)&&((l=O.node.nextSibling)===null||l===void 0?void 0:l.name)==":"){let i=e.sliceString(O.from,O.to);a.has(i)||(a.add(i),r.push({label:i,type:"variable"}))}}),r}}const oe=e=>{var o;let{state:r,pos:a}=e,O=S(r).resolveInner(a,-1),l=O.type.isError&&O.from==O.to-1&&r.doc.sliceString(O.from,O.to)=="-";if(O.name=="PropertyName"||l&&((o=O.parent)===null||o===void 0?void 0:o.name)=="Block")return{from:O.from,options:m(),validFor:n};if(O.name=="ValueName")return{from:O.from,options:b,validFor:n};if(O.name=="PseudoClassName")return{from:O.from,options:h,validFor:n};if(O.name=="VariableName"||(e.explicit||l)&&Oe(O,r.doc))return{from:O.name=="VariableName"?O.from:a,options:W(r.doc,S(r).topNode),validFor:ae};if(O.name=="TagName"){for(let{parent:d}=O;d;d=d.parent)if(d.name=="Block")return{from:O.from,options:m(),validFor:n};return{from:O.from,options:ee,validFor:n}}if(!e.explicit)return null;let i=O.resolve(a),s=i.childBefore(a);return s&&s.name==":"&&i.name=="PseudoClassSelector"?{from:a,options:h,validFor:n}:s&&s.name==":"&&i.name=="Declaration"||i.name=="ArgList"?{from:a,options:b,validFor:n}:i.name=="Block"?{from:a,options:m(),validFor:n}:null},P=w.define({name:"css",parser:M.configure({props:[z.add({Declaration:x()}),R.add({Block:U})]}),languageData:{commentTokens:{block:{open:"/*",close:"*/"}},indentOnInput:/^\s*\}$/,wordChars:"-"}});function Se(){return new _(P,P.data.of({autocomplete:oe}))}export{Se 
as css,oe as cssCompletionSource,P as cssLanguage}; -//# sourceMappingURL=index-5fd0a2c9.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/shell-86dd1d99.js b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/shell-86dd1d99.js deleted file mode 100644 index 413d6906ba550f466a9babaadea0e07f796466f1..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/gradio/templates/frontend/assets/shell-86dd1d99.js +++ /dev/null @@ -1,2 +0,0 @@ -var c={};function s(n,e){for(var r=0;r1&&n.eat("$");var r=n.next();return/['"({]/.test(r)?(e.tokens[0]=l(r,r=="("?"quote":r=="{"?"def":"string"),u(n,e)):(/\d/.test(r)||n.eatWhile(/\w/),e.tokens.shift(),"def")};function w(n){return function(e,r){return e.sol()&&e.string==n&&r.tokens.shift(),e.skipToEnd(),"string.special"}}function u(n,e){return(e.tokens[0]||d)(n,e)}const v={name:"shell",startState:function(){return{tokens:[]}},token:function(n,e){return u(n,e)},languageData:{autocomplete:k.concat(h,p),closeBrackets:{brackets:["(","[","{","'",'"',"`"]},commentTokens:{line:"#"}}};export{v as shell}; -//# sourceMappingURL=shell-86dd1d99.js.map diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/parsers/__init__.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/parsers/__init__.py deleted file mode 100644 index ff11968db15f0f7c6057a46c252a91daee7b9cd9..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/io/parsers/__init__.py +++ /dev/null @@ -1,9 +0,0 @@ -from pandas.io.parsers.readers import ( - TextFileReader, - TextParser, - read_csv, - read_fwf, - read_table, -) - -__all__ = ["TextFileReader", "TextParser", "read_csv", "read_fwf", "read_table"] diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/generic/test_series.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/generic/test_series.py deleted file mode 100644 index 4ea205ac13c475c41b810df25001f158ba4ca016..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/generic/test_series.py +++ /dev/null @@ -1,159 +0,0 @@ -from operator import methodcaller - -import numpy as np -import pytest - -import pandas as pd -from pandas import ( - MultiIndex, - Series, - date_range, -) -import pandas._testing as tm - - -class TestSeries: - @pytest.mark.parametrize("func", ["rename_axis", "_set_axis_name"]) - def test_set_axis_name_mi(self, func): - ser = Series( - [11, 21, 31], - index=MultiIndex.from_tuples( - [("A", x) for x in ["a", "B", "c"]], names=["l1", "l2"] - ), - ) - - result = methodcaller(func, ["L1", "L2"])(ser) - assert ser.index.name is None - assert ser.index.names == ["l1", "l2"] - assert result.index.name is None - assert result.index.names, ["L1", "L2"] - - def test_set_axis_name_raises(self): - ser = Series([1]) - msg = "No axis named 1 for object type Series" - with pytest.raises(ValueError, match=msg): - ser._set_axis_name(name="a", axis=1) - - def test_get_bool_data_preserve_dtype(self): - ser = Series([True, False, True]) - result = ser._get_bool_data() - tm.assert_series_equal(result, ser) - - def test_nonzero_single_element(self): - # allow single item via bool method - msg_warn = ( - "Series.bool is now deprecated and will be removed " - "in future version of pandas" - ) - ser 
= Series([True]) - ser1 = Series([False]) - with tm.assert_produces_warning(FutureWarning, match=msg_warn): - assert ser.bool() - with tm.assert_produces_warning(FutureWarning, match=msg_warn): - assert not ser1.bool() - - @pytest.mark.parametrize("data", [np.nan, pd.NaT, True, False]) - def test_nonzero_single_element_raise_1(self, data): - # single item nan to raise - series = Series([data]) - - msg = "The truth value of a Series is ambiguous" - with pytest.raises(ValueError, match=msg): - bool(series) - - @pytest.mark.parametrize("data", [np.nan, pd.NaT]) - def test_nonzero_single_element_raise_2(self, data): - msg_warn = ( - "Series.bool is now deprecated and will be removed " - "in future version of pandas" - ) - msg_err = "bool cannot act on a non-boolean single element Series" - series = Series([data]) - with tm.assert_produces_warning(FutureWarning, match=msg_warn): - with pytest.raises(ValueError, match=msg_err): - series.bool() - - @pytest.mark.parametrize("data", [(True, True), (False, False)]) - def test_nonzero_multiple_element_raise(self, data): - # multiple bool are still an error - msg_warn = ( - "Series.bool is now deprecated and will be removed " - "in future version of pandas" - ) - msg_err = "The truth value of a Series is ambiguous" - series = Series([data]) - with pytest.raises(ValueError, match=msg_err): - bool(series) - with tm.assert_produces_warning(FutureWarning, match=msg_warn): - with pytest.raises(ValueError, match=msg_err): - series.bool() - - @pytest.mark.parametrize("data", [1, 0, "a", 0.0]) - def test_nonbool_single_element_raise(self, data): - # single non-bool are an error - msg_warn = ( - "Series.bool is now deprecated and will be removed " - "in future version of pandas" - ) - msg_err1 = "The truth value of a Series is ambiguous" - msg_err2 = "bool cannot act on a non-boolean single element Series" - series = Series([data]) - with pytest.raises(ValueError, match=msg_err1): - bool(series) - with tm.assert_produces_warning(FutureWarning, match=msg_warn): - with pytest.raises(ValueError, match=msg_err2): - series.bool() - - def test_metadata_propagation_indiv_resample(self): - # resample - ts = Series( - np.random.default_rng(2).random(1000), - index=date_range("20130101", periods=1000, freq="s"), - name="foo", - ) - result = ts.resample("1T").mean() - tm.assert_metadata_equivalent(ts, result) - - result = ts.resample("1T").min() - tm.assert_metadata_equivalent(ts, result) - - result = ts.resample("1T").apply(lambda x: x.sum()) - tm.assert_metadata_equivalent(ts, result) - - def test_metadata_propagation_indiv(self, monkeypatch): - # check that the metadata matches up on the resulting ops - - ser = Series(range(3), range(3)) - ser.name = "foo" - ser2 = Series(range(3), range(3)) - ser2.name = "bar" - - result = ser.T - tm.assert_metadata_equivalent(ser, result) - - def finalize(self, other, method=None, **kwargs): - for name in self._metadata: - if method == "concat" and name == "filename": - value = "+".join( - [ - getattr(obj, name) - for obj in other.objs - if getattr(obj, name, None) - ] - ) - object.__setattr__(self, name, value) - else: - object.__setattr__(self, name, getattr(other, name, None)) - - return self - - with monkeypatch.context() as m: - m.setattr(Series, "_metadata", ["name", "filename"]) - m.setattr(Series, "__finalize__", finalize) - - ser.filename = "foo" - ser2.filename = "bar" - - result = pd.concat([ser, ser2]) - assert result.filename == "foo+bar" - assert result.name is None diff --git 
a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/plotting/test_common.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/plotting/test_common.py deleted file mode 100644 index 20daf5935624843af3224f991497f84fa6639a0d..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/tests/plotting/test_common.py +++ /dev/null @@ -1,60 +0,0 @@ -import pytest - -from pandas import DataFrame -from pandas.tests.plotting.common import ( - _check_plot_works, - _check_ticks_props, - _gen_two_subplots, -) - -plt = pytest.importorskip("matplotlib.pyplot") - - -class TestCommon: - def test__check_ticks_props(self): - # GH 34768 - df = DataFrame({"b": [0, 1, 0], "a": [1, 2, 3]}) - ax = _check_plot_works(df.plot, rot=30) - ax.yaxis.set_tick_params(rotation=30) - msg = "expected 0.00000 but got " - with pytest.raises(AssertionError, match=msg): - _check_ticks_props(ax, xrot=0) - with pytest.raises(AssertionError, match=msg): - _check_ticks_props(ax, xlabelsize=0) - with pytest.raises(AssertionError, match=msg): - _check_ticks_props(ax, yrot=0) - with pytest.raises(AssertionError, match=msg): - _check_ticks_props(ax, ylabelsize=0) - - def test__gen_two_subplots_with_ax(self): - fig = plt.gcf() - gen = _gen_two_subplots(f=lambda **kwargs: None, fig=fig, ax="test") - # On the first yield, no subplot should be added since ax was passed - next(gen) - assert fig.get_axes() == [] - # On the second, the one axis should match fig.subplot(2, 1, 2) - next(gen) - axes = fig.get_axes() - assert len(axes) == 1 - subplot_geometry = list(axes[0].get_subplotspec().get_geometry()[:-1]) - subplot_geometry[-1] += 1 - assert subplot_geometry == [2, 1, 2] - - def test_colorbar_layout(self): - fig = plt.figure() - - axes = fig.subplot_mosaic( - """ - AB - CC - """ - ) - - x = [1, 2, 3] - y = [1, 2, 3] - - cs0 = axes["A"].scatter(x, y) - axes["B"].scatter(x, y) - - fig.colorbar(cs0, ax=[axes["A"], axes["B"]], location="right") - DataFrame(x).plot(ax=axes["C"]) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/util/_test_decorators.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/util/_test_decorators.py deleted file mode 100644 index 03011a1ffe6223741a8df9d010745323120eae6c..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pandas/util/_test_decorators.py +++ /dev/null @@ -1,250 +0,0 @@ -""" -This module provides decorator functions which can be applied to test objects -in order to skip those objects when certain conditions occur. A sample use case -is to detect if the platform is missing ``matplotlib``. If so, any test objects -which require ``matplotlib`` and decorated with ``@td.skip_if_no_mpl`` will be -skipped by ``pytest`` during the execution of the test suite. - -To illustrate, after importing this module: - -import pandas.util._test_decorators as td - -The decorators can be applied to classes: - -@td.skip_if_some_reason -class Foo: - ... - -Or individual functions: - -@td.skip_if_some_reason -def test_foo(): - ... - -For more information, refer to the ``pytest`` documentation on ``skipif``. 
-""" -from __future__ import annotations - -import locale -from typing import ( - TYPE_CHECKING, - Callable, -) - -import numpy as np -import pytest - -from pandas._config import get_option - -if TYPE_CHECKING: - from pandas._typing import F -from pandas.compat import ( - IS64, - is_platform_windows, -) -from pandas.compat._optional import import_optional_dependency - -from pandas.core.computation.expressions import ( - NUMEXPR_INSTALLED, - USE_NUMEXPR, -) -from pandas.util.version import Version - - -def safe_import(mod_name: str, min_version: str | None = None): - """ - Parameters - ---------- - mod_name : str - Name of the module to be imported - min_version : str, default None - Minimum required version of the specified mod_name - - Returns - ------- - object - The imported module if successful, or False - """ - try: - mod = __import__(mod_name) - except ImportError: - return False - - if not min_version: - return mod - else: - import sys - - version = getattr(sys.modules[mod_name], "__version__") - if version and Version(version) >= Version(min_version): - return mod - - return False - - -def _skip_if_not_us_locale() -> bool: - lang, _ = locale.getlocale() - if lang != "en_US": - return True - return False - - -def _skip_if_no_scipy() -> bool: - return not ( - safe_import("scipy.stats") - and safe_import("scipy.sparse") - and safe_import("scipy.interpolate") - and safe_import("scipy.signal") - ) - - -def skip_if_installed(package: str) -> pytest.MarkDecorator: - """ - Skip a test if a package is installed. - - Parameters - ---------- - package : str - The name of the package. - - Returns - ------- - pytest.MarkDecorator - a pytest.mark.skipif to use as either a test decorator or a - parametrization mark. - """ - return pytest.mark.skipif( - safe_import(package), reason=f"Skipping because {package} is installed." - ) - - -def skip_if_no(package: str, min_version: str | None = None) -> pytest.MarkDecorator: - """ - Generic function to help skip tests when required packages are not - present on the testing system. - - This function returns a pytest mark with a skip condition that will be - evaluated during test collection. An attempt will be made to import the - specified ``package`` and optionally ensure it meets the ``min_version`` - - The mark can be used as either a decorator for a test class or to be - applied to parameters in pytest.mark.parametrize calls or parametrized - fixtures. Use pytest.importorskip if an imported moduled is later needed - or for test functions. - - If the import and version check are unsuccessful, then the test function - (or test case when used in conjunction with parametrization) will be - skipped. - - Parameters - ---------- - package: str - The name of the required package. - min_version: str or None, default None - Optional minimum version of the package. - - Returns - ------- - pytest.MarkDecorator - a pytest.mark.skipif to use as either a test decorator or a - parametrization mark. 
- """ - msg = f"Could not import '{package}'" - if min_version: - msg += f" satisfying a min_version of {min_version}" - return pytest.mark.skipif( - not safe_import(package, min_version=min_version), reason=msg - ) - - -skip_if_mpl = pytest.mark.skipif( - bool(safe_import("matplotlib")), reason="matplotlib is present" -) -skip_if_32bit = pytest.mark.skipif(not IS64, reason="skipping for 32 bit") -skip_if_windows = pytest.mark.skipif(is_platform_windows(), reason="Running on Windows") -skip_if_not_us_locale = pytest.mark.skipif( - _skip_if_not_us_locale(), - reason=f"Specific locale is set {locale.getlocale()[0]}", -) -skip_if_no_scipy = pytest.mark.skipif( - _skip_if_no_scipy(), reason="Missing SciPy requirement" -) -skip_if_no_ne = pytest.mark.skipif( - not USE_NUMEXPR, - reason=f"numexpr enabled->{USE_NUMEXPR}, installed->{NUMEXPR_INSTALLED}", -) - - -def skip_if_np_lt( - ver_str: str, *args, reason: str | None = None -) -> pytest.MarkDecorator: - if reason is None: - reason = f"NumPy {ver_str} or greater required" - return pytest.mark.skipif( - Version(np.__version__) < Version(ver_str), - *args, - reason=reason, - ) - - -def parametrize_fixture_doc(*args) -> Callable[[F], F]: - """ - Intended for use as a decorator for parametrized fixture, - this function will wrap the decorated function with a pytest - ``parametrize_fixture_doc`` mark. That mark will format - initial fixture docstring by replacing placeholders {0}, {1} etc - with parameters passed as arguments. - - Parameters - ---------- - args: iterable - Positional arguments for docstring. - - Returns - ------- - function - The decorated function wrapped within a pytest - ``parametrize_fixture_doc`` mark - """ - - def documented_fixture(fixture): - fixture.__doc__ = fixture.__doc__.format(*args) - return fixture - - return documented_fixture - - -def async_mark(): - try: - import_optional_dependency("pytest_asyncio") - async_mark = pytest.mark.asyncio - except ImportError: - async_mark = pytest.mark.skip(reason="Missing dependency pytest-asyncio") - - return async_mark - - -def mark_array_manager_not_yet_implemented(request) -> None: - mark = pytest.mark.xfail(reason="Not yet implemented for ArrayManager") - request.node.add_marker(mark) - - -skip_array_manager_not_yet_implemented = pytest.mark.xfail( - get_option("mode.data_manager") == "array", - reason="Not yet implemented for ArrayManager", -) - -skip_array_manager_invalid_test = pytest.mark.skipif( - get_option("mode.data_manager") == "array", - reason="Test that relies on BlockManager internals or specific behaviour", -) - -skip_copy_on_write_not_yet_implemented = pytest.mark.xfail( - get_option("mode.copy_on_write"), - reason="Not yet implemented/adapted for Copy-on-Write mode", -) - -skip_copy_on_write_invalid_test = pytest.mark.skipif( - get_option("mode.copy_on_write"), - reason="Test not valid for Copy-on-Write mode", -) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/distro.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/distro.py deleted file mode 100644 index 7892741347d632d48f3fbe11b417c4705f9968f3..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/distro.py +++ /dev/null @@ -1,1386 +0,0 @@ -# Copyright 2015,2016,2017 Nir Cohen -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -""" -The ``distro`` package (``distro`` stands for Linux Distribution) provides -information about the Linux distribution it runs on, such as a reliable -machine-readable distro ID, or version information. - -It is the recommended replacement for Python's original -:py:func:`platform.linux_distribution` function, but it provides much more -functionality. An alternative implementation became necessary because Python -3.5 deprecated this function, and Python 3.8 removed it altogether. Its -predecessor function :py:func:`platform.dist` was already deprecated since -Python 2.6 and removed in Python 3.8. Still, there are many cases in which -access to OS distribution information is needed. See `Python issue 1322 -`_ for more information. -""" - -import argparse -import json -import logging -import os -import re -import shlex -import subprocess -import sys -import warnings - -__version__ = "1.6.0" - -# Use `if False` to avoid an ImportError on Python 2. After dropping Python 2 -# support, can use typing.TYPE_CHECKING instead. See: -# https://docs.python.org/3/library/typing.html#typing.TYPE_CHECKING -if False: # pragma: nocover - from typing import ( - Any, - Callable, - Dict, - Iterable, - Optional, - Sequence, - TextIO, - Tuple, - Type, - TypedDict, - Union, - ) - - VersionDict = TypedDict( - "VersionDict", {"major": str, "minor": str, "build_number": str} - ) - InfoDict = TypedDict( - "InfoDict", - { - "id": str, - "version": str, - "version_parts": VersionDict, - "like": str, - "codename": str, - }, - ) - - -_UNIXCONFDIR = os.environ.get("UNIXCONFDIR", "/etc") -_UNIXUSRLIBDIR = os.environ.get("UNIXUSRLIBDIR", "/usr/lib") -_OS_RELEASE_BASENAME = "os-release" - -#: Translation table for normalizing the "ID" attribute defined in os-release -#: files, for use by the :func:`distro.id` method. -#: -#: * Key: Value as defined in the os-release file, translated to lower case, -#: with blanks translated to underscores. -#: -#: * Value: Normalized value. -NORMALIZED_OS_ID = { - "ol": "oracle", # Oracle Linux -} - -#: Translation table for normalizing the "Distributor ID" attribute returned by -#: the lsb_release command, for use by the :func:`distro.id` method. -#: -#: * Key: Value as returned by the lsb_release command, translated to lower -#: case, with blanks translated to underscores. -#: -#: * Value: Normalized value. -NORMALIZED_LSB_ID = { - "enterpriseenterpriseas": "oracle", # Oracle Enterprise Linux 4 - "enterpriseenterpriseserver": "oracle", # Oracle Linux 5 - "redhatenterpriseworkstation": "rhel", # RHEL 6, 7 Workstation - "redhatenterpriseserver": "rhel", # RHEL 6, 7 Server - "redhatenterprisecomputenode": "rhel", # RHEL 6 ComputeNode -} - -#: Translation table for normalizing the distro ID derived from the file name -#: of distro release files, for use by the :func:`distro.id` method. -#: -#: * Key: Value as derived from the file name of a distro release file, -#: translated to lower case, with blanks translated to underscores. -#: -#: * Value: Normalized value. 
-NORMALIZED_DISTRO_ID = { - "redhat": "rhel", # RHEL 6.x, 7.x -} - -# Pattern for content of distro release file (reversed) -_DISTRO_RELEASE_CONTENT_REVERSED_PATTERN = re.compile( - r"(?:[^)]*\)(.*)\()? *(?:STL )?([\d.+\-a-z]*\d) *(?:esaeler *)?(.+)" -) - -# Pattern for base file name of distro release file -_DISTRO_RELEASE_BASENAME_PATTERN = re.compile(r"(\w+)[-_](release|version)$") - -# Base file names to be ignored when searching for distro release file -_DISTRO_RELEASE_IGNORE_BASENAMES = ( - "debian_version", - "lsb-release", - "oem-release", - _OS_RELEASE_BASENAME, - "system-release", - "plesk-release", - "iredmail-release", -) - - -def linux_distribution(full_distribution_name=True): - # type: (bool) -> Tuple[str, str, str] - """ - .. deprecated:: 1.6.0 - - :func:`distro.linux_distribution()` is deprecated. It should only be - used as a compatibility shim with Python's - :py:func:`platform.linux_distribution()`. Please use :func:`distro.id`, - :func:`distro.version` and :func:`distro.name` instead. - - Return information about the current OS distribution as a tuple - ``(id_name, version, codename)`` with items as follows: - - * ``id_name``: If *full_distribution_name* is false, the result of - :func:`distro.id`. Otherwise, the result of :func:`distro.name`. - - * ``version``: The result of :func:`distro.version`. - - * ``codename``: The result of :func:`distro.codename`. - - The interface of this function is compatible with the original - :py:func:`platform.linux_distribution` function, supporting a subset of - its parameters. - - The data it returns may not exactly be the same, because it uses more data - sources than the original function, and that may lead to different data if - the OS distribution is not consistent across multiple data sources it - provides (there are indeed such distributions ...). - - Another reason for differences is the fact that the :func:`distro.id` - method normalizes the distro ID string to a reliable machine-readable value - for a number of popular OS distributions. - """ - warnings.warn( - "distro.linux_distribution() is deprecated. It should only be used as a " - "compatibility shim with Python's platform.linux_distribution(). Please use " - "distro.id(), distro.version() and distro.name() instead.", - DeprecationWarning, - stacklevel=2, - ) - return _distro.linux_distribution(full_distribution_name) - - -def id(): - # type: () -> str - """ - Return the distro ID of the current distribution, as a - machine-readable string. - - For a number of OS distributions, the returned distro ID value is - *reliable*, in the sense that it is documented and that it does not change - across releases of the distribution. 
- - This package maintains the following reliable distro ID values: - - ============== ========================================= - Distro ID Distribution - ============== ========================================= - "ubuntu" Ubuntu - "debian" Debian - "rhel" RedHat Enterprise Linux - "centos" CentOS - "fedora" Fedora - "sles" SUSE Linux Enterprise Server - "opensuse" openSUSE - "amazon" Amazon Linux - "arch" Arch Linux - "cloudlinux" CloudLinux OS - "exherbo" Exherbo Linux - "gentoo" GenToo Linux - "ibm_powerkvm" IBM PowerKVM - "kvmibm" KVM for IBM z Systems - "linuxmint" Linux Mint - "mageia" Mageia - "mandriva" Mandriva Linux - "parallels" Parallels - "pidora" Pidora - "raspbian" Raspbian - "oracle" Oracle Linux (and Oracle Enterprise Linux) - "scientific" Scientific Linux - "slackware" Slackware - "xenserver" XenServer - "openbsd" OpenBSD - "netbsd" NetBSD - "freebsd" FreeBSD - "midnightbsd" MidnightBSD - ============== ========================================= - - If you have a need to get distros for reliable IDs added into this set, - or if you find that the :func:`distro.id` function returns a different - distro ID for one of the listed distros, please create an issue in the - `distro issue tracker`_. - - **Lookup hierarchy and transformations:** - - First, the ID is obtained from the following sources, in the specified - order. The first available and non-empty value is used: - - * the value of the "ID" attribute of the os-release file, - - * the value of the "Distributor ID" attribute returned by the lsb_release - command, - - * the first part of the file name of the distro release file, - - The so determined ID value then passes the following transformations, - before it is returned by this method: - - * it is translated to lower case, - - * blanks (which should not be there anyway) are translated to underscores, - - * a normalization of the ID is performed, based upon - `normalization tables`_. The purpose of this normalization is to ensure - that the ID is as reliable as possible, even across incompatible changes - in the OS distributions. A common reason for an incompatible change is - the addition of an os-release file, or the addition of the lsb_release - command, with ID values that differ from what was previously determined - from the distro release file name. - """ - return _distro.id() - - -def name(pretty=False): - # type: (bool) -> str - """ - Return the name of the current OS distribution, as a human-readable - string. - - If *pretty* is false, the name is returned without version or codename. - (e.g. "CentOS Linux") - - If *pretty* is true, the version and codename are appended. - (e.g. "CentOS Linux 7.1.1503 (Core)") - - **Lookup hierarchy:** - - The name is obtained from the following sources, in the specified order. - The first available and non-empty value is used: - - * If *pretty* is false: - - - the value of the "NAME" attribute of the os-release file, - - - the value of the "Distributor ID" attribute returned by the lsb_release - command, - - - the value of the "" field of the distro release file. - - * If *pretty* is true: - - - the value of the "PRETTY_NAME" attribute of the os-release file, - - - the value of the "Description" attribute returned by the lsb_release - command, - - - the value of the "" field of the distro release file, appended - with the value of the pretty version ("" and "" - fields) of the distro release file, if available. 
- """ - return _distro.name(pretty) - - -def version(pretty=False, best=False): - # type: (bool, bool) -> str - """ - Return the version of the current OS distribution, as a human-readable - string. - - If *pretty* is false, the version is returned without codename (e.g. - "7.0"). - - If *pretty* is true, the codename in parenthesis is appended, if the - codename is non-empty (e.g. "7.0 (Maipo)"). - - Some distributions provide version numbers with different precisions in - the different sources of distribution information. Examining the different - sources in a fixed priority order does not always yield the most precise - version (e.g. for Debian 8.2, or CentOS 7.1). - - The *best* parameter can be used to control the approach for the returned - version: - - If *best* is false, the first non-empty version number in priority order of - the examined sources is returned. - - If *best* is true, the most precise version number out of all examined - sources is returned. - - **Lookup hierarchy:** - - In all cases, the version number is obtained from the following sources. - If *best* is false, this order represents the priority order: - - * the value of the "VERSION_ID" attribute of the os-release file, - * the value of the "Release" attribute returned by the lsb_release - command, - * the version number parsed from the "" field of the first line - of the distro release file, - * the version number parsed from the "PRETTY_NAME" attribute of the - os-release file, if it follows the format of the distro release files. - * the version number parsed from the "Description" attribute returned by - the lsb_release command, if it follows the format of the distro release - files. - """ - return _distro.version(pretty, best) - - -def version_parts(best=False): - # type: (bool) -> Tuple[str, str, str] - """ - Return the version of the current OS distribution as a tuple - ``(major, minor, build_number)`` with items as follows: - - * ``major``: The result of :func:`distro.major_version`. - - * ``minor``: The result of :func:`distro.minor_version`. - - * ``build_number``: The result of :func:`distro.build_number`. - - For a description of the *best* parameter, see the :func:`distro.version` - method. - """ - return _distro.version_parts(best) - - -def major_version(best=False): - # type: (bool) -> str - """ - Return the major version of the current OS distribution, as a string, - if provided. - Otherwise, the empty string is returned. The major version is the first - part of the dot-separated version string. - - For a description of the *best* parameter, see the :func:`distro.version` - method. - """ - return _distro.major_version(best) - - -def minor_version(best=False): - # type: (bool) -> str - """ - Return the minor version of the current OS distribution, as a string, - if provided. - Otherwise, the empty string is returned. The minor version is the second - part of the dot-separated version string. - - For a description of the *best* parameter, see the :func:`distro.version` - method. - """ - return _distro.minor_version(best) - - -def build_number(best=False): - # type: (bool) -> str - """ - Return the build number of the current OS distribution, as a string, - if provided. - Otherwise, the empty string is returned. The build number is the third part - of the dot-separated version string. - - For a description of the *best* parameter, see the :func:`distro.version` - method. 
- """ - return _distro.build_number(best) - - -def like(): - # type: () -> str - """ - Return a space-separated list of distro IDs of distributions that are - closely related to the current OS distribution in regards to packaging - and programming interfaces, for example distributions the current - distribution is a derivative from. - - **Lookup hierarchy:** - - This information item is only provided by the os-release file. - For details, see the description of the "ID_LIKE" attribute in the - `os-release man page - `_. - """ - return _distro.like() - - -def codename(): - # type: () -> str - """ - Return the codename for the release of the current OS distribution, - as a string. - - If the distribution does not have a codename, an empty string is returned. - - Note that the returned codename is not always really a codename. For - example, openSUSE returns "x86_64". This function does not handle such - cases in any special way and just returns the string it finds, if any. - - **Lookup hierarchy:** - - * the codename within the "VERSION" attribute of the os-release file, if - provided, - - * the value of the "Codename" attribute returned by the lsb_release - command, - - * the value of the "" field of the distro release file. - """ - return _distro.codename() - - -def info(pretty=False, best=False): - # type: (bool, bool) -> InfoDict - """ - Return certain machine-readable information items about the current OS - distribution in a dictionary, as shown in the following example: - - .. sourcecode:: python - - { - 'id': 'rhel', - 'version': '7.0', - 'version_parts': { - 'major': '7', - 'minor': '0', - 'build_number': '' - }, - 'like': 'fedora', - 'codename': 'Maipo' - } - - The dictionary structure and keys are always the same, regardless of which - information items are available in the underlying data sources. The values - for the various keys are as follows: - - * ``id``: The result of :func:`distro.id`. - - * ``version``: The result of :func:`distro.version`. - - * ``version_parts -> major``: The result of :func:`distro.major_version`. - - * ``version_parts -> minor``: The result of :func:`distro.minor_version`. - - * ``version_parts -> build_number``: The result of - :func:`distro.build_number`. - - * ``like``: The result of :func:`distro.like`. - - * ``codename``: The result of :func:`distro.codename`. - - For a description of the *pretty* and *best* parameters, see the - :func:`distro.version` method. - """ - return _distro.info(pretty, best) - - -def os_release_info(): - # type: () -> Dict[str, str] - """ - Return a dictionary containing key-value pairs for the information items - from the os-release file data source of the current OS distribution. - - See `os-release file`_ for details about these information items. - """ - return _distro.os_release_info() - - -def lsb_release_info(): - # type: () -> Dict[str, str] - """ - Return a dictionary containing key-value pairs for the information items - from the lsb_release command data source of the current OS distribution. - - See `lsb_release command output`_ for details about these information - items. - """ - return _distro.lsb_release_info() - - -def distro_release_info(): - # type: () -> Dict[str, str] - """ - Return a dictionary containing key-value pairs for the information items - from the distro release file data source of the current OS distribution. - - See `distro release file`_ for details about these information items. 
- """ - return _distro.distro_release_info() - - -def uname_info(): - # type: () -> Dict[str, str] - """ - Return a dictionary containing key-value pairs for the information items - from the distro release file data source of the current OS distribution. - """ - return _distro.uname_info() - - -def os_release_attr(attribute): - # type: (str) -> str - """ - Return a single named information item from the os-release file data source - of the current OS distribution. - - Parameters: - - * ``attribute`` (string): Key of the information item. - - Returns: - - * (string): Value of the information item, if the item exists. - The empty string, if the item does not exist. - - See `os-release file`_ for details about these information items. - """ - return _distro.os_release_attr(attribute) - - -def lsb_release_attr(attribute): - # type: (str) -> str - """ - Return a single named information item from the lsb_release command output - data source of the current OS distribution. - - Parameters: - - * ``attribute`` (string): Key of the information item. - - Returns: - - * (string): Value of the information item, if the item exists. - The empty string, if the item does not exist. - - See `lsb_release command output`_ for details about these information - items. - """ - return _distro.lsb_release_attr(attribute) - - -def distro_release_attr(attribute): - # type: (str) -> str - """ - Return a single named information item from the distro release file - data source of the current OS distribution. - - Parameters: - - * ``attribute`` (string): Key of the information item. - - Returns: - - * (string): Value of the information item, if the item exists. - The empty string, if the item does not exist. - - See `distro release file`_ for details about these information items. - """ - return _distro.distro_release_attr(attribute) - - -def uname_attr(attribute): - # type: (str) -> str - """ - Return a single named information item from the distro release file - data source of the current OS distribution. - - Parameters: - - * ``attribute`` (string): Key of the information item. - - Returns: - - * (string): Value of the information item, if the item exists. - The empty string, if the item does not exist. - """ - return _distro.uname_attr(attribute) - - -try: - from functools import cached_property -except ImportError: - # Python < 3.8 - class cached_property(object): # type: ignore - """A version of @property which caches the value. On access, it calls the - underlying function and sets the value in `__dict__` so future accesses - will not re-call the property. - """ - - def __init__(self, f): - # type: (Callable[[Any], Any]) -> None - self._fname = f.__name__ - self._f = f - - def __get__(self, obj, owner): - # type: (Any, Type[Any]) -> Any - assert obj is not None, "call {} on an instance".format(self._fname) - ret = obj.__dict__[self._fname] = self._f(obj) - return ret - - -class LinuxDistribution(object): - """ - Provides information about a OS distribution. - - This package creates a private module-global instance of this class with - default initialization arguments, that is used by the - `consolidated accessor functions`_ and `single source accessor functions`_. - By using default initialization arguments, that module-global instance - returns data about the current OS distribution (i.e. the distro this - package runs on). - - Normally, it is not necessary to create additional instances of this class. 
- However, in situations where control is needed over the exact data sources - that are used, instances of this class can be created with a specific - distro release file, or a specific os-release file, or without invoking the - lsb_release command. - """ - - def __init__( - self, - include_lsb=True, - os_release_file="", - distro_release_file="", - include_uname=True, - root_dir=None, - ): - # type: (bool, str, str, bool, Optional[str]) -> None - """ - The initialization method of this class gathers information from the - available data sources, and stores that in private instance attributes. - Subsequent access to the information items uses these private instance - attributes, so that the data sources are read only once. - - Parameters: - - * ``include_lsb`` (bool): Controls whether the - `lsb_release command output`_ is included as a data source. - - If the lsb_release command is not available in the program execution - path, the data source for the lsb_release command will be empty. - - * ``os_release_file`` (string): The path name of the - `os-release file`_ that is to be used as a data source. - - An empty string (the default) will cause the default path name to - be used (see `os-release file`_ for details). - - If the specified or defaulted os-release file does not exist, the - data source for the os-release file will be empty. - - * ``distro_release_file`` (string): The path name of the - `distro release file`_ that is to be used as a data source. - - An empty string (the default) will cause a default search algorithm - to be used (see `distro release file`_ for details). - - If the specified distro release file does not exist, or if no default - distro release file can be found, the data source for the distro - release file will be empty. - - * ``include_uname`` (bool): Controls whether uname command output is - included as a data source. If the uname command is not available in - the program execution path the data source for the uname command will - be empty. - - * ``root_dir`` (string): The absolute path to the root directory to use - to find distro-related information files. - - Public instance attributes: - - * ``os_release_file`` (string): The path name of the - `os-release file`_ that is actually used as a data source. The - empty string if no distro release file is used as a data source. - - * ``distro_release_file`` (string): The path name of the - `distro release file`_ that is actually used as a data source. The - empty string if no distro release file is used as a data source. - - * ``include_lsb`` (bool): The result of the ``include_lsb`` parameter. - This controls whether the lsb information will be loaded. - - * ``include_uname`` (bool): The result of the ``include_uname`` - parameter. This controls whether the uname information will - be loaded. - - Raises: - - * :py:exc:`IOError`: Some I/O issue with an os-release file or distro - release file. - - * :py:exc:`subprocess.CalledProcessError`: The lsb_release command had - some issue (other than not being available in the program execution - path). - - * :py:exc:`UnicodeError`: A data source has unexpected characters or - uses an unexpected encoding. 
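A hedged sketch of the constructor parameters described above, pointing the class at a mounted image instead of the running system; `/mnt/rootfs` is a made-up path, not something from this repository.

```python
# Inspect a mounted root filesystem rather than the host (illustrative only).
from pip._vendor.distro import LinuxDistribution

dist = LinuxDistribution(include_lsb=False, include_uname=False, root_dir="/mnt/rootfs")
print(dist.id(), dist.version(best=True), dist.codename())
```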
- """ - self.root_dir = root_dir - self.etc_dir = os.path.join(root_dir, "etc") if root_dir else _UNIXCONFDIR - self.usr_lib_dir = ( - os.path.join(root_dir, "usr/lib") if root_dir else _UNIXUSRLIBDIR - ) - - if os_release_file: - self.os_release_file = os_release_file - else: - etc_dir_os_release_file = os.path.join(self.etc_dir, _OS_RELEASE_BASENAME) - usr_lib_os_release_file = os.path.join( - self.usr_lib_dir, _OS_RELEASE_BASENAME - ) - - # NOTE: The idea is to respect order **and** have it set - # at all times for API backwards compatibility. - if os.path.isfile(etc_dir_os_release_file) or not os.path.isfile( - usr_lib_os_release_file - ): - self.os_release_file = etc_dir_os_release_file - else: - self.os_release_file = usr_lib_os_release_file - - self.distro_release_file = distro_release_file or "" # updated later - self.include_lsb = include_lsb - self.include_uname = include_uname - - def __repr__(self): - # type: () -> str - """Return repr of all info""" - return ( - "LinuxDistribution(" - "os_release_file={self.os_release_file!r}, " - "distro_release_file={self.distro_release_file!r}, " - "include_lsb={self.include_lsb!r}, " - "include_uname={self.include_uname!r}, " - "_os_release_info={self._os_release_info!r}, " - "_lsb_release_info={self._lsb_release_info!r}, " - "_distro_release_info={self._distro_release_info!r}, " - "_uname_info={self._uname_info!r})".format(self=self) - ) - - def linux_distribution(self, full_distribution_name=True): - # type: (bool) -> Tuple[str, str, str] - """ - Return information about the OS distribution that is compatible - with Python's :func:`platform.linux_distribution`, supporting a subset - of its parameters. - - For details, see :func:`distro.linux_distribution`. - """ - return ( - self.name() if full_distribution_name else self.id(), - self.version(), - self.codename(), - ) - - def id(self): - # type: () -> str - """Return the distro ID of the OS distribution, as a string. - - For details, see :func:`distro.id`. - """ - - def normalize(distro_id, table): - # type: (str, Dict[str, str]) -> str - distro_id = distro_id.lower().replace(" ", "_") - return table.get(distro_id, distro_id) - - distro_id = self.os_release_attr("id") - if distro_id: - return normalize(distro_id, NORMALIZED_OS_ID) - - distro_id = self.lsb_release_attr("distributor_id") - if distro_id: - return normalize(distro_id, NORMALIZED_LSB_ID) - - distro_id = self.distro_release_attr("id") - if distro_id: - return normalize(distro_id, NORMALIZED_DISTRO_ID) - - distro_id = self.uname_attr("id") - if distro_id: - return normalize(distro_id, NORMALIZED_DISTRO_ID) - - return "" - - def name(self, pretty=False): - # type: (bool) -> str - """ - Return the name of the OS distribution, as a string. - - For details, see :func:`distro.name`. - """ - name = ( - self.os_release_attr("name") - or self.lsb_release_attr("distributor_id") - or self.distro_release_attr("name") - or self.uname_attr("name") - ) - if pretty: - name = self.os_release_attr("pretty_name") or self.lsb_release_attr( - "description" - ) - if not name: - name = self.distro_release_attr("name") or self.uname_attr("name") - version = self.version(pretty=True) - if version: - name = name + " " + version - return name or "" - - def version(self, pretty=False, best=False): - # type: (bool, bool) -> str - """ - Return the version of the OS distribution, as a string. - - For details, see :func:`distro.version`. 
- """ - versions = [ - self.os_release_attr("version_id"), - self.lsb_release_attr("release"), - self.distro_release_attr("version_id"), - self._parse_distro_release_content(self.os_release_attr("pretty_name")).get( - "version_id", "" - ), - self._parse_distro_release_content( - self.lsb_release_attr("description") - ).get("version_id", ""), - self.uname_attr("release"), - ] - version = "" - if best: - # This algorithm uses the last version in priority order that has - # the best precision. If the versions are not in conflict, that - # does not matter; otherwise, using the last one instead of the - # first one might be considered a surprise. - for v in versions: - if v.count(".") > version.count(".") or version == "": - version = v - else: - for v in versions: - if v != "": - version = v - break - if pretty and version and self.codename(): - version = "{0} ({1})".format(version, self.codename()) - return version - - def version_parts(self, best=False): - # type: (bool) -> Tuple[str, str, str] - """ - Return the version of the OS distribution, as a tuple of version - numbers. - - For details, see :func:`distro.version_parts`. - """ - version_str = self.version(best=best) - if version_str: - version_regex = re.compile(r"(\d+)\.?(\d+)?\.?(\d+)?") - matches = version_regex.match(version_str) - if matches: - major, minor, build_number = matches.groups() - return major, minor or "", build_number or "" - return "", "", "" - - def major_version(self, best=False): - # type: (bool) -> str - """ - Return the major version number of the current distribution. - - For details, see :func:`distro.major_version`. - """ - return self.version_parts(best)[0] - - def minor_version(self, best=False): - # type: (bool) -> str - """ - Return the minor version number of the current distribution. - - For details, see :func:`distro.minor_version`. - """ - return self.version_parts(best)[1] - - def build_number(self, best=False): - # type: (bool) -> str - """ - Return the build number of the current distribution. - - For details, see :func:`distro.build_number`. - """ - return self.version_parts(best)[2] - - def like(self): - # type: () -> str - """ - Return the IDs of distributions that are like the OS distribution. - - For details, see :func:`distro.like`. - """ - return self.os_release_attr("id_like") or "" - - def codename(self): - # type: () -> str - """ - Return the codename of the OS distribution. - - For details, see :func:`distro.codename`. - """ - try: - # Handle os_release specially since distros might purposefully set - # this to empty string to have no codename - return self._os_release_info["codename"] - except KeyError: - return ( - self.lsb_release_attr("codename") - or self.distro_release_attr("codename") - or "" - ) - - def info(self, pretty=False, best=False): - # type: (bool, bool) -> InfoDict - """ - Return certain machine-readable information about the OS - distribution. - - For details, see :func:`distro.info`. - """ - return dict( - id=self.id(), - version=self.version(pretty, best), - version_parts=dict( - major=self.major_version(best), - minor=self.minor_version(best), - build_number=self.build_number(best), - ), - like=self.like(), - codename=self.codename(), - ) - - def os_release_info(self): - # type: () -> Dict[str, str] - """ - Return a dictionary containing key-value pairs for the information - items from the os-release file data source of the OS distribution. - - For details, see :func:`distro.os_release_info`. 
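To make the `best` behaviour implemented above concrete, a short sketch; the CentOS-style values are illustrative of a system where os-release only carries the major version while the distro release file is more precise.

```python
# best=False returns the first non-empty source; best=True prefers the most precise one.
from pip._vendor import distro

print(distro.version(best=False))  # e.g. "7"        (os-release VERSION_ID)
print(distro.version(best=True))   # e.g. "7.1.1503" (parsed from the distro release file)
```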
- """ - return self._os_release_info - - def lsb_release_info(self): - # type: () -> Dict[str, str] - """ - Return a dictionary containing key-value pairs for the information - items from the lsb_release command data source of the OS - distribution. - - For details, see :func:`distro.lsb_release_info`. - """ - return self._lsb_release_info - - def distro_release_info(self): - # type: () -> Dict[str, str] - """ - Return a dictionary containing key-value pairs for the information - items from the distro release file data source of the OS - distribution. - - For details, see :func:`distro.distro_release_info`. - """ - return self._distro_release_info - - def uname_info(self): - # type: () -> Dict[str, str] - """ - Return a dictionary containing key-value pairs for the information - items from the uname command data source of the OS distribution. - - For details, see :func:`distro.uname_info`. - """ - return self._uname_info - - def os_release_attr(self, attribute): - # type: (str) -> str - """ - Return a single named information item from the os-release file data - source of the OS distribution. - - For details, see :func:`distro.os_release_attr`. - """ - return self._os_release_info.get(attribute, "") - - def lsb_release_attr(self, attribute): - # type: (str) -> str - """ - Return a single named information item from the lsb_release command - output data source of the OS distribution. - - For details, see :func:`distro.lsb_release_attr`. - """ - return self._lsb_release_info.get(attribute, "") - - def distro_release_attr(self, attribute): - # type: (str) -> str - """ - Return a single named information item from the distro release file - data source of the OS distribution. - - For details, see :func:`distro.distro_release_attr`. - """ - return self._distro_release_info.get(attribute, "") - - def uname_attr(self, attribute): - # type: (str) -> str - """ - Return a single named information item from the uname command - output data source of the OS distribution. - - For details, see :func:`distro.uname_attr`. - """ - return self._uname_info.get(attribute, "") - - @cached_property - def _os_release_info(self): - # type: () -> Dict[str, str] - """ - Get the information items from the specified os-release file. - - Returns: - A dictionary containing all information items. - """ - if os.path.isfile(self.os_release_file): - with open(self.os_release_file) as release_file: - return self._parse_os_release_content(release_file) - return {} - - @staticmethod - def _parse_os_release_content(lines): - # type: (TextIO) -> Dict[str, str] - """ - Parse the lines of an os-release file. - - Parameters: - - * lines: Iterable through the lines in the os-release file. - Each line must be a unicode string or a UTF-8 encoded byte - string. - - Returns: - A dictionary containing all information items. - """ - props = {} - lexer = shlex.shlex(lines, posix=True) - lexer.whitespace_split = True - - # The shlex module defines its `wordchars` variable using literals, - # making it dependent on the encoding of the Python source file. - # In Python 2.6 and 2.7, the shlex source file is encoded in - # 'iso-8859-1', and the `wordchars` variable is defined as a byte - # string. This causes a UnicodeDecodeError to be raised when the - # parsed content is a unicode object. The following fix resolves that - # (... 
but it should be fixed in shlex...): - if sys.version_info[0] == 2 and isinstance(lexer.wordchars, bytes): - lexer.wordchars = lexer.wordchars.decode("iso-8859-1") - - tokens = list(lexer) - for token in tokens: - # At this point, all shell-like parsing has been done (i.e. - # comments processed, quotes and backslash escape sequences - # processed, multi-line values assembled, trailing newlines - # stripped, etc.), so the tokens are now either: - # * variable assignments: var=value - # * commands or their arguments (not allowed in os-release) - if "=" in token: - k, v = token.split("=", 1) - props[k.lower()] = v - else: - # Ignore any tokens that are not variable assignments - pass - - if "version_codename" in props: - # os-release added a version_codename field. Use that in - # preference to anything else Note that some distros purposefully - # do not have code names. They should be setting - # version_codename="" - props["codename"] = props["version_codename"] - elif "ubuntu_codename" in props: - # Same as above but a non-standard field name used on older Ubuntus - props["codename"] = props["ubuntu_codename"] - elif "version" in props: - # If there is no version_codename, parse it from the version - match = re.search(r"(\(\D+\))|,(\s+)?\D+", props["version"]) - if match: - codename = match.group() - codename = codename.strip("()") - codename = codename.strip(",") - codename = codename.strip() - # codename appears within paranthese. - props["codename"] = codename - - return props - - @cached_property - def _lsb_release_info(self): - # type: () -> Dict[str, str] - """ - Get the information items from the lsb_release command output. - - Returns: - A dictionary containing all information items. - """ - if not self.include_lsb: - return {} - with open(os.devnull, "wb") as devnull: - try: - cmd = ("lsb_release", "-a") - stdout = subprocess.check_output(cmd, stderr=devnull) - # Command not found or lsb_release returned error - except (OSError, subprocess.CalledProcessError): - return {} - content = self._to_str(stdout).splitlines() - return self._parse_lsb_release_content(content) - - @staticmethod - def _parse_lsb_release_content(lines): - # type: (Iterable[str]) -> Dict[str, str] - """ - Parse the output of the lsb_release command. - - Parameters: - - * lines: Iterable through the lines of the lsb_release output. - Each line must be a unicode string or a UTF-8 encoded byte - string. - - Returns: - A dictionary containing all information items. - """ - props = {} - for line in lines: - kv = line.strip("\n").split(":", 1) - if len(kv) != 2: - # Ignore lines without colon. - continue - k, v = kv - props.update({k.replace(" ", "_").lower(): v.strip()}) - return props - - @cached_property - def _uname_info(self): - # type: () -> Dict[str, str] - with open(os.devnull, "wb") as devnull: - try: - cmd = ("uname", "-rs") - stdout = subprocess.check_output(cmd, stderr=devnull) - except OSError: - return {} - content = self._to_str(stdout).splitlines() - return self._parse_uname_content(content) - - @staticmethod - def _parse_uname_content(lines): - # type: (Sequence[str]) -> Dict[str, str] - props = {} - match = re.search(r"^([^\s]+)\s+([\d\.]+)", lines[0].strip()) - if match: - name, version = match.groups() - - # This is to prevent the Linux kernel version from - # appearing as the 'best' version on otherwise - # identifiable distributions. 
- if name == "Linux": - return {} - props["id"] = name.lower() - props["name"] = name - props["release"] = version - return props - - @staticmethod - def _to_str(text): - # type: (Union[bytes, str]) -> str - encoding = sys.getfilesystemencoding() - encoding = "utf-8" if encoding == "ascii" else encoding - - if sys.version_info[0] >= 3: - if isinstance(text, bytes): - return text.decode(encoding) - else: - if isinstance(text, unicode): # noqa - return text.encode(encoding) - - return text - - @cached_property - def _distro_release_info(self): - # type: () -> Dict[str, str] - """ - Get the information items from the specified distro release file. - - Returns: - A dictionary containing all information items. - """ - if self.distro_release_file: - # If it was specified, we use it and parse what we can, even if - # its file name or content does not match the expected pattern. - distro_info = self._parse_distro_release_file(self.distro_release_file) - basename = os.path.basename(self.distro_release_file) - # The file name pattern for user-specified distro release files - # is somewhat more tolerant (compared to when searching for the - # file), because we want to use what was specified as best as - # possible. - match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename) - if "name" in distro_info and "cloudlinux" in distro_info["name"].lower(): - distro_info["id"] = "cloudlinux" - elif match: - distro_info["id"] = match.group(1) - return distro_info - else: - try: - basenames = os.listdir(self.etc_dir) - # We sort for repeatability in cases where there are multiple - # distro specific files; e.g. CentOS, Oracle, Enterprise all - # containing `redhat-release` on top of their own. - basenames.sort() - except OSError: - # This may occur when /etc is not readable but we can't be - # sure about the *-release files. Check common entries of - # /etc for information. If they turn out to not be there the - # error is handled in `_parse_distro_release_file()`. - basenames = [ - "SuSE-release", - "arch-release", - "base-release", - "centos-release", - "fedora-release", - "gentoo-release", - "mageia-release", - "mandrake-release", - "mandriva-release", - "mandrivalinux-release", - "manjaro-release", - "oracle-release", - "redhat-release", - "sl-release", - "slackware-version", - ] - for basename in basenames: - if basename in _DISTRO_RELEASE_IGNORE_BASENAMES: - continue - match = _DISTRO_RELEASE_BASENAME_PATTERN.match(basename) - if match: - filepath = os.path.join(self.etc_dir, basename) - distro_info = self._parse_distro_release_file(filepath) - if "name" in distro_info: - # The name is always present if the pattern matches - self.distro_release_file = filepath - distro_info["id"] = match.group(1) - if "cloudlinux" in distro_info["name"].lower(): - distro_info["id"] = "cloudlinux" - return distro_info - return {} - - def _parse_distro_release_file(self, filepath): - # type: (str) -> Dict[str, str] - """ - Parse a distro release file. - - Parameters: - - * filepath: Path name of the distro release file. - - Returns: - A dictionary containing all information items. - """ - try: - with open(filepath) as fp: - # Only parse the first line. For instance, on SLES there - # are multiple lines. We don't want them... - return self._parse_distro_release_content(fp.readline()) - except (OSError, IOError): - # Ignore not being able to read a specific, seemingly version - # related file. 
- # See https://github.com/python-distro/distro/issues/162 - return {} - - @staticmethod - def _parse_distro_release_content(line): - # type: (str) -> Dict[str, str] - """ - Parse a line from a distro release file. - - Parameters: - * line: Line from the distro release file. Must be a unicode string - or a UTF-8 encoded byte string. - - Returns: - A dictionary containing all information items. - """ - matches = _DISTRO_RELEASE_CONTENT_REVERSED_PATTERN.match(line.strip()[::-1]) - distro_info = {} - if matches: - # regexp ensures non-None - distro_info["name"] = matches.group(3)[::-1] - if matches.group(2): - distro_info["version_id"] = matches.group(2)[::-1] - if matches.group(1): - distro_info["codename"] = matches.group(1)[::-1] - elif line: - distro_info["name"] = line.strip() - return distro_info - - -_distro = LinuxDistribution() - - -def main(): - # type: () -> None - logger = logging.getLogger(__name__) - logger.setLevel(logging.DEBUG) - logger.addHandler(logging.StreamHandler(sys.stdout)) - - parser = argparse.ArgumentParser(description="OS distro info tool") - parser.add_argument( - "--json", "-j", help="Output in machine readable format", action="store_true" - ) - - parser.add_argument( - "--root-dir", - "-r", - type=str, - dest="root_dir", - help="Path to the root filesystem directory (defaults to /)", - ) - - args = parser.parse_args() - - if args.root_dir: - dist = LinuxDistribution( - include_lsb=False, include_uname=False, root_dir=args.root_dir - ) - else: - dist = _distro - - if args.json: - logger.info(json.dumps(dist.info(), indent=4, sort_keys=True)) - else: - logger.info("Name: %s", dist.name(pretty=True)) - distribution_version = dist.version(pretty=True) - logger.info("Version: %s", distribution_version) - distribution_codename = dist.codename() - logger.info("Codename: %s", distribution_codename) - - -if __name__ == "__main__": - main() diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/_inspect.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/_inspect.py deleted file mode 100644 index 262695b1c4723bfb57569f3badd6f81f1cccd3df..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pip/_vendor/rich/_inspect.py +++ /dev/null @@ -1,210 +0,0 @@ -from __future__ import absolute_import - -from inspect import cleandoc, getdoc, getfile, isclass, ismodule, signature -from typing import Any, Iterable, Optional, Tuple - -from .console import RenderableType, Group -from .highlighter import ReprHighlighter -from .jupyter import JupyterMixin -from .panel import Panel -from .pretty import Pretty -from .table import Table -from .text import Text, TextType - - -def _first_paragraph(doc: str) -> str: - """Get the first paragraph from a docstring.""" - paragraph, _, _ = doc.partition("\n\n") - return paragraph - - -def _reformat_doc(doc: str) -> str: - """Reformat docstring.""" - doc = cleandoc(doc).strip() - return doc - - -class Inspect(JupyterMixin): - """A renderable to inspect any Python Object. - - Args: - obj (Any): An object to inspect. - title (str, optional): Title to display over inspect result, or None use type. Defaults to None. - help (bool, optional): Show full help text rather than just first paragraph. Defaults to False. - methods (bool, optional): Enable inspection of callables. Defaults to False. - docs (bool, optional): Also render doc strings. Defaults to True. 
- private (bool, optional): Show private attributes (beginning with underscore). Defaults to False. - dunder (bool, optional): Show attributes starting with double underscore. Defaults to False. - sort (bool, optional): Sort attributes alphabetically. Defaults to True. - all (bool, optional): Show all attributes. Defaults to False. - value (bool, optional): Pretty print value of object. Defaults to True. - """ - - def __init__( - self, - obj: Any, - *, - title: Optional[TextType] = None, - help: bool = False, - methods: bool = False, - docs: bool = True, - private: bool = False, - dunder: bool = False, - sort: bool = True, - all: bool = True, - value: bool = True, - ) -> None: - self.highlighter = ReprHighlighter() - self.obj = obj - self.title = title or self._make_title(obj) - if all: - methods = private = dunder = True - self.help = help - self.methods = methods - self.docs = docs or help - self.private = private or dunder - self.dunder = dunder - self.sort = sort - self.value = value - - def _make_title(self, obj: Any) -> Text: - """Make a default title.""" - title_str = ( - str(obj) - if (isclass(obj) or callable(obj) or ismodule(obj)) - else str(type(obj)) - ) - title_text = self.highlighter(title_str) - return title_text - - def __rich__(self) -> Panel: - return Panel.fit( - Group(*self._render()), - title=self.title, - border_style="scope.border", - padding=(0, 1), - ) - - def _get_signature(self, name: str, obj: Any) -> Optional[Text]: - """Get a signature for a callable.""" - try: - _signature = str(signature(obj)) + ":" - except ValueError: - _signature = "(...)" - except TypeError: - return None - - source_filename: Optional[str] = None - try: - source_filename = getfile(obj) - except TypeError: - pass - - callable_name = Text(name, style="inspect.callable") - if source_filename: - callable_name.stylize(f"link file://{source_filename}") - signature_text = self.highlighter(_signature) - - qualname = name or getattr(obj, "__qualname__", name) - qual_signature = Text.assemble( - ("def ", "inspect.def"), (qualname, "inspect.callable"), signature_text - ) - - return qual_signature - - def _render(self) -> Iterable[RenderableType]: - """Render object.""" - - def sort_items(item: Tuple[str, Any]) -> Tuple[bool, str]: - key, (_error, value) = item - return (callable(value), key.strip("_").lower()) - - def safe_getattr(attr_name: str) -> Tuple[Any, Any]: - """Get attribute or any exception.""" - try: - return (None, getattr(obj, attr_name)) - except Exception as error: - return (error, None) - - obj = self.obj - keys = dir(obj) - total_items = len(keys) - if not self.dunder: - keys = [key for key in keys if not key.startswith("__")] - if not self.private: - keys = [key for key in keys if not key.startswith("_")] - not_shown_count = total_items - len(keys) - items = [(key, safe_getattr(key)) for key in keys] - if self.sort: - items.sort(key=sort_items) - - items_table = Table.grid(padding=(0, 1), expand=False) - items_table.add_column(justify="right") - add_row = items_table.add_row - highlighter = self.highlighter - - if callable(obj): - signature = self._get_signature("", obj) - if signature is not None: - yield signature - yield "" - - if self.docs: - _doc = getdoc(obj) - if _doc is not None: - if not self.help: - _doc = _first_paragraph(_doc) - doc_text = Text(_reformat_doc(_doc), style="inspect.help") - doc_text = highlighter(doc_text) - yield doc_text - yield "" - - if self.value and not (isclass(obj) or callable(obj) or ismodule(obj)): - yield Panel( - Pretty(obj, 
indent_guides=True, max_length=10, max_string=60), - border_style="inspect.value.border", - ) - yield "" - - for key, (error, value) in items: - key_text = Text.assemble( - ( - key, - "inspect.attr.dunder" if key.startswith("__") else "inspect.attr", - ), - (" =", "inspect.equals"), - ) - if error is not None: - warning = key_text.copy() - warning.stylize("inspect.error") - add_row(warning, highlighter(repr(error))) - continue - - if callable(value): - if not self.methods: - continue - - _signature_text = self._get_signature(key, value) - if _signature_text is None: - add_row(key_text, Pretty(value, highlighter=highlighter)) - else: - if self.docs: - docs = getdoc(value) - if docs is not None: - _doc = _reformat_doc(str(docs)) - if not self.help: - _doc = _first_paragraph(_doc) - _signature_text.append("\n" if "\n" in _doc else " ") - doc = highlighter(_doc) - doc.stylize("inspect.doc") - _signature_text.append(doc) - - add_row(key_text, _signature_text) - else: - add_row(key_text, Pretty(value, highlighter=highlighter)) - if items_table.row_count: - yield items_table - else: - yield Text.from_markup( - f"[b cyan]{not_shown_count}[/][i] attribute(s) not shown.[/i] Run [b][magenta]inspect[/]([not b]inspect[/])[/b] for options." - ) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/v1/datetime_parse.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/v1/datetime_parse.py deleted file mode 100644 index cfd54593b51ec4d167b1edb4d2ca0ffa935370d7..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pydantic/v1/datetime_parse.py +++ /dev/null @@ -1,248 +0,0 @@ -""" -Functions to parse datetime objects. - -We're using regular expressions rather than time.strptime because: -- They provide both validation and parsing. -- They're more flexible for datetimes. -- The date/datetime/time constructors produce friendlier error messages. - -Stolen from https://raw.githubusercontent.com/django/django/main/django/utils/dateparse.py at -9718fa2e8abe430c3526a9278dd976443d4ae3c6 - -Changed to: -* use standard python datetime types not django.utils.timezone -* raise ValueError when regex doesn't match rather than returning None -* support parsing unix timestamps for dates and datetimes -""" -import re -from datetime import date, datetime, time, timedelta, timezone -from typing import Dict, Optional, Type, Union - -from . import errors - -date_expr = r'(?P\d{4})-(?P\d{1,2})-(?P\d{1,2})' -time_expr = ( - r'(?P\d{1,2}):(?P\d{1,2})' - r'(?::(?P\d{1,2})(?:\.(?P\d{1,6})\d{0,6})?)?' - r'(?PZ|[+-]\d{2}(?::?\d{2})?)?$' -) - -date_re = re.compile(f'{date_expr}$') -time_re = re.compile(time_expr) -datetime_re = re.compile(f'{date_expr}[T ]{time_expr}') - -standard_duration_re = re.compile( - r'^' - r'(?:(?P-?\d+) (days?, )?)?' - r'((?:(?P-?\d+):)(?=\d+:\d+))?' - r'(?:(?P-?\d+):)?' - r'(?P-?\d+)' - r'(?:\.(?P\d{1,6})\d{0,6})?' - r'$' -) - -# Support the sections of ISO 8601 date representation that are accepted by timedelta -iso8601_duration_re = re.compile( - r'^(?P[-+]?)' - r'P' - r'(?:(?P\d+(.\d+)?)D)?' - r'(?:T' - r'(?:(?P\d+(.\d+)?)H)?' - r'(?:(?P\d+(.\d+)?)M)?' - r'(?:(?P\d+(.\d+)?)S)?' - r')?' 
- r'$' -) - -EPOCH = datetime(1970, 1, 1) -# if greater than this, the number is in ms, if less than or equal it's in seconds -# (in seconds this is 11th October 2603, in ms it's 20th August 1970) -MS_WATERSHED = int(2e10) -# slightly more than datetime.max in ns - (datetime.max - EPOCH).total_seconds() * 1e9 -MAX_NUMBER = int(3e20) -StrBytesIntFloat = Union[str, bytes, int, float] - - -def get_numeric(value: StrBytesIntFloat, native_expected_type: str) -> Union[None, int, float]: - if isinstance(value, (int, float)): - return value - try: - return float(value) - except ValueError: - return None - except TypeError: - raise TypeError(f'invalid type; expected {native_expected_type}, string, bytes, int or float') - - -def from_unix_seconds(seconds: Union[int, float]) -> datetime: - if seconds > MAX_NUMBER: - return datetime.max - elif seconds < -MAX_NUMBER: - return datetime.min - - while abs(seconds) > MS_WATERSHED: - seconds /= 1000 - dt = EPOCH + timedelta(seconds=seconds) - return dt.replace(tzinfo=timezone.utc) - - -def _parse_timezone(value: Optional[str], error: Type[Exception]) -> Union[None, int, timezone]: - if value == 'Z': - return timezone.utc - elif value is not None: - offset_mins = int(value[-2:]) if len(value) > 3 else 0 - offset = 60 * int(value[1:3]) + offset_mins - if value[0] == '-': - offset = -offset - try: - return timezone(timedelta(minutes=offset)) - except ValueError: - raise error() - else: - return None - - -def parse_date(value: Union[date, StrBytesIntFloat]) -> date: - """ - Parse a date/int/float/string and return a datetime.date. - - Raise ValueError if the input is well formatted but not a valid date. - Raise ValueError if the input isn't well formatted. - """ - if isinstance(value, date): - if isinstance(value, datetime): - return value.date() - else: - return value - - number = get_numeric(value, 'date') - if number is not None: - return from_unix_seconds(number).date() - - if isinstance(value, bytes): - value = value.decode() - - match = date_re.match(value) # type: ignore - if match is None: - raise errors.DateError() - - kw = {k: int(v) for k, v in match.groupdict().items()} - - try: - return date(**kw) - except ValueError: - raise errors.DateError() - - -def parse_time(value: Union[time, StrBytesIntFloat]) -> time: - """ - Parse a time/string and return a datetime.time. - - Raise ValueError if the input is well formatted but not a valid time. - Raise ValueError if the input isn't well formatted, in particular if it contains an offset. - """ - if isinstance(value, time): - return value - - number = get_numeric(value, 'time') - if number is not None: - if number >= 86400: - # doesn't make sense since the time time loop back around to 0 - raise errors.TimeError() - return (datetime.min + timedelta(seconds=number)).time() - - if isinstance(value, bytes): - value = value.decode() - - match = time_re.match(value) # type: ignore - if match is None: - raise errors.TimeError() - - kw = match.groupdict() - if kw['microsecond']: - kw['microsecond'] = kw['microsecond'].ljust(6, '0') - - tzinfo = _parse_timezone(kw.pop('tzinfo'), errors.TimeError) - kw_: Dict[str, Union[None, int, timezone]] = {k: int(v) for k, v in kw.items() if v is not None} - kw_['tzinfo'] = tzinfo - - try: - return time(**kw_) # type: ignore - except ValueError: - raise errors.TimeError() - - -def parse_datetime(value: Union[datetime, StrBytesIntFloat]) -> datetime: - """ - Parse a datetime/int/float/string and return a datetime.datetime. - - This function supports time zone offsets. 
When the input contains one, - the output uses a timezone with a fixed offset from UTC. - - Raise ValueError if the input is well formatted but not a valid datetime. - Raise ValueError if the input isn't well formatted. - """ - if isinstance(value, datetime): - return value - - number = get_numeric(value, 'datetime') - if number is not None: - return from_unix_seconds(number) - - if isinstance(value, bytes): - value = value.decode() - - match = datetime_re.match(value) # type: ignore - if match is None: - raise errors.DateTimeError() - - kw = match.groupdict() - if kw['microsecond']: - kw['microsecond'] = kw['microsecond'].ljust(6, '0') - - tzinfo = _parse_timezone(kw.pop('tzinfo'), errors.DateTimeError) - kw_: Dict[str, Union[None, int, timezone]] = {k: int(v) for k, v in kw.items() if v is not None} - kw_['tzinfo'] = tzinfo - - try: - return datetime(**kw_) # type: ignore - except ValueError: - raise errors.DateTimeError() - - -def parse_duration(value: StrBytesIntFloat) -> timedelta: - """ - Parse a duration int/float/string and return a datetime.timedelta. - - The preferred format for durations in Django is '%d %H:%M:%S.%f'. - - Also supports ISO 8601 representation. - """ - if isinstance(value, timedelta): - return value - - if isinstance(value, (int, float)): - # below code requires a string - value = f'{value:f}' - elif isinstance(value, bytes): - value = value.decode() - - try: - match = standard_duration_re.match(value) or iso8601_duration_re.match(value) - except TypeError: - raise TypeError('invalid type; expected timedelta, string, bytes, int or float') - - if not match: - raise errors.DurationError() - - kw = match.groupdict() - sign = -1 if kw.pop('sign', '+') == '-' else 1 - if kw.get('microseconds'): - kw['microseconds'] = kw['microseconds'].ljust(6, '0') - - if kw.get('seconds') and kw.get('microseconds') and kw['seconds'].startswith('-'): - kw['microseconds'] = '-' + kw['microseconds'] - - kw_ = {k: float(v) for k, v in kw.items() if v is not None} - - return sign * timedelta(**kw_) diff --git a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/futhark.py b/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/futhark.py deleted file mode 100644 index b0efa88afd6fd98257371028656c2a64b3adbf3e..0000000000000000000000000000000000000000 --- a/spaces/profayle/TerrapinTalk/myenv/lib/python3.9/site-packages/pygments/lexers/futhark.py +++ /dev/null @@ -1,106 +0,0 @@ -""" - pygments.lexers.futhark - ~~~~~~~~~~~~~~~~~~~~~~~ - - Lexer for the Futhark language - - :copyright: Copyright 2006-2023 by the Pygments team, see AUTHORS. - :license: BSD, see LICENSE for details. -""" - -from pygments.lexer import RegexLexer, bygroups -from pygments.token import Comment, Operator, Keyword, Name, String, \ - Number, Punctuation, Whitespace -from pygments import unistring as uni - -__all__ = ['FutharkLexer'] - - -class FutharkLexer(RegexLexer): - """ - A Futhark lexer - - .. 
versionadded:: 2.8 - """ - name = 'Futhark' - url = 'https://futhark-lang.org/' - aliases = ['futhark'] - filenames = ['*.fut'] - mimetypes = ['text/x-futhark'] - - num_types = ('i8', 'i16', 'i32', 'i64', 'u8', 'u16', 'u32', 'u64', 'f32', 'f64') - - other_types = ('bool', ) - - reserved = ('if', 'then', 'else', 'def', 'let', 'loop', 'in', 'with', - 'type', 'type~', 'type^', - 'val', 'entry', 'for', 'while', 'do', 'case', 'match', - 'include', 'import', 'module', 'open', 'local', 'assert', '_') - - ascii = ('NUL', 'SOH', '[SE]TX', 'EOT', 'ENQ', 'ACK', - 'BEL', 'BS', 'HT', 'LF', 'VT', 'FF', 'CR', 'S[OI]', 'DLE', - 'DC[1-4]', 'NAK', 'SYN', 'ETB', 'CAN', - 'EM', 'SUB', 'ESC', '[FGRU]S', 'SP', 'DEL') - - num_postfix = r'(%s)?' % '|'.join(num_types) - - identifier_re = '[a-zA-Z_][a-zA-Z_0-9\']*' - - # opstart_re = '+\-\*/%=\!><\|&\^' - - tokens = { - 'root': [ - (r'--(.*?)$', Comment.Single), - (r'\s+', Whitespace), - (r'\(\)', Punctuation), - (r'\b(%s)(?!\')\b' % '|'.join(reserved), Keyword.Reserved), - (r'\b(%s)(?!\')\b' % '|'.join(num_types + other_types), Keyword.Type), - - # Identifiers - (r'#\[([a-zA-Z_\(\) ]*)\]', Comment.Preproc), - (r'[#!]?(%s\.)*%s' % (identifier_re, identifier_re), Name), - - (r'\\', Operator), - (r'[-+/%=!><|&*^][-+/%=!><|&*^.]*', Operator), - (r'[][(),:;`{}?.\'~^]', Punctuation), - - # Numbers - (r'0[xX]_*[\da-fA-F](_*[\da-fA-F])*_*[pP][+-]?\d(_*\d)*' + num_postfix, - Number.Float), - (r'0[xX]_*[\da-fA-F](_*[\da-fA-F])*\.[\da-fA-F](_*[\da-fA-F])*' - r'(_*[pP][+-]?\d(_*\d)*)?' + num_postfix, Number.Float), - (r'\d(_*\d)*_*[eE][+-]?\d(_*\d)*' + num_postfix, Number.Float), - (r'\d(_*\d)*\.\d(_*\d)*(_*[eE][+-]?\d(_*\d)*)?' + num_postfix, Number.Float), - (r'0[bB]_*[01](_*[01])*' + num_postfix, Number.Bin), - (r'0[xX]_*[\da-fA-F](_*[\da-fA-F])*' + num_postfix, Number.Hex), - (r'\d(_*\d)*' + num_postfix, Number.Integer), - - # Character/String Literals - (r"'", String.Char, 'character'), - (r'"', String, 'string'), - # Special - (r'\[[a-zA-Z_\d]*\]', Keyword.Type), - (r'\(\)', Name.Builtin), - ], - 'character': [ - # Allows multi-chars, incorrectly. - (r"[^\\']'", String.Char, '#pop'), - (r"\\", String.Escape, 'escape'), - ("'", String.Char, '#pop'), - ], - 'string': [ - (r'[^\\"]+', String), - (r"\\", String.Escape, 'escape'), - ('"', String, '#pop'), - ], - - 'escape': [ - (r'[abfnrtv"\'&\\]', String.Escape, '#pop'), - (r'\^[][' + uni.Lu + r'@^_]', String.Escape, '#pop'), - ('|'.join(ascii), String.Escape, '#pop'), - (r'o[0-7]+', String.Escape, '#pop'), - (r'x[\da-fA-F]+', String.Escape, '#pop'), - (r'\d+', String.Escape, '#pop'), - (r'(\s+)(\\)', bygroups(Whitespace, String.Escape), '#pop'), - ], - } diff --git a/spaces/punith-098/controlnet-interior-design/explanation.py b/spaces/punith-098/controlnet-interior-design/explanation.py deleted file mode 100644 index 37bbd870df576077aca1de0cdc03d09621a034e6..0000000000000000000000000000000000000000 --- a/spaces/punith-098/controlnet-interior-design/explanation.py +++ /dev/null @@ -1,51 +0,0 @@ -import streamlit as st - -def make_inpainting_explanation(): - with st.expander("Explanation inpainting", expanded=False): - st.write("In the inpainting mode, you can draw regions on the input image that you want to regenerate. " - "This can be useful to remove unwanted objects from the image or to improve the consistency of the image." - ) - st.image("content/inpainting_sidebar.png", caption="Image before inpainting, note the ornaments on the wall", width=500) - st.write("You can find drawing options in the sidebar. 
There are two modes: freedraw and polygon. Freedraw allows the user to draw with a pencil of a certain width. " - "Polygon allows the user to draw a polygon by clicking on the image to add a point. The polygon is closed by right clicking.") - - st.write("### Example inpainting") - st.write("In the example below, the ornaments on the wall are removed. The inpainting is done by drawing a mask on the image.") - st.image("content/inpainting_before.jpg", caption="Image before inpainting, note the ornaments on the wall") - st.image("content/inpainting_after.png", caption="Image before inpainting, note the ornaments on the wall") - -def make_regeneration_explanation(): - with st.expander("Explanation object regeneration"): - st.write("In this object regeneration mode, the model calculates which objects occur in the image. " - "The user can then select which objects can be regenerated by the controlnet model by adding them in the multiselect box. " - "All the object classes that are not selected will remain the same as in the original image." - ) - st.write("### Example object regeneration") - st.write("In the example below, the room consists of various objects such as wall, ceiling, floor, lamp, bed, ... " - "In the multiselect box, all the objects except for 'lamp', 'bed and 'table' are selected to be regenerated. " - ) - st.image("content/regen_example.png", caption="Room where all concepts except for 'bed', 'lamp', 'table' are regenerated") - -def make_segmentation_explanation(): - with st.expander("Segmentation mode", expanded=False): - st.write("In the segmentation mode, the user can use his imagination and the paint brush to place concepts in the image. " - "In the left sidebar, you can first find the high level category of the concept you want to add, such as 'lighting', 'floor', .. " - "After selecting the category, you can select the specific concept you want to add in the 'Choose a color' dropdown. " - "This will change the color of the paint brush, which you can then use to draw on the input image. " - "The model will then regenerate the image with the concepts you have drawn and leave the rest of the image unchanged. " - ) - st.image("content/sidebar segmentation.png", caption="Sidebar with segmentation options", width=300) - st.write("You can choose the freedraw mode which gives you a pencil of a certain (chosen) width or the polygon mode. With the polygon mode you can click to add a point to the polygon and close the polygon by right clicking. ") - st.write("Important: " - "it's not easy to draw a good segmentation mask. This is because you need to keep in mind the perspective of the room and the exact " - "shape of the object you want to draw within this perspective. Controlnet will follow your segmentation mask pretty well, so " - "a non-natural object shape will sometimes result in weird outputs. However, give it a try and see what you can do! " - ) - st.image("content/segmentation window.png", caption="Example of a segmentation mask drawn on the input image to add a window to the room") - st.write("Tip: ") - st.write("In the concepts dropdown, you can select 'keep background' (which is a white color). Everything drawn in this color will use " - "the original underlying segmentation mask. This can be useful to help with generating other objects, since you give the model a some " - "freedom to generate outside the object borders." 
- ) - st.image("content/keep background 1.png", caption="Image with a poster drawn on the wall.") - st.image("content/keep background 2.png", caption="Image with a poster drawn on the wall surrounded by 'keep background'.") diff --git a/spaces/pycui/RealChar/realtime_ai_character/audio/speech_to_text/base.py b/spaces/pycui/RealChar/realtime_ai_character/audio/speech_to_text/base.py deleted file mode 100644 index e44aa753021024499c2392b347d9d8448badacbc..0000000000000000000000000000000000000000 --- a/spaces/pycui/RealChar/realtime_ai_character/audio/speech_to_text/base.py +++ /dev/null @@ -1,8 +0,0 @@ -from abc import ABC, abstractmethod - - -class SpeechToText(ABC): - @abstractmethod - def transcribe(self, audio_bytes, platform='web', prompt='') -> str: - # platform: 'web' | 'mobile' | 'terminal' - pass diff --git a/spaces/quidiaMuxgu/Expedit-SAM/DTS-HD Master Audio Suite V2.60.22 WIN Incl. Keygen [deepstatus] Keygen !!LINK!!.md b/spaces/quidiaMuxgu/Expedit-SAM/DTS-HD Master Audio Suite V2.60.22 WIN Incl. Keygen [deepstatus] Keygen !!LINK!!.md deleted file mode 100644 index 17b0d26b0925e6353962766bb03fe141cfad8631..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/DTS-HD Master Audio Suite V2.60.22 WIN Incl. Keygen [deepstatus] Keygen !!LINK!!.md +++ /dev/null @@ -1,11 +0,0 @@ - -
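For the `SpeechToText` base class above, a concrete engine only needs to implement `transcribe`; the subclass below is an illustrative stand-in under that assumption, not code from the RealChar project.

```python
# Illustrative implementation of the SpeechToText ABC shown above (not project code).
from realtime_ai_character.audio.speech_to_text.base import SpeechToText


class EchoSTT(SpeechToText):
    def transcribe(self, audio_bytes, platform='web', prompt='') -> str:
        # A real engine would decode audio_bytes and run an ASR model here.
        return "hello world"
```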

this software is a comprehensive package for dts-hd master audio decoding, encoding and conversion, and provides an easy way to integrate dts-hd master audio playback, recording, conversion and other operations.

    -

dts-hd master audio (dts-hd ma) is an audio coding format developed by dts, inc. (originally digital theater systems). it is a lossless extension of the core dts codec rather than a digital-to-analog format, and it preserves more detail than lossy codecs such as ac3 or mpeg-2 aac.

    -

    DTS-HD Master Audio Suite v2.60.22 WIN Incl. Keygen [deepstatus] keygen


DOWNLOAD https://geags.com/2uCrKi



    -

please download and install dts-hd master audio suite v2.60.21. there are two ways to play dts-hd ma: you can download the dts-hd ma tool pack from dts and install it on your computer, or you can use the dts-hd master audio suite v2.

    -

    note:

    • please, add "deepstatus" to your free-to-download programs list.
    • you can activate the dts-hd master audio by pressing the key (no software needed).
    • please use the "create a new log file" option in deepstatus to create a log file, even when using the software.
    -

product archive - the name of the file in which the product will appear, if one exists. in the deepstatus dts-hd master audio suite v2.60.22 product example, the file deepstatus dts-hd master audio suite v2.22 win incl. keygen [deepstatus] is given, which contains the product name and the name of the product archive.

    -

important note - this is a single installer of the above files for both 32-bit and 64-bit windows - not 4 files!


    -

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Kannada My Autograph Full Movie Free 94 [VERIFIED].md b/spaces/quidiaMuxgu/Expedit-SAM/Kannada My Autograph Full Movie Free 94 [VERIFIED].md deleted file mode 100644 index f4c7b8f9763b697bde089d0f40b43a95e1a1e2b6..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Kannada My Autograph Full Movie Free 94 [VERIFIED].md +++ /dev/null @@ -1,7 +0,0 @@ -
    -

I've been dating since I was 14 years old. I met my first boy when I was 16; my parents forced me to see the movie Lady And The Tramp, but I didn't care for it much. I dated for years after that, but the topic never came up. A few years after college, I came out to my family and they were absolutely shocked. So they and I sat down, and they asked me some questions about it. They asked me if I had been to a gay bar in high school or ever tried to have sex with guys, and what I felt about gay sex.

    -

    kannada my autograph full movie free 94


    Download ►►► https://geags.com/2uCsHE



    -

There are many movie genres, from comedy, drama, and period pieces to sci-fi, horror, action, animation, and others. Making movies on a large scale costs a lot of money and needs a whole team to produce a film that can turn a profit, so the budget has to be planned first.
    There are different categories of movies, the most common ones being comedy, drama, and thriller.

    -

Kannada is one of many languages spoken across India, alongside Tamil, Telugu, Malayalam, Marathi, Gujarati, Punjabi, and Nepali; Bhojpuri and Odia are also spoken in parts of north India, where Hindi is dominant. Many districts in India use Kannada as one of their languages. Since Kannada is a regional language, films with Indian historical and mythological backgrounds often use it, and even most Bollywood movies include at least one Kannada line. Compared to Hindi films, Kannada movies have the opportunity to connect more closely with their audience.

    899543212b
    -
    -
    \ No newline at end of file diff --git a/spaces/quidiaMuxgu/Expedit-SAM/Linqer Activation Key 13.md b/spaces/quidiaMuxgu/Expedit-SAM/Linqer Activation Key 13.md deleted file mode 100644 index 6ea7e832f490e791e1470e28f44119c5fac3b520..0000000000000000000000000000000000000000 --- a/spaces/quidiaMuxgu/Expedit-SAM/Linqer Activation Key 13.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Linqer Activation Key 13


    Download File ››››› https://geags.com/2uCqeu



    -
-April 8, 2016 - Linqer is a simple and lightweight program for converting SQL queries to LINQ, running both types of files and comparing them visually. Download CA ERwin Process Modeler R with serial key, Autodesk AutoCAD Design. [important] CA ERwin Process Modeler R version 8.0 is designed for data integration and business process management in organizations. This program will help you create and automate business processes that meet the requirements of today's business. The program is available in several languages. Select the language you need and get started. [important] CA ERwin Process Modeler R version 8.0 supports process control for over 150 companies worldwide! 8a78ff9644
    -
    -
    -

    diff --git a/spaces/rachana219/MODT2/utils/metrics.py b/spaces/rachana219/MODT2/utils/metrics.py deleted file mode 100644 index 6d2f53647529ab0fc52f2e69fe2571794b024c94..0000000000000000000000000000000000000000 --- a/spaces/rachana219/MODT2/utils/metrics.py +++ /dev/null @@ -1,227 +0,0 @@ -# Model validation metrics - -from pathlib import Path - -import matplotlib.pyplot as plt -import numpy as np -import torch - -from . import general - - -def fitness(x): - # Model fitness as a weighted combination of metrics - w = [0.0, 0.0, 0.1, 0.9] # weights for [P, R, mAP@0.5, mAP@0.5:0.95] - return (x[:, :4] * w).sum(1) - - -def ap_per_class(tp, conf, pred_cls, target_cls, v5_metric=False, plot=False, save_dir='.', names=()): - """ Compute the average precision, given the recall and precision curves. - Source: https://github.com/rafaelpadilla/Object-Detection-Metrics. - # Arguments - tp: True positives (nparray, nx1 or nx10). - conf: Objectness value from 0-1 (nparray). - pred_cls: Predicted object classes (nparray). - target_cls: True object classes (nparray). - plot: Plot precision-recall curve at mAP@0.5 - save_dir: Plot save directory - # Returns - The average precision as computed in py-faster-rcnn. - """ - - # Sort by objectness - i = np.argsort(-conf) - tp, conf, pred_cls = tp[i], conf[i], pred_cls[i] - - # Find unique classes - unique_classes = np.unique(target_cls) - nc = unique_classes.shape[0] # number of classes, number of detections - - # Create Precision-Recall curve and compute AP for each class - px, py = np.linspace(0, 1, 1000), [] # for plotting - ap, p, r = np.zeros((nc, tp.shape[1])), np.zeros((nc, 1000)), np.zeros((nc, 1000)) - for ci, c in enumerate(unique_classes): - i = pred_cls == c - n_l = (target_cls == c).sum() # number of labels - n_p = i.sum() # number of predictions - - if n_p == 0 or n_l == 0: - continue - else: - # Accumulate FPs and TPs - fpc = (1 - tp[i]).cumsum(0) - tpc = tp[i].cumsum(0) - - # Recall - recall = tpc / (n_l + 1e-16) # recall curve - r[ci] = np.interp(-px, -conf[i], recall[:, 0], left=0) # negative x, xp because xp decreases - - # Precision - precision = tpc / (tpc + fpc) # precision curve - p[ci] = np.interp(-px, -conf[i], precision[:, 0], left=1) # p at pr_score - - # AP from recall-precision curve - for j in range(tp.shape[1]): - ap[ci, j], mpre, mrec = compute_ap(recall[:, j], precision[:, j], v5_metric=v5_metric) - if plot and j == 0: - py.append(np.interp(px, mrec, mpre)) # precision at mAP@0.5 - - # Compute F1 (harmonic mean of precision and recall) - f1 = 2 * p * r / (p + r + 1e-16) - if plot: - plot_pr_curve(px, py, ap, Path(save_dir) / 'PR_curve.png', names) - plot_mc_curve(px, f1, Path(save_dir) / 'F1_curve.png', names, ylabel='F1') - plot_mc_curve(px, p, Path(save_dir) / 'P_curve.png', names, ylabel='Precision') - plot_mc_curve(px, r, Path(save_dir) / 'R_curve.png', names, ylabel='Recall') - - i = f1.mean(0).argmax() # max F1 index - return p[:, i], r[:, i], ap, f1[:, i], unique_classes.astype('int32') - - -def compute_ap(recall, precision, v5_metric=False): - """ Compute the average precision, given the recall and precision curves - # Arguments - recall: The recall curve (list) - precision: The precision curve (list) - v5_metric: Assume maximum recall to be 1.0, as in YOLOv5, MMDetetion etc. 
- # Returns - Average precision, precision curve, recall curve - """ - - # Append sentinel values to beginning and end - if v5_metric: # New YOLOv5 metric, same as MMDetection and Detectron2 repositories - mrec = np.concatenate(([0.], recall, [1.0])) - else: # Old YOLOv5 metric, i.e. default YOLOv7 metric - mrec = np.concatenate(([0.], recall, [recall[-1] + 0.01])) - mpre = np.concatenate(([1.], precision, [0.])) - - # Compute the precision envelope - mpre = np.flip(np.maximum.accumulate(np.flip(mpre))) - - # Integrate area under curve - method = 'interp' # methods: 'continuous', 'interp' - if method == 'interp': - x = np.linspace(0, 1, 101) # 101-point interp (COCO) - ap = np.trapz(np.interp(x, mrec, mpre), x) # integrate - else: # 'continuous' - i = np.where(mrec[1:] != mrec[:-1])[0] # points where x axis (recall) changes - ap = np.sum((mrec[i + 1] - mrec[i]) * mpre[i + 1]) # area under curve - - return ap, mpre, mrec - - -class ConfusionMatrix: - # Updated version of https://github.com/kaanakan/object_detection_confusion_matrix - def __init__(self, nc, conf=0.25, iou_thres=0.45): - self.matrix = np.zeros((nc + 1, nc + 1)) - self.nc = nc # number of classes - self.conf = conf - self.iou_thres = iou_thres - - def process_batch(self, detections, labels): - """ - Return intersection-over-union (Jaccard index) of boxes. - Both sets of boxes are expected to be in (x1, y1, x2, y2) format. - Arguments: - detections (Array[N, 6]), x1, y1, x2, y2, conf, class - labels (Array[M, 5]), class, x1, y1, x2, y2 - Returns: - None, updates confusion matrix accordingly - """ - detections = detections[detections[:, 4] > self.conf] - gt_classes = labels[:, 0].int() - detection_classes = detections[:, 5].int() - iou = general.box_iou(labels[:, 1:], detections[:, :4]) - - x = torch.where(iou > self.iou_thres) - if x[0].shape[0]: - matches = torch.cat((torch.stack(x, 1), iou[x[0], x[1]][:, None]), 1).cpu().numpy() - if x[0].shape[0] > 1: - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 1], return_index=True)[1]] - matches = matches[matches[:, 2].argsort()[::-1]] - matches = matches[np.unique(matches[:, 0], return_index=True)[1]] - else: - matches = np.zeros((0, 3)) - - n = matches.shape[0] > 0 - m0, m1, _ = matches.transpose().astype(np.int16) - for i, gc in enumerate(gt_classes): - j = m0 == i - if n and sum(j) == 1: - self.matrix[gc, detection_classes[m1[j]]] += 1 # correct - else: - self.matrix[self.nc, gc] += 1 # background FP - - if n: - for i, dc in enumerate(detection_classes): - if not any(m1 == i): - self.matrix[dc, self.nc] += 1 # background FN - - def matrix(self): - return self.matrix - - def plot(self, save_dir='', names=()): - try: - import seaborn as sn - - array = self.matrix / (self.matrix.sum(0).reshape(1, self.nc + 1) + 1E-6) # normalize - array[array < 0.005] = np.nan # don't annotate (would appear as 0.00) - - fig = plt.figure(figsize=(12, 9), tight_layout=True) - sn.set(font_scale=1.0 if self.nc < 50 else 0.8) # for label size - labels = (0 < len(names) < 99) and len(names) == self.nc # apply names to ticklabels - sn.heatmap(array, annot=self.nc < 30, annot_kws={"size": 8}, cmap='Blues', fmt='.2f', square=True, - xticklabels=names + ['background FP'] if labels else "auto", - yticklabels=names + ['background FN'] if labels else "auto").set_facecolor((1, 1, 1)) - fig.axes[0].set_xlabel('True') - fig.axes[0].set_ylabel('Predicted') - fig.savefig(Path(save_dir) / 'confusion_matrix.png', dpi=250) - except Exception as e: - pass - - def print(self): - 
for i in range(self.nc + 1): - print(' '.join(map(str, self.matrix[i]))) - - -# Plots ---------------------------------------------------------------------------------------------------------------- - -def plot_pr_curve(px, py, ap, save_dir='pr_curve.png', names=()): - # Precision-recall curve - fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) - py = np.stack(py, axis=1) - - if 0 < len(names) < 21: # display per-class legend if < 21 classes - for i, y in enumerate(py.T): - ax.plot(px, y, linewidth=1, label=f'{names[i]} {ap[i, 0]:.3f}') # plot(recall, precision) - else: - ax.plot(px, py, linewidth=1, color='grey') # plot(recall, precision) - - ax.plot(px, py.mean(1), linewidth=3, color='blue', label='all classes %.3f mAP@0.5' % ap[:, 0].mean()) - ax.set_xlabel('Recall') - ax.set_ylabel('Precision') - ax.set_xlim(0, 1) - ax.set_ylim(0, 1) - plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left") - fig.savefig(Path(save_dir), dpi=250) - - -def plot_mc_curve(px, py, save_dir='mc_curve.png', names=(), xlabel='Confidence', ylabel='Metric'): - # Metric-confidence curve - fig, ax = plt.subplots(1, 1, figsize=(9, 6), tight_layout=True) - - if 0 < len(names) < 21: # display per-class legend if < 21 classes - for i, y in enumerate(py): - ax.plot(px, y, linewidth=1, label=f'{names[i]}') # plot(confidence, metric) - else: - ax.plot(px, py.T, linewidth=1, color='grey') # plot(confidence, metric) - - y = py.mean(0) - ax.plot(px, y, linewidth=3, color='blue', label=f'all classes {y.max():.2f} at {px[y.argmax()]:.3f}') - ax.set_xlabel(xlabel) - ax.set_ylabel(ylabel) - ax.set_xlim(0, 1) - ax.set_ylim(0, 1) - plt.legend(bbox_to_anchor=(1.04, 1), loc="upper left") - fig.savefig(Path(save_dir), dpi=250) diff --git a/spaces/radames/Gradio-demo-video-image-webcam-upload/app.py b/spaces/radames/Gradio-demo-video-image-webcam-upload/app.py deleted file mode 100644 index 4fc19861c5842d523bbf4b57df97569d0a80dce8..0000000000000000000000000000000000000000 --- a/spaces/radames/Gradio-demo-video-image-webcam-upload/app.py +++ /dev/null @@ -1,58 +0,0 @@ -import gradio as gr - -def predict(video_in, image_in_video, image_in_img): - if video_in == None and image_in_video == None and image_in_img == None: - raise gr.Error("Please upload a video or image.") - if image_in_video or image_in_img: - print("image", image_in_video, image_in_img) - image = image_in_video or image_in_img - return image - - return video_in - - -def toggle(choice): - if choice == "webcam": - return gr.update(visible=True, value=None), gr.update(visible=False, value=None) - else: - return gr.update(visible=False, value=None), gr.update(visible=True, value=None) - - -with gr.Blocks() as blocks: - gr.Markdown("### Video or Image? 
WebCam or Upload?""") - with gr.Tab("Video") as tab: - with gr.Row(): - with gr.Column(): - video_or_file_opt = gr.Radio(["webcam", "upload"], value="webcam", - label="How would you like to upload your video?") - video_in = gr.Video(source="webcam", include_audio=False) - video_or_file_opt.change(fn=lambda s: gr.update(source=s, value=None), inputs=video_or_file_opt, - outputs=video_in, queue=False, show_progress=False) - with gr.Column(): - video_out = gr.Video() - run_btn = gr.Button("Run") - run_btn.click(fn=predict, inputs=[video_in], outputs=[video_out]) - gr.Examples(fn=predict, examples=[], inputs=[ - video_in], outputs=[video_out]) - - with gr.Tab("Image"): - with gr.Row(): - with gr.Column(): - image_or_file_opt = gr.Radio(["webcam", "file"], value="webcam", - label="How would you like to upload your image?") - image_in_video = gr.Image(source="webcam", type="filepath") - image_in_img = gr.Image( - source="upload", visible=False, type="filepath") - - image_or_file_opt.change(fn=toggle, inputs=[image_or_file_opt], - outputs=[image_in_video, image_in_img], queue=False, show_progress=False) - with gr.Column(): - image_out = gr.Image() - run_btn = gr.Button("Run") - run_btn.click(fn=predict, inputs=[ - image_in_img, image_in_video], outputs=[image_out]) - gr.Examples(fn=predict, examples=[], inputs=[ - image_in_img, image_in_video], outputs=[image_out]) - -blocks.queue() -blocks.launch() diff --git a/spaces/raedeXanto/academic-chatgpt-beta/Get FREE BEST LEGIT HACK CSGO FREE DOWNLOAD UNDETECTED 2020 WINDOWS AND MAC OS MacOSX Now and Boost Your Performance.md b/spaces/raedeXanto/academic-chatgpt-beta/Get FREE BEST LEGIT HACK CSGO FREE DOWNLOAD UNDETECTED 2020 WINDOWS AND MAC OS MacOSX Now and Boost Your Performance.md deleted file mode 100644 index 1a3995cf354dfe37bf7ec66787b1ea3f5a37b245..0000000000000000000000000000000000000000 --- a/spaces/raedeXanto/academic-chatgpt-beta/Get FREE BEST LEGIT HACK CSGO FREE DOWNLOAD UNDETECTED 2020 WINDOWS AND MAC OS MacOSX Now and Boost Your Performance.md +++ /dev/null @@ -1,127 +0,0 @@ - -

    The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 (PC) Hack Tool Download

    -

    Introduction

    -

    If you are a fan of fighting games, you might have heard of Xuan Dou Zhi Wang, a Chinese online PC fighting game that was inspired by the popular King of Fighters series. The game features stunning 2D sprites over 3D backgrounds, as well as guest characters from SNK Playmore, such as Terry Bogard and Benimaru Nikaido.

    -

    The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 (PC) Hack Tool Download


    Download File ✓✓✓ https://tinourl.com/2uL407



    -

    However, the game has not been officially released yet; it has been in development since 2011. The game is only available in China via digital download, and it requires an internet connection to play. Moreover, the game is quite challenging and competitive, and you might need a lot of coins and gems to unlock new characters, skins, and stages.

    -

    That's why we have created a hack tool for Xuan Dou Zhi Wang that will help you get unlimited coins and gems, as well as unlock all the characters and skins in the game. With this hack tool, you can enjoy the game without any restrictions or limitations. You can also bypass the anti-cheat system of the game and play online without any risk of getting banned.

    -

    What is Xuan Dou Zhi Wang?

    -

    Xuan Dou Zhi Wang (炫斗之王, "The King of Dazzling Fighters", tentatively translated for an international audience as "King of Combat") is an online PC fighting game published by Tencent Games. The game's first Beta build has been available in China via digital download since 2011. It is still considered an in-development game and hasn't received a proper release.

    -

    The game draws important influence from the King of Fighters series, both in gameplay and art design. It is generally believed that the game was developed in response to Chinese players' desire to play The King of Fighters (or at least something "similar") at home, since the sale of video game consoles was banned in China at the time.

    -

    The game's distinctive visual style consists of 2D sprites over 3D backgrounds. The character sprites are created from 3D models that are later photographed and retouched to give the appearance of traditional 2D sprites. This approach also lets the designers include alternative skins (alternate costumes) for the characters, which are added via updates.

    -

    The gameplay is taken straight from games like The King of Fighters '97 and The King of Fighters '98. It uses the same 4-button layout, including the ability to perform short and super jumps, rolls and strong attacks ("blowbacks"); throws, however, are performed by pressing the two punch buttons together. A maximum of 3 Super gauge levels are attainable. Pressing light kick and heavy punch together puts the character in MAX mode, in which its damage is increased and it is able to perform an improved version of a super move.

    -

    Single and team battles (3 vs 3 fighters, with either six players each controlling one fighter, or two players each controlling three) are available, in the King of Fighters fashion.

    -

    How to hack The King Of Fighters XV~Xuan Dou Zhi Wang on PC
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC cheat engine download
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 hack tool free download
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC trainer mod
    -The King Of Fighters XV~Xuan Dou Zhi Wang hack apk for PC
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC unlimited money hack
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC unlock all characters hack
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC online hack tool
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC hack no survey no password
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC hack no human verification
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC hack 2023 working
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC hack latest version
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC hack without root or jailbreak
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC hack easy to use
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC hack safe and secure
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC hack anti-ban system
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC hack tutorial and guide
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC hack features and benefits
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC hack reviews and ratings
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC hack download link and instructions
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC game download full version free
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC game crack download
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC game torrent download
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC game serial key generator
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC game patch update download
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC game system requirements and compatibility
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC game gameplay and features
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC game tips and tricks
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC game cheats and codes
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC game walkthrough and guide
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC game best characters and teams
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC game tier list and rankings
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC game modes and events
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC game online multiplayer and co-op
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC game offline mode and story mode
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC game graphics and sound quality
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC game comparison with other fighting games
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC game fan art and wallpapers
    -The King Of Fighters XV~Xuan Dou Zhi Wang PC game official website and social media links
    -The King Of Fighters XV~Xuan Dou Zhi Wang V1.15.46.1 PC game news and updates

    -

    What is the hack tool?

    -

    The hack tool for Xuan Dou Zhi Wang is a piece of software that modifies the game files to give you unlimited coins and gems and to unlock all the characters and skins in the game. The hack tool works for both the online and offline modes of the game, and it does not require any root or jailbreak.

    -

    The hack tool is easy to use and safe to download. You just need to follow some simple steps to install and run it on your PC. The hack tool will automatically detect your game version and apply the hacks accordingly. You can also customize your options according to your preferences.

    -

    Why use the hack tool?

    -

    There are many reasons why you might want to use the hack tool for Xuan Dou Zhi Wang. Here are some of them:

    -
    • You can get unlimited coins and gems that you can use to buy new characters, skins, stages, items, and more.
    • You can unlock all the characters and skins in the game, including the guest characters from SNK Playmore.
    • You can bypass the anti-cheat system of the game and play online without any risk of getting banned.
    • You can enjoy the game without any restrictions or limitations.
    • You can have more fun and challenge with your friends or other players online.
    -

    Features of the hack tool

    -

    Unlimited coins and gems

    -

    Coins and gems are the main currencies in Xuan Dou Zhi Wang. You can use them to buy new characters, skins, stages, items, and more. However, earning coins and gems can be quite difficult and time-consuming in the game. You need to win matches, complete missions, or spend real money to get them.

    -

    With our hack tool, you can get unlimited coins and gems for free. You don't need to spend any time or money to get them. You can just enter any amount you want in the hack tool and it will be added to your account instantly.

    -

    Unlock all characters and skins

    -

    Xuan Dou Zhi Wang has a roster of 24 playable characters (including two guest characters from SNK Playmore), each with their own unique skills, movesets, combos, and special attacks. The game also has over 100 different skins for each character that change their appearance and sometimes their voice lines.

    -

    However, not all characters and skins are available from the start. You need to unlock them by spending coins or gems or by completing certain requirements in the game. Some characters and skins are also exclusive to certain events or promotions that might not be available anymore.

    -

    With our hack tool, you can unlock all characters and skins in Xuan Dou Zhi Wang with just one click. You don't need to spend any coins or gems or complete any requirements to get them. You can access all characters and skins anytime you want.

    -

    Bypass anti-cheat system

    -

    Xuan Dou Zhi Wang has an anti-cheat system that detects any modifications or hacks in the game files or memory. If you are caught using any cheats or hacks in online mode, you might get banned from playing online or even lose your account permanently.

    -

    With our hack tool, you can bypass this anti-cheat system easily. Our hack tool uses advanced encryption techniques that make it undetectable by any anti-cheat software or server checks. You can use our hack tool without any risk of getting banned or losing your account.

    -

    Easy to use and safe to download

    -

    Our hack tool for Xuan Dou Zhi Wang is very easy to use and safe to download. You don't need any technical skills or knowledge to use it. You just need to follow these simple steps:

    -
    1. Download our hack tool from our website.
    2. Extract it using WinRAR or any other file extractor.
    3. Run it as administrator on your PC.
    4. Select your options according to your preferences.
    5. Click on the "Start Hack" button.
    6. Launch Xuan Dou Zhi Wang on your PC.
    7. Enjoy!
    -

    Our hack tool is also virus-free and malware-free. We scan it regularly with various antivirus programs to ensure its safety and security. You can download our hack tool without any worries or doubts.

    -

    How to use the hack tool

    -

    Download and install our hack tool

    -

    Go to our website and click on the download link for the hack tool. Choose a location where you want to save our hack tool file on your PC; we recommend saving it on your desktop for easy access. Then click on the "Save" button and wait for the download to finish.

    -

    Once the download is complete, you need to extract our hack tool file using WinRAR or any other file extractor. Right-click on our hack tool file and select the "Extract here" or "Extract to" option. A new folder will be created with the same name as our hack tool file.

    -

    Open the new folder and look for a file named "Xuan Dou Zhi Wang Hack Tool.exe". This is our hack tool executable file that you need to run on your PC. Right-click on it and select the "Run as administrator" option. A pop-up window will appear asking for permission to run our hack tool. Click on the "Yes" button and wait for our hack tool to load.

    -

    Run our hack tool and select your options

    -

    Once our hack tool is loaded, you will see a user-friendly interface with various options and features. You can customize your options according to your preferences. Here are some of the options that you can choose from:

    -
    • Coins: Enter any amount of coins that you want to add to your account.
    • Gems: Enter any amount of gems that you want to add to your account.
    • Unlock All Characters: Check this box if you want to unlock all characters in the game.
    • Unlock All Skins: Check this box if you want to unlock all skins in the game.
    • Bypass Anti-Cheat: Check this box if you want to bypass the anti-cheat system of the game.
    -

    You can also see some information about your game version, your account status, and your connection status at the bottom of our hack tool interface. Make sure that everything is correct before proceeding.

    -

    Launch the game and enjoy

    -

    After selecting your options, click on the "Start Hack" button at the bottom right corner of our hack tool interface. A progress bar will appear showing you the hacking process. Wait for a few minutes until the hacking process is complete.

    -

    Once the hacking process is complete, a message will pop up saying "Hack Successful". Click on the "OK" button and close our hack tool. Then launch Xuan Dou Zhi Wang on your PC and log in to your account. You will see that your coins and gems have been added to your account, and all characters and skins have been unlocked.

    -

    You can now enjoy playing Xuan Dou Zhi Wang without any restrictions or limitations. You can also play online without any risk of getting banned or losing your account. Have fun and challenge your friends or other players online with your new skills and abilities.

    -

    Conclusion

    -

    In this article, we have shown you how to use our hack tool for Xuan Dou Zhi Wang, an online PC fighting game that was inspired by the King of Fighters series. With our hack tool, you can get unlimited coins and gems, as well as unlock all characters and skins in the game. You can also bypass the anti-cheat system of the game and play online without any risk of getting banned.

    -

    Our hack tool is easy to use and safe to download. You just need to follow some simple steps to install and run it on your PC. You can also customize your options according to your preferences. Our hack tool works for both online and offline modes of the game, and it does not require any root or jailbreak.

    -

    If you are a fan of fighting games, you should definitely try Xuan Dou Zhi Wang and experience its stunning graphics, gameplay, and features. You can download our hack tool from our website and enjoy the game without any restrictions or limitations.

    -

    FAQs

    -
    • Q: Is Xuan Dou Zhi Wang available in other languages?
    • A: No, Xuan Dou Zhi Wang is only available in Chinese at the moment.
    • Q: Is Xuan Dou Zhi Wang compatible with other operating systems?
    • A: No, Xuan Dou Zhi Wang is only compatible with the Windows operating system at the moment.
    • Q: Is Xuan Dou Zhi Wang free to play?
    • A: Yes, Xuan Dou Zhi Wang is free to play, but it has some in-game purchases that require real money.
    • Q: Is Xuan Dou Zhi Wang still in development?
    • A: Yes, Xuan Dou Zhi Wang has been in development since 2011, and it hasn't received a proper release yet.
    • Q: Is Xuan Dou Zhi Wang safe to play?
    • A: Yes, Xuan Dou Zhi Wang is safe to play, as long as you use our hack tool to bypass the anti-cheat system of the game.
    -

    -
    -
    \ No newline at end of file diff --git a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/assert.d.ts b/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/assert.d.ts deleted file mode 100644 index e8595e637123b36d6796d5e159ebbb5320254cb2..0000000000000000000000000000000000000000 --- a/spaces/rayan-saleh/whisper2notion/server/node_modules/@types/node/assert.d.ts +++ /dev/null @@ -1,961 +0,0 @@ -/** - * The `assert` module provides a set of assertion functions for verifying - * invariants. - * @see [source](https://github.com/nodejs/node/blob/v18.0.0/lib/assert.js) - */ -declare module 'assert' { - /** - * An alias of {@link ok}. - * @since v0.5.9 - * @param value The input that is checked for being truthy. - */ - function assert(value: unknown, message?: string | Error): asserts value; - namespace assert { - /** - * Indicates the failure of an assertion. All errors thrown by the `assert` module - * will be instances of the `AssertionError` class. - */ - class AssertionError extends Error { - actual: unknown; - expected: unknown; - operator: string; - generatedMessage: boolean; - code: 'ERR_ASSERTION'; - constructor(options?: { - /** If provided, the error message is set to this value. */ - message?: string | undefined; - /** The `actual` property on the error instance. */ - actual?: unknown | undefined; - /** The `expected` property on the error instance. */ - expected?: unknown | undefined; - /** The `operator` property on the error instance. */ - operator?: string | undefined; - /** If provided, the generated stack trace omits frames before this function. */ - // tslint:disable-next-line:ban-types - stackStartFn?: Function | undefined; - }); - } - /** - * This feature is currently experimental and behavior might still change. - * @since v14.2.0, v12.19.0 - * @experimental - */ - class CallTracker { - /** - * The wrapper function is expected to be called exactly `exact` times. If the - * function has not been called exactly `exact` times when `tracker.verify()` is called, then `tracker.verify()` will throw an - * error. - * - * ```js - * import assert from 'assert'; - * - * // Creates call tracker. - * const tracker = new assert.CallTracker(); - * - * function func() {} - * - * // Returns a function that wraps func() that must be called exact times - * // before tracker.verify(). - * const callsfunc = tracker.calls(func); - * ``` - * @since v14.2.0, v12.19.0 - * @param [fn='A no-op function'] - * @param [exact=1] - * @return that wraps `fn`. - */ - calls(exact?: number): () => void; - calls any>(fn?: Func, exact?: number): Func; - /** - * Example: - * - * ```js - * import assert from 'node:assert'; - * - * const tracker = new assert.CallTracker(); - * - * function func() {} - * const callsfunc = tracker.calls(func); - * callsfunc(1, 2, 3); - * - * assert.deepStrictEqual(tracker.getCalls(callsfunc), - * [{ thisArg: this, arguments: [1, 2, 3 ] }]); - * ``` - * - * @since v18.8.0, v16.18.0 - * @params fn - * @returns An Array with the calls to a tracked function. - */ - getCalls(fn: Function): CallTrackerCall[]; - /** - * The arrays contains information about the expected and actual number of calls of - * the functions that have not been called the expected number of times. - * - * ```js - * import assert from 'assert'; - * - * // Creates call tracker. 
- * const tracker = new assert.CallTracker(); - * - * function func() {} - * - * function foo() {} - * - * // Returns a function that wraps func() that must be called exact times - * // before tracker.verify(). - * const callsfunc = tracker.calls(func, 2); - * - * // Returns an array containing information on callsfunc() - * tracker.report(); - * // [ - * // { - * // message: 'Expected the func function to be executed 2 time(s) but was - * // executed 0 time(s).', - * // actual: 0, - * // expected: 2, - * // operator: 'func', - * // stack: stack trace - * // } - * // ] - * ``` - * @since v14.2.0, v12.19.0 - * @return of objects containing information about the wrapper functions returned by `calls`. - */ - report(): CallTrackerReportInformation[]; - /** - * Reset calls of the call tracker. - * If a tracked function is passed as an argument, the calls will be reset for it. - * If no arguments are passed, all tracked functions will be reset. - * - * ```js - * import assert from 'node:assert'; - * - * const tracker = new assert.CallTracker(); - * - * function func() {} - * const callsfunc = tracker.calls(func); - * - * callsfunc(); - * // Tracker was called once - * tracker.getCalls(callsfunc).length === 1; - * - * tracker.reset(callsfunc); - * tracker.getCalls(callsfunc).length === 0; - * ``` - * - * @since v18.8.0, v16.18.0 - * @param fn a tracked function to reset. - */ - reset(fn?: Function): void; - /** - * Iterates through the list of functions passed to `tracker.calls()` and will throw an error for functions that - * have not been called the expected number of times. - * - * ```js - * import assert from 'assert'; - * - * // Creates call tracker. - * const tracker = new assert.CallTracker(); - * - * function func() {} - * - * // Returns a function that wraps func() that must be called exact times - * // before tracker.verify(). - * const callsfunc = tracker.calls(func, 2); - * - * callsfunc(); - * - * // Will throw an error since callsfunc() was only called once. - * tracker.verify(); - * ``` - * @since v14.2.0, v12.19.0 - */ - verify(): void; - } - interface CallTrackerCall { - thisArg: object; - arguments: unknown[]; - } - interface CallTrackerReportInformation { - message: string; - /** The actual number of times the function was called. */ - actual: number; - /** The number of times the function was expected to be called. */ - expected: number; - /** The name of the function that is wrapped. */ - operator: string; - /** A stack trace of the function. */ - stack: object; - } - type AssertPredicate = RegExp | (new () => object) | ((thrown: unknown) => boolean) | object | Error; - /** - * Throws an `AssertionError` with the provided error message or a default - * error message. If the `message` parameter is an instance of an `Error` then - * it will be thrown instead of the `AssertionError`. - * - * ```js - * import assert from 'assert/strict'; - * - * assert.fail(); - * // AssertionError [ERR_ASSERTION]: Failed - * - * assert.fail('boom'); - * // AssertionError [ERR_ASSERTION]: boom - * - * assert.fail(new TypeError('need array')); - * // TypeError: need array - * ``` - * - * Using `assert.fail()` with more than two arguments is possible but deprecated. - * See below for further details. - * @since v0.1.21 - * @param [message='Failed'] - */ - function fail(message?: string | Error): never; - /** @deprecated since v10.0.0 - use fail([message]) or other assert functions instead. 
*/ - function fail( - actual: unknown, - expected: unknown, - message?: string | Error, - operator?: string, - // tslint:disable-next-line:ban-types - stackStartFn?: Function - ): never; - /** - * Tests if `value` is truthy. It is equivalent to`assert.equal(!!value, true, message)`. - * - * If `value` is not truthy, an `AssertionError` is thrown with a `message`property set equal to the value of the `message` parameter. If the `message`parameter is `undefined`, a default - * error message is assigned. If the `message`parameter is an instance of an `Error` then it will be thrown instead of the`AssertionError`. - * If no arguments are passed in at all `message` will be set to the string:`` 'No value argument passed to `assert.ok()`' ``. - * - * Be aware that in the `repl` the error message will be different to the one - * thrown in a file! See below for further details. - * - * ```js - * import assert from 'assert/strict'; - * - * assert.ok(true); - * // OK - * assert.ok(1); - * // OK - * - * assert.ok(); - * // AssertionError: No value argument passed to `assert.ok()` - * - * assert.ok(false, 'it\'s false'); - * // AssertionError: it's false - * - * // In the repl: - * assert.ok(typeof 123 === 'string'); - * // AssertionError: false == true - * - * // In a file (e.g. test.js): - * assert.ok(typeof 123 === 'string'); - * // AssertionError: The expression evaluated to a falsy value: - * // - * // assert.ok(typeof 123 === 'string') - * - * assert.ok(false); - * // AssertionError: The expression evaluated to a falsy value: - * // - * // assert.ok(false) - * - * assert.ok(0); - * // AssertionError: The expression evaluated to a falsy value: - * // - * // assert.ok(0) - * ``` - * - * ```js - * import assert from 'assert/strict'; - * - * // Using `assert()` works the same: - * assert(0); - * // AssertionError: The expression evaluated to a falsy value: - * // - * // assert(0) - * ``` - * @since v0.1.21 - */ - function ok(value: unknown, message?: string | Error): asserts value; - /** - * **Strict assertion mode** - * - * An alias of {@link strictEqual}. - * - * **Legacy assertion mode** - * - * > Stability: 3 - Legacy: Use {@link strictEqual} instead. - * - * Tests shallow, coercive equality between the `actual` and `expected` parameters - * using the [`==` operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Equality). `NaN` is specially handled - * and treated as being identical if both sides are `NaN`. - * - * ```js - * import assert from 'assert'; - * - * assert.equal(1, 1); - * // OK, 1 == 1 - * assert.equal(1, '1'); - * // OK, 1 == '1' - * assert.equal(NaN, NaN); - * // OK - * - * assert.equal(1, 2); - * // AssertionError: 1 == 2 - * assert.equal({ a: { b: 1 } }, { a: { b: 1 } }); - * // AssertionError: { a: { b: 1 } } == { a: { b: 1 } } - * ``` - * - * If the values are not equal, an `AssertionError` is thrown with a `message`property set equal to the value of the `message` parameter. If the `message`parameter is undefined, a default - * error message is assigned. If the `message`parameter is an instance of an `Error` then it will be thrown instead of the`AssertionError`. - * @since v0.1.21 - */ - function equal(actual: unknown, expected: unknown, message?: string | Error): void; - /** - * **Strict assertion mode** - * - * An alias of {@link notStrictEqual}. - * - * **Legacy assertion mode** - * - * > Stability: 3 - Legacy: Use {@link notStrictEqual} instead. 
- * - * Tests shallow, coercive inequality with the [`!=` operator](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Operators/Inequality). `NaN` is - * specially handled and treated as being identical if both sides are `NaN`. - * - * ```js - * import assert from 'assert'; - * - * assert.notEqual(1, 2); - * // OK - * - * assert.notEqual(1, 1); - * // AssertionError: 1 != 1 - * - * assert.notEqual(1, '1'); - * // AssertionError: 1 != '1' - * ``` - * - * If the values are equal, an `AssertionError` is thrown with a `message`property set equal to the value of the `message` parameter. If the `message`parameter is undefined, a default error - * message is assigned. If the `message`parameter is an instance of an `Error` then it will be thrown instead of the`AssertionError`. - * @since v0.1.21 - */ - function notEqual(actual: unknown, expected: unknown, message?: string | Error): void; - /** - * **Strict assertion mode** - * - * An alias of {@link deepStrictEqual}. - * - * **Legacy assertion mode** - * - * > Stability: 3 - Legacy: Use {@link deepStrictEqual} instead. - * - * Tests for deep equality between the `actual` and `expected` parameters. Consider - * using {@link deepStrictEqual} instead. {@link deepEqual} can have - * surprising results. - * - * _Deep equality_ means that the enumerable "own" properties of child objects - * are also recursively evaluated by the following rules. - * @since v0.1.21 - */ - function deepEqual(actual: unknown, expected: unknown, message?: string | Error): void; - /** - * **Strict assertion mode** - * - * An alias of {@link notDeepStrictEqual}. - * - * **Legacy assertion mode** - * - * > Stability: 3 - Legacy: Use {@link notDeepStrictEqual} instead. - * - * Tests for any deep inequality. Opposite of {@link deepEqual}. - * - * ```js - * import assert from 'assert'; - * - * const obj1 = { - * a: { - * b: 1 - * } - * }; - * const obj2 = { - * a: { - * b: 2 - * } - * }; - * const obj3 = { - * a: { - * b: 1 - * } - * }; - * const obj4 = Object.create(obj1); - * - * assert.notDeepEqual(obj1, obj1); - * // AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } } - * - * assert.notDeepEqual(obj1, obj2); - * // OK - * - * assert.notDeepEqual(obj1, obj3); - * // AssertionError: { a: { b: 1 } } notDeepEqual { a: { b: 1 } } - * - * assert.notDeepEqual(obj1, obj4); - * // OK - * ``` - * - * If the values are deeply equal, an `AssertionError` is thrown with a`message` property set equal to the value of the `message` parameter. If the`message` parameter is undefined, a default - * error message is assigned. If the`message` parameter is an instance of an `Error` then it will be thrown - * instead of the `AssertionError`. - * @since v0.1.21 - */ - function notDeepEqual(actual: unknown, expected: unknown, message?: string | Error): void; - /** - * Tests strict equality between the `actual` and `expected` parameters as - * determined by [`Object.is()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/is). - * - * ```js - * import assert from 'assert/strict'; - * - * assert.strictEqual(1, 2); - * // AssertionError [ERR_ASSERTION]: Expected inputs to be strictly equal: - * // - * // 1 !== 2 - * - * assert.strictEqual(1, 1); - * // OK - * - * assert.strictEqual('Hello foobar', 'Hello World!'); - * // AssertionError [ERR_ASSERTION]: Expected inputs to be strictly equal: - * // + actual - expected - * // - * // + 'Hello foobar' - * // - 'Hello World!' 
- * // ^ - * - * const apples = 1; - * const oranges = 2; - * assert.strictEqual(apples, oranges, `apples ${apples} !== oranges ${oranges}`); - * // AssertionError [ERR_ASSERTION]: apples 1 !== oranges 2 - * - * assert.strictEqual(1, '1', new TypeError('Inputs are not identical')); - * // TypeError: Inputs are not identical - * ``` - * - * If the values are not strictly equal, an `AssertionError` is thrown with a`message` property set equal to the value of the `message` parameter. If the`message` parameter is undefined, a - * default error message is assigned. If the`message` parameter is an instance of an `Error` then it will be thrown - * instead of the `AssertionError`. - * @since v0.1.21 - */ - function strictEqual(actual: unknown, expected: T, message?: string | Error): asserts actual is T; - /** - * Tests strict inequality between the `actual` and `expected` parameters as - * determined by [`Object.is()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/is). - * - * ```js - * import assert from 'assert/strict'; - * - * assert.notStrictEqual(1, 2); - * // OK - * - * assert.notStrictEqual(1, 1); - * // AssertionError [ERR_ASSERTION]: Expected "actual" to be strictly unequal to: - * // - * // 1 - * - * assert.notStrictEqual(1, '1'); - * // OK - * ``` - * - * If the values are strictly equal, an `AssertionError` is thrown with a`message` property set equal to the value of the `message` parameter. If the`message` parameter is undefined, a - * default error message is assigned. If the`message` parameter is an instance of an `Error` then it will be thrown - * instead of the `AssertionError`. - * @since v0.1.21 - */ - function notStrictEqual(actual: unknown, expected: unknown, message?: string | Error): void; - /** - * Tests for deep equality between the `actual` and `expected` parameters. - * "Deep" equality means that the enumerable "own" properties of child objects - * are recursively evaluated also by the following rules. - * @since v1.2.0 - */ - function deepStrictEqual(actual: unknown, expected: T, message?: string | Error): asserts actual is T; - /** - * Tests for deep strict inequality. Opposite of {@link deepStrictEqual}. - * - * ```js - * import assert from 'assert/strict'; - * - * assert.notDeepStrictEqual({ a: 1 }, { a: '1' }); - * // OK - * ``` - * - * If the values are deeply and strictly equal, an `AssertionError` is thrown - * with a `message` property set equal to the value of the `message` parameter. If - * the `message` parameter is undefined, a default error message is assigned. If - * the `message` parameter is an instance of an `Error` then it will be thrown - * instead of the `AssertionError`. - * @since v1.2.0 - */ - function notDeepStrictEqual(actual: unknown, expected: unknown, message?: string | Error): void; - /** - * Expects the function `fn` to throw an error. - * - * If specified, `error` can be a [`Class`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes), - * [`RegExp`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions), a validation function, - * a validation object where each property will be tested for strict deep equality, - * or an instance of error where each property will be tested for strict deep - * equality including the non-enumerable `message` and `name` properties. When - * using an object, it is also possible to use a regular expression, when - * validating against a string property. See below for examples. 
- * - * If specified, `message` will be appended to the message provided by the`AssertionError` if the `fn` call fails to throw or in case the error validation - * fails. - * - * Custom validation object/error instance: - * - * ```js - * import assert from 'assert/strict'; - * - * const err = new TypeError('Wrong value'); - * err.code = 404; - * err.foo = 'bar'; - * err.info = { - * nested: true, - * baz: 'text' - * }; - * err.reg = /abc/i; - * - * assert.throws( - * () => { - * throw err; - * }, - * { - * name: 'TypeError', - * message: 'Wrong value', - * info: { - * nested: true, - * baz: 'text' - * } - * // Only properties on the validation object will be tested for. - * // Using nested objects requires all properties to be present. Otherwise - * // the validation is going to fail. - * } - * ); - * - * // Using regular expressions to validate error properties: - * throws( - * () => { - * throw err; - * }, - * { - * // The `name` and `message` properties are strings and using regular - * // expressions on those will match against the string. If they fail, an - * // error is thrown. - * name: /^TypeError$/, - * message: /Wrong/, - * foo: 'bar', - * info: { - * nested: true, - * // It is not possible to use regular expressions for nested properties! - * baz: 'text' - * }, - * // The `reg` property contains a regular expression and only if the - * // validation object contains an identical regular expression, it is going - * // to pass. - * reg: /abc/i - * } - * ); - * - * // Fails due to the different `message` and `name` properties: - * throws( - * () => { - * const otherErr = new Error('Not found'); - * // Copy all enumerable properties from `err` to `otherErr`. - * for (const [key, value] of Object.entries(err)) { - * otherErr[key] = value; - * } - * throw otherErr; - * }, - * // The error's `message` and `name` properties will also be checked when using - * // an error as validation object. - * err - * ); - * ``` - * - * Validate instanceof using constructor: - * - * ```js - * import assert from 'assert/strict'; - * - * assert.throws( - * () => { - * throw new Error('Wrong value'); - * }, - * Error - * ); - * ``` - * - * Validate error message using [`RegExp`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions): - * - * Using a regular expression runs `.toString` on the error object, and will - * therefore also include the error name. - * - * ```js - * import assert from 'assert/strict'; - * - * assert.throws( - * () => { - * throw new Error('Wrong value'); - * }, - * /^Error: Wrong value$/ - * ); - * ``` - * - * Custom error validation: - * - * The function must return `true` to indicate all internal validations passed. - * It will otherwise fail with an `AssertionError`. - * - * ```js - * import assert from 'assert/strict'; - * - * assert.throws( - * () => { - * throw new Error('Wrong value'); - * }, - * (err) => { - * assert(err instanceof Error); - * assert(/value/.test(err)); - * // Avoid returning anything from validation functions besides `true`. - * // Otherwise, it's not clear what part of the validation failed. Instead, - * // throw an error about the specific validation that failed (as done in this - * // example) and add as much helpful debugging information to that error as - * // possible. - * return true; - * }, - * 'unexpected error' - * ); - * ``` - * - * `error` cannot be a string. If a string is provided as the second - * argument, then `error` is assumed to be omitted and the string will be used for`message` instead. 
This can lead to easy-to-miss mistakes. Using the same - * message as the thrown error message is going to result in an`ERR_AMBIGUOUS_ARGUMENT` error. Please read the example below carefully if using - * a string as the second argument gets considered: - * - * ```js - * import assert from 'assert/strict'; - * - * function throwingFirst() { - * throw new Error('First'); - * } - * - * function throwingSecond() { - * throw new Error('Second'); - * } - * - * function notThrowing() {} - * - * // The second argument is a string and the input function threw an Error. - * // The first case will not throw as it does not match for the error message - * // thrown by the input function! - * assert.throws(throwingFirst, 'Second'); - * // In the next example the message has no benefit over the message from the - * // error and since it is not clear if the user intended to actually match - * // against the error message, Node.js throws an `ERR_AMBIGUOUS_ARGUMENT` error. - * assert.throws(throwingSecond, 'Second'); - * // TypeError [ERR_AMBIGUOUS_ARGUMENT] - * - * // The string is only used (as message) in case the function does not throw: - * assert.throws(notThrowing, 'Second'); - * // AssertionError [ERR_ASSERTION]: Missing expected exception: Second - * - * // If it was intended to match for the error message do this instead: - * // It does not throw because the error messages match. - * assert.throws(throwingSecond, /Second$/); - * - * // If the error message does not match, an AssertionError is thrown. - * assert.throws(throwingFirst, /Second$/); - * // AssertionError [ERR_ASSERTION] - * ``` - * - * Due to the confusing error-prone notation, avoid a string as the second - * argument. - * @since v0.1.21 - */ - function throws(block: () => unknown, message?: string | Error): void; - function throws(block: () => unknown, error: AssertPredicate, message?: string | Error): void; - /** - * Asserts that the function `fn` does not throw an error. - * - * Using `assert.doesNotThrow()` is actually not useful because there - * is no benefit in catching an error and then rethrowing it. Instead, consider - * adding a comment next to the specific code path that should not throw and keep - * error messages as expressive as possible. - * - * When `assert.doesNotThrow()` is called, it will immediately call the `fn`function. - * - * If an error is thrown and it is the same type as that specified by the `error`parameter, then an `AssertionError` is thrown. If the error is of a - * different type, or if the `error` parameter is undefined, the error is - * propagated back to the caller. - * - * If specified, `error` can be a [`Class`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes), - * [`RegExp`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions) or a validation - * function. See {@link throws} for more details. 
- * - * The following, for instance, will throw the `TypeError` because there is no - * matching error type in the assertion: - * - * ```js - * import assert from 'assert/strict'; - * - * assert.doesNotThrow( - * () => { - * throw new TypeError('Wrong value'); - * }, - * SyntaxError - * ); - * ``` - * - * However, the following will result in an `AssertionError` with the message - * 'Got unwanted exception...': - * - * ```js - * import assert from 'assert/strict'; - * - * assert.doesNotThrow( - * () => { - * throw new TypeError('Wrong value'); - * }, - * TypeError - * ); - * ``` - * - * If an `AssertionError` is thrown and a value is provided for the `message`parameter, the value of `message` will be appended to the `AssertionError` message: - * - * ```js - * import assert from 'assert/strict'; - * - * assert.doesNotThrow( - * () => { - * throw new TypeError('Wrong value'); - * }, - * /Wrong value/, - * 'Whoops' - * ); - * // Throws: AssertionError: Got unwanted exception: Whoops - * ``` - * @since v0.1.21 - */ - function doesNotThrow(block: () => unknown, message?: string | Error): void; - function doesNotThrow(block: () => unknown, error: AssertPredicate, message?: string | Error): void; - /** - * Throws `value` if `value` is not `undefined` or `null`. This is useful when - * testing the `error` argument in callbacks. The stack trace contains all frames - * from the error passed to `ifError()` including the potential new frames for`ifError()` itself. - * - * ```js - * import assert from 'assert/strict'; - * - * assert.ifError(null); - * // OK - * assert.ifError(0); - * // AssertionError [ERR_ASSERTION]: ifError got unwanted exception: 0 - * assert.ifError('error'); - * // AssertionError [ERR_ASSERTION]: ifError got unwanted exception: 'error' - * assert.ifError(new Error()); - * // AssertionError [ERR_ASSERTION]: ifError got unwanted exception: Error - * - * // Create some random error frames. - * let err; - * (function errorFrame() { - * err = new Error('test error'); - * })(); - * - * (function ifErrorFrame() { - * assert.ifError(err); - * })(); - * // AssertionError [ERR_ASSERTION]: ifError got unwanted exception: test error - * // at ifErrorFrame - * // at errorFrame - * ``` - * @since v0.1.97 - */ - function ifError(value: unknown): asserts value is null | undefined; - /** - * Awaits the `asyncFn` promise or, if `asyncFn` is a function, immediately - * calls the function and awaits the returned promise to complete. It will then - * check that the promise is rejected. - * - * If `asyncFn` is a function and it throws an error synchronously,`assert.rejects()` will return a rejected `Promise` with that error. If the - * function does not return a promise, `assert.rejects()` will return a rejected`Promise` with an `ERR_INVALID_RETURN_VALUE` error. In both cases the error - * handler is skipped. - * - * Besides the async nature to await the completion behaves identically to {@link throws}. - * - * If specified, `error` can be a [`Class`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes), - * [`RegExp`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions), a validation function, - * an object where each property will be tested for, or an instance of error where - * each property will be tested for including the non-enumerable `message` and`name` properties. - * - * If specified, `message` will be the message provided by the `AssertionError` if the `asyncFn` fails to reject. 
- * - * ```js - * import assert from 'assert/strict'; - * - * await assert.rejects( - * async () => { - * throw new TypeError('Wrong value'); - * }, - * { - * name: 'TypeError', - * message: 'Wrong value' - * } - * ); - * ``` - * - * ```js - * import assert from 'assert/strict'; - * - * await assert.rejects( - * async () => { - * throw new TypeError('Wrong value'); - * }, - * (err) => { - * assert.strictEqual(err.name, 'TypeError'); - * assert.strictEqual(err.message, 'Wrong value'); - * return true; - * } - * ); - * ``` - * - * ```js - * import assert from 'assert/strict'; - * - * assert.rejects( - * Promise.reject(new Error('Wrong value')), - * Error - * ).then(() => { - * // ... - * }); - * ``` - * - * `error` cannot be a string. If a string is provided as the second - * argument, then `error` is assumed to be omitted and the string will be used for`message` instead. This can lead to easy-to-miss mistakes. Please read the - * example in {@link throws} carefully if using a string as the second - * argument gets considered. - * @since v10.0.0 - */ - function rejects(block: (() => Promise) | Promise, message?: string | Error): Promise; - function rejects(block: (() => Promise) | Promise, error: AssertPredicate, message?: string | Error): Promise; - /** - * Awaits the `asyncFn` promise or, if `asyncFn` is a function, immediately - * calls the function and awaits the returned promise to complete. It will then - * check that the promise is not rejected. - * - * If `asyncFn` is a function and it throws an error synchronously,`assert.doesNotReject()` will return a rejected `Promise` with that error. If - * the function does not return a promise, `assert.doesNotReject()` will return a - * rejected `Promise` with an `ERR_INVALID_RETURN_VALUE` error. In both cases - * the error handler is skipped. - * - * Using `assert.doesNotReject()` is actually not useful because there is little - * benefit in catching a rejection and then rejecting it again. Instead, consider - * adding a comment next to the specific code path that should not reject and keep - * error messages as expressive as possible. - * - * If specified, `error` can be a [`Class`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Classes), - * [`RegExp`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_Expressions) or a validation - * function. See {@link throws} for more details. - * - * Besides the async nature to await the completion behaves identically to {@link doesNotThrow}. - * - * ```js - * import assert from 'assert/strict'; - * - * await assert.doesNotReject( - * async () => { - * throw new TypeError('Wrong value'); - * }, - * SyntaxError - * ); - * ``` - * - * ```js - * import assert from 'assert/strict'; - * - * assert.doesNotReject(Promise.reject(new TypeError('Wrong value'))) - * .then(() => { - * // ... - * }); - * ``` - * @since v10.0.0 - */ - function doesNotReject(block: (() => Promise) | Promise, message?: string | Error): Promise; - function doesNotReject(block: (() => Promise) | Promise, error: AssertPredicate, message?: string | Error): Promise; - /** - * Expects the `string` input to match the regular expression. - * - * ```js - * import assert from 'assert/strict'; - * - * assert.match('I will fail', /pass/); - * // AssertionError [ERR_ASSERTION]: The input did not match the regular ... - * - * assert.match(123, /pass/); - * // AssertionError [ERR_ASSERTION]: The "string" argument must be of type string. 
- * - * assert.match('I will pass', /pass/); - * // OK - * ``` - * - * If the values do not match, or if the `string` argument is of another type than`string`, an `AssertionError` is thrown with a `message` property set equal - * to the value of the `message` parameter. If the `message` parameter is - * undefined, a default error message is assigned. If the `message` parameter is an - * instance of an `Error` then it will be thrown instead of the `AssertionError`. - * @since v13.6.0, v12.16.0 - */ - function match(value: string, regExp: RegExp, message?: string | Error): void; - /** - * Expects the `string` input not to match the regular expression. - * - * ```js - * import assert from 'assert/strict'; - * - * assert.doesNotMatch('I will fail', /fail/); - * // AssertionError [ERR_ASSERTION]: The input was expected to not match the ... - * - * assert.doesNotMatch(123, /pass/); - * // AssertionError [ERR_ASSERTION]: The "string" argument must be of type string. - * - * assert.doesNotMatch('I will pass', /different/); - * // OK - * ``` - * - * If the values do match, or if the `string` argument is of another type than`string`, an `AssertionError` is thrown with a `message` property set equal - * to the value of the `message` parameter. If the `message` parameter is - * undefined, a default error message is assigned. If the `message` parameter is an - * instance of an `Error` then it will be thrown instead of the `AssertionError`. - * @since v13.6.0, v12.16.0 - */ - function doesNotMatch(value: string, regExp: RegExp, message?: string | Error): void; - const strict: Omit & { - (value: unknown, message?: string | Error): asserts value; - equal: typeof strictEqual; - notEqual: typeof notStrictEqual; - deepEqual: typeof deepStrictEqual; - notDeepEqual: typeof notDeepStrictEqual; - // Mapped types and assertion functions are incompatible? - // TS2775: Assertions require every name in the call target - // to be declared with an explicit type annotation. - ok: typeof ok; - strictEqual: typeof strictEqual; - deepStrictEqual: typeof deepStrictEqual; - ifError: typeof ifError; - strict: typeof strict; - }; - } - export = assert; -} -declare module 'node:assert' { - import assert = require('assert'); - export = assert; -} diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/7am Arivu Full Movie Download Mp4 11 _HOT_.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/7am Arivu Full Movie Download Mp4 11 _HOT_.md deleted file mode 100644 index fc4c1c6d9a6f5155f51720603388c16001425576..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/7am Arivu Full Movie Download Mp4 11 _HOT_.md +++ /dev/null @@ -1,105 +0,0 @@ -
    -

    How to Download 7am Arivu Full Movie in Mp4 Format

    -

    If you are a fan of Tamil movies, you might have heard of 7am Arivu, a sci-fi thriller starring Suriya and Shruti Haasan. The movie was released in 2011 and received positive reviews from critics and audiences alike. The movie revolves around a genetic engineering experiment that unlocks the hidden potential of a descendant of Bodhidharma, the founder of martial arts and Zen Buddhism.

    -

    7am Arivu is a movie that you would want to watch again and again, especially if you are interested in history, culture and science. But how can you download 7am Arivu full movie in mp4 format? Mp4 is a popular video format that can be played on various devices, such as smartphones, tablets, laptops and TVs. Mp4 also offers high quality and low file size, making it ideal for downloading and streaming.

    -

    7am Arivu Full Movie Download Mp4 11


    Download --->>> https://urlgoal.com/2uCLTi



    -

    In this article, we will show you how to download 7am Arivu full movie in mp4 format from different sources. We will also give you some tips on how to avoid malware, viruses and other online threats that might harm your device or compromise your privacy. So, without further ado, let's get started.

    - -

    Download 7am Arivu Full Movie in Mp4 from Isaimini

    -

    Isaimini is a popular website that offers free downloads of Tamil movies, songs and TV shows. You can find a wide range of content on this website, including old classics and new releases. Isaimini also provides various formats and resolutions for downloading, such as mp4, avi, mkv, 1080p, 720p and 480p.

    -

    To download 7am Arivu full movie in mp4 from Isaimini, follow these steps:

    -
      -
1. Go to the official website of Isaimini at https://ww1.isaimini.store/
2. Search for 7am Arivu in the search box or browse through the categories.
3. Select the movie from the results and click on the download link.
4. Choose the mp4 format and the desired resolution from the options.
5. Wait for the download to complete and enjoy watching the movie.
    -

    Note: Isaimini is an illegal website that hosts pirated content without the permission of the original creators. Downloading from Isaimini might expose you to legal issues or cyberattacks. Therefore, we do not recommend using Isaimini or any other similar websites for downloading movies. Use them at your own risk.

    - -

    Download 7am Arivu Full Movie in Mp4 from Internet Archive

    -

    Internet Archive is a non-profit organization that preserves digital content for future generations. It hosts millions of books, movies, music, software and other media that are free to access and download. Internet Archive also has a collection of Tamil movies, including 7am Arivu.

    -

    To download 7am Arivu full movie in mp4 from Internet Archive, follow these steps:

    -
      -
1. Go to the official website of Internet Archive at https://archive.org/
2. Search for 7am Arivu in the search box or browse through the categories.
3. Select the movie from the results and click on the download options button.
4. Choose the mp4 format from the list of available formats.
5. Wait for the download to complete and enjoy watching the movie.
    -

    Note: Internet Archive is a legal and safe website that respects the rights of the content owners. However, some of the content on Internet Archive might be subject to copyright or other restrictions. Therefore, before downloading any content from Internet Archive, make sure you check its license and terms of use.

    -

    - -

    Tips for Downloading 7am Arivu Full Movie in Mp4 Safely

    -

    Downloading movies online can be risky if you are not careful. You might encounter malware, viruses, phishing scams or other online threats that might harm your device or compromise your privacy. To avoid these risks, follow these tips:

    -
      -
• Use reliable antivirus software and keep it updated regularly.
• Use a VPN service to hide your IP address and encrypt your online traffic.
• Use a reputable download manager to speed up your downloads and resume them if they are interrupted.
• Use a trusted website or source for downloading movies. Avoid websites that offer pirated content or ask for personal information or payment details.
• Scan your downloaded files before opening them or playing them on your device.
    - -

    Conclusion

    -

    7am Arivu is a Tamil movie that you should not miss if you are a fan of sci-fi thrillers. The movie has an engaging plot, impressive visuals and stellar performances by Suriya and Shruti Haasan. If you want to download 7am Arivu full movie in mp4 format, you can use Isaimini or Internet Archive as your sources. However, be careful of the risks involved in downloading movies online and follow our tips to stay safe.

    -

    Download 7am Arivu Full Movie in Mp4 from YouTube

    -

    YouTube is the most popular video-sharing platform in the world. You can find millions of videos on YouTube, including movies, trailers, songs, documentaries and more. YouTube also allows you to download some videos for offline viewing, depending on the availability and permission of the content owners.

    -

    To download 7am Arivu full movie in mp4 from YouTube, follow these steps:

    -
      -
1. Go to the official website of YouTube at https://www.youtube.com/
2. Search for 7am Arivu in the search box or browse through the categories.
3. Select the movie from the results and click on the download button below the video player.
4. Choose the mp4 format and the desired resolution from the options.
5. Wait for the download to complete and enjoy watching the movie.
    -

    Note: YouTube is a legal and safe website that respects the rights of the content owners. However, not all videos on YouTube are available for download. Some videos might require a subscription or a payment to download. Therefore, before downloading any video from YouTube, make sure you check its availability and terms of use.

    - -

    Download 7am Arivu Full Movie in Mp4 from Torrent Sites

    -

    Torrent sites are websites that allow users to share files using peer-to-peer (P2P) technology. Torrent sites can offer a large variety of content, such as movies, music, games, software and more. Torrent sites can also provide high-quality and fast downloads, depending on the number of seeders and leechers.

    -

    To download 7am Arivu full movie in mp4 from torrent sites, follow these steps:

    -
      -
1. Download and install a torrent client, such as BitTorrent or uTorrent.
2. Go to a torrent site of your choice, such as The Pirate Bay or 1337x.
3. Search for 7am Arivu in the search box or browse through the categories.
4. Select the movie from the results and click on the magnet link or the torrent file.
5. Open the magnet link or the torrent file with your torrent client.
6. Choose the mp4 format and the desired resolution from the options.
7. Wait for the download to complete and enjoy watching the movie.
    -

    Note: Torrent sites are illegal websites that host pirated content without the permission of the original creators. Downloading from torrent sites might expose you to legal issues or cyberattacks. Therefore, we do not recommend using torrent sites or any other similar websites for downloading movies. Use them at your own risk.

    - -

    Conclusion

    -

    7am Arivu is a Tamil movie that you should not miss if you are a fan of sci-fi thrillers. The movie has an engaging plot, impressive visuals and stellar performances by Suriya and Shruti Haasan. If you want to download 7am Arivu full movie in mp4 format, you can use Isaimini, Internet Archive, YouTube or torrent sites as your sources. However, be careful of the risks involved in downloading movies online and follow our tips to stay safe.

    -

    Download 7am Arivu Full Movie in Mp4 from Zee5

    -

    Zee5 is a popular streaming platform that offers a variety of content, such as movies, TV shows, web series, live TV and more. Zee5 also has a collection of Tamil movies, including 7am Arivu. You can watch 7am Arivu on Zee5 with a subscription or a rental fee.

    -

    To download 7am Arivu full movie in mp4 from Zee5, follow these steps:

    -
      -
1. Go to the official website of Zee5 at https://www.zee5.com/
2. Search for 7am Arivu in the search box or browse through the categories.
3. Select the movie from the results and click on the play button.
4. Sign up or log in to your Zee5 account and choose a subscription plan or a rental option.
5. Click on the download button below the video player and choose the mp4 format and the desired resolution from the options.
6. Wait for the download to complete and enjoy watching the movie.
    -

    Note: Zee5 is a legal and safe website that respects the rights of the content owners. However, some of the content on Zee5 might be geo-restricted or require a payment to access. Therefore, before downloading any content from Zee5, make sure you check its availability and terms of use.

    - -

    Download 7am Arivu Full Movie in Mp4 from Telegram

    -

    Telegram is a popular messaging app that allows users to communicate with each other and share files, such as photos, videos, music and more. Telegram also has a feature called channels, which are groups of users who share common interests or topics. You can find channels that share Tamil movies, including 7am Arivu.

    -

    To download 7am Arivu full movie in mp4 from Telegram, follow these steps:

    -
      -
1. Download and install the Telegram app on your device.
2. Open the app and create an account or log in to your existing account.
3. Search for channels that share Tamil movies or 7am Arivu in the search box or browse through the categories.
4. Select a channel from the results and join it.
5. Scroll through the posts and find the one that has 7am Arivu full movie in mp4 format.
6. Click on the download button below the post and wait for the download to complete.
7. Enjoy watching the movie on your device or transfer it to another device.
    -

    Note: Telegram is a legal and safe app that respects the rights of the users. However, some of the channels on Telegram might share pirated content without the permission of the original creators. Downloading from such channels might expose you to legal issues or cyberattacks. Therefore, we do not recommend using such channels or any other similar sources for downloading movies. Use them at your own risk.

    - -

    Conclusion

    -

    7am Arivu is a Tamil movie that you should not miss if you are a fan of sci-fi thrillers. The movie has an engaging plot, impressive visuals and stellar performances by Suriya and Shruti Haasan. If you want to download 7am Arivu full movie in mp4 format, you can use Isaimini, Internet Archive, YouTube, Zee5 or Telegram as your sources. However, be careful of the risks involved in downloading movies online and follow our tips to stay safe.

    -

    -
    -
    \ No newline at end of file diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Big Brother Movie Eng Sub Download.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Big Brother Movie Eng Sub Download.md deleted file mode 100644 index ea983874b9cfa8089ce1f06c693c99e62686422d..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Big Brother Movie Eng Sub Download.md +++ /dev/null @@ -1,10 +0,0 @@ -

    Big Brother movie eng sub download


    DOWNLOAD ——— https://urlgoal.com/2uCLP6



- -Watch and download Big Brother with English sub in high quality. Various formats from 240p to 720p HD (or even 1080p). HTML5 available for mobile devices. - -Big Brother 2018 Season 10 - English | Season 10 - Would you like to see our new reality show page? Here you will find the Houseguests' decisions, views, backgrounds, categories and more: - -Watch and download Big Brother with English sub in high quality. Various formats from 240p to 720p HD (or even 1080p).
    -
    -
    -

    diff --git a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Descargar El Simulador De Rita Mulcahy.md b/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Descargar El Simulador De Rita Mulcahy.md deleted file mode 100644 index c941b7e1c9808787274398258b97bc0ac8f48848..0000000000000000000000000000000000000000 --- a/spaces/recenWmenso/ChatGPT-with-Voice-Cloning-for-All/datasets/Descargar El Simulador De Rita Mulcahy.md +++ /dev/null @@ -1,6 +0,0 @@ -

    Descargar El Simulador De Rita Mulcahy


    DOWNLOAD ✺✺✺ https://urlgoal.com/2uCLqg



-Rita Mulcahy's PMP Exam Prep, 9th Edition ... 200Q, 73%; OpenPM.org, 82%; PM Prepcast Free 120Q, 71%; 7-day trial of PMP Exam Simulator by PM Prepcast.
    -
    -
    -

    diff --git a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/rpn.py b/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/rpn.py deleted file mode 100644 index 707e02b0ec94a55ac68fd8ee099a92a478e02184..0000000000000000000000000000000000000000 --- a/spaces/rockeycoss/Prompt-Segment-Anything-Demo/mmdet/models/detectors/rpn.py +++ /dev/null @@ -1,162 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import warnings -from inspect import signature - -import mmcv -import torch -from mmcv.image import tensor2imgs - -from mmdet.core import bbox_mapping -from ..builder import DETECTORS, build_backbone, build_head, build_neck -from .base import BaseDetector - - -@DETECTORS.register_module() -class RPN(BaseDetector): - """Implementation of Region Proposal Network.""" - - def __init__(self, - backbone, - neck, - rpn_head, - train_cfg, - test_cfg, - pretrained=None, - init_cfg=None): - super(RPN, self).__init__(init_cfg) - if pretrained: - warnings.warn('DeprecationWarning: pretrained is deprecated, ' - 'please use "init_cfg" instead') - backbone.pretrained = pretrained - self.backbone = build_backbone(backbone) - self.neck = build_neck(neck) if neck is not None else None - rpn_train_cfg = train_cfg.rpn if train_cfg is not None else None - rpn_head.update(train_cfg=rpn_train_cfg) - rpn_head.update(test_cfg=test_cfg.rpn) - self.rpn_head = build_head(rpn_head) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - - def extract_feat(self, img): - """Extract features. - - Args: - img (torch.Tensor): Image tensor with shape (n, c, h ,w). - - Returns: - list[torch.Tensor]: Multi-level features that may have - different resolutions. - """ - x = self.backbone(img) - if self.with_neck: - x = self.neck(x) - return x - - def forward_dummy(self, img): - """Dummy forward function.""" - x = self.extract_feat(img) - rpn_outs = self.rpn_head(x) - return rpn_outs - - def forward_train(self, - img, - img_metas, - gt_bboxes=None, - gt_bboxes_ignore=None): - """ - Args: - img (Tensor): Input images of shape (N, C, H, W). - Typically these should be mean centered and std scaled. - img_metas (list[dict]): A List of image info dict where each dict - has: 'img_shape', 'scale_factor', 'flip', and may also contain - 'filename', 'ori_shape', 'pad_shape', and 'img_norm_cfg'. - For details on the values of these keys see - :class:`mmdet.datasets.pipelines.Collect`. - gt_bboxes (list[Tensor]): Each item are the truth boxes for each - image in [tl_x, tl_y, br_x, br_y] format. - gt_bboxes_ignore (None | list[Tensor]): Specify which bounding - boxes can be ignored when computing the loss. - - Returns: - dict[str, Tensor]: A dictionary of loss components. - """ - if (isinstance(self.train_cfg.rpn, dict) - and self.train_cfg.rpn.get('debug', False)): - self.rpn_head.debug_imgs = tensor2imgs(img) - - x = self.extract_feat(img) - losses = self.rpn_head.forward_train(x, img_metas, gt_bboxes, None, - gt_bboxes_ignore) - return losses - - def simple_test(self, img, img_metas, rescale=False): - """Test function without test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. 
- - Returns: - list[np.ndarray]: proposals - """ - x = self.extract_feat(img) - # get origin input shape to onnx dynamic input shape - if torch.onnx.is_in_onnx_export(): - img_shape = torch._shape_as_tensor(img)[2:] - img_metas[0]['img_shape_for_onnx'] = img_shape - proposal_list = self.rpn_head.simple_test_rpn(x, img_metas) - if rescale: - for proposals, meta in zip(proposal_list, img_metas): - proposals[:, :4] /= proposals.new_tensor(meta['scale_factor']) - if torch.onnx.is_in_onnx_export(): - return proposal_list - - return [proposal.cpu().numpy() for proposal in proposal_list] - - def aug_test(self, imgs, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - imgs (list[torch.Tensor]): List of multiple images - img_metas (list[dict]): List of image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[np.ndarray]: proposals - """ - proposal_list = self.rpn_head.aug_test_rpn( - self.extract_feats(imgs), img_metas) - if not rescale: - for proposals, img_meta in zip(proposal_list, img_metas[0]): - img_shape = img_meta['img_shape'] - scale_factor = img_meta['scale_factor'] - flip = img_meta['flip'] - flip_direction = img_meta['flip_direction'] - proposals[:, :4] = bbox_mapping(proposals[:, :4], img_shape, - scale_factor, flip, - flip_direction) - return [proposal.cpu().numpy() for proposal in proposal_list] - - def show_result(self, data, result, top_k=20, **kwargs): - """Show RPN proposals on the image. - - Args: - data (str or np.ndarray): Image filename or loaded image. - result (Tensor or tuple): The results to draw over `img` - bbox_result or (bbox_result, segm_result). - top_k (int): Plot the first k bboxes only - if set positive. Default: 20 - - Returns: - np.ndarray: The image with bboxes drawn on it. - """ - if kwargs is not None: - kwargs['colors'] = 'green' - sig = signature(mmcv.imshow_bboxes) - for k in list(kwargs.keys()): - if k not in sig.parameters: - kwargs.pop(k) - mmcv.imshow_bboxes(data, result, top_k=top_k, **kwargs) diff --git a/spaces/rorallitri/biomedical-language-models/logs/Ace Utilities 6.5.0 Build 297 With Key The Ultimate Solution for Junk Files Registry Errors and Internet History.md b/spaces/rorallitri/biomedical-language-models/logs/Ace Utilities 6.5.0 Build 297 With Key The Ultimate Solution for Junk Files Registry Errors and Internet History.md deleted file mode 100644 index 967a7d590bf4b3a250d1f6687216711cc8d92c67..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Ace Utilities 6.5.0 Build 297 With Key The Ultimate Solution for Junk Files Registry Errors and Internet History.md +++ /dev/null @@ -1,24 +0,0 @@ - -

Typically, a business process analyst or data analyst will capture the requirements for a process or application and turn these into a formal set of interrelated data structures. The new Data Modeler tool provides an easy, straightforward and visual aid for building both logical and physical data models, without the need for advanced development skills or explicit coding. The data modeler is transparently integrated into the workbench. Its main goals are to make data models first class citizens in the process improvement cycle and allow for full process automation through the integrated use of data structures (and the forms that will be used to interact with them).

    -

While the RuntimeEnvironment interface mostly provides access to data kept as part of the environment and is used by the RuntimeManager, users should take advantage of the builder-style class that provides a fluent API to configure a RuntimeEnvironment with predefined settings.
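
A minimal sketch of that fluent configuration (the in-memory builder, the asset name and the singleton runtime manager strategy are illustrative assumptions, not taken from this text):

```java
import org.kie.api.io.ResourceType;
import org.kie.api.runtime.manager.RuntimeEnvironment;
import org.kie.api.runtime.manager.RuntimeEnvironmentBuilder;
import org.kie.api.runtime.manager.RuntimeManager;
import org.kie.api.runtime.manager.RuntimeManagerFactory;
import org.kie.internal.io.ResourceFactory;

public class RuntimeEnvironmentSketch {

    public static RuntimeManager newManager() {
        // Fluent, builder-style configuration starting from predefined defaults
        RuntimeEnvironment environment = RuntimeEnvironmentBuilder.Factory.get()
                .newDefaultInMemoryBuilder()
                // hypothetical process definition placed on the classpath
                .addAsset(ResourceFactory.newClassPathResource("com/example/sample-process.bpmn2"),
                          ResourceType.BPMN2)
                .get();

        // The environment is then handed to a RuntimeManager (singleton strategy chosen for the sketch)
        return RuntimeManagerFactory.Factory.get().newSingletonRuntimeManager(environment);
    }
}
```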

    -

    Ace Utilities 6.5.0 Build 297 With Key


Download Zip https://tinurll.com/2uzlP9



    -

While it is usually used in combination with other services (like the deployment service), it can also be used standalone to get details about a process definition that does not come from a kjar. This can be achieved by using the buildProcessDefinition method of the definition service.
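
A sketch of that standalone usage, assuming the buildProcessDefinition(deploymentId, bpmn2Content, classLoader, cache) variant of the jBPM services API; passing null for the deployment id and class loader is an assumption for illustration:

```java
import org.jbpm.services.api.DefinitionService;
import org.jbpm.services.api.model.ProcessDefinition;

public class DefinitionServiceSketch {

    // The DefinitionService is typically injected or looked up; it is passed in here.
    public static ProcessDefinition describe(DefinitionService definitionService, String bpmn2Content) {
        return definitionService.buildProcessDefinition(
                null,          // deploymentId: none, the definition does not come from a kjar
                bpmn2Content,  // raw BPMN2 XML of the process
                null,          // class loader: defaults are used when null
                false);        // cache: do not keep the parsed definition around
    }
}
```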

    -

QueryService provides advanced search capabilities that are based on Dashbuilder DataSets. The concept behind it is that users are given control over how data is retrieved from the underlying data store. This includes complex joins with external tables such as JPA entity tables, custom system database tables, etc.
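
A sketch of how such a user-defined query could be registered and executed, assuming the SqlQueryDefinition and query mapper classes of the jBPM services layer; the query name, datasource JNDI name and table are assumptions:

```java
import java.util.Collection;

import org.jbpm.kie.services.impl.query.SqlQueryDefinition;
import org.jbpm.kie.services.impl.query.mapper.ProcessInstanceQueryMapper;
import org.jbpm.services.api.model.ProcessInstanceDesc;
import org.jbpm.services.api.query.QueryService;
import org.kie.api.runtime.query.QueryContext;

public class QueryServiceSketch {

    public static Collection<ProcessInstanceDesc> findInstances(QueryService queryService) {
        // The user decides where and how the data is retrieved (plain SQL over a data source)
        SqlQueryDefinition query =
                new SqlQueryDefinition("getAllProcessInstances", "java:jboss/datasources/ExampleDS");
        query.setExpression("select * from ProcessInstanceLog");
        queryService.registerQuery(query);

        // Run the query and map the raw data set rows to process instance descriptions
        return queryService.query("getAllProcessInstances",
                ProcessInstanceQueryMapper.get(), new QueryContext());
    }
}
```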

    -

To redeploy SNAPSHOT kjars with your latest changes, all existing containers with that version must first be removed. Executing 'build and deploy' will then create a container with the latest SNAPSHOT kjar. However, this is not possible for release versions. Following Maven release conventions, if the GAV of a kjar is anything but SNAPSHOT, the GAV will need to be updated to the newer release version and deployed to its own container. The new release version can also be used to upgrade an existing container as described previously, provided the container does not have process capability.

    -

Document buildDocument( String name, long size, Date lastModified, Map params ): Creates a valid Document instance with the data received. This method is called when a document is uploaded to create the Document instance before marshalling the document content.

    -

One of the biggest complaints during the 5.x series was the lack of defined methodology for deployment. The mechanism used by Drools and jBPM was very flexible, but it was too flexible. A big focus for 6.0 was streamlining the build, deploy and loading (utilization) aspects of the system. Building and deploying now align with Maven and the utilization is now convention and configuration oriented, instead of programmatic, with sane defaults to minimise the configuration.

    -

The workbench has been rebuilt from the ground up, inspired by Eclipse, to provide a flexible and better integrated solution, with panels and perspectives via plugins. The base workbench has been spun off into a standalone project called UberFire, so that anyone can now build high-quality web-based workbenches. In the longer term it will facilitate user-customised Drools and jBPM installations.

    -

jBPM has been dramatically beefed up, thanks to the Polymita acquisition, with human tasks, form builders, class modellers, execution servers and runtime management. All fully integrated into the new workbench.

    -

    -

When navigating Projects with the Project Explorer, the workbench automatically builds the selected project, displaying build messages in the Message Console.
Whilst this is beneficial, it can have a detrimental impact on the performance of the workbench when authoring large projects.
    The automatic build can now be disabled with the org.kie.build.disable-project-explorer System Property.
    Set the value to true to disable.
    The default value is false.
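
For illustration, the property can be supplied as a JVM argument or set programmatically before the workbench starts; a minimal sketch:

```java
public class DisableAutomaticBuild {

    public static void main(String[] args) {
        // Equivalent to passing -Dorg.kie.build.disable-project-explorer=true on the JVM command line
        System.setProperty("org.kie.build.disable-project-explorer", "true");
    }
}
```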

    -

    One of the biggest complaints during the 5.x series was the lack of defined methodology for deployment.
    The mechanism used by Drools and jBPM was very flexible, but it was too flexible.
    A big focus for 6.0 was streamlining the build, deploy and loading (utilization) aspects of the system.
    Building and deploying activities are now aligned with Maven and Maven repositories.
The utilization for loading rules and processes is now convention and configuration oriented, instead of programmatic, with sane defaults to minimise the configuration.

    -

    In this example the KieScanner is configured to run with a fixed time interval, but it is also possible to run it on demand by invoking the scanNow() method on it.
    If the KieScanner finds, in the Maven repository, an updated version of the Kie project used by that KieContainer it automatically downloads the new version and triggers an incremental build of the new project.
    From this moment all the new KieBases and KieSessions created from that KieContainer will use the new project version.
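
The code example this paragraph refers to is not reproduced here; a minimal sketch of such a setup (the GAV coordinates and the 10-second polling interval are assumptions) could look like:

```java
import org.kie.api.KieServices;
import org.kie.api.builder.KieScanner;
import org.kie.api.builder.ReleaseId;
import org.kie.api.runtime.KieContainer;

public class KieScannerSketch {

    public static void main(String[] args) {
        KieServices ks = KieServices.Factory.get();

        // GAV of the kjar to watch in the Maven repository (assumed coordinates)
        ReleaseId releaseId = ks.newReleaseId("org.example", "my-kjar", "1.0-SNAPSHOT");
        KieContainer kContainer = ks.newKieContainer(releaseId);

        KieScanner kScanner = ks.newKieScanner(kContainer);
        // Poll the repository every 10 seconds for a newer version of the kjar
        kScanner.start(10_000L);

        // ...or trigger a single check on demand instead of polling:
        // kScanner.scanNow();
    }
}
```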

    -

    The workbench has had a big overhaul using a new base project called UberFire.
    UberFire is inspired by Eclipse and provides a clean, extensible and flexible framework for the workbench.
    The end result is not only a richer experience for our end users, but we can now develop more rapidly with a clean component based architecture.
If you like the Workbench experience you can use UberFire today to build your own web-based dashboard and console efforts.

    -

UberFire is the new base workbench project, spun off from the ground-up rewrite.
UberFire provides Eclipse-like workbench capabilities, with panels and perspectives from plugins.
The project is independent of Drools and jBPM and anyone can use it as a basis for building flexible and powerful workbenches.
    UberFire will be used for console and workbench development throughout JBoss.

    -

    Building a KIE module without the Maven plugin will copy all the resources, as is, into the resulting JAR.
    When that JAR is loaded by the runtime, it will attempt to build all the resources then.
    If there are compilation issues it will return a null KieContainer.
    It also pushes the compilation overhead to the runtime.
    In general this is not recommended, and the Maven plugin should always be used.
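
For contrast, a sketch of a purely programmatic runtime build with the KieBuilder API, which makes the same compile-at-runtime cost and its error handling explicit (the resource path and DRL content are assumptions):

```java
import org.kie.api.KieServices;
import org.kie.api.builder.KieBuilder;
import org.kie.api.builder.KieFileSystem;
import org.kie.api.builder.Message;
import org.kie.api.runtime.KieContainer;

public class RuntimeBuildSketch {

    public static KieContainer buildAtRuntime(String drl) {
        KieServices ks = KieServices.Factory.get();

        // Resources are compiled here, at runtime, instead of at kjar build time
        KieFileSystem kfs = ks.newKieFileSystem()
                .write("src/main/resources/rules/example.drl", drl);

        KieBuilder kieBuilder = ks.newKieBuilder(kfs).buildAll();
        if (kieBuilder.getResults().hasMessages(Message.Level.ERROR)) {
            // Compilation problems surface only now, at runtime
            throw new IllegalStateException("Build errors: " + kieBuilder.getResults());
        }
        return ks.newKieContainer(ks.getRepository().getDefaultReleaseId());
    }
}
```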

    -

    In some cases, it is possible to change the default severity of a type of build result.
For instance, when a new rule with the same name as an existing rule is added to a package, the default behavior is to replace the old rule with the new rule and report it as an INFO.
This is probably ideal for most use cases, but in some deployments the user might want to prevent the rule update and have it reported as an error.
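
A sketch of how that switch might be applied, under the assumption that the duplicate-rule result is controlled by a drools.kbuilder.severity.duplicateRule builder property; the property name and where it is set are assumptions, not taken from this text:

```java
public class DuplicateRuleSeverity {

    public static void main(String[] args) {
        // Assumed property: report duplicate rule definitions as build errors instead of INFO.
        // It could equally be passed on the command line as
        // -Ddrools.kbuilder.severity.duplicateRule=ERROR before the resources are compiled.
        System.setProperty("drools.kbuilder.severity.duplicateRule", "ERROR");
    }
}
```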

    -

    In this example the KieScanner is configured to run with a fixed time interval, but it is also possible to run it on demand by invoking the scanNow() method on it.
    If the KieScanner finds, in the Maven repository, an updated version of the Kie project used by that KieContainer it automatically downloads the new version and triggers an incremental build of the new project.
    At this point, existing KieBases and KieSessions under the control of KieContainer will get automatically upgraded with it - specifically, those KieBases obtained with getKieBase() along with their related KieSessions, and any KieSession obtained directly with KieContainer.newKieSession() thus referencing the default KieBase.
    Additionally, from this moment on, all the new KieBases and KieSessions created from that KieContainer will use the new project version.
Please notice, however, that any existing KieBase which was obtained via newKieBase() before the KieScanner upgrade, and any of its related KieSessions, will not get automatically upgraded; this is because KieBases obtained via newKieBase() are not under the direct control of the KieContainer.

    -

The 5th Generation Computer project was a USD 400 million project in Japan to build a next generation computer.
Valves (or tubes) were the first generation, transistors the second, integrated circuits the third, and finally microprocessors were the fourth.
The fifth was intended to be a machine capable of effective Artificial Intelligence.
This project spurred an "arms" race with the UK and USA that caused much of the AI bubble.
The 5GP would provide massive multi-CPU parallel processing hardware along with powerful knowledge representation and reasoning software via Prolog; a type of expert system.
By 1992 the project was considered a failure and cancelled.
It was the largest and most visible commercial venture for Prolog, and many of the failures are pinned on the problems of trying to run a logic-based programming language concurrently on multi-CPU hardware with effective results.
Some believe that the failure of the 5GP project tainted Prolog and relegated it to academia; see "Whatever Happened to Prolog" by John C. Dvorak.

    -

    Drools has a "native" rule language.
    This format is very light in terms of punctuation, and supports natural and domain specific languages via "expanders" that allow the language to morph to your problem domain.
This chapter is mostly concerned with this native rule format.
The diagrams used to present the syntax are known as "railroad" diagrams, and they are basically flow charts for the language terms.
The technically very keen may also refer to DRL.g, which is the Antlr3 grammar for the rule language.
    If you use the Rule Workbench, a lot of the rule structure is done for you with content assistance, for example, type "ru" and press ctrl+space, and it will build the rule structure for you.

    -
    -
    \ No newline at end of file diff --git a/spaces/rorallitri/biomedical-language-models/logs/Funny Urdu Drama Scripts Pdf.md b/spaces/rorallitri/biomedical-language-models/logs/Funny Urdu Drama Scripts Pdf.md deleted file mode 100644 index 8916aa5b07d13168540d49a6a9bdde58b9f69ca0..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/Funny Urdu Drama Scripts Pdf.md +++ /dev/null @@ -1,6 +0,0 @@ -

    funny urdu drama scripts pdf


    Download File » https://tinurll.com/2uzmCV



    -
-Naeem Tahir, a veteran Pakistani actor, is Farhan Haroon's father. Qurutulain Baloch. The book Mere Sham o Sehar Novel Pdf is an excellent social and romantic story by Hina Kamran. ... Aneeza Syed is a popular Urdu novelist, script and story writer in Pakistan, ... New Funny Poetry By Dr Tahir Shaheer Mushaira 2019.
    -
    -
    -

    diff --git a/spaces/rorallitri/biomedical-language-models/logs/M- Audio Xponent Traktor Pro 2 Tsi Download Drivers and Software Updates from M-Audio[3].md b/spaces/rorallitri/biomedical-language-models/logs/M- Audio Xponent Traktor Pro 2 Tsi Download Drivers and Software Updates from M-Audio[3].md deleted file mode 100644 index 76966ccacc6be1500914758af8244e048a8c78cd..0000000000000000000000000000000000000000 --- a/spaces/rorallitri/biomedical-language-models/logs/M- Audio Xponent Traktor Pro 2 Tsi Download Drivers and Software Updates from M-Audio[3].md +++ /dev/null @@ -1,6 +0,0 @@ - -

This device has been discontinued. M-Audio discontinued its DJ products after the company was bought by inMusic in 2012. This device is a class compliant USB audio and MIDI device, so it does not require a special driver on any OS that Mixxx runs on.

    -

    M- Audio Xponent Traktor Pro 2 Tsi


    DOWNLOAD ››› https://tinurll.com/2uzmXq



    -

This device has been discontinued. M-Audio discontinued its DJ products after the company was bought by inMusic in 2012. This device is a class compliant USB audio and MIDI device, so it does not require a special driver on any OS that Mixxx runs on.

    -
    -
    \ No newline at end of file diff --git a/spaces/roshithindia/chatBotGPT2/app.py b/spaces/roshithindia/chatBotGPT2/app.py deleted file mode 100644 index ce12a613d386dd9cef4d249011d55a4863466a1c..0000000000000000000000000000000000000000 --- a/spaces/roshithindia/chatBotGPT2/app.py +++ /dev/null @@ -1,31 +0,0 @@ -import streamlit as st -import tensorflow as tf -from transformers import TFGPT2LMHeadModel ,GPT2Tokenizer, BitsAndBytesConfig - -tokenizer = GPT2Tokenizer.from_pretrained('gpt2') -model = TFGPT2LMHeadModel.from_pretrained('gpt2',pad_token_id = tokenizer.eos_token_id) -def generate(inp): - input_ids = tokenizer.encode(inp,return_tensors = 'tf') - beam_output = model.generate(input_ids, max_length = 90,num_beams = 5, no_repeat_ngram_size = 2, early_stopping = True) - output = tokenizer.decode(beam_output[0],skip_special_tokens = True, clean_up_tokenization_spaces = True) - return ".".join(output.split(".")[:-1]) + "." - -st.title("Animal Bot") -if "messages" not in st.session_state: - st.session_state.messages = [] - st.session_state.messages.append({ - 'role':'assistant', - 'content':"Hi! I'm your Animal assistant, any queries about animals ?" - }) -for message in st.session_state.messages: - with st.chat_message(message["role"]): - st.markdown(message["content"]) -prompt = st.chat_input("Any Queries?") -if prompt: - with st.chat_message("user"): - st.markdown(prompt) - st.session_state.messages.append({"role":"user","content":prompt}) - response = generate(prompt) - with st.chat_message("assistant"): - st.markdown(response) - st.session_state.messages.append({"role":"assistant","content":response}) \ No newline at end of file diff --git a/spaces/russellc/BLIP/train_vqa.py b/spaces/russellc/BLIP/train_vqa.py deleted file mode 100644 index 89eb7490862e517cc660f842396033c21d441a20..0000000000000000000000000000000000000000 --- a/spaces/russellc/BLIP/train_vqa.py +++ /dev/null @@ -1,202 +0,0 @@ -''' - * Copyright (c) 2022, salesforce.com, inc. - * All rights reserved. 
- * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li -''' -import argparse -import os -import ruamel_yaml as yaml -import numpy as np -import random -import time -import datetime -import json -from pathlib import Path - -import torch -import torch.nn as nn -import torch.nn.functional as F -from torch.utils.data import DataLoader -import torch.backends.cudnn as cudnn -import torch.distributed as dist - -from models.blip_vqa import blip_vqa -import utils -from utils import cosine_lr_schedule -from data import create_dataset, create_sampler, create_loader -from data.vqa_dataset import vqa_collate_fn -from data.utils import save_result - - -def train(model, data_loader, optimizer, epoch, device): - # train - model.train() - - metric_logger = utils.MetricLogger(delimiter=" ") - metric_logger.add_meter('lr', utils.SmoothedValue(window_size=1, fmt='{value:.6f}')) - metric_logger.add_meter('loss', utils.SmoothedValue(window_size=1, fmt='{value:.4f}')) - - header = 'Train Epoch: [{}]'.format(epoch) - print_freq = 50 - - for i,(image, question, answer, weights, n) in enumerate(metric_logger.log_every(data_loader, print_freq, header)): - image, weights = image.to(device,non_blocking=True), weights.to(device,non_blocking=True) - - loss = model(image, question, answer, train=True, n=n, weights=weights) - - optimizer.zero_grad() - loss.backward() - optimizer.step() - - metric_logger.update(loss=loss.item()) - metric_logger.update(lr=optimizer.param_groups[0]["lr"]) - - # gather the stats from all processes - metric_logger.synchronize_between_processes() - print("Averaged stats:", metric_logger.global_avg()) - return {k: "{:.3f}".format(meter.global_avg) for k, meter in metric_logger.meters.items()} - - -@torch.no_grad() -def evaluation(model, data_loader, device, config) : - # test - model.eval() - - metric_logger = utils.MetricLogger(delimiter=" ") - header = 'Generate VQA test result:' - print_freq = 50 - - result = [] - - if config['inference']=='rank': - answer_list = data_loader.dataset.answer_list - answer_candidates = model.tokenizer(answer_list, padding='longest', return_tensors='pt').to(device) - answer_candidates.input_ids[:,0] = model.tokenizer.bos_token_id - - for n, (image, question, question_id) in enumerate(metric_logger.log_every(data_loader, print_freq, header)): - image = image.to(device,non_blocking=True) - - if config['inference']=='generate': - answers = model(image, question, train=False, inference='generate') - - for answer, ques_id in zip(answers, question_id): - ques_id = int(ques_id.item()) - result.append({"question_id":ques_id, "answer":answer}) - - elif config['inference']=='rank': - answer_ids = model(image, question, answer_candidates, train=False, inference='rank', k_test=config['k_test']) - - for ques_id, answer_id in zip(question_id, answer_ids): - result.append({"question_id":int(ques_id.item()), "answer":answer_list[answer_id]}) - - return result - - -def main(args, config): - utils.init_distributed_mode(args) - - device = torch.device(args.device) - - # fix the seed for reproducibility - seed = args.seed + utils.get_rank() - torch.manual_seed(seed) - np.random.seed(seed) - random.seed(seed) - cudnn.benchmark = True - - #### Dataset #### - print("Creating vqa datasets") - datasets = create_dataset('vqa', config) - - if args.distributed: - num_tasks = utils.get_world_size() - global_rank = utils.get_rank() - samplers = create_sampler(datasets, [True, 
False], num_tasks, global_rank) - else: - samplers = [None, None] - - train_loader, test_loader = create_loader(datasets,samplers, - batch_size=[config['batch_size_train'],config['batch_size_test']], - num_workers=[4,4],is_trains=[True, False], - collate_fns=[vqa_collate_fn,None]) - #### Model #### - print("Creating model") - model = blip_vqa(pretrained=config['pretrained'], image_size=config['image_size'], - vit=config['vit'], vit_grad_ckpt=config['vit_grad_ckpt'], vit_ckpt_layer=config['vit_ckpt_layer']) - - model = model.to(device) - - model_without_ddp = model - if args.distributed: - model = torch.nn.parallel.DistributedDataParallel(model, device_ids=[args.gpu]) - model_without_ddp = model.module - - optimizer = torch.optim.AdamW(params=model.parameters(), lr=config['init_lr'], weight_decay=config['weight_decay']) - - best = 0 - best_epoch = 0 - - print("Start training") - start_time = time.time() - for epoch in range(0, config['max_epoch']): - if not args.evaluate: - if args.distributed: - train_loader.sampler.set_epoch(epoch) - - cosine_lr_schedule(optimizer, epoch, config['max_epoch'], config['init_lr'], config['min_lr']) - - train_stats = train(model, train_loader, optimizer, epoch, device) - - else: - break - - if utils.is_main_process(): - log_stats = {**{f'train_{k}': v for k, v in train_stats.items()}, - 'epoch': epoch, - } - with open(os.path.join(args.output_dir, "log.txt"),"a") as f: - f.write(json.dumps(log_stats) + "\n") - - save_obj = { - 'model': model_without_ddp.state_dict(), - 'optimizer': optimizer.state_dict(), - 'config': config, - 'epoch': epoch, - } - torch.save(save_obj, os.path.join(args.output_dir, 'checkpoint_%02d.pth'%epoch)) - - dist.barrier() - - vqa_result = evaluation(model_without_ddp, test_loader, device, config) - result_file = save_result(vqa_result, args.result_dir, 'vqa_result') - - total_time = time.time() - start_time - total_time_str = str(datetime.timedelta(seconds=int(total_time))) - print('Training time {}'.format(total_time_str)) - - - -if __name__ == '__main__': - parser = argparse.ArgumentParser() - parser.add_argument('--config', default='./configs/vqa.yaml') - parser.add_argument('--output_dir', default='output/VQA') - parser.add_argument('--evaluate', action='store_true') - parser.add_argument('--device', default='cuda') - parser.add_argument('--seed', default=42, type=int) - parser.add_argument('--world_size', default=1, type=int, help='number of distributed processes') - parser.add_argument('--dist_url', default='env://', help='url used to set up distributed training') - parser.add_argument('--distributed', default=True, type=bool) - args = parser.parse_args() - - config = yaml.load(open(args.config, 'r'), Loader=yaml.Loader) - - args.result_dir = os.path.join(args.output_dir, 'result') - - Path(args.output_dir).mkdir(parents=True, exist_ok=True) - Path(args.result_dir).mkdir(parents=True, exist_ok=True) - - yaml.dump(config, open(os.path.join(args.output_dir, 'config.yaml'), 'w')) - - main(args, config) \ No newline at end of file diff --git a/spaces/scedlatioru/img-to-music/example/Easy Account Cod4 17 Download 17 [NEW].md b/spaces/scedlatioru/img-to-music/example/Easy Account Cod4 17 Download 17 [NEW].md deleted file mode 100644 index 5049876bce41417e204f053a01a50288261658b9..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Easy Account Cod4 17 Download 17 [NEW].md +++ /dev/null @@ -1,6 +0,0 @@ -

    Easy Account Cod4 17 Download 17


    Download File ✺✺✺ https://gohhs.com/2uEzI5



    -
-December 18, 2552 B.C. — Easy Account COD4 1.7. With this little tool, you can simply upgrade yourself to level 55 with all parts unlocked. How to use Easy Account: 1. Download and install COD4 (Cod4client.jar and Cod4server.jar) on your Java application. 2. Go into the game and download the desired package with files from the download menu. 3. When you enter the download menu, you can see the "Easy Account" button, click on it. 4. Select an account type. 5. Choose a level and unlock all body parts. 6. Save your settings. You can look at the screenshot: Warning: this only works with COD4client.jar. COD4server.jar is not required and will not work.
    -
    -
    -

    diff --git a/spaces/scedlatioru/img-to-music/example/Snaitfs True Nudists Mod.md b/spaces/scedlatioru/img-to-music/example/Snaitfs True Nudists Mod.md deleted file mode 100644 index 573b682ef53161bbe4bb9ed21aa37b30bab5b45d..0000000000000000000000000000000000000000 --- a/spaces/scedlatioru/img-to-music/example/Snaitfs True Nudists Mod.md +++ /dev/null @@ -1,6 +0,0 @@ -

    snaitf's true nudists mod


    DOWNLOAD ★★★ https://gohhs.com/2uEzPy



- -
    -
    -
    -

    diff --git a/spaces/shi-labs/OneFormer/oneformer/modeling/transformer_decoder/transformer.py b/spaces/shi-labs/OneFormer/oneformer/modeling/transformer_decoder/transformer.py deleted file mode 100644 index cd07525673b9b1165e1fdd0c9990a8f29c84f199..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/OneFormer/oneformer/modeling/transformer_decoder/transformer.py +++ /dev/null @@ -1,376 +0,0 @@ -# ------------------------------------------------------------------------------ -# Reference: https://github.com/facebookresearch/Mask2Former/blob/main/mask2former/modeling/transformer_decoder/transformer.py -# Modified by Jitesh Jain (https://github.com/praeclarumjj3) -# ------------------------------------------------------------------------------ - -""" -Transformer class. - -Copy-paste from torch.nn.Transformer with modifications: - * positional encodings are passed in MHattention - * extra LN at the end of encoder is removed - * decoder returns a stack of activations from all decoding layers -""" -import copy -from typing import List, Optional - -import torch -import torch.nn.functional as F -from torch import Tensor, nn - - -class Transformer(nn.Module): - def __init__( - self, - d_model=512, - nhead=8, - num_encoder_layers=6, - num_decoder_layers=6, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - return_intermediate_dec=False, - ): - super().__init__() - - encoder_layer = TransformerEncoderLayer( - d_model, nhead, dim_feedforward, dropout, activation, normalize_before - ) - encoder_norm = nn.LayerNorm(d_model) if normalize_before else None - self.encoder = TransformerEncoder(encoder_layer, num_encoder_layers, encoder_norm) - - decoder_layer = TransformerDecoderLayer( - d_model, nhead, dim_feedforward, dropout, activation, normalize_before - ) - decoder_norm = nn.LayerNorm(d_model) - self.decoder = TransformerDecoder( - decoder_layer, - num_decoder_layers, - decoder_norm, - return_intermediate=return_intermediate_dec, - ) - - self._reset_parameters() - - self.d_model = d_model - self.nhead = nhead - - def _reset_parameters(self): - for p in self.parameters(): - if p.dim() > 1: - nn.init.xavier_uniform_(p) - - def forward(self, src, mask, query_embed, pos_embed, task_token=None): - # flatten NxCxHxW to HWxNxC - bs, c, h, w = src.shape - src = src.flatten(2).permute(2, 0, 1) - pos_embed = pos_embed.flatten(2).permute(2, 0, 1) - query_embed = query_embed.unsqueeze(1).repeat(1, bs, 1) - if mask is not None: - mask = mask.flatten(1) - - if task_token is None: - tgt = torch.zeros_like(query_embed) - else: - tgt = task_token.repeat(query_embed.shape[0], 1, 1) - - memory = self.encoder(src, src_key_padding_mask=mask, pos=pos_embed) - hs = self.decoder( - tgt, memory, memory_key_padding_mask=mask, pos=pos_embed, query_pos=query_embed - ) - return hs.transpose(1, 2), memory.permute(1, 2, 0).view(bs, c, h, w) - - -class TransformerEncoder(nn.Module): - def __init__(self, encoder_layer, num_layers, norm=None): - super().__init__() - self.layers = _get_clones(encoder_layer, num_layers) - self.num_layers = num_layers - self.norm = norm - - def forward( - self, - src, - mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - output = src - - for layer in self.layers: - output = layer( - output, src_mask=mask, src_key_padding_mask=src_key_padding_mask, pos=pos - ) - - if self.norm is not None: - output = self.norm(output) - - return output - - -class TransformerDecoder(nn.Module): - def 
__init__(self, decoder_layer, num_layers, norm=None, return_intermediate=False): - super().__init__() - self.layers = _get_clones(decoder_layer, num_layers) - self.num_layers = num_layers - self.norm = norm - self.return_intermediate = return_intermediate - - def forward( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - output = tgt - - intermediate = [] - - for layer in self.layers: - output = layer( - output, - memory, - tgt_mask=tgt_mask, - memory_mask=memory_mask, - tgt_key_padding_mask=tgt_key_padding_mask, - memory_key_padding_mask=memory_key_padding_mask, - pos=pos, - query_pos=query_pos, - ) - if self.return_intermediate: - intermediate.append(self.norm(output)) - - if self.norm is not None: - output = self.norm(output) - if self.return_intermediate: - intermediate.pop() - intermediate.append(output) - - if self.return_intermediate: - return torch.stack(intermediate) - - return output.unsqueeze(0) - - -class TransformerEncoderLayer(nn.Module): - def __init__( - self, - d_model, - nhead, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - self.self_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - q = k = self.with_pos_embed(src, pos) - src2 = self.self_attn( - q, k, value=src, attn_mask=src_mask, key_padding_mask=src_key_padding_mask - )[0] - src = src + self.dropout1(src2) - src = self.norm1(src) - src2 = self.linear2(self.dropout(self.activation(self.linear1(src)))) - src = src + self.dropout2(src2) - src = self.norm2(src) - return src - - def forward_pre( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - src2 = self.norm1(src) - q = k = self.with_pos_embed(src2, pos) - src2 = self.self_attn( - q, k, value=src2, attn_mask=src_mask, key_padding_mask=src_key_padding_mask - )[0] - src = src + self.dropout1(src2) - src2 = self.norm2(src) - src2 = self.linear2(self.dropout(self.activation(self.linear1(src2)))) - src = src + self.dropout2(src2) - return src - - def forward( - self, - src, - src_mask: Optional[Tensor] = None, - src_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - ): - if self.normalize_before: - return self.forward_pre(src, src_mask, src_key_padding_mask, pos) - return self.forward_post(src, src_mask, src_key_padding_mask, pos) - - -class TransformerDecoderLayer(nn.Module): - def __init__( - self, - d_model, - nhead, - dim_feedforward=2048, - dropout=0.1, - activation="relu", - normalize_before=False, - ): - super().__init__() - self.self_attn = 
nn.MultiheadAttention(d_model, nhead, dropout=dropout) - self.multihead_attn = nn.MultiheadAttention(d_model, nhead, dropout=dropout) - # Implementation of Feedforward model - self.linear1 = nn.Linear(d_model, dim_feedforward) - self.dropout = nn.Dropout(dropout) - self.linear2 = nn.Linear(dim_feedforward, d_model) - - self.norm1 = nn.LayerNorm(d_model) - self.norm2 = nn.LayerNorm(d_model) - self.norm3 = nn.LayerNorm(d_model) - self.dropout1 = nn.Dropout(dropout) - self.dropout2 = nn.Dropout(dropout) - self.dropout3 = nn.Dropout(dropout) - - self.activation = _get_activation_fn(activation) - self.normalize_before = normalize_before - - def with_pos_embed(self, tensor, pos: Optional[Tensor]): - return tensor if pos is None else tensor + pos - - def forward_post( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - q = k = self.with_pos_embed(tgt, query_pos) - tgt2 = self.self_attn( - q, k, value=tgt, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask - )[0] - tgt = tgt + self.dropout1(tgt2) - tgt = self.norm1(tgt) - tgt2 = self.multihead_attn( - query=self.with_pos_embed(tgt, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, - attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask, - )[0] - tgt = tgt + self.dropout2(tgt2) - tgt = self.norm2(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt)))) - tgt = tgt + self.dropout3(tgt2) - tgt = self.norm3(tgt) - return tgt - - def forward_pre( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - tgt2 = self.norm1(tgt) - q = k = self.with_pos_embed(tgt2, query_pos) - tgt2 = self.self_attn( - q, k, value=tgt2, attn_mask=tgt_mask, key_padding_mask=tgt_key_padding_mask - )[0] - tgt = tgt + self.dropout1(tgt2) - tgt2 = self.norm2(tgt) - tgt2 = self.multihead_attn( - query=self.with_pos_embed(tgt2, query_pos), - key=self.with_pos_embed(memory, pos), - value=memory, - attn_mask=memory_mask, - key_padding_mask=memory_key_padding_mask, - )[0] - tgt = tgt + self.dropout2(tgt2) - tgt2 = self.norm3(tgt) - tgt2 = self.linear2(self.dropout(self.activation(self.linear1(tgt2)))) - tgt = tgt + self.dropout3(tgt2) - return tgt - - def forward( - self, - tgt, - memory, - tgt_mask: Optional[Tensor] = None, - memory_mask: Optional[Tensor] = None, - tgt_key_padding_mask: Optional[Tensor] = None, - memory_key_padding_mask: Optional[Tensor] = None, - pos: Optional[Tensor] = None, - query_pos: Optional[Tensor] = None, - ): - if self.normalize_before: - return self.forward_pre( - tgt, - memory, - tgt_mask, - memory_mask, - tgt_key_padding_mask, - memory_key_padding_mask, - pos, - query_pos, - ) - return self.forward_post( - tgt, - memory, - tgt_mask, - memory_mask, - tgt_key_padding_mask, - memory_key_padding_mask, - pos, - query_pos, - ) - - -def _get_clones(module, N): - return nn.ModuleList([copy.deepcopy(module) for i in range(N)]) - - -def _get_activation_fn(activation): - """Return an activation function given a string""" - if activation == "relu": - return F.relu - if activation == "gelu": - return F.gelu - if activation == "glu": - return F.glu - raise 
RuntimeError(f"activation should be relu/gelu, not {activation}.") diff --git a/spaces/shi-labs/Prompt-Free-Diffusion/lib/cfg_helper.py b/spaces/shi-labs/Prompt-Free-Diffusion/lib/cfg_helper.py deleted file mode 100644 index e549e35d5be238a8e73eb65ff8625dc2838ab230..0000000000000000000000000000000000000000 --- a/spaces/shi-labs/Prompt-Free-Diffusion/lib/cfg_helper.py +++ /dev/null @@ -1,666 +0,0 @@ -import os -import os.path as osp -import shutil -import copy -import time -import pprint -import numpy as np -import torch -import matplotlib -import argparse -import json -import yaml -from easydict import EasyDict as edict - -from .model_zoo import get_model - -############ -# cfg_bank # -############ - -def cfg_solvef(cmd, root): - if not isinstance(cmd, str): - return cmd - - if cmd.find('SAME')==0: - zoom = root - p = cmd[len('SAME'):].strip('()').split('.') - p = [pi.strip() for pi in p] - for pi in p: - try: - pi = int(pi) - except: - pass - - try: - zoom = zoom[pi] - except: - return cmd - return cfg_solvef(zoom, root) - - if cmd.find('SEARCH')==0: - zoom = root - p = cmd[len('SEARCH'):].strip('()').split('.') - p = [pi.strip() for pi in p] - find = True - # Depth first search - for pi in p: - try: - pi = int(pi) - except: - pass - - try: - zoom = zoom[pi] - except: - find = False - break - - if find: - return cfg_solvef(zoom, root) - else: - if isinstance(root, dict): - for ri in root: - rv = cfg_solvef(cmd, root[ri]) - if rv != cmd: - return rv - if isinstance(root, list): - for ri in root: - rv = cfg_solvef(cmd, ri) - if rv != cmd: - return rv - return cmd - - if cmd.find('MODEL')==0: - goto = cmd[len('MODEL'):].strip('()') - return model_cfg_bank()(goto) - - if cmd.find('DATASET')==0: - goto = cmd[len('DATASET'):].strip('()') - return dataset_cfg_bank()(goto) - - return cmd - -def cfg_solve(cfg, cfg_root): - # The function solve cfg element such that - # all sorrogate input are settled. - # (i.e. SAME(***) ) - if isinstance(cfg, list): - for i in range(len(cfg)): - if isinstance(cfg[i], (list, dict)): - cfg[i] = cfg_solve(cfg[i], cfg_root) - else: - cfg[i] = cfg_solvef(cfg[i], cfg_root) - if isinstance(cfg, dict): - for k in cfg: - if isinstance(cfg[k], (list, dict)): - cfg[k] = cfg_solve(cfg[k], cfg_root) - else: - cfg[k] = cfg_solvef(cfg[k], cfg_root) - return cfg - -class model_cfg_bank(object): - def __init__(self): - self.cfg_dir = osp.join('configs', 'model') - self.cfg_bank = edict() - - def __call__(self, name): - if name not in self.cfg_bank: - cfg_path = self.get_yaml_path(name) - with open(cfg_path, 'r') as f: - cfg_new = yaml.load( - f, Loader=yaml.FullLoader) - cfg_new = edict(cfg_new) - self.cfg_bank.update(cfg_new) - - cfg = self.cfg_bank[name] - cfg.name = name - if 'super_cfg' not in cfg: - cfg = cfg_solve(cfg, cfg) - self.cfg_bank[name] = cfg - return copy.deepcopy(cfg) - - super_cfg = self.__call__(cfg.super_cfg) - # unlike other field, - # args will not be replaced but update. 
- if 'args' in cfg: - if 'args' in super_cfg: - super_cfg.args.update(cfg.args) - else: - super_cfg.args = cfg.args - cfg.pop('args') - - super_cfg.update(cfg) - super_cfg.pop('super_cfg') - cfg = super_cfg - try: - delete_args = cfg.pop('delete_args') - except: - delete_args = [] - - for dargs in delete_args: - cfg.args.pop(dargs) - - cfg = cfg_solve(cfg, cfg) - self.cfg_bank[name] = cfg - return copy.deepcopy(cfg) - - def get_yaml_path(self, name): - if name.find('openai_unet')==0: - return osp.join( - self.cfg_dir, 'openai_unet.yaml') - elif name.find('clip')==0: - return osp.join( - self.cfg_dir, 'clip.yaml') - elif name.find('autokl')==0: - return osp.join( - self.cfg_dir, 'autokl.yaml') - elif name.find('controlnet')==0: - return osp.join( - self.cfg_dir, 'controlnet.yaml') - elif name.find('swin')==0: - return osp.join( - self.cfg_dir, 'swin.yaml') - elif name.find('pfd')==0: - return osp.join( - self.cfg_dir, 'pfd.yaml') - elif name.find('seecoder')==0: - return osp.join( - self.cfg_dir, 'seecoder.yaml') - else: - raise ValueError - -class dataset_cfg_bank(object): - def __init__(self): - self.cfg_dir = osp.join('configs', 'dataset') - self.cfg_bank = edict() - - def __call__(self, name): - if name not in self.cfg_bank: - cfg_path = self.get_yaml_path(name) - with open(cfg_path, 'r') as f: - cfg_new = yaml.load( - f, Loader=yaml.FullLoader) - cfg_new = edict(cfg_new) - self.cfg_bank.update(cfg_new) - - cfg = self.cfg_bank[name] - cfg.name = name - if cfg.get('super_cfg', None) is None: - cfg = cfg_solve(cfg, cfg) - self.cfg_bank[name] = cfg - return copy.deepcopy(cfg) - - super_cfg = self.__call__(cfg.super_cfg) - super_cfg.update(cfg) - cfg = super_cfg - cfg.super_cfg = None - try: - delete = cfg.pop('delete') - except: - delete = [] - - for dargs in delete: - cfg.pop(dargs) - - cfg = cfg_solve(cfg, cfg) - self.cfg_bank[name] = cfg - return copy.deepcopy(cfg) - - def get_yaml_path(self, name): - if name.find('cityscapes')==0: - return osp.join( - self.cfg_dir, 'cityscapes.yaml') - elif name.find('div2k')==0: - return osp.join( - self.cfg_dir, 'div2k.yaml') - elif name.find('gandiv2k')==0: - return osp.join( - self.cfg_dir, 'gandiv2k.yaml') - elif name.find('srbenchmark')==0: - return osp.join( - self.cfg_dir, 'srbenchmark.yaml') - elif name.find('imagedir')==0: - return osp.join( - self.cfg_dir, 'imagedir.yaml') - elif name.find('places2')==0: - return osp.join( - self.cfg_dir, 'places2.yaml') - elif name.find('ffhq')==0: - return osp.join( - self.cfg_dir, 'ffhq.yaml') - elif name.find('imcpt')==0: - return osp.join( - self.cfg_dir, 'imcpt.yaml') - elif name.find('texture')==0: - return osp.join( - self.cfg_dir, 'texture.yaml') - elif name.find('openimages')==0: - return osp.join( - self.cfg_dir, 'openimages.yaml') - elif name.find('laion2b')==0: - return osp.join( - self.cfg_dir, 'laion2b.yaml') - elif name.find('laionart')==0: - return osp.join( - self.cfg_dir, 'laionart.yaml') - elif name.find('celeba')==0: - return osp.join( - self.cfg_dir, 'celeba.yaml') - elif name.find('coyo')==0: - return osp.join( - self.cfg_dir, 'coyo.yaml') - elif name.find('pafc')==0: - return osp.join( - self.cfg_dir, 'pafc.yaml') - elif name.find('coco')==0: - return osp.join( - self.cfg_dir, 'coco.yaml') - elif name.find('genai')==0: - return osp.join( - self.cfg_dir, 'genai.yaml') - else: - raise ValueError - -class experiment_cfg_bank(object): - def __init__(self): - self.cfg_dir = osp.join('configs', 'experiment') - self.cfg_bank = edict() - - def __call__(self, name): - if name not in 
self.cfg_bank: - cfg_path = self.get_yaml_path(name) - with open(cfg_path, 'r') as f: - cfg = yaml.load( - f, Loader=yaml.FullLoader) - cfg = edict(cfg) - - cfg = cfg_solve(cfg, cfg) - cfg = cfg_solve(cfg, cfg) - # twice for SEARCH - self.cfg_bank[name] = cfg - return copy.deepcopy(cfg) - - def get_yaml_path(self, name): - return osp.join( - self.cfg_dir, name+'.yaml') - -def load_cfg_yaml(path): - if osp.isfile(path): - cfg_path = path - elif osp.isfile(osp.join('configs', 'experiment', path)): - cfg_path = osp.join('configs', 'experiment', path) - elif osp.isfile(osp.join('configs', 'experiment', path+'.yaml')): - cfg_path = osp.join('configs', 'experiment', path+'.yaml') - else: - assert False, 'No such config!' - - with open(cfg_path, 'r') as f: - cfg = yaml.load(f, Loader=yaml.FullLoader) - cfg = edict(cfg) - cfg = cfg_solve(cfg, cfg) - cfg = cfg_solve(cfg, cfg) - return cfg - -############## -# cfg_helper # -############## - -def get_experiment_id(ref=None): - if ref is None: - time.sleep(0.5) - return int(time.time()*100) - else: - try: - return int(ref) - except: - pass - - _, ref = osp.split(ref) - ref = ref.split('_')[0] - try: - return int(ref) - except: - assert False, 'Invalid experiment ID!' - -def record_resume_cfg(path): - cnt = 0 - while True: - if osp.exists(path+'.{:04d}'.format(cnt)): - cnt += 1 - continue - shutil.copyfile(path, path+'.{:04d}'.format(cnt)) - break - -def get_command_line_args(): - parser = argparse.ArgumentParser() - parser.add_argument('--debug', action='store_true', default=False) - parser.add_argument('--config', type=str) - parser.add_argument('--gpu', nargs='+', type=int) - - parser.add_argument('--node_rank', type=int) - parser.add_argument('--node_list', nargs='+', type=str) - parser.add_argument('--nodes', type=int) - parser.add_argument('--addr', type=str, default='127.0.0.1') - parser.add_argument('--port', type=int, default=11233) - - parser.add_argument('--signature', nargs='+', type=str) - parser.add_argument('--seed', type=int) - - parser.add_argument('--eval', type=str) - parser.add_argument('--eval_subdir', type=str) - parser.add_argument('--pretrained', type=str) - - parser.add_argument('--resume_dir', type=str) - parser.add_argument('--resume_step', type=int) - parser.add_argument('--resume_weight', type=str) - - args = parser.parse_args() - - # Special handling the resume - if args.resume_dir is not None: - cfg = edict() - cfg.env = edict() - cfg.env.debug = args.debug - cfg.env.resume = edict() - cfg.env.resume.dir = args.resume_dir - cfg.env.resume.step = args.resume_step - cfg.env.resume.weight = args.resume_weight - return cfg - - cfg = load_cfg_yaml(args.config) - cfg.env.debug = args.debug - cfg.env.gpu_device = [0] if args.gpu is None else list(args.gpu) - cfg.env.master_addr = args.addr - cfg.env.master_port = args.port - cfg.env.dist_url = 'tcp://{}:{}'.format(args.addr, args.port) - - if args.node_list is None: - cfg.env.node_rank = 0 if args.node_rank is None else args.node_rank - cfg.env.nodes = 1 if args.nodes is None else args.nodes - else: - import socket - hostname = socket.gethostname() - assert cfg.env.master_addr == args.node_list[0] - cfg.env.node_rank = args.node_list.index(hostname) - cfg.env.nodes = len(args.node_list) - cfg.env.node_list = args.node_list - - istrain = False if args.eval is not None else True - isdebug = cfg.env.debug - - if istrain: - if isdebug: - cfg.env.experiment_id = 999999999999 - cfg.train.signature = ['debug'] - else: - cfg.env.experiment_id = get_experiment_id() - if args.signature is 
not None: - cfg.train.signature = args.signature - else: - if 'train' in cfg: - cfg.pop('train') - cfg.env.experiment_id = get_experiment_id(args.eval) - if args.signature is not None: - cfg.eval.signature = args.signature - - if isdebug and (args.eval is None): - cfg.env.experiment_id = 999999999999 - cfg.eval.signature = ['debug'] - - if args.eval_subdir is not None: - if isdebug: - cfg.eval.eval_subdir = 'debug' - else: - cfg.eval.eval_subdir = args.eval_subdir - if args.pretrained is not None: - cfg.eval.pretrained = args.pretrained - # The override pretrained over the setting in cfg.model - - if args.seed is not None: - cfg.env.rnd_seed = args.seed - - return cfg - -def cfg_initiates(cfg): - cfge = cfg.env - isdebug = cfge.debug - isresume = 'resume' in cfge - istrain = 'train' in cfg - haseval = 'eval' in cfg - cfgt = cfg.train if istrain else None - cfgv = cfg.eval if haseval else None - - ############################### - # get some environment params # - ############################### - - cfge.computer = os.uname() - cfge.torch_version = str(torch.__version__) - - ########## - # resume # - ########## - - if isresume: - resume_cfg_path = osp.join(cfge.resume.dir, 'config.yaml') - record_resume_cfg(resume_cfg_path) - with open(resume_cfg_path, 'r') as f: - cfg_resume = yaml.load(f, Loader=yaml.FullLoader) - cfg_resume = edict(cfg_resume) - cfg_resume.env.update(cfge) - cfg = cfg_resume - cfge = cfg.env - log_file = cfg.train.log_file - - print('') - print('##########') - print('# resume #') - print('##########') - print('') - with open(log_file, 'a') as f: - print('', file=f) - print('##########', file=f) - print('# resume #', file=f) - print('##########', file=f) - print('', file=f) - - pprint.pprint(cfg) - with open(log_file, 'a') as f: - pprint.pprint(cfg, f) - - #################### - # node distributed # - #################### - - if cfg.env.master_addr!='127.0.0.1': - os.environ['MASTER_ADDR'] = cfge.master_addr - os.environ['MASTER_PORT'] = '{}'.format(cfge.master_port) - if cfg.env.dist_backend=='nccl': - os.environ['NCCL_SOCKET_FAMILY'] = 'AF_INET' - if cfg.env.dist_backend=='gloo': - os.environ['GLOO_SOCKET_FAMILY'] = 'AF_INET' - - ####################### - # cuda visible device # - ####################### - - os.environ["CUDA_VISIBLE_DEVICES"] = ','.join( - [str(gid) for gid in cfge.gpu_device]) - - ##################### - # return resume cfg # - ##################### - - if isresume: - return cfg - - ############################################# - # some misc setting that not need in resume # - ############################################# - - cfgm = cfg.model - cfge.gpu_count = len(cfge.gpu_device) - - ########################################## - # align batch size and num worker config # - ########################################## - - gpu_n = cfge.gpu_count * cfge.nodes - def align_batch_size(bs, bs_per_gpu): - assert (bs is not None) or (bs_per_gpu is not None) - bs = bs_per_gpu * gpu_n if bs is None else bs - bs_per_gpu = bs // gpu_n if bs_per_gpu is None else bs_per_gpu - assert (bs == bs_per_gpu * gpu_n) - return bs, bs_per_gpu - - if istrain: - cfgt.batch_size, cfgt.batch_size_per_gpu = \ - align_batch_size(cfgt.batch_size, cfgt.batch_size_per_gpu) - cfgt.dataset_num_workers, cfgt.dataset_num_workers_per_gpu = \ - align_batch_size(cfgt.dataset_num_workers, cfgt.dataset_num_workers_per_gpu) - if haseval: - cfgv.batch_size, cfgv.batch_size_per_gpu = \ - align_batch_size(cfgv.batch_size, cfgv.batch_size_per_gpu) - cfgv.dataset_num_workers, 
cfgv.dataset_num_workers_per_gpu = \ - align_batch_size(cfgv.dataset_num_workers, cfgv.dataset_num_workers_per_gpu) - - ################## - # create log dir # - ################## - - if istrain: - if not isdebug: - sig = cfgt.get('signature', []) - sig = sig + ['s{}'.format(cfge.rnd_seed)] - else: - sig = ['debug'] - - log_dir = [ - cfge.log_root_dir, - '{}_{}'.format(cfgm.symbol, cfgt.dataset.symbol), - '_'.join([str(cfge.experiment_id)] + sig) - ] - log_dir = osp.join(*log_dir) - log_file = osp.join(log_dir, 'train.log') - if not osp.exists(log_file): - os.makedirs(osp.dirname(log_file)) - cfgt.log_dir = log_dir - cfgt.log_file = log_file - - if haseval: - cfgv.log_dir = log_dir - cfgv.log_file = log_file - else: - model_symbol = cfgm.symbol - if cfgv.get('dataset', None) is None: - dataset_symbol = 'nodataset' - else: - dataset_symbol = cfgv.dataset.symbol - - log_dir = osp.join(cfge.log_root_dir, '{}_{}'.format(model_symbol, dataset_symbol)) - exp_dir = search_experiment_folder(log_dir, cfge.experiment_id) - if exp_dir is None: - if not isdebug: - sig = cfgv.get('signature', []) + ['evalonly'] - else: - sig = ['debug'] - exp_dir = '_'.join([str(cfge.experiment_id)] + sig) - - eval_subdir = cfgv.get('eval_subdir', None) - # override subdir in debug mode (if eval_subdir is set) - eval_subdir = 'debug' if (eval_subdir is not None) and isdebug else eval_subdir - - if eval_subdir is not None: - log_dir = osp.join(log_dir, exp_dir, eval_subdir) - else: - log_dir = osp.join(log_dir, exp_dir) - - disable_log_override = cfgv.get('disable_log_override', False) - if osp.isdir(log_dir): - if disable_log_override: - assert False, 'Override an exsited log_dir is disabled at [{}]'.format(log_dir) - else: - os.makedirs(log_dir) - - log_file = osp.join(log_dir, 'eval.log') - cfgv.log_dir = log_dir - cfgv.log_file = log_file - - ###################### - # print and save cfg # - ###################### - - pprint.pprint(cfg) - if cfge.node_rank==0: - with open(log_file, 'w') as f: - pprint.pprint(cfg, f) - with open(osp.join(log_dir, 'config.yaml'), 'w') as f: - yaml.dump(edict_2_dict(cfg), f) - else: - with open(osp.join(log_dir, 'config.yaml.{}'.format(cfge.node_rank)), 'w') as f: - yaml.dump(edict_2_dict(cfg), f) - - ############# - # save code # - ############# - - save_code = False - if istrain: - save_code = cfgt.get('save_code', False) - elif haseval: - save_code = cfgv.get('save_code', False) - save_code = save_code and (cfge.node_rank==0) - - if save_code: - codedir = osp.join(log_dir, 'code') - if osp.exists(codedir): - shutil.rmtree(codedir) - for d in ['configs', 'lib']: - fromcodedir = d - tocodedir = osp.join(codedir, d) - shutil.copytree( - fromcodedir, tocodedir, - ignore=shutil.ignore_patterns( - '*__pycache__*', '*build*')) - for codei in os.listdir('.'): - if osp.splitext(codei)[1] == 'py': - shutil.copy(codei, codedir) - - ####################### - # set matplotlib mode # - ####################### - - if 'matplotlib_mode' in cfge: - try: - matplotlib.use(cfge.matplotlib_mode) - except: - print('Warning: matplotlib mode [{}] failed to be set!'.format(cfge.matplotlib_mode)) - - return cfg - -def edict_2_dict(x): - if isinstance(x, dict): - xnew = {} - for k in x: - xnew[k] = edict_2_dict(x[k]) - return xnew - elif isinstance(x, list): - xnew = [] - for i in range(len(x)): - xnew.append( edict_2_dict(x[i]) ) - return xnew - else: - return x - -def search_experiment_folder(root, exid): - target = None - for fi in os.listdir(root): - if not osp.isdir(osp.join(root, fi)): - continue - if 
int(fi.split('_')[0]) == exid: - if target is not None: - return None # duplicated - elif target is None: - target = fi - return target diff --git a/spaces/shivammehta25/Diff-TTSG/diff_ttsg/hifigan/meldataset.py b/spaces/shivammehta25/Diff-TTSG/diff_ttsg/hifigan/meldataset.py deleted file mode 100644 index 2977a8476527e531e53dedb2684b97338ec26fea..0000000000000000000000000000000000000000 --- a/spaces/shivammehta25/Diff-TTSG/diff_ttsg/hifigan/meldataset.py +++ /dev/null @@ -1,171 +0,0 @@ -""" from https://github.com/jik876/hifi-gan """ - -import math -import os -import random - -import numpy as np -import torch -import torch.utils.data -from librosa.filters import mel as librosa_mel_fn -from librosa.util import normalize -from scipy.io.wavfile import read - -MAX_WAV_VALUE = 32768.0 - - -def load_wav(full_path): - sampling_rate, data = read(full_path) - return data, sampling_rate - - -def dynamic_range_compression(x, C=1, clip_val=1e-5): - return np.log(np.clip(x, a_min=clip_val, a_max=None) * C) - - -def dynamic_range_decompression(x, C=1): - return np.exp(x) / C - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def dynamic_range_decompression_torch(x, C=1): - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def mel_spectrogram(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - if fmax not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[str(fmax)+'_'+str(y.device)] = torch.from_numpy(mel).float().to(y.device) - hann_window[str(y.device)] = torch.hann_window(win_size).to(y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.view_as_real(torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[str(y.device)], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=True)) - - spec = torch.sqrt(spec.pow(2).sum(-1)+(1e-9)) - - spec = torch.matmul(mel_basis[str(fmax)+'_'+str(y.device)], spec) - spec = spectral_normalize_torch(spec) - - return spec - - -def get_dataset_filelist(a): - with open(a.input_training_file, 'r', encoding='utf-8') as fi: - training_files = [os.path.join(a.input_wavs_dir, x.split('|')[0] + '.wav') - for x in fi.read().split('\n') if len(x) > 0] - - with open(a.input_validation_file, 'r', encoding='utf-8') as fi: - validation_files = [os.path.join(a.input_wavs_dir, x.split('|')[0] + '.wav') - for x in fi.read().split('\n') if len(x) > 0] - return training_files, validation_files - - -class MelDataset(torch.utils.data.Dataset): - def __init__(self, training_files, segment_size, n_fft, num_mels, - hop_size, win_size, sampling_rate, fmin, fmax, split=True, shuffle=True, n_cache_reuse=1, - device=None, fmax_loss=None, fine_tuning=False, base_mels_path=None): - self.audio_files = training_files - random.seed(1234) - if shuffle: - random.shuffle(self.audio_files) - self.segment_size = segment_size - self.sampling_rate = sampling_rate - self.split = split - 
self.n_fft = n_fft - self.num_mels = num_mels - self.hop_size = hop_size - self.win_size = win_size - self.fmin = fmin - self.fmax = fmax - self.fmax_loss = fmax_loss - self.cached_wav = None - self.n_cache_reuse = n_cache_reuse - self._cache_ref_count = 0 - self.device = device - self.fine_tuning = fine_tuning - self.base_mels_path = base_mels_path - - def __getitem__(self, index): - filename = self.audio_files[index] - if self._cache_ref_count == 0: - audio, sampling_rate = load_wav(filename) - audio = audio / MAX_WAV_VALUE - if not self.fine_tuning: - audio = normalize(audio) * 0.95 - self.cached_wav = audio - if sampling_rate != self.sampling_rate: - raise ValueError("{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - self._cache_ref_count = self.n_cache_reuse - else: - audio = self.cached_wav - self._cache_ref_count -= 1 - - audio = torch.FloatTensor(audio) - audio = audio.unsqueeze(0) - - if not self.fine_tuning: - if self.split: - if audio.size(1) >= self.segment_size: - max_audio_start = audio.size(1) - self.segment_size - audio_start = random.randint(0, max_audio_start) - audio = audio[:, audio_start:audio_start+self.segment_size] - else: - audio = torch.nn.functional.pad(audio, (0, self.segment_size - audio.size(1)), 'constant') - - mel = mel_spectrogram(audio, self.n_fft, self.num_mels, - self.sampling_rate, self.hop_size, self.win_size, self.fmin, self.fmax, - center=False) - else: - mel = np.load( - os.path.join(self.base_mels_path, os.path.splitext(os.path.split(filename)[-1])[0] + '.npy')) - mel = torch.from_numpy(mel) - - if len(mel.shape) < 3: - mel = mel.unsqueeze(0) - - if self.split: - frames_per_seg = math.ceil(self.segment_size / self.hop_size) - - if audio.size(1) >= self.segment_size: - mel_start = random.randint(0, mel.size(2) - frames_per_seg - 1) - mel = mel[:, :, mel_start:mel_start + frames_per_seg] - audio = audio[:, mel_start * self.hop_size:(mel_start + frames_per_seg) * self.hop_size] - else: - mel = torch.nn.functional.pad(mel, (0, frames_per_seg - mel.size(2)), 'constant') - audio = torch.nn.functional.pad(audio, (0, self.segment_size - audio.size(1)), 'constant') - - mel_loss = mel_spectrogram(audio, self.n_fft, self.num_mels, - self.sampling_rate, self.hop_size, self.win_size, self.fmin, self.fmax_loss, - center=False) - - return (mel.squeeze(), audio.squeeze(0), filename, mel_loss.squeeze()) - - def __len__(self): - return len(self.audio_files) diff --git a/spaces/shubh2014shiv/Japanese_NLP/app.py b/spaces/shubh2014shiv/Japanese_NLP/app.py deleted file mode 100644 index 784062def339c809bf1cab6e783baa0a4142bde2..0000000000000000000000000000000000000000 --- a/spaces/shubh2014shiv/Japanese_NLP/app.py +++ /dev/null @@ -1,364 +0,0 @@ -import streamlit as st -import pandas as pd -import plotly.express as px -import plotly.graph_objects as go -from st_aggrid import AgGrid -from st_aggrid.grid_options_builder import GridOptionsBuilder -from st_aggrid.shared import JsCode -from st_aggrid.shared import GridUpdateMode -from transformers import T5Tokenizer, BertForSequenceClassification,AutoTokenizer, AutoModelForSeq2SeqLM -import torch -import numpy as np -import json -from transformers import AutoTokenizer, BertTokenizer, AutoModelWithLMHead -import pytorch_lightning as pl -from pathlib import Path - -# Defining some functions for caching purpose by streamlit -class TranslationModel(pl.LightningModule): - def __init__(self): - super().__init__() - self.model = AutoModelWithLMHead.from_pretrained("Helsinki-NLP/opus-mt-ja-en", 
return_dict=True) - - -#@st.experimental_singleton -def loadFineTunedJaEn_NMT_Model(): - ''' - save_dest = Path('model') - save_dest.mkdir(exist_ok=True) - st.write("Creating new folder for downloading the Japanese to English Translation Model. ") - f_checkpoint = Path("model/best-checkpoint.ckpt") - st.write("'Folder: model/best-checkpoint.ckpt' created.") - if not f_checkpoint.exists(): - with st.spinner("Downloading model.This may take a while! \n Don't refresh or close this page!"): - from GD_download import download_file_from_google_drive - download_file_from_google_drive('1CZQKGj9hSqj7kEuJp_jm7bNVXrbcFsgP', f_checkpoint) - ''' - bsd_jp_to_eng_trained_model = TranslationModel.load_from_checkpoint(Path("business_dialogue_japanese_english_model_fine_tuned.ckpt")) - - - return bsd_jp_to_eng_trained_model - -@st.experimental_singleton -def getJpEn_Tokenizers(): - try: - with st.spinner("Downloading English and Japanese Transformer Tokenizers"): - ja_tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-ja-en") - en_tokenizer = BertTokenizer.from_pretrained("bert-base-uncased") - except: - st.error("Issue with downloading tokenizers") - - return ja_tokenizer, en_tokenizer - -st.set_page_config(layout="wide") -st.title("Project - Japanese Natural Language Processing (自然言語処理) using Transformers") -st.sidebar.subheader("自然言語処理 トピック") -topic = st.sidebar.radio(label="Select the NLP project topics", options=["Sentiment Analysis","Text Summarization","Japanese to English Translation"]) - -st.write("-" * 5) -jp_review_text = None -#JAPANESE_SENTIMENT_PROJECT_PATH = './Japanese Amazon reviews sentiments/' - -if topic == "Sentiment Analysis": - st.markdown( - "

    Transfer Learning based Japanese Sentiments Analysis using BERT

    ", - unsafe_allow_html=True) - st.markdown( - "

    Japanese Amazon Reviews Data (日本のAmazonレビューデータ)

    ", - unsafe_allow_html=True) - - amazon_jp_reviews = pd.read_csv("review_val.csv").sample(frac=1,random_state=10).iloc[:16000] - - cellstyle_jscode = JsCode( - """ - function(params) { - if (params.value.includes('positive')) { - return { - 'color': 'black', - 'backgroundColor': '#32CD32' - } - } else { - return { - 'color': 'black', - 'backgroundColor': '#FF7F7F' - } - } - }; - """ - ) - st.write('', - unsafe_allow_html=True) - - st.write('', - unsafe_allow_html=True) - - choose = st.radio("", ("Choose a review from the dataframe below", "Manually write review")) - - SELECT_ONE_REVIEW = "Choose a review from the dataframe below" - WRITE_REVIEW = "Manually write review" - - gb = GridOptionsBuilder.from_dataframe(amazon_jp_reviews) - gb.configure_column("sentiment", cellStyle=cellstyle_jscode) - gb.configure_pagination() - if choose == SELECT_ONE_REVIEW: - gb.configure_selection(selection_mode="single", use_checkbox=True, suppressRowDeselection=False) - gridOptions = gb.build() - - if choose == SELECT_ONE_REVIEW: - jp_review_choice = AgGrid(amazon_jp_reviews, gridOptions=gridOptions, theme='material', - enable_enterprise_modules=True, - allow_unsafe_jscode=True, update_mode=GridUpdateMode.SELECTION_CHANGED) - st.info("Select any one the Japanese Reviews by clicking the checkbox. Reviews can be navigated from each page.") - if len(jp_review_choice['selected_rows']) != 0: - jp_review_text = jp_review_choice['selected_rows'][0]['review'] - st.markdown( - "

    Selected Review in JSON (JSONで選択されたレビュー)

    ", - unsafe_allow_html=True) - st.write(jp_review_choice['selected_rows']) - - if choose == WRITE_REVIEW: - - AgGrid(amazon_jp_reviews, gridOptions=gridOptions, theme='material', - enable_enterprise_modules=True, - allow_unsafe_jscode=True) - with open("test_reviews_jp.csv", "rb") as file: - st.download_button(label="Download Additional Japanese Reviews", data=file, - file_name="Additional Japanese Reviews.csv") - st.info("Additional subset of Japanese Reviews can be downloaded and any review can be copied & pasted in text area.") - sample_japanese_review_input = "子供のレッスンバッグ用に購入。 思ったより大きく、ピアノ教本を入れるには充分でした。中は汚れてました。 何より驚いたのは、商品の梱包。 2つ折は許せるが、透明ビニール袋の底思いっきり空いてますけど? 何これ?包むっていうか挟んで終わり?底が全開している。 引っ張れば誰でも中身の注文書も、商品も見れる状態って何なの? 個人情報が晒されて、商品も粗末な扱いで嫌な気持ちでした。 郵送で中身が無事のが奇跡じゃないでしょうか? ありえない" - jp_review_text = st.text_area(label="Press 'Ctrl+Enter' after writing review in below text area", - value=sample_japanese_review_input) - if len(jp_review_text) == 0: - st.error("Input text cannot empty. Either write the japanese review in text area manually or select the review from the grid.") - - if jp_review_text: - st.markdown( - "

    Sentence-Piece based Japanese Tokenizer using RoBERTA

    ", - unsafe_allow_html=True) - tokens_column, tokenID_column = st.columns(2) - tokenizer = T5Tokenizer.from_pretrained('rinna/japanese-roberta-base') - tokens = tokenizer.tokenize(jp_review_text) - token_ids = tokenizer.convert_tokens_to_ids(tokens) - with tokens_column: - token_expander = st.expander("Expand to see the tokens", expanded=False) - with token_expander: - st.write(tokens) - with tokenID_column: - tokenID_expander = st.expander("Expand to see the token IDs", expanded=False) - with tokenID_expander: - st.write(token_ids) - - st.markdown( - "

    Encoded Japanese Review Text to get Input IDs and attention masks as PyTorch Tensor

    ", - unsafe_allow_html=True) - encoded_data = tokenizer.batch_encode_plus(np.array([jp_review_text]).astype('object'), - add_special_tokens=True, - return_attention_mask=True, - padding=True, - max_length=200, - return_tensors='pt', - truncation=True) - input_ids = encoded_data['input_ids'] - attention_masks = encoded_data['attention_mask'] - input_ids_column, attention_masks_column = st.columns(2) - with input_ids_column: - input_ids_expander = st.expander("Expand to see the input IDs tensor") - with input_ids_expander: - st.write(input_ids) - with attention_masks_column: - attention_masks_expander = st.expander("Expand to see the attention mask tensor") - with attention_masks_expander: - st.write(attention_masks) - - st.markdown( - "

    Predict Sentiment of review using Fine-Tuned Japanese BERT

    ", - unsafe_allow_html=True) - - label_dict = {'positive': 1, 'negative': 0} - if st.button("Predict Sentiment"): - with st.spinner("Wait.."): - predictions = [] - model = BertForSequenceClassification.from_pretrained("shubh2014shiv/jp_review_sentiments_amzn", - num_labels=len(label_dict), - output_attentions=False, - output_hidden_states=False) - #model.load_state_dict( - # torch.load(JAPANESE_SENTIMENT_PROJECT_PATH + 'FineTuneJapaneseBert_AmazonReviewSentiments.pt', - # map_location=torch.device('cpu'))) - - model.load_state_dict( - torch.load('reviewSentiments_jp.pt', - map_location=torch.device('cpu'))) - - inputs = { - 'input_ids': input_ids, - 'attention_mask': attention_masks - } - - with torch.no_grad(): - outputs = model(**inputs) - - logits = outputs.logits - logits = logits.detach().cpu().numpy() - scores = 1 / (1 + np.exp(-1 * logits)) - - result = {"TEXT (文章)": jp_review_text,'NEGATIVE (ネガティブ)': scores[0][0], 'POSITIVE (ポジティブ)': scores[0][1]} - - result_col,graph_col = st.columns(2) - with result_col: - st.write(result) - with graph_col: - fig = px.bar(x=['NEGATIVE (ネガティブ)','POSITIVE (ポジティブ)'],y=[result['NEGATIVE (ネガティブ)'],result['POSITIVE (ポジティブ)']]) - fig.update_layout(title="Probability distribution of Sentiment for the given text",\ - yaxis_title="Probability (確率)") - fig.update_traces(marker_color=['#FF7F7F','#32CD32']) - st.plotly_chart(fig) - -elif topic == "Text Summarization": - st.markdown( - "

    Summarizing Japanese News Article using multi-Lingual T5 (mT5)

    ", - unsafe_allow_html=True) - st.markdown( - "

    Japanese News Article Data

    ", - unsafe_allow_html=True) - - news_articles = pd.read_csv("jp_news_articles_val.csv").sample(frac=0.75, - random_state=42) - gb = GridOptionsBuilder.from_dataframe(news_articles) - gb.configure_pagination() - gb.configure_selection(selection_mode="single", use_checkbox=True, suppressRowDeselection=False) - gridOptions = gb.build() - jp_article = AgGrid(news_articles, gridOptions=gridOptions, theme='material', - enable_enterprise_modules=True, - allow_unsafe_jscode=True, update_mode=GridUpdateMode.SELECTION_CHANGED) - - # WHITESPACE_HANDLER = lambda k: re.sub('\s+', ' ', re.sub('\n+', ' ', k.strip())) - if len(jp_article['selected_rows']) == 0: - st.info("Pick any one Japanese News Article by selecting the checkbox. News articles can be navigated by clicking on page navigator at right-bottom") - else: - article_text = jp_article['selected_rows'][0]['News Articles'] - - text = st.text_area(label="Text from selected Japanese News Article(ニュース記事)", value=article_text, height=500) - summary_length = st.slider(label="Select the maximum length of summary (要約の最大長を選択します )", min_value=120,max_value=160,step=5) - - if text and st.button("Summarize it! (要約しよう)"): - waitPlaceholder = st.image("wait.gif") - summarization_model_name = "csebuetnlp/mT5_multilingual_XLSum" - tokenizer = AutoTokenizer.from_pretrained(summarization_model_name ) - model = AutoModelForSeq2SeqLM.from_pretrained(summarization_model_name ) - - input_ids = tokenizer( - article_text, - return_tensors="pt", - padding="max_length", - truncation=True, - max_length=512 - )["input_ids"] - - output_ids = model.generate( - input_ids=input_ids, - max_length=summary_length, - no_repeat_ngram_size=2, - num_beams=4 - )[0] - - summary = tokenizer.decode( - output_ids, - skip_special_tokens=True, - clean_up_tokenization_spaces=False - ) - - waitPlaceholder.empty() - - st.markdown( - "

    Summary (要約文)

    ", - unsafe_allow_html=True) - - st.write(summary) -elif topic == "Japanese to English Translation": - st.markdown( - "

    Japanese to English translation (for short sentences)

    ", - unsafe_allow_html=True) - st.markdown( - "

    Business Scene Dialog Japanese-English Corpus

    ", - unsafe_allow_html=True) - - st.write("Below given Japanese-English pair is from 'Business Scene Dialog Corpus' by the University of Tokyo") - link = '[Corpus GitHub Link](https://github.com/tsuruoka-lab/BSD)' - st.markdown(link, unsafe_allow_html=True) - - bsd_more_info = st.expander(label="Expand to get more information on data and training report") - with bsd_more_info: - st.markdown( - "

    Training Dataset

    ", - unsafe_allow_html=True) - st.write("The corpus has total 20,000 Japanese-English Business Dialog pairs. The fined-tuned Transformer model is validated on 670 Japanese-English Business Dialog pairs") - - st.markdown( - "

    Training Report

    ", - unsafe_allow_html=True) - st.write( - "The Dashboard for training result on Tensorboard is [here](https://tensorboard.dev/experiment/eWhxt1i2RuaU64krYtORhw/)") - - with open("./BSD_ja-en_val.json", encoding='utf-8') as f: - bsd_sample_data = json.load(f) - - en, ja = [], [] - for i in range(len(bsd_sample_data)): - for j in range(len(bsd_sample_data[i]['conversation'])): - en.append(bsd_sample_data[i]['conversation'][j]['en_sentence']) - ja.append(bsd_sample_data[i]['conversation'][j]['ja_sentence']) - - df = pd.DataFrame.from_dict({'Japanese': ja, 'English': en}) - gb = GridOptionsBuilder.from_dataframe(df) - gb.configure_pagination() - gb.configure_selection(selection_mode="single", use_checkbox=True, suppressRowDeselection=False) - gridOptions = gb.build() - translation_text = AgGrid(df, gridOptions=gridOptions, theme='material', - enable_enterprise_modules=True, - allow_unsafe_jscode=True, update_mode=GridUpdateMode.SELECTION_CHANGED) - if len(translation_text['selected_rows']) != 0: - bsd_jp = translation_text['selected_rows'][0]['Japanese'] - st.markdown( - "

    Business Scene Dialog in Japanese (日本語でのビジネスシーンダイアログ)

    ", - unsafe_allow_html=True) - st.write(bsd_jp) - - if st.button("Translate"): - waitPlaceholder = st.image("wait.gif") - ja_tokenizer, en_tokenizer = getJpEn_Tokenizers() - trained_model = loadFineTunedJaEn_NMT_Model() - trained_model.freeze() - - - def translate(text): - text_encoding = ja_tokenizer( - text, - max_length=100, - padding="max_length", - truncation=True, - return_attention_mask=True, - add_special_tokens=True, - return_tensors='pt' - ) - - generated_ids = trained_model.model.generate( - input_ids=text_encoding['input_ids'], - attention_mask=text_encoding['attention_mask'], - max_length=100, - num_beams=2, - repetition_penalty=2.5, - length_penalty=1.0, - early_stopping=True - ) - - preds = [en_tokenizer.decode(gen_id, skip_special_tokens=True, clean_up_tokenization_spaces=True) for - gen_id in generated_ids] - - return "".join(preds)[5:] - waitPlaceholder.empty() - - st.markdown( - "

    Translated Dialog in English (英語の翻訳されたダイアログ)

    ", - unsafe_allow_html=True) - st.write(translate(bsd_jp)) diff --git a/spaces/sidharthism/fashion-eye/models/stylegan2/stylegan2-pytorch/model.py b/spaces/sidharthism/fashion-eye/models/stylegan2/stylegan2-pytorch/model.py deleted file mode 100644 index 4e3c9687a3f4f7301cf053bee95c1e288b1c939b..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/models/stylegan2/stylegan2-pytorch/model.py +++ /dev/null @@ -1,703 +0,0 @@ -import math -import random -import functools -import operator - -import torch -from torch import nn -from torch.nn import functional as F -from torch.autograd import Function - -from op import FusedLeakyReLU, fused_leaky_relu, upfirdn2d - - -class PixelNorm(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, input): - return input * torch.rsqrt(torch.mean(input ** 2, dim=1, keepdim=True) + 1e-8) - - -def make_kernel(k): - k = torch.tensor(k, dtype=torch.float32) - - if k.ndim == 1: - k = k[None, :] * k[:, None] - - k /= k.sum() - - return k - - -class Upsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) * (factor ** 2) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=self.factor, down=1, pad=self.pad) - - return out - - -class Downsample(nn.Module): - def __init__(self, kernel, factor=2): - super().__init__() - - self.factor = factor - kernel = make_kernel(kernel) - self.register_buffer('kernel', kernel) - - p = kernel.shape[0] - factor - - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.pad = (pad0, pad1) - - def forward(self, input): - out = upfirdn2d(input, self.kernel, up=1, down=self.factor, pad=self.pad) - - return out - - -class Blur(nn.Module): - def __init__(self, kernel, pad, upsample_factor=1): - super().__init__() - - kernel = make_kernel(kernel) - - if upsample_factor > 1: - kernel = kernel * (upsample_factor ** 2) - - self.register_buffer('kernel', kernel) - - self.pad = pad - - def forward(self, input): - out = upfirdn2d(input, self.kernel, pad=self.pad) - - return out - - -class EqualConv2d(nn.Module): - def __init__( - self, in_channel, out_channel, kernel_size, stride=1, padding=0, bias=True - ): - super().__init__() - - self.weight = nn.Parameter( - torch.randn(out_channel, in_channel, kernel_size, kernel_size) - ) - self.scale = 1 / math.sqrt(in_channel * kernel_size ** 2) - - self.stride = stride - self.padding = padding - - if bias: - self.bias = nn.Parameter(torch.zeros(out_channel)) - - else: - self.bias = None - - def forward(self, input): - out = F.conv2d( - input, - self.weight * self.scale, - bias=self.bias, - stride=self.stride, - padding=self.padding, - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]},' - f' {self.weight.shape[2]}, stride={self.stride}, padding={self.padding})' - ) - - -class EqualLinear(nn.Module): - def __init__( - self, in_dim, out_dim, bias=True, bias_init=0, lr_mul=1, activation=None - ): - super().__init__() - - self.weight = nn.Parameter(torch.randn(out_dim, in_dim).div_(lr_mul)) - - if bias: - self.bias = nn.Parameter(torch.zeros(out_dim).fill_(bias_init)) - - else: - self.bias = None - - self.activation = activation - - self.scale = (1 / math.sqrt(in_dim)) * lr_mul - self.lr_mul = lr_mul - - def forward(self, input): - if 
self.activation: - out = F.linear(input, self.weight * self.scale) - out = fused_leaky_relu(out, self.bias * self.lr_mul) - - else: - out = F.linear( - input, self.weight * self.scale, bias=self.bias * self.lr_mul - ) - - return out - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.weight.shape[1]}, {self.weight.shape[0]})' - ) - - -class ScaledLeakyReLU(nn.Module): - def __init__(self, negative_slope=0.2): - super().__init__() - - self.negative_slope = negative_slope - - def forward(self, input): - out = F.leaky_relu(input, negative_slope=self.negative_slope) - - return out * math.sqrt(2) - - -class ModulatedConv2d(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - demodulate=True, - upsample=False, - downsample=False, - blur_kernel=[1, 3, 3, 1], - ): - super().__init__() - - self.eps = 1e-8 - self.kernel_size = kernel_size - self.in_channel = in_channel - self.out_channel = out_channel - self.upsample = upsample - self.downsample = downsample - - if upsample: - factor = 2 - p = (len(blur_kernel) - factor) - (kernel_size - 1) - pad0 = (p + 1) // 2 + factor - 1 - pad1 = p // 2 + 1 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1), upsample_factor=factor) - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - self.blur = Blur(blur_kernel, pad=(pad0, pad1)) - - fan_in = in_channel * kernel_size ** 2 - self.scale = 1 / math.sqrt(fan_in) - self.padding = kernel_size // 2 - - self.weight = nn.Parameter( - torch.randn(1, out_channel, in_channel, kernel_size, kernel_size) - ) - - self.modulation = EqualLinear(style_dim, in_channel, bias_init=1) - - self.demodulate = demodulate - - def __repr__(self): - return ( - f'{self.__class__.__name__}({self.in_channel}, {self.out_channel}, {self.kernel_size}, ' - f'upsample={self.upsample}, downsample={self.downsample})' - ) - - def forward(self, input, style): - batch, in_channel, height, width = input.shape - - style = self.modulation(style).view(batch, 1, in_channel, 1, 1) - weight = self.scale * self.weight * style - - if self.demodulate: - demod = torch.rsqrt(weight.pow(2).sum([2, 3, 4]) + 1e-8) - weight = weight * demod.view(batch, self.out_channel, 1, 1, 1) - - weight = weight.view( - batch * self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - - if self.upsample: - input = input.view(1, batch * in_channel, height, width) - weight = weight.view( - batch, self.out_channel, in_channel, self.kernel_size, self.kernel_size - ) - weight = weight.transpose(1, 2).reshape( - batch * in_channel, self.out_channel, self.kernel_size, self.kernel_size - ) - out = F.conv_transpose2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - out = self.blur(out) - - elif self.downsample: - input = self.blur(input) - _, _, height, width = input.shape - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=0, stride=2, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - else: - input = input.view(1, batch * in_channel, height, width) - out = F.conv2d(input, weight, padding=self.padding, groups=batch) - _, _, height, width = out.shape - out = out.view(batch, self.out_channel, height, width) - - return out - - -class NoiseInjection(nn.Module): - def __init__(self): - super().__init__() - - self.weight = nn.Parameter(torch.zeros(1)) - - 
def forward(self, image, noise=None): - if noise is None: - batch, _, height, width = image.shape - noise = image.new_empty(batch, 1, height, width).normal_() - - return image + self.weight * noise - - -class ConstantInput(nn.Module): - def __init__(self, channel, size=4): - super().__init__() - - self.input = nn.Parameter(torch.randn(1, channel, size, size)) - - def forward(self, input): - batch = input.shape[0] - out = self.input.repeat(batch, 1, 1, 1) - - return out - - -class StyledConv(nn.Module): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=False, - blur_kernel=[1, 3, 3, 1], - demodulate=True, - ): - super().__init__() - - self.conv = ModulatedConv2d( - in_channel, - out_channel, - kernel_size, - style_dim, - upsample=upsample, - blur_kernel=blur_kernel, - demodulate=demodulate, - ) - - self.noise = NoiseInjection() - # self.bias = nn.Parameter(torch.zeros(1, out_channel, 1, 1)) - # self.activate = ScaledLeakyReLU(0.2) - self.activate = FusedLeakyReLU(out_channel) - - def forward(self, input, style, noise=None): - out = self.conv(input, style) - out = self.noise(out, noise=noise) - # out = out + self.bias - out = self.activate(out) - - return out - - -class ToRGB(nn.Module): - def __init__(self, in_channel, style_dim, upsample=True, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - if upsample: - self.upsample = Upsample(blur_kernel) - - self.conv = ModulatedConv2d(in_channel, 3, 1, style_dim, demodulate=False) - self.bias = nn.Parameter(torch.zeros(1, 3, 1, 1)) - - def forward(self, input, style, skip=None): - out = self.conv(input, style) - out = out + self.bias - - if skip is not None: - skip = self.upsample(skip) - - out = out + skip - - return out - -# Wrapper that gives name to tensor -class NamedTensor(nn.Module): - def __init__(self): - super().__init__() - - def forward(self, x): - return x - -# Give each style a unique name -class StridedStyle(nn.ModuleList): - def __init__(self, n_latents): - super().__init__([NamedTensor() for _ in range(n_latents)]) - self.n_latents = n_latents - - def forward(self, x): - # x already strided - styles = [self[i](x[:, i, :]) for i in range(self.n_latents)] - return torch.stack(styles, dim=1) - -class Generator(nn.Module): - def __init__( - self, - size, - style_dim, - n_mlp, - channel_multiplier=2, - blur_kernel=[1, 3, 3, 1], - lr_mlp=0.01, - ): - super().__init__() - - self.size = size - - self.style_dim = style_dim - - layers = [PixelNorm()] - - for i in range(n_mlp): - layers.append( - EqualLinear( - style_dim, style_dim, lr_mul=lr_mlp, activation='fused_lrelu' - ) - ) - - self.style = nn.Sequential(*layers) - - self.channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - self.input = ConstantInput(self.channels[4]) - self.conv1 = StyledConv( - self.channels[4], self.channels[4], 3, style_dim, blur_kernel=blur_kernel - ) - self.to_rgb1 = ToRGB(self.channels[4], style_dim, upsample=False) - - self.log_size = int(math.log(size, 2)) - self.num_layers = (self.log_size - 2) * 2 + 1 - - self.convs = nn.ModuleList() - self.upsamples = nn.ModuleList() - self.to_rgbs = nn.ModuleList() - self.noises = nn.Module() - - in_channel = self.channels[4] - - for layer_idx in range(self.num_layers): - res = (layer_idx + 5) // 2 - shape = [1, 1, 2 ** res, 2 ** res] - self.noises.register_buffer(f'noise_{layer_idx}', torch.randn(*shape)) - - 
for i in range(3, self.log_size + 1): - out_channel = self.channels[2 ** i] - - self.convs.append( - StyledConv( - in_channel, - out_channel, - 3, - style_dim, - upsample=True, - blur_kernel=blur_kernel, - ) - ) - - self.convs.append( - StyledConv( - out_channel, out_channel, 3, style_dim, blur_kernel=blur_kernel - ) - ) - - self.to_rgbs.append(ToRGB(out_channel, style_dim)) - - in_channel = out_channel - - self.n_latent = self.log_size * 2 - 2 - self.strided_style = StridedStyle(self.n_latent) - - def make_noise(self): - device = self.input.input.device - - noises = [torch.randn(1, 1, 2 ** 2, 2 ** 2, device=device)] - - for i in range(3, self.log_size + 1): - for _ in range(2): - noises.append(torch.randn(1, 1, 2 ** i, 2 ** i, device=device)) - - return noises - - def mean_latent(self, n_latent): - latent_in = torch.randn( - n_latent, self.style_dim, device=self.input.input.device - ) - latent = self.style(latent_in).mean(0, keepdim=True) - - return latent - - def get_latent(self, input): - return self.style(input) - - def forward( - self, - styles, - return_latents=False, - inject_index=None, - truncation=1, - truncation_latent=None, - input_is_w=False, - noise=None, - randomize_noise=True, - ): - if not input_is_w: - styles = [self.style(s) for s in styles] - - if noise is None: - if randomize_noise: - noise = [None] * self.num_layers - else: - noise = [ - getattr(self.noises, f'noise_{i}') for i in range(self.num_layers) - ] - - if truncation < 1: - style_t = [] - - for style in styles: - style_t.append( - truncation_latent + truncation * (style - truncation_latent) - ) - - styles = style_t - - if len(styles) == 1: - # One global latent - inject_index = self.n_latent - - if styles[0].ndim < 3: - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - - else: - latent = styles[0] - - elif len(styles) == 2: - # Latent mixing with two latents - if inject_index is None: - inject_index = random.randint(1, self.n_latent - 1) - - latent = styles[0].unsqueeze(1).repeat(1, inject_index, 1) - latent2 = styles[1].unsqueeze(1).repeat(1, self.n_latent - inject_index, 1) - - latent = self.strided_style(torch.cat([latent, latent2], 1)) - else: - # One latent per layer - assert len(styles) == self.n_latent, f'Expected {self.n_latents} latents, got {len(styles)}' - styles = torch.stack(styles, dim=1) # [N, 18, 512] - latent = self.strided_style(styles) - - out = self.input(latent) - out = self.conv1(out, latent[:, 0], noise=noise[0]) - - skip = self.to_rgb1(out, latent[:, 1]) - - i = 1 - for conv1, conv2, noise1, noise2, to_rgb in zip( - self.convs[::2], self.convs[1::2], noise[1::2], noise[2::2], self.to_rgbs - ): - out = conv1(out, latent[:, i], noise=noise1) - out = conv2(out, latent[:, i + 1], noise=noise2) - skip = to_rgb(out, latent[:, i + 2], skip) - - i += 2 - - image = skip - - if return_latents: - return image, latent - - else: - return image, None - - -class ConvLayer(nn.Sequential): - def __init__( - self, - in_channel, - out_channel, - kernel_size, - downsample=False, - blur_kernel=[1, 3, 3, 1], - bias=True, - activate=True, - ): - layers = [] - - if downsample: - factor = 2 - p = (len(blur_kernel) - factor) + (kernel_size - 1) - pad0 = (p + 1) // 2 - pad1 = p // 2 - - layers.append(Blur(blur_kernel, pad=(pad0, pad1))) - - stride = 2 - self.padding = 0 - - else: - stride = 1 - self.padding = kernel_size // 2 - - layers.append( - EqualConv2d( - in_channel, - out_channel, - kernel_size, - padding=self.padding, - stride=stride, - bias=bias and not activate, - ) - ) - - if activate: - if 
bias: - layers.append(FusedLeakyReLU(out_channel)) - - else: - layers.append(ScaledLeakyReLU(0.2)) - - super().__init__(*layers) - - -class ResBlock(nn.Module): - def __init__(self, in_channel, out_channel, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - self.conv1 = ConvLayer(in_channel, in_channel, 3) - self.conv2 = ConvLayer(in_channel, out_channel, 3, downsample=True) - - self.skip = ConvLayer( - in_channel, out_channel, 1, downsample=True, activate=False, bias=False - ) - - def forward(self, input): - out = self.conv1(input) - out = self.conv2(out) - - skip = self.skip(input) - out = (out + skip) / math.sqrt(2) - - return out - - -class Discriminator(nn.Module): - def __init__(self, size, channel_multiplier=2, blur_kernel=[1, 3, 3, 1]): - super().__init__() - - channels = { - 4: 512, - 8: 512, - 16: 512, - 32: 512, - 64: 256 * channel_multiplier, - 128: 128 * channel_multiplier, - 256: 64 * channel_multiplier, - 512: 32 * channel_multiplier, - 1024: 16 * channel_multiplier, - } - - convs = [ConvLayer(3, channels[size], 1)] - - log_size = int(math.log(size, 2)) - - in_channel = channels[size] - - for i in range(log_size, 2, -1): - out_channel = channels[2 ** (i - 1)] - - convs.append(ResBlock(in_channel, out_channel, blur_kernel)) - - in_channel = out_channel - - self.convs = nn.Sequential(*convs) - - self.stddev_group = 4 - self.stddev_feat = 1 - - self.final_conv = ConvLayer(in_channel + 1, channels[4], 3) - self.final_linear = nn.Sequential( - EqualLinear(channels[4] * 4 * 4, channels[4], activation='fused_lrelu'), - EqualLinear(channels[4], 1), - ) - - def forward(self, input): - out = self.convs(input) - - batch, channel, height, width = out.shape - group = min(batch, self.stddev_group) - stddev = out.view( - group, -1, self.stddev_feat, channel // self.stddev_feat, height, width - ) - stddev = torch.sqrt(stddev.var(0, unbiased=False) + 1e-8) - stddev = stddev.mean([2, 3, 4], keepdims=True).squeeze(2) - stddev = stddev.repeat(group, 1, height, width) - out = torch.cat([out, stddev], 1) - - out = self.final_conv(out) - - out = out.view(batch, -1) - out = self.final_linear(out) - - return out - diff --git a/spaces/sidharthism/fashion-eye/netdissect/parallelfolder.py b/spaces/sidharthism/fashion-eye/netdissect/parallelfolder.py deleted file mode 100644 index a741691569a7c85e96d3b3d9be12b40d508f0044..0000000000000000000000000000000000000000 --- a/spaces/sidharthism/fashion-eye/netdissect/parallelfolder.py +++ /dev/null @@ -1,118 +0,0 @@ -''' -Variants of pytorch's ImageFolder for loading image datasets with more -information, such as parallel feature channels in separate files, -cached files with lists of filenames, etc. 
-''' - -import os, torch, re -import torch.utils.data as data -from torchvision.datasets.folder import default_loader -from PIL import Image -from collections import OrderedDict -from .progress import default_progress - -def grayscale_loader(path): - with open(path, 'rb') as f: - return Image.open(f).convert('L') - -class ParallelImageFolders(data.Dataset): - """ - A data loader that looks for parallel image filenames, for example - - photo1/park/004234.jpg - photo1/park/004236.jpg - photo1/park/004237.jpg - - photo2/park/004234.png - photo2/park/004236.png - photo2/park/004237.png - """ - def __init__(self, image_roots, - transform=None, - loader=default_loader, - stacker=None, - intersection=False, - verbose=None, - size=None): - self.image_roots = image_roots - self.images = make_parallel_dataset(image_roots, - intersection=intersection, verbose=verbose) - if len(self.images) == 0: - raise RuntimeError("Found 0 images within: %s" % image_roots) - if size is not None: - self.image = self.images[:size] - if transform is not None and not hasattr(transform, '__iter__'): - transform = [transform for _ in image_roots] - self.transforms = transform - self.stacker = stacker - self.loader = loader - - def __getitem__(self, index): - paths = self.images[index] - sources = [self.loader(path) for path in paths] - # Add a common shared state dict to allow random crops/flips to be - # coordinated. - shared_state = {} - for s in sources: - s.shared_state = shared_state - if self.transforms is not None: - sources = [transform(source) - for source, transform in zip(sources, self.transforms)] - if self.stacker is not None: - sources = self.stacker(sources) - else: - sources = tuple(sources) - return sources - - def __len__(self): - return len(self.images) - -def is_npy_file(path): - return path.endswith('.npy') or path.endswith('.NPY') - -def is_image_file(path): - return None != re.search(r'\.(jpe?g|png)$', path, re.IGNORECASE) - -def walk_image_files(rootdir, verbose=None): - progress = default_progress(verbose) - indexfile = '%s.txt' % rootdir - if os.path.isfile(indexfile): - basedir = os.path.dirname(rootdir) - with open(indexfile) as f: - result = sorted([os.path.join(basedir, line.strip()) - for line in progress(f.readlines(), - desc='Reading %s' % os.path.basename(indexfile))]) - return result - result = [] - for dirname, _, fnames in sorted(progress(os.walk(rootdir), - desc='Walking %s' % os.path.basename(rootdir))): - for fname in sorted(fnames): - if is_image_file(fname) or is_npy_file(fname): - result.append(os.path.join(dirname, fname)) - return result - -def make_parallel_dataset(image_roots, intersection=False, verbose=None): - """ - Returns [(img1, img2), (img1, img2)..] 
- """ - image_roots = [os.path.expanduser(d) for d in image_roots] - image_sets = OrderedDict() - for j, root in enumerate(image_roots): - for path in walk_image_files(root, verbose=verbose): - key = os.path.splitext(os.path.relpath(path, root))[0] - if key not in image_sets: - image_sets[key] = [] - if not intersection and len(image_sets[key]) != j: - raise RuntimeError( - 'Images not parallel: %s missing from one dir' % (key)) - image_sets[key].append(path) - tuples = [] - for key, value in image_sets.items(): - if len(value) != len(image_roots): - if intersection: - continue - else: - raise RuntimeError( - 'Images not parallel: %s missing from one dir' % (key)) - tuples.append(tuple(value)) - return tuples diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bus Simulator Ultimate 2.0.7 MOD APK Get Unlimited Money and Gold.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bus Simulator Ultimate 2.0.7 MOD APK Get Unlimited Money and Gold.md deleted file mode 100644 index a2636db64107c2b116323b16ac8dfccbabdd5adb..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Bus Simulator Ultimate 2.0.7 MOD APK Get Unlimited Money and Gold.md +++ /dev/null @@ -1,107 +0,0 @@ -
    -

Bus Simulator Ultimate Mod APK Unlimited Money Version 2.0.7: Everything You Need to Know

    -

Do you love driving buses and exploring different cities? Do you want to experience the thrill of being a bus driver and managing your own bus company? If yes, then you should try Bus Simulator Ultimate, one of the most popular and realistic bus simulation games for Android devices. And if you want to enjoy the game with unlimited money, gold, and other resources, then you should download Bus Simulator Ultimate Mod APK, a modified version of the original game that gives you access to all the features and content for free. In this article, we will tell you everything you need to know about Bus Simulator Ultimate Mod APK, including what it is, what its benefits are, and how to download and install it on your device.

    -

bus simulator ultimate mod apk unlimited money version 2.0.7


    Download File --->>> https://ssurll.com/2uNUam



    -

    What is Bus Simulator Ultimate?

    -

    A realistic and immersive bus driving game

    -

    Bus Simulator Ultimate is a game that lets you drive various types of buses across different countries and cities, such as Germany, Turkey, Italy, France, Spain, USA, and more. You can choose from over 25 different buses, each with its own design, features, and physics. You can also customize your buses with different colors, stickers, accessories, and upgrades. You can create your own routes and schedules, pick up and drop off passengers, follow traffic rules and regulations, deal with different weather conditions and road situations, and earn money and reputation as a bus driver.

    -

    Features of Bus Simulator Ultimate

    -

    Some of the amazing features of Bus Simulator Ultimate are:

    -
      -
• Realistic bus driving physics and sounds
• High-quality graphics and animations
• Multiplayer mode where you can play with other players online
• Bus company management where you can hire drivers, buy new buses, expand your business, and compete with other companies
• Passenger feedback system where you can get ratings and reviews from your customers
• Radio system where you can listen to music or news while driving
• Support for over 25 languages
• Frequent updates with new content and features
    -

    What is Bus Simulator Ultimate Mod APK?

    -

    A modified version of the original game

    -

    Bus Simulator Ultimate Mod APK is a modified version of the original game that gives you unlimited money, gold, and other resources in the game. With this mod apk, you can buy any bus you want, upgrade it to the max level, customize it as you like, and enjoy the game without any limitations or restrictions. You can also unlock all the countries and cities in the game, create your own routes and schedules, hire as many drivers as you want, and dominate the bus industry.

    -

    Benefits of Bus Simulator Ultimate Mod APK

    -

    Some of the benefits of Bus Simulator Ultimate Mod APK are:

    -
      -
• You can enjoy the game without spending any real money
• You can access all the features and content in the game for free
• You can have more fun and excitement in the game
• You can explore more countries and cities in the game
• You can improve your skills and experience as a bus driver
• You can challenge yourself with different scenarios and difficulties in the game
    -

    How to Download and Install Bus Simulator Ultimate Mod APK?

    -

    Steps to download and install Bus Simulator Ultimate Mod APK

    -

    To download and install Bus Simulator Ultimate Mod APK on your device, follow these simple steps:

    -

    bus simulator ultimate mod apk 2.0 7 unlimited gold
    -bus simulator ultimate hack apk version 2.0 7 free download
    -bus simulator ultimate modded apk 2.0 7 unlimited everything
    -bus simulator ultimate cheat apk v2.0 7 unlimited cash
    -bus simulator ultimate premium apk mod 2.0 7 unlimited coins
    -bus simulator ultimate cracked apk 2.0 7 unlimited money and gold
    -bus simulator ultimate latest mod apk 2.0 7 unlimited resources
    -bus simulator ultimate pro apk mod version 2.0 7 unlimited gems
    -bus simulator ultimate full apk mod 2.0 7 unlimited fuel
    -bus simulator ultimate unlocked apk mod v2.0 7 unlimited xp
    -bus simulator ultimate mega mod apk 2.0 7 unlimited tickets
    -bus simulator ultimate vip mod apk version 2.0 7 unlimited money
    -bus simulator ultimate modded game apk 2.0 7 unlimited vehicles
    -bus simulator ultimate hack game apk v2.0 7 unlimited routes
    -bus simulator ultimate mod game apk version 2.0 7 unlimited skins
    -bus simulator ultimate cheat game apk 2.0 7 unlimited upgrades
    -bus simulator ultimate hacked game apk v2.0 7 unlimited features
    -bus simulator ultimate modded app apk version 2.0 7 unlimited money
    -bus simulator ultimate hack app apk v2.0 7 free money and gold
    -bus simulator ultimate mod app apk version 2.0 7 free everything
    -bus simulator ultimate cheat app apk v2.0 7 free cash and coins
    -bus simulator ultimate cracked app apk version 2.0 7 free premium
    -bus simulator ultimate latest app apk v2.0 7 free resources
    -bus simulator ultimate pro app apk version 2.0 7 free gems and tickets
    -bus simulator ultimate full app apk v2.0 7 free fuel and xp
    -bus simulator ultimate unlocked app apk version 2.0 7 free vehicles and routes
    -bus simulator ultimate mega app apk v2.0 7 free skins and upgrades
    -bus simulator ultimate vip app apk version 2.0 7 free features and money
    -download bus simulator ultimate mod apk unlimited money v2.0 7 for android
    -download bus simulator ultimate hack apk unlimited gold v2.0 7 for android
    -download bus simulator ultimate modded apk unlimited everything v2.0 7 for android
    -download bus simulator ultimate cheat apk unlimited cash v2.0 7 for android
    -download bus simulator ultimate premium apk unlimited coins v2.0 7 for android
    -download bus simulator ultimate cracked apk unlimited money and gold v2.0 7 for android
    -download bus simulator ultimate latest mod apk unlimited resources v2.0 7 for android
    -download bus simulator ultimate pro mod apk unlimited gems v2.0 7 for android
    -download bus simulator ultimate full mod apk unlimited fuel v2.0 7 for android
    -download bus simulator ultimate unlocked mod apk unlimited xp v2.0 7 for android
    -download bus simulator ultimate mega mod apk unlimited tickets v2.0 7 for android
    -download bus simulator ultimate vip mod apk unlimited money v2.0 7 for android

    -
      -
    1. Click on this link to download Bus Simulator Ultimate Mod APK 2.0.7 from our website.
    2. -
    3. Allow your device to install apps from unknown sources. You can do this by going to Settings > Security > Unknown Sources and enabling it.
    4. -
    5. Locate the downloaded file in your file manager and tap on it to install it.
    6. -
    7. Wait for the installation process to finish and then launch the game.
    8. -
    9. Enjoy Bus Simulator Ultimate Mod APK with unlimited money and gold.
    10. -
    -

    Tips to avoid viruses and malware

    -

    While Bus Simulator Ultimate Mod APK is safe and secure to use, you should always be careful when downloading and installing any mod apk from the internet. Here are some tips to avoid viruses and malware:

    -
      -
    • Download mod apk only from trusted and reliable sources, such as our website.
    • -
    • Scan the downloaded file with a good antivirus or anti-malware tool before installing it; a minimal checksum-verification sketch follows this list.
    • -
    • Do not grant any unnecessary permissions or access to the mod apk.
    • -
    • Do not update the mod apk from the game itself, as it may overwrite the mod features and cause errors.
    • -
    • Delete the original game before installing the mod apk, as it may cause conflicts and crashes.
    • -
    -
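    One concrete way to act on the scanning tip above is to verify the downloaded file's checksum before installing it. The snippet below is a minimal sketch, assuming the site you download from publishes a SHA-256 hash for the file; the file name and expected hash shown here are placeholders, not real values.

```python
import hashlib
from pathlib import Path

# Placeholder values: substitute the real file path and the hash published by the download site.
APK_PATH = Path("bus-simulator-ultimate-mod.apk")
EXPECTED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash the file in chunks so large APK/OBB files do not need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = sha256_of(APK_PATH)
if actual == EXPECTED_SHA256.lower():
    print("Checksum matches the published value.")
else:
    print(f"Checksum mismatch: got {actual} - do not install this file.")
```

    A mismatch does not tell you what changed, only that the file is not the one the publisher describes, which is exactly the situation the tip above warns against.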

    Conclusion

    -

    Bus Simulator Ultimate is a fun and realistic bus simulation game that lets you drive various types of buses across different countries and cities. You can also manage your own bus company and compete with other players online. However, if you want to enjoy the game with unlimited money, gold, and other resources, you should download Bus Simulator Ultimate Mod APK, a modified version of the original game that gives you access to all the features and content for free. You can download Bus Simulator Ultimate Mod APK 2.0.7 from our website and follow the steps above to install it on your device. We hope you found this article helpful and informative. If you have any questions or feedback, feel free to leave a comment below.

    -

    FAQs

    -

    Here are some frequently asked questions about Bus Simulator Ultimate Mod APK:

    -

    Q: Is Bus Simulator Ultimate Mod APK free?

    -

    A: Yes, Bus Simulator Ultimate Mod APK is free to download and use. You do not need to pay any money or subscription fees to enjoy the game with unlimited money and gold.

    -

    Q: Is Bus Simulator Ultimate Mod APK safe?

    -

    A: Yes, Bus Simulator Ultimate Mod APK is safe and secure to use. It does not contain any viruses or malware that can harm your device or data. However, you should always download it from a trusted source and scan it with an antivirus or anti-malware software before installing it.

    -

    Q: What are the requirements for Bus Simulator Ultimate Mod APK?

    -

    A: To run Bus Simulator Ultimate Mod APK, you need an Android device running Android 5.0 or higher, with at least 1 GB of RAM and at least 500 MB of free storage space.

    -

    Q: Can I play Bus Simulator Ultimate Mod APK offline?

    -

    A: Yes, you can play Bus Simulator Ultimate Mod APK offline without any internet connection. However, some features such as multiplayer mode, radio system, and passenger feedback system may not work properly offline.

    -

    Q: Can I play Bus Simulator Ultimate Mod APK with my friends?

    -

    A: Yes, you can play Bus Simulator Ultimate Mod APK with your friends online in multiplayer mode. You can join or create a room and invite your friends to play together. You can also chat with them and share your routes and schedules.

    197e85843d
    -
    -
    \ No newline at end of file diff --git a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download FIFA 14 APK OBB and Unlock All Features A Complete Guide.md b/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download FIFA 14 APK OBB and Unlock All Features A Complete Guide.md deleted file mode 100644 index 9738cc21cc5f87be52db9b2283fc03104210eb3b..0000000000000000000000000000000000000000 --- a/spaces/simple0urra/skops-model-card-creator-2a23515a-d54e-4804-b365-27ed6e938735/example/Download FIFA 14 APK OBB and Unlock All Features A Complete Guide.md +++ /dev/null @@ -1,141 +0,0 @@ - -

    FIFA 14 APK and OBB Download on Apksum: A Complete Guide

    -

    If you are a fan of soccer games, you might have heard of FIFA 14, one of the most popular and realistic football games ever made. But did you know that you can download and play FIFA 14 on your Android device for free? Yes, you read that right. In this article, we will show you how to download and install FIFA 14 APK and OBB files from Apksum, a website that offers free and safe downloads of Android apps and games. We will also give you some tips and tricks for playing FIFA 14, as well as some reviews and ratings from critics and users. So, without further ado, let's get started.

    -

    fifa 14 apk and obb download on apksum


    Download File - https://ssurll.com/2uNSzf



    -

    What is FIFA 14?

    -

    FIFA 14 is a soccer simulation game developed by EA Sports and released in 2013 for various platforms, including PC, PlayStation, Xbox, Nintendo, iOS, and Android. It is the 21st installment in the FIFA series, which is based on the official license of the International Federation of Association Football (FIFA). FIFA 14 features over 16,000 players from more than 600 licensed teams and 33 leagues, as well as authentic stadiums, kits, balls, and graphics. It also boasts an impressive gameplay engine that delivers realistic physics, animations, and AI.

    -

    Features and gameplay of FIFA 14

    -

    FIFA 14 offers a variety of features and modes that cater to different tastes and preferences of soccer fans. Some of the main features and modes are:

    -
      -
    • Career Mode: In this mode, you can choose to be a player or a manager and lead your team to glory. You can create your own custom player or use a real one, and develop your skills, attributes, and reputation through matches, training, transfers, contracts, media interactions, etc. You can also manage your team's tactics, formations, transfers, scouting, finances, etc. You can play in any league or cup competition in the world, as well as in international tournaments.
    • -
    • Ultimate Team: This is the most popular mode in FIFA 14, where you can create your own dream team from scratch. You can buy, sell, trade, or collect players from different leagues and nations, as well as customize your team's chemistry, formation, style, kits, badges, etc. You can play online or offline matches against other players or the AI, as well as participate in tournaments, seasons, challenges, etc. You can also earn coins and rewards by completing objectives or using real money.
    • -
    • Match Day: This feature allows you to play with the real-world events and scenarios that happen in the soccer world. You can choose from a selection of live matches or fixtures that reflect the current form, injuries, suspensions, transfers, etc. of the teams involved. You can also experience dynamic commentary that updates with the latest news and stories.
    • -
    • Skill Games: This feature allows you to practice and improve your skills in various aspects of soccer, such as dribbling, passing, shooting, defending, etc. You can choose from different levels of difficulty and challenges that test your accuracy, speed, timing, etc. You can also compete with your friends or other players online.
    • -
    • Co-op Seasons: This feature allows you to team up with a friend online and play against other pairs of players in a 2v2 format. You can play in any league or cup competition and try to climb the leaderboards and win trophies.
    • -
    • Online Seasons: This feature allows you to play online matches against other players in a 1v1 format. You can choose from different divisions and try to earn points and promotions by winning matches. You can also customize your settings, such as match length, difficulty, etc.
    • -
    -

    System requirements and compatibility of FIFA 14

    -

    FIFA 14 is compatible with Android devices that have at least 1.35 GB of free space and run on Android 2.3.3 or higher. However, some features and modes may not be available on some devices due to hardware limitations or regional restrictions. You can check the compatibility of your device on the Google Play Store or on the Apksum website before downloading the game. You can also adjust the graphics and performance settings of the game to suit your device's capabilities.

    -

    fifa 14 mod apk obb offline download latest version
    -fifa 14 android apk obb data full unlocked download free
    -fifa 14 apk obb highly compressed download for android
    -fifa 14 ultimate edition apk obb download on apksum.com
    -fifa 14 apk obb patch download updated 2023 version
    -fifa 14 apk obb with commentary download all languages
    -fifa 14 apk obb no root download working on all devices
    -fifa 14 apk obb unlimited coins download modded version
    -fifa 14 apk obb direct download link fast and easy
    -fifa 14 apk obb original download from official site
    -fifa 14 apk obb best graphics download hd quality
    -fifa 14 apk obb new features download with improvements
    -fifa 14 apk obb full game download without license verification
    -fifa 14 apk obb online mode download play with friends
    -fifa 14 apk obb mega download link with resume support
    -fifa 14 apk obb mediafire download link no ads
    -fifa 14 apk obb google drive download link secure and reliable
    -fifa 14 apk obb zippyshare download link fast and simple
    -fifa 14 apk obb rexdl download link trusted and safe
    -fifa 14 apk obb revdl download link popular and updated
    -fifa 14 apk obb apkpure download link verified and clean
    -fifa 14 apk obb apkmirror download link authentic and original
    -fifa 14 apk obb uptodown download link user-friendly and convenient
    -fifa 14 apk obb mob.org download link high-quality and stable
    -fifa 14 apk obb android1.com download link smooth and easy
    -fifa 14 apk obb an1.com download link fast and free
    -fifa 14 apk obb dlandroid.com download link premium and exclusive
    -fifa 14 apk obb ihackedit.com download link hacked and modded
    -fifa 14 apk obb androidoyun.club download link turkish and fun
    -fifa 14 apk obb m.apksum.com download link mobile and compatible
    -fifa 14 modded apk obb latest transfer update download
    -fifa 14 unlocked apk obb all teams and players download
    -fifa 14 full version apk obb no ads or in-app purchases download
    -fifa 14 cracked apk obb unlimited money and gold download
    -fifa 14 hacked apk obb all skills and abilities unlocked download
    -fifa 14 premium apk obb vip features and benefits download
    -fifa 14 pro apk obb advanced settings and options download
    -fifa 14 deluxe apk obb extra content and bonuses download
    -fifa 14 ultimate team mod apk obb unlimited packs and coins download
    -fifa 14 career mode mod apk obb realistic and immersive gameplay download
    -how to install fifa 14 apk and obb files on android device step by step guide
    -how to fix fifa 14 apk and obb files not working or crashing on android device troubleshooting tips
    -how to update fifa 14 apk and obb files to the latest version on android device manual or automatic method
    -how to backup and restore fifa 14 apk and obb files on android device using cloud or local storage option
    -how to transfer fifa 14 apk and obb files from one android device to another using bluetooth or wifi option
    -how to play fifa 14 online with friends using apk and obb files on android device multiplayer mode tutorial
    -how to customize fifa 14 settings using apk and obb files on android device graphics, sound, control, etc. options tutorial

    -

    How to download and install FIFA 14 APK and OBB on Apksum?

    -

    If you want to download and play FIFA 14 on your Android device for free, you can use Apksum, a website that offers free and safe downloads of Android apps and games. Apksum provides the latest version of FIFA 14 APK and OBB files, which are the files that contain the game data and resources. You can download these files from Apksum without any registration or subscription. However, you need to follow some steps to install them properly on your device. Here are the steps to download and install FIFA 14 APK and OBB on Apksum:

    -

    Steps to download FIFA 14 APK and OBB on Apksum

    -
      -
    1. Go to the Apksum website using your browser and search for FIFA 14 in the search bar.
    2. -
    3. Select the FIFA 14 app from the search results and click on the download button.
    4. -
    5. You will be redirected to a new page where you can see two options: Download APK (16.15 MB) and Download OBB (1.08 GB). Click on both options one by one to download the APK and OBB files respectively.
    6. -
    7. Wait for the downloads to finish and locate the files in your device's storage.
    8. -
    -

    Steps to install FIFA 14 APK and OBB on your device

    -
      -
    1. Before installing the APK file, you need to enable the installation of apps from unknown sources on your device. To do this, go to your device's settings, security, and enable the unknown sources option.
    2. -
    3. Now, go to your device's file manager and find the FIFA 14 APK file that you downloaded from Apksum. Tap on it and follow the instructions to install it.
    4. -
    5. After installing the APK file, do not open it yet. You need to copy the OBB file to the right folder first.
    6. -
    7. Go back to your device's file manager and find the FIFA 14 OBB file that you downloaded from Apksum. It should be a zip file with the name com.ea.game.fifa14_row.zip.
    8. -
    9. Extract the zip file using any file extractor app or tool. You will get a folder with the name com.ea.game.fifa14_row.
    10. -
    7. Copy this folder and paste it into your device's internal storage under Android/obb. If you don't have an obb folder there, create one (an adb-based alternative is sketched after this list).
    12. -
    13. Now, you can open the FIFA 14 app from your app drawer or home screen and enjoy playing it.
    14. -
    -
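    As an alternative to copying the folder by hand, the same steps can be done over USB with adb. The sketch below only illustrates the general sideloading flow; it assumes adb is installed on your computer, USB debugging is enabled on the phone, and the local file names (the APK name in particular, which is not specified by the download page) match what you actually downloaded.

```python
import subprocess
from pathlib import Path

# Assumed local paths: the downloaded APK and the folder extracted from com.ea.game.fifa14_row.zip.
APK = Path("FIFA14.apk")
OBB_DIR = Path("com.ea.game.fifa14_row")

def adb(*args: str) -> None:
    """Run a single adb command and fail loudly if it does not succeed."""
    subprocess.run(["adb", *args], check=True)

adb("install", "-r", str(APK))                      # install (or reinstall) the APK
adb("shell", "mkdir", "-p", "/sdcard/Android/obb")  # make sure the obb directory exists
adb("push", str(OBB_DIR), "/sdcard/Android/obb/")   # copy the data folder into Android/obb
print("Done - launch the game from the app drawer.")
```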

    Tips and tricks for playing FIFA 14

    -

    FIFA 14 is a fun and challenging game that requires skill, strategy, and practice to master. If you want to improve your performance and win more matches, here are some tips and tricks that you can use:

    -

    How to improve your skills and tactics in FIFA 14

    -
      -
    • Use the Skill Games: As mentioned earlier, FIFA 14 has a feature called Skill Games that allows you to practice and improve your skills in various aspects of soccer, such as dribbling, passing, shooting, defending, etc. You can choose from different levels of difficulty and challenges that test your accuracy, speed, timing, etc. You can also compete with your friends or other players online. By playing these games regularly, you will learn new techniques, moves, and strategies that you can apply in real matches.
    • -
    • Use the Training Mode: Another way to improve your skills and tactics in FIFA 14 is to use the Training Mode, which is available in Career Mode or Ultimate Team Mode. In this mode, you can customize your own training sessions and drills, and practice with your team or individual players. You can choose from different scenarios, such as attacking, defending, set pieces, etc. You can also adjust the difficulty, duration, and frequency of the sessions. By using this mode regularly, you will improve your team's chemistry, cohesion, and performance.
    • -
    • Use the Tactics Menu: Another way to improve your skills and tactics in FIFA 14 is to use the Tactics Menu, which is available in any mode or match. In this menu, you can change your team's formation, style, mentality, instructions, etc. You can also create your own custom tactics and save them for later use. By using this menu, you can adapt your team's strategy to different situations and opponents.
    • -
    -

    How to use different modes and features in FIFA 14

    -
      -
    • Use the Match Day Feature: As mentioned earlier, FIFA 14 has a feature called Match Day that allows you to play with the real-world events and scenarios that happen in the soccer world. You can choose from a selection of live matches or fixtures that reflect the current form, injuries, suspensions, transfers, etc. of the teams involved. You can also experience dynamic commentary that updates with the latest news and stories. By using this feature, you can enjoy a more immersive and realistic soccer experience.
    • -
    • Use the Ultimate Team Feature: As mentioned earlier, FIFA 14 has a feature called Ultimate Team that allows you to create your own dream team from scratch. You can buy, sell, trade, or collect players from different leagues and nations, as well as customize your team's chemistry, formation, style, kits, badges, etc. You can play online or offline matches against other players or the AI, as well as participate in tournaments, seasons, challenges, etc. You can also earn coins and rewards by completing objectives or using real money. By using this feature, you can have fun and express your creativity.
    • -
    • Use the Co-op Seasons Feature: As mentioned earlier, FIFA 14 has a feature called Co-op Seasons that allows you to team up with a friend online and play against other pairs of players in a 2v2 format. You can play in any league or cup competition and try to climb the leaderboards and win trophies. By using this feature, you can have fun and cooperate with your friend.
    • -
    -

    Reviews and ratings of FIFA 14

    -

    FIFA 14 is one of the most acclaimed and successful soccer games ever made. It has received positive reviews and ratings from critics and users alike. Here are some of the reviews and ratings of FIFA 14:

    -

    What critics and users say about FIFA 14

    -

    According to Metacritic, a website that aggregates reviews from various sources, FIFA 14 has an average score of 86 out of 100 for the PC version, 88 out of 100 for the PlayStation 4 version, and 87 out of 100 for the Xbox One version. These scores indicate "generally favorable reviews" from critics. Some of the praises that critics gave to FIFA 14 are:

    -
      -
    • "FIFA 14 is more than just an annual update – it’s a whole new ball game." – IGN
    • -
    • "FIFA 14 delivers on its promise of providing an authentic football experience." – GameSpot
    • -
    • "FIFA 14 is a stunning achievement in sports simulation." – Polygon
    • -
    -

    According to Google Play Store, FIFA 14 has an average rating of 4.3 out of 5 stars from over 6 million users. Some of the praises that users gave to FIFA 14 are:

    -
      -
    • "FIFA 14 is the best soccer game ever. I love the graphics, gameplay, modes, and features." – User review
    • -
    • "FIFA 14 is amazing. I play it every day with my friends online. It's so fun and realistic." – User review
    • -
    • "FIFA 14 is awesome. I like how I can create my own team and customize it. It's like a dream come true." – User review
    • -
    -

    Pros and cons of FIFA 14

    -

    Like any game, FIFA 14 has its pros and cons that may affect your enjoyment and satisfaction. Here are some of the pros and cons of FIFA 14:

    | Pros | Cons |
    | --- | --- |
    | Realistic graphics and physics | Requires a lot of storage space and internet connection |
    | Variety of features and modes | Some features and modes may not be available on some devices |
    | Authentic license and content | Some content may be outdated or inaccurate |
    | Fun and challenging gameplay | Some gameplay aspects may be frustrating or unfair |
    | Engaging and dynamic commentary | Some commentary may be repetitive or annoying |
    -

    Conclusion

    -

    FIFA 14 is a soccer simulation game that offers a realistic and immersive experience of the beautiful game. It has stunning graphics, physics, and animations, as well as a variety of features and modes that cater to different tastes and preferences of soccer fans. It also has an authentic license and content that reflects the real-world soccer world. However, FIFA 14 also has some drawbacks, such as requiring a lot of storage space and internet connection, having some compatibility issues, and having some outdated or inaccurate content. Overall, FIFA 14 is a great game that deserves a try if you are a fan of soccer games.

    -

    FAQs

    -

    Here are some frequently asked questions about FIFA 14:

    -
      -
    • Q: Is FIFA 14 free to play?
    • -
    • A: Yes, FIFA 14 is free to download and play on Android devices. However, some features and modes may require in-app purchases or real money.
    • -
    • Q: Is FIFA 14 safe to download and install?
    • -
    • A: Yes, FIFA 14 is safe to download and install from Apksum, a website that offers free and safe downloads of Android apps and games. However, you should always check the compatibility of your device before downloading the game.
    • -
    • Q: How can I update FIFA 14?
    • -
    • A: You can update FIFA 14 by downloading the latest version of the APK and OBB files from Apksum and following the same steps as mentioned above. However, you should always backup your data before updating the game.
    • -
    • Q: How can I contact the developers of FIFA 14?
    • -
    • A: You can contact the developers of FIFA 14 by visiting their official website or social media pages, or by sending them an email or feedback through the game.
    • -
    • Q: How can I get more coins and rewards in FIFA 14?
    • -
    • A: You can get more coins and rewards in FIFA 14 by playing more matches, completing more objectives, participating in more tournaments, seasons, challenges, etc., or by using real money.
    • -

    401be4b1e0
    -
    -
    \ No newline at end of file diff --git a/spaces/sklearn-docs/Joint-feature-selection-with-multi-task-Lasso/README.md b/spaces/sklearn-docs/Joint-feature-selection-with-multi-task-Lasso/README.md deleted file mode 100644 index 3b3574519923536de6fc164d0b2d0814b23fb2d2..0000000000000000000000000000000000000000 --- a/spaces/sklearn-docs/Joint-feature-selection-with-multi-task-Lasso/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Joint Feature Selection With Multi Task Lasso -emoji: ⚡ -colorFrom: purple -colorTo: pink -sdk: gradio -sdk_version: 3.24.1 -app_file: app.py -pinned: false -license: bsd-3-clause ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/songdaooi/Swap/swapper.py b/spaces/songdaooi/Swap/swapper.py deleted file mode 100644 index f7f359961e465004fed3311b8dee0bf51c56b649..0000000000000000000000000000000000000000 --- a/spaces/songdaooi/Swap/swapper.py +++ /dev/null @@ -1,106 +0,0 @@ -import cv2 -import numpy as np -from insightface.utils import face_align -from face_parsing.swap import swap_regions -from utils import add_logo_to_image - -swap_options_list = [ - "All face", - "Age less than", - "Age greater than", - "All Male", - "All Female", - "Specific Face", -] - - -def swap_face(whole_img, target_face, source_face, models): - inswapper = models.get("swap") - face_enhancer = models.get("enhance", None) - face_parser = models.get("face_parser", None) - fe_enable = models.get("enhance_sett", False) - - bgr_fake, M = inswapper.get(whole_img, target_face, source_face, paste_back=False) - image_size = 128 if not fe_enable else 512 - aimg, _ = face_align.norm_crop2(whole_img, target_face.kps, image_size=image_size) - - if face_parser is not None: - fp_enable, includes, smooth_mask, blur_amount = models.get("face_parser_sett") - if fp_enable: - bgr_fake = swap_regions( - bgr_fake, aimg, face_parser, smooth_mask, includes=includes, blur=blur_amount - ) - - if fe_enable: - _, bgr_fake, _ = face_enhancer.enhance( - bgr_fake, paste_back=True, has_aligned=True - ) - bgr_fake = bgr_fake[0] - M /= 0.25 - - IM = cv2.invertAffineTransform(M) - - img_white = np.full((aimg.shape[0], aimg.shape[1]), 255, dtype=np.float32) - bgr_fake = cv2.warpAffine( - bgr_fake, IM, (whole_img.shape[1], whole_img.shape[0]), borderValue=0.0 - ) - img_white = cv2.warpAffine( - img_white, IM, (whole_img.shape[1], whole_img.shape[0]), borderValue=0.0 - ) - img_white[img_white > 20] = 255 - img_mask = img_white - mask_h_inds, mask_w_inds = np.where(img_mask == 255) - mask_h = np.max(mask_h_inds) - np.min(mask_h_inds) - mask_w = np.max(mask_w_inds) - np.min(mask_w_inds) - mask_size = int(np.sqrt(mask_h * mask_w)) - - k = max(mask_size // 10, 10) - img_mask = cv2.erode(img_mask, np.ones((k, k), np.uint8), iterations=1) - - k = max(mask_size // 20, 5) - kernel_size = (k, k) - blur_size = tuple(2 * i + 1 for i in kernel_size) - img_mask = cv2.GaussianBlur(img_mask, blur_size, 0) / 255 - - img_mask = np.reshape(img_mask, [img_mask.shape[0], img_mask.shape[1], 1]) - fake_merged = img_mask * bgr_fake + (1 - img_mask) * whole_img.astype(np.float32) - fake_merged = add_logo_to_image(fake_merged.astype("uint8")) - return fake_merged - - -def swap_face_with_condition( - whole_img, target_faces, source_face, condition, age, models -): - swapped = whole_img.copy() - - for target_face in target_faces: - if condition == "All face": - swapped = swap_face(swapped, target_face, source_face, models) - elif condition == "Age less than" and 
target_face["age"] < age: - swapped = swap_face(swapped, target_face, source_face, models) - elif condition == "Age greater than" and target_face["age"] > age: - swapped = swap_face(swapped, target_face, source_face, models) - elif condition == "All Male" and target_face["gender"] == 1: - swapped = swap_face(swapped, target_face, source_face, models) - elif condition == "All Female" and target_face["gender"] == 0: - swapped = swap_face(swapped, target_face, source_face, models) - - return swapped - - -def swap_specific(source_specifics, target_faces, whole_img, models, threshold=0.6): - swapped = whole_img.copy() - - for source_face, specific_face in source_specifics: - specific_embed = specific_face["embedding"] - specific_embed /= np.linalg.norm(specific_embed) - - for target_face in target_faces: - target_embed = target_face["embedding"] - target_embed /= np.linalg.norm(target_embed) - cosine_distance = 1 - np.dot(specific_embed, target_embed) - if cosine_distance > threshold: - continue - swapped = swap_face(swapped, target_face, source_face, models) - - return swapped diff --git a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py b/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py deleted file mode 100644 index 7a7696403d505afdf0f1606f8220801b0f46152f..0000000000000000000000000000000000000000 --- a/spaces/sriramelango/Social_Classification_Public/fairseq/examples/textless_nlp/gslm/unit2speech/glow.py +++ /dev/null @@ -1,311 +0,0 @@ -# ***************************************************************************** -# Copyright (c) 2018, NVIDIA CORPORATION. All rights reserved. -# -# Redistribution and use in source and binary forms, with or without -# modification, are permitted provided that the following conditions are met: -# * Redistributions of source code must retain the above copyright -# notice, this list of conditions and the following disclaimer. -# * Redistributions in binary form must reproduce the above copyright -# notice, this list of conditions and the following disclaimer in the -# documentation and/or other materials provided with the distribution. -# * Neither the name of the NVIDIA CORPORATION nor the -# names of its contributors may be used to endorse or promote products -# derived from this software without specific prior written permission. -# -# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND -# ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED -# WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE -# DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY -# DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES -# (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; -# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND -# ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT -# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS -# SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 
-# -# ***************************************************************************** -import copy -import torch -from torch.autograd import Variable -import torch.nn.functional as F - - -@torch.jit.script -def fused_add_tanh_sigmoid_multiply(input_a, input_b, n_channels): - n_channels_int = n_channels[0] - in_act = input_a+input_b - t_act = torch.tanh(in_act[:, :n_channels_int, :]) - s_act = torch.sigmoid(in_act[:, n_channels_int:, :]) - acts = t_act * s_act - return acts - - -class WaveGlowLoss(torch.nn.Module): - def __init__(self, sigma=1.0): - super(WaveGlowLoss, self).__init__() - self.sigma = sigma - - def forward(self, model_output): - z, log_s_list, log_det_W_list = model_output - for i, log_s in enumerate(log_s_list): - if i == 0: - log_s_total = torch.sum(log_s) - log_det_W_total = log_det_W_list[i] - else: - log_s_total = log_s_total + torch.sum(log_s) - log_det_W_total += log_det_W_list[i] - - loss = torch.sum(z*z)/(2*self.sigma*self.sigma) - log_s_total - log_det_W_total - return loss/(z.size(0)*z.size(1)*z.size(2)) - - -class Invertible1x1Conv(torch.nn.Module): - """ - The layer outputs both the convolution, and the log determinant - of its weight matrix. If reverse=True it does convolution with - inverse - """ - def __init__(self, c): - super(Invertible1x1Conv, self).__init__() - self.conv = torch.nn.Conv1d(c, c, kernel_size=1, stride=1, padding=0, - bias=False) - - # Sample a random orthonormal matrix to initialize weights - W = torch.qr(torch.FloatTensor(c, c).normal_())[0] - - # Ensure determinant is 1.0 not -1.0 - if torch.det(W) < 0: - W[:,0] = -1*W[:,0] - W = W.view(c, c, 1) - self.conv.weight.data = W - - def forward(self, z, reverse=False): - # shape - batch_size, group_size, n_of_groups = z.size() - - W = self.conv.weight.squeeze() - - if reverse: - if not hasattr(self, 'W_inverse'): - # Reverse computation - W_inverse = W.float().inverse() - W_inverse = Variable(W_inverse[..., None]) - if z.type() == 'torch.cuda.HalfTensor': - W_inverse = W_inverse.half() - self.W_inverse = W_inverse - z = F.conv1d(z, self.W_inverse, bias=None, stride=1, padding=0) - return z - else: - # Forward computation - log_det_W = batch_size * n_of_groups * torch.logdet(W) - z = self.conv(z) - return z, log_det_W - - -class WN(torch.nn.Module): - """ - This is the WaveNet like layer for the affine coupling. The primary difference - from WaveNet is the convolutions need not be causal. There is also no dilation - size reset. The dilation only doubles on each layer - """ - def __init__(self, n_in_channels, n_mel_channels, n_layers, n_channels, - kernel_size): - super(WN, self).__init__() - assert(kernel_size % 2 == 1) - assert(n_channels % 2 == 0) - self.n_layers = n_layers - self.n_channels = n_channels - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - - start = torch.nn.Conv1d(n_in_channels, n_channels, 1) - start = torch.nn.utils.weight_norm(start, name='weight') - self.start = start - - # Initializing last layer to 0 makes the affine coupling layers - # do nothing at first. 
This helps with training stability - end = torch.nn.Conv1d(n_channels, 2*n_in_channels, 1) - end.weight.data.zero_() - end.bias.data.zero_() - self.end = end - - cond_layer = torch.nn.Conv1d(n_mel_channels, 2*n_channels*n_layers, 1) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name='weight') - - for i in range(n_layers): - dilation = 2 ** i - padding = int((kernel_size*dilation - dilation)/2) - in_layer = torch.nn.Conv1d(n_channels, 2*n_channels, kernel_size, - dilation=dilation, padding=padding) - in_layer = torch.nn.utils.weight_norm(in_layer, name='weight') - self.in_layers.append(in_layer) - - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2*n_channels - else: - res_skip_channels = n_channels - res_skip_layer = torch.nn.Conv1d(n_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name='weight') - self.res_skip_layers.append(res_skip_layer) - - def forward(self, forward_input): - audio, spect = forward_input - audio = self.start(audio) - output = torch.zeros_like(audio) - n_channels_tensor = torch.IntTensor([self.n_channels]) - - spect = self.cond_layer(spect) - - for i in range(self.n_layers): - spect_offset = i*2*self.n_channels - acts = fused_add_tanh_sigmoid_multiply( - self.in_layers[i](audio), - spect[:,spect_offset:spect_offset+2*self.n_channels,:], - n_channels_tensor) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - audio = audio + res_skip_acts[:,:self.n_channels,:] - output = output + res_skip_acts[:,self.n_channels:,:] - else: - output = output + res_skip_acts - - return self.end(output) - - -class WaveGlow(torch.nn.Module): - def __init__(self, n_mel_channels, n_flows, n_group, n_early_every, - n_early_size, WN_config): - super(WaveGlow, self).__init__() - - self.upsample = torch.nn.ConvTranspose1d(n_mel_channels, - n_mel_channels, - 1024, stride=256) - assert(n_group % 2 == 0) - self.n_flows = n_flows - self.n_group = n_group - self.n_early_every = n_early_every - self.n_early_size = n_early_size - self.WN = torch.nn.ModuleList() - self.convinv = torch.nn.ModuleList() - - n_half = int(n_group/2) - - # Set up layers with the right sizes based on how many dimensions - # have been output already - n_remaining_channels = n_group - for k in range(n_flows): - if k % self.n_early_every == 0 and k > 0: - n_half = n_half - int(self.n_early_size/2) - n_remaining_channels = n_remaining_channels - self.n_early_size - self.convinv.append(Invertible1x1Conv(n_remaining_channels)) - self.WN.append(WN(n_half, n_mel_channels*n_group, **WN_config)) - self.n_remaining_channels = n_remaining_channels # Useful during inference - - def forward(self, forward_input): - """ - forward_input[0] = mel_spectrogram: batch x n_mel_channels x frames - forward_input[1] = audio: batch x time - """ - spect, audio = forward_input - - # Upsample spectrogram to size of audio - spect = self.upsample(spect) - assert(spect.size(2) >= audio.size(1)) - if spect.size(2) > audio.size(1): - spect = spect[:, :, :audio.size(1)] - - spect = spect.unfold(2, self.n_group, self.n_group).permute(0, 2, 1, 3) - spect = spect.contiguous().view(spect.size(0), spect.size(1), -1).permute(0, 2, 1) - - audio = audio.unfold(1, self.n_group, self.n_group).permute(0, 2, 1) - output_audio = [] - log_s_list = [] - log_det_W_list = [] - - for k in range(self.n_flows): - if k % self.n_early_every == 0 and k > 0: - output_audio.append(audio[:,:self.n_early_size,:]) - audio = audio[:,self.n_early_size:,:] - - audio, log_det_W = 
self.convinv[k](audio) - log_det_W_list.append(log_det_W) - - n_half = int(audio.size(1)/2) - audio_0 = audio[:,:n_half,:] - audio_1 = audio[:,n_half:,:] - - output = self.WN[k]((audio_0, spect)) - log_s = output[:, n_half:, :] - b = output[:, :n_half, :] - audio_1 = torch.exp(log_s)*audio_1 + b - log_s_list.append(log_s) - - audio = torch.cat([audio_0, audio_1],1) - - output_audio.append(audio) - return torch.cat(output_audio,1), log_s_list, log_det_W_list - - def infer(self, spect, sigma=1.0): - spect = self.upsample(spect) - # trim conv artifacts. maybe pad spec to kernel multiple - time_cutoff = self.upsample.kernel_size[0] - self.upsample.stride[0] - spect = spect[:, :, :-time_cutoff] - - spect = spect.unfold(2, self.n_group, self.n_group).permute(0, 2, 1, 3) - spect = spect.contiguous().view(spect.size(0), spect.size(1), -1).permute(0, 2, 1) - - if spect.type() == 'torch.cuda.HalfTensor': - audio = torch.cuda.HalfTensor(spect.size(0), - self.n_remaining_channels, - spect.size(2)).normal_() - else: - audio = torch.cuda.FloatTensor(spect.size(0), - self.n_remaining_channels, - spect.size(2)).normal_() - - audio = torch.autograd.Variable(sigma*audio) - - for k in reversed(range(self.n_flows)): - n_half = int(audio.size(1)/2) - audio_0 = audio[:,:n_half,:] - audio_1 = audio[:,n_half:,:] - - output = self.WN[k]((audio_0, spect)) - - s = output[:, n_half:, :] - b = output[:, :n_half, :] - audio_1 = (audio_1 - b)/torch.exp(s) - audio = torch.cat([audio_0, audio_1],1) - - audio = self.convinv[k](audio, reverse=True) - - if k % self.n_early_every == 0 and k > 0: - if spect.type() == 'torch.cuda.HalfTensor': - z = torch.cuda.HalfTensor(spect.size(0), self.n_early_size, spect.size(2)).normal_() - else: - z = torch.cuda.FloatTensor(spect.size(0), self.n_early_size, spect.size(2)).normal_() - audio = torch.cat((sigma*z, audio),1) - - audio = audio.permute(0,2,1).contiguous().view(audio.size(0), -1).data - return audio - - @staticmethod - def remove_weightnorm(model): - waveglow = model - for WN in waveglow.WN: - WN.start = torch.nn.utils.remove_weight_norm(WN.start) - WN.in_layers = remove(WN.in_layers) - WN.cond_layer = torch.nn.utils.remove_weight_norm(WN.cond_layer) - WN.res_skip_layers = remove(WN.res_skip_layers) - return waveglow - - -def remove(conv_list): - new_conv_list = torch.nn.ModuleList() - for old_conv in conv_list: - old_conv = torch.nn.utils.remove_weight_norm(old_conv) - new_conv_list.append(old_conv) - return new_conv_list diff --git a/spaces/stomexserde/gpt4-ui/Examples/845 Gigabyte Motherboard Driver Download LINK Mac.md b/spaces/stomexserde/gpt4-ui/Examples/845 Gigabyte Motherboard Driver Download LINK Mac.md deleted file mode 100644 index 5af4bf44a05f234ce2645e0d7dcb0f051becce50..0000000000000000000000000000000000000000 --- a/spaces/stomexserde/gpt4-ui/Examples/845 Gigabyte Motherboard Driver Download LINK Mac.md +++ /dev/null @@ -1,38 +0,0 @@ - -

    How to Download and Install 845 Gigabyte Motherboard Driver for Mac

    -

    If you have a Mac computer and you want to use an 845 Gigabyte motherboard, you might need to download and install the driver for it. A driver is a software that allows your computer to communicate with the hardware device. Without a driver, your motherboard might not work properly or at all.

    -

    In this article, we will show you how to download and install the 845 Gigabyte motherboard driver for Mac in a few simple steps. We will also provide some tips on how to troubleshoot common issues that might arise during the process.

    -

    845 gigabyte motherboard driver download mac


    Download ––– https://urlgoal.com/2uIa6E



    - -

    Step 1: Find Out Your Motherboard Model and Revision

    -

    The first step is to find out the exact model and revision of your 845 Gigabyte motherboard. This information is usually printed on a sticker on the motherboard itself, or on the box that it came in. You can also use a tool like CPU-Z to check the motherboard information on your computer.

    -

    For example, the model name of our motherboard is GA-8I845GV, and the revision is 1.0. You will need this information to download the correct driver for your motherboard.

    - -

    Step 2: Go to the GIGABYTE Support Website

    -

    The next step is to go to the GIGABYTE support website, where you can find the latest drivers and manuals for your motherboard. You can access the website by clicking here.

    -

    On the website, you can either search for your motherboard model in the search box, or select it from the product category list. For example, we searched for GA-8I845GV and found it under Motherboard > Socket 478 > GA-8I845GV.

    - -

    Step 3: Download the Driver for Mac

    -

    Once you have found your motherboard model on the website, you can click on the Support tab to see all the available downloads for it. You can filter the downloads by operating system, such as Windows, Linux, or Mac.

    -

    -

    For Mac users, you will need to download the driver for Audio. This is because the 845 Gigabyte motherboard uses a Realtek AC97 audio chip, which requires a driver to work on Mac. You can click on the Download button next to the Audio driver to start downloading it.

    -

    The file name of the driver should be something like Realtek_AC97_Driver.zip. The file size should be around 20 MB. The download speed may vary depending on your region and internet connection.

    - -

    Step 4: Install the Driver for Mac

    -

    After downloading the driver file, you will need to unzip it and run the installer. On a Mac you can simply double-click the ZIP file in Finder to extract it, since macOS handles ZIP archives natively. You should see a folder called Realtek_AC97_Driver inside.

    -

    Inside the folder, you should see a file called Realtek AC97 Installer.pkg. This is the installer file that you need to run. Double-click on it to launch it.

    -

    The installer will guide you through the installation process. You will need to agree to the license agreement, select your destination folder, and enter your password if prompted. The installation should take a few minutes.

    -

    After installing the driver, you will need to restart your computer for the changes to take effect. You should now be able to use your 845 Gigabyte motherboard with your Mac computer.
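    If you prefer the Terminal, the extract-and-install flow above can also be scripted. Treat the snippet below as a rough sketch only: the archive and package names are the ones mentioned in this article, the download location is assumed to be ~/Downloads, and the installer step still asks for an administrator password, just like the graphical installer does.

```python
import subprocess
import zipfile
from pathlib import Path

# Assumed location; adjust if your browser saved the file somewhere else.
downloads = Path.home() / "Downloads"
archive = downloads / "Realtek_AC97_Driver.zip"

# Extract the driver archive; the ZIP is described as containing a Realtek_AC97_Driver folder.
with zipfile.ZipFile(archive) as zf:
    zf.extractall(downloads)

pkg = downloads / "Realtek_AC97_Driver" / "Realtek AC97 Installer.pkg"

# Hand the .pkg to the command-line installer; -target / installs onto the boot volume.
# sudo will prompt for your password when this is run from an interactive terminal.
subprocess.run(["sudo", "installer", "-pkg", str(pkg), "-target", "/"], check=True)
print("Installation finished - restart the Mac for the driver to take effect.")
```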

    - -

    Troubleshooting Tips

    -

    If you encounter any problems during or after installing the driver, here are some tips that might help:

    -
      -
    • Make sure you have downloaded and installed the correct driver for your motherboard model and revision.
    • -
    • Make sure you have unzipped the driver file before running the installer.
    • -
    • Make sure you have enough disk space and memory available on your computer.
    • -
    • Make sure you have closed all other applications before running the installer.
    • -
    • Make sure you have followed all the instructions on the installer carefully.
    • -
    • If you get an error message or a warning during installation, try to follow the suggested solutions or contact GIGABYTE customer service for assistance.
    • -
    • If you still have no sound after installing the driver, check your sound settings and volume levels in System Preferences and make sure the correct output device is selected.

      81aa517590
      -
      -
      \ No newline at end of file diff --git a/spaces/sudo-ai/zero123plus-demo-space/share_btn.py b/spaces/sudo-ai/zero123plus-demo-space/share_btn.py deleted file mode 100644 index ede7398c7e5e5558103412e7f1883aa4a96f196f..0000000000000000000000000000000000000000 --- a/spaces/sudo-ai/zero123plus-demo-space/share_btn.py +++ /dev/null @@ -1,84 +0,0 @@ -community_icon_html = """""" - -loading_icon_html = """""" - -share_js = """async () => { - async function uploadFile(file){ - const UPLOAD_URL = 'https://huggingface.co/uploads'; - const response = await fetch(UPLOAD_URL, { - method: 'POST', - headers: { - 'Content-Type': file.type, - 'X-Requested-With': 'XMLHttpRequest', - }, - body: file, /// <- File inherits from Blob - }); - const url = await response.text(); - return url; - } - - async function getInputImgFile(imgEl){ - const res = await fetch(imgEl.src); - const blob = await res.blob(); - const imgId = Date.now() % 200; - const isPng = imgEl.src.startsWith(`data:image/png`); - if(isPng){ - const fileName = `zero123++_${{imgId}}.png`; - return new File([blob], fileName, { type: 'image/png' }); - }else{ - const fileName = `zero123++_${{imgId}}.jpg`; - return new File([blob], fileName, { type: 'image/jpeg' }); - } - } - - const gradioEl = document.querySelector("gradio-app").shadowRoot || document.querySelector('body > gradio-app'); - const inputPrompt = "Multiviews generated by Zero123++"; - const seed = gradioEl.querySelector('#seed input[type="number"]').value; - const scale = gradioEl.querySelector('#scale input[type="number"]').value; - const num_steps = gradioEl.querySelector('#num_steps input[type="number"]').value; - const controlImage = gradioEl.querySelector('#input_image img'); - const outputImgEl = gradioEl.querySelector('#six_view img'); - - const shareBtnEl = gradioEl.querySelector('#share-btn'); - const shareIconEl = gradioEl.querySelector('#share-btn-share-icon'); - const loadingIconEl = gradioEl.querySelector('#share-btn-loading-icon'); - - shareBtnEl.style.pointerEvents = 'none'; - shareIconEl.style.display = 'none'; - loadingIconEl.style.removeProperty('display'); - - const outputFile = await getInputImgFile(outputImgEl); - const urlOutputImg = await uploadFile(outputFile); - - const controlFile = await getInputImgFile(controlImage); - const urlControlImg = await uploadFile(controlFile); - - const descriptionMd = ` -#### Generated Multiviews: - - -#### Input Image: - - -#### Parameters: -- *Seed*: ${seed} -- *Classifier Free Guidance Scale*: ${scale} -- *Number of Diffusion Inference Steps*: ${num_steps} -`; - const params = new URLSearchParams({ - title: inputPrompt, - description: descriptionMd, - preview: true - }); - const paramsStr = params.toString(); - window.open(`https://huggingface.co/spaces/sudo-ai/zero123plus-demo-space/discussions/new?${paramsStr}`, '_blank'); - shareBtnEl.style.removeProperty('pointer-events'); - shareIconEl.style.removeProperty('display'); - loadingIconEl.style.display = 'none'; -}""" diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Matlab 2009 License File Crack 140 EXCLUSIVE.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Matlab 2009 License File Crack 140 EXCLUSIVE.md deleted file mode 100644 index 69c968e0c3ee222b3e464e349475075278b6b642..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Matlab 2009 License File Crack 140 EXCLUSIVE.md +++ /dev/null @@ -1,6 +0,0 @@ -

      matlab 2009 license file crack 140


      Download ✸✸✸ https://cinurl.com/2uEYWd



      - -Use pcode to regenerate the file using MATLAB R2007b or later. ... data files described and made available on this web page are distributed under the GNU LGPL license. ... Dec 29, 2019 · Hi Sir: Could you provide me with Matlab P Code Viewer Crack . ... Discussion 137 Rererenccs 140 Appendix 142 1. 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/MotionDSPvRevealPremium32013029Portablerarrar WORK.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/MotionDSPvRevealPremium32013029Portablerarrar WORK.md deleted file mode 100644 index 6f03ea046b184487b69e300a10e28a4d0742dd3e..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/MotionDSPvRevealPremium32013029Portablerarrar WORK.md +++ /dev/null @@ -1,6 +0,0 @@ -

      MotionDSPvRevealPremium32013029Portablerarrar


      Download Ziphttps://cinurl.com/2uEYlf



      - -Me-Show-Off. MotionDSPvRevealPremium32013029Portablerarrar. 0) at the Center of the Mind". MotionDSP is a sound library for animators, designers and visual effects professionals. MotionDSP also provides a comprehensive set of actions and effects for various 3D and non-3D applications (such as After Effects, Illustrator, Photoshop and Flash) that can be used to add a playful, playful, silly, silly, unique effect to your own video production. The MotionDSP Vocaloid license free library is specifically designed to address the need to create a custom character for motion graphics, web design, advertising, motion video, and more. Starting from its core module, the library allows you to quickly create an endless set of ready-to-use characters that you can play with in After Effects or in a 3D engine. Whether you're interested in creating a character for your own project or you want to use it in your clients' projects, the library includes the following modules: • Body: Cylinder • Head: Cylinder • Mouth: Spheres • Eyes: Cubes • Ears: Rounded-Boxes • Brows: Rounded-Boxes • Nose: Round-Boxes • Nails: Rounded-Boxes • Hair: Cylinder • Clothing: Cylinder • Body: Cube • Clothing: Cube • Brows: Cubes • Eyes: Cubes • Hair: Cylinder • Ears: Cubes • Nose: Cubes • Nails: Cubes • Tail: Cylinder • Tail: Cube • Shoes: Cylinder • Shoes: Cube • Leggings: Cylinder • Leggings: Cube • Gloves: Cylinder • Gloves: Cube • Head: Cubes • Face: Cylinder • Mouth: Cubes • Head: Cube • Head: Cylinder • Head: Cube • Head: Cylinder • Head: Cube To work with the MotionDSP v3.0 or v3.1 make sure to refer to the support forum thread: To work with the MotionDSP v2.0 make sure to refer to the support forum thread: 4fefd39f24
      -
      -
      -

      diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Sony Vegas 7.0d.Incl.Fixed Keygen-SSG Free Download.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Sony Vegas 7.0d.Incl.Fixed Keygen-SSG Free Download.md deleted file mode 100644 index 34ccfb1745e38f570808cd538f9a0ba9a22d10f7..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/Sony Vegas 7.0d.Incl.Fixed Keygen-SSG Free Download.md +++ /dev/null @@ -1,6 +0,0 @@ -

      Sony Vegas 7.0d.Incl.Keygen-SSG free download


      DOWNLOAD ✓✓✓ https://cinurl.com/2uEXTu



      -
      -BCC 7 for Sony Vegas+crack. Sony Vegas 7.0d.Incl.Keygen-SSG. This is a bcc plugin sony vegas free, sony vegas bcc 8 free download. Thanks for watching ... 4d29de3e1b
      -
      -
      -

      diff --git a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/WWE 2K16 Hacked.md b/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/WWE 2K16 Hacked.md deleted file mode 100644 index 4f6a9771138a44bcff07352d7e1fd0d8ac877305..0000000000000000000000000000000000000000 --- a/spaces/suppsumstagza/text-to-image-stable-diffusion-v1-5/scripts/WWE 2K16 Hacked.md +++ /dev/null @@ -1,57 +0,0 @@ - -

      WWE 2K16 Hacked: How to Unlock Everything and Play Online

      -

      WWE 2K16 is one of the most popular wrestling games ever released, featuring a huge roster of over 120 superstars, divas and legends, as well as a variety of game modes and features. However, some players may want to access more content and options without having to grind through the game or pay extra money for DLCs. That's where hacking comes in.

      -

      WWE 2K16 Hacked


      Downloadhttps://cinurl.com/2uEYm3



      -

      Hacking WWE 2K16 can allow you to unlock all characters and modes, play online with modded features, and even crack the game to run without Steam. In this article, we will show you how to hack WWE 2K16 using different methods and tools, as well as the risks and benefits of doing so.

      -

      WWE 2K16 Crack by Codex

      -

      One of the easiest ways to hack WWE 2K16 is to use a crack by Codex, a group of hackers who specialize in cracking PC games. A crack is a modified version of the game's executable file that bypasses the copy protection and allows you to play without the original disc or Steam account.

      -

      The Codex crack for WWE 2K16 not only lets you play the game without Steam, but also includes all DLC content, such as the Terminator pack, the Legends pack, and the Future Stars pack. To use the Codex crack, you need to download it from a reliable source, such as MegaGames or FitGirl Repacks, extract it, mount the .iso file, run the setup.exe file, install the game, copy the crack from the Codex folder to the game folder, and play.

      -

      However, there are some drawbacks to using the Codex crack. First of all, you won't be able to play online with other players who have the official version of the game. Second, you won't be able to receive any updates or patches from the developers. Third, you may encounter some bugs or errors that are fixed in the official version. And fourth, you may be violating the terms of service and copyright laws by using a cracked game.

      -

      WWE 2K16 Cheat Engine by XxRaPiD4K3LLERxX

      -

      Another way to hack WWE 2K16 is to use a cheat engine by XxRaPiD4K3LLERxX, a modder who created a cheat table for WWE 2K16 using Cheat Engine, a software that allows you to modify games by scanning and changing memory values.

      -

      -

    The cheat engine by XxRaPiD4K3LLERxX can allow you to unlock all characters and modes in WWE 2K16, as well as edit your CAW attributes, weight class, gender, and more. To use it, you need to download the cheat table from the FearLess Cheat Engine forum, install Cheat Engine on your PC, run WWE 2K16 on Steam, open Cheat Engine and select WWE 2K16 as the process, then open the cheat table file and activate the options you want.

      -

      However, there are also some risks and limitations to using the cheat engine. First of all, you may corrupt your save data or cause glitches in your game if you change some values incorrectly. Second, you may get banned from online play if you use cheats against other players. Third, you may not be able to use some features or functions that are incompatible with your cheats. And fourth, you may be breaking the rules of fair play by using cheats in WWE 2K16.

      -

      WWE 2K16 Online Hacks by Brismania

      -

      A third way to hack WWE 2K16 is to use online hacks by Brismania, a hacker who offers various services for WWE 2K16 online players. Online hacks can allow you to manipulate your online rank, wins, losses, stats, and more.

      -

      Brismania claims that he can sell you the highest rank available in WWE 2K16 online mode (Legend), delete your losses or increase your wins on your record, and even put God mode on your created wrestler. To use his online hacks, you need to contact him on YouTube or Skype and pay him a certain amount of money.

      -

      However, there are many reasons why you should avoid using online hacks by Brismania or anyone else. First of all, you may get scammed or ripped off by someone who doesn't deliver what they promise or asks for more money than agreed. Second, you may get banned from online play or reported by other players who notice your hacks. Third, you may ruin the fun and challenge of playing online with other people who are playing fairly. And fourth, you may be cheating yourself out of improving your skills and enjoying WWE 2K16 online mode.

      -

      Conclusion

      -

      WWE 2K16 is a great game that offers a lot of content and options for wrestling fans. However, some players may want to hack it for various reasons. There are different ways to hack WWE 2K16 using cracks, cheat engines, or online hacks. However, each method has its own pros and cons that you should consider before trying them.

      -

      Hacking WWE 2K16 may give you some advantages or benefits in terms of unlocking content or modifying features. However, it may also expose you to some risks or drawbacks in terms of causing errors or glitches in your game; getting banned from online play; violating laws or rules; or losing your enjoyment or satisfaction from playing WWE 2K16.

      -

      Ultimately, hacking WWE 2K16 is up to your personal choice and preference. However, we recommend that you play WWE 2K16 without hacking it for a better and safer gaming experience.

      -

      WWE 2K16 Tips and Tricks: How to Improve Your Skills and Enjoyment

      -

      If you don't want to hack WWE 2K16, but still want to have a better and more enjoyable gaming experience, you may want to learn some tips and tricks that can help you improve your skills and enjoyment. WWE 2K16 is a complex and challenging game that requires practice, strategy, and creativity. Here are some tips and tricks that can help you master WWE 2K16:

      -
        -
      • Learn the basics of the gameplay mechanics, such as striking, grappling, reversing, pinning, submissions, stamina, momentum, and health. You can find tutorials and guides in the game menu or online.
      • -
      • Practice your moves and combos in the training mode or exhibition matches. You can also customize your difficulty level and match settings to suit your preferences and goals.
      • -
      • Explore the different game modes and features that WWE 2K16 offers, such as 2K Showcase, MyCareer, Universe Mode, Online Mode, Creation Suite, and more. You can find hours of fun and entertainment in these modes.
      • -
      • Experiment with different characters and styles. WWE 2K16 has a huge roster of superstars, divas and legends, each with their own strengths, weaknesses, abilities, and personalities. You can also create your own wrestler or edit existing ones.
      • -
      • Have fun and be creative. WWE 2K16 is a game that allows you to express yourself and create your own stories and scenarios. You can use the Creation Suite to make your own arenas, belts, logos, entrances, videos, and more. You can also use the Online Mode to share your creations or download others' creations.
      • -
      -

      WWE 2K16 Hacked: The Final Verdict

      -

      WWE 2K16 is a game that can be hacked in different ways using cracks, cheat engines, or online hacks. However, hacking WWE 2K16 may not be worth it in the long run. Hacking WWE 2K16 may have some benefits in terms of unlocking content or modifying features. However, it may also have some risks in terms of causing errors or glitches in your game; getting banned from online play; violating laws or rules; or losing your enjoyment or satisfaction from playing WWE 2K16.

      -

      We recommend that you play WWE 2K16 without hacking it for a better and safer gaming experience. WWE 2K16 is a great game that offers a lot of content and options for wrestling fans. You can improve your skills and enjoyment by learning some tips and tricks that can help you master WWE 2K16. You can also have fun and be creative by exploring the different game modes and features that WWE 2K16 offers.

      -

      WWE 2K16 is a game that can be enjoyed by anyone who loves wrestling or video games. Whether you hack it or not is up to you. But remember: don't cheat yourself out of a great game.

      -

      WWE 2K16 Mods: How to Customize Your Game and Enhance Your Experience

      -

      Another way to enjoy WWE 2K16 without hacking it is to use mods, which are modifications or additions to the game made by fans or developers. Mods can allow you to change or improve various aspects of WWE 2K16, such as graphics, gameplay, sound, interface, and more.

      -

      WWE 2K16 has a large and active modding community that creates and shares various mods for the game. You can find mods for WWE 2K16 on websites such as Smacktalks.org, ModDB.com, or NexusMods.com. You can also use tools such as WWE 2K16 Mod Tool or WWE 2K16 Mod Manager to install and manage your mods.

      -

      Some of the most popular and impressive mods for WWE 2K16 are:

      -
        -
      • WWE 2K16 Custom Wrestlers and Arenas: These mods allow you to add new wrestlers and arenas to your game, either based on real-life wrestlers or fictional characters. You can find mods for wrestlers such as CM Punk, Hulk Hogan, John Cena, The Rock, and more. You can also find mods for arenas such as WrestleMania, Royal Rumble, SummerSlam, and more.
      • -
      • WWE 2K16 Graphics Enhancements: These mods allow you to improve the graphics and visuals of WWE 2K16, such as lighting, textures, shadows, reflections, and more. You can find mods that make WWE 2K16 look more realistic, colorful, or stylized.
      • -
      • WWE 2K16 Gameplay Improvements: These mods allow you to tweak or change the gameplay mechanics of WWE 2K16, such as difficulty, AI, moveset, damage, physics, and more. You can find mods that make WWE 2K16 more challenging, fun, or balanced.
      • -
      • WWE 2K16 Sound Enhancements: These mods allow you to enhance the sound and music of WWE 2K16, such as commentary, crowd noise, entrance themes, sound effects, and more. You can find mods that make WWE 2K16 sound more immersive, dynamic, or diverse.
      • -
      • WWE 2K16 Interface Improvements: These mods allow you to modify or customize the interface and menus of WWE 2K16, such as fonts, colors, icons, logos, and more. You can find mods that make WWE 2K16 look more modern, sleek, or user-friendly.
      • -
      -

      WWE 2K16 Hacked vs WWE 2K16 Modded: Which One is Better?

      -

      WWE 2K16 is a game that can be hacked or modded in different ways. However, which one is better depends on your personal preference and goal. Hacking WWE 2K16 may give you some advantages or benefits in terms of unlocking content or modifying features. However, it may also expose you to some risks or drawbacks in terms of causing errors or glitches in your game; getting banned from online play; violating laws or rules; or losing your enjoyment or satisfaction from playing WWE 2K16.

      -

      Modding WWE 2K16 may give you some options or opportunities in terms of changing or improving various aspects of WWE 2K16. However, it may also require some skills or knowledge in terms of installing and managing your mods; finding compatible and quality mods; or dealing with potential conflicts or issues with your mods.

      -

      We recommend that you play WWE 2K16 without hacking it for a better and safer gaming experience. However, if you want to customize your game and enhance your experience without breaking the rules or risking your game's stability, you may want to try modding WWE 2K16 instead. Modding WWE 2K16 can allow you to express yourself and create your own stories and scenarios in WWE 2K16.

      -

      WWE 2K16 is a game that can be enjoyed by anyone who loves wrestling or video games. Whether you hack it or mod it is up to you. But remember: don't cheat yourself out of a great game.

      3cee63e6c2
      -
      -
      \ No newline at end of file diff --git a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/upernet_uniformer.py b/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/upernet_uniformer.py deleted file mode 100644 index 41aa4db809dc6e2c508e98051f61807d07477903..0000000000000000000000000000000000000000 --- a/spaces/svjack/ControlNet-Pose-Chinese/annotator/uniformer/configs/_base_/models/upernet_uniformer.py +++ /dev/null @@ -1,43 +0,0 @@ -# model settings -norm_cfg = dict(type='BN', requires_grad=True) -model = dict( - type='EncoderDecoder', - pretrained=None, - backbone=dict( - type='UniFormer', - embed_dim=[64, 128, 320, 512], - layers=[3, 4, 8, 3], - head_dim=64, - mlp_ratio=4., - qkv_bias=True, - drop_rate=0., - attn_drop_rate=0., - drop_path_rate=0.1), - decode_head=dict( - type='UPerHead', - in_channels=[64, 128, 320, 512], - in_index=[0, 1, 2, 3], - pool_scales=(1, 2, 3, 6), - channels=512, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)), - auxiliary_head=dict( - type='FCNHead', - in_channels=320, - in_index=2, - channels=256, - num_convs=1, - concat_input=False, - dropout_ratio=0.1, - num_classes=19, - norm_cfg=norm_cfg, - align_corners=False, - loss_decode=dict( - type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)), - # model training and testing settings - train_cfg=dict(), - test_cfg=dict(mode='whole')) \ No newline at end of file diff --git a/spaces/szukevin/VISOR-GPT/train/scripts/convert_sbert_from_tencentpretrain_to_huggingface.py b/spaces/szukevin/VISOR-GPT/train/scripts/convert_sbert_from_tencentpretrain_to_huggingface.py deleted file mode 100644 index e37d7db8b08c6d383b2fd252af09210e39a34e1f..0000000000000000000000000000000000000000 --- a/spaces/szukevin/VISOR-GPT/train/scripts/convert_sbert_from_tencentpretrain_to_huggingface.py +++ /dev/null @@ -1,71 +0,0 @@ -import argparse -import collections -import torch - - -def convert_sbert_transformer_encoder_from_tencentpretrain_to_huggingface(input_model, output_model, layers_num): - for i in range(layers_num): - output_model["encoder.layer." + str(i) + ".attention.self.query.weight"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".self_attn.linear_layers.0.weight"] - output_model["encoder.layer." + str(i) + ".attention.self.query.bias"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".self_attn.linear_layers.0.bias"] - output_model["encoder.layer." + str(i) + ".attention.self.key.weight"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".self_attn.linear_layers.1.weight"] - output_model["encoder.layer." + str(i) + ".attention.self.key.bias"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".self_attn.linear_layers.1.bias"] - output_model["encoder.layer." + str(i) + ".attention.self.value.weight"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".self_attn.linear_layers.2.weight"] - output_model["encoder.layer." + str(i) + ".attention.self.value.bias"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".self_attn.linear_layers.2.bias"] - output_model["encoder.layer." + str(i) + ".attention.output.dense.weight"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".self_attn.final_linear.weight"] - output_model["encoder.layer." + str(i) + ".attention.output.dense.bias"] = \ - input_model["encoder.encoder_0.transformer." 
+ str(i) + ".self_attn.final_linear.bias"] - output_model["encoder.layer." + str(i) + ".attention.output.LayerNorm.weight"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".layer_norm_1.gamma"] - output_model["encoder.layer." + str(i) + ".attention.output.LayerNorm.bias"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".layer_norm_1.beta"] - output_model["encoder.layer." + str(i) + ".intermediate.dense.weight"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".feed_forward.linear_1.weight"] - output_model["encoder.layer." + str(i) + ".intermediate.dense.bias"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".feed_forward.linear_1.bias"] - output_model["encoder.layer." + str(i) + ".output.dense.weight"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".feed_forward.linear_2.weight"] - output_model["encoder.layer." + str(i) + ".output.dense.bias"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".feed_forward.linear_2.bias"] - output_model["encoder.layer." + str(i) + ".output.LayerNorm.weight"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".layer_norm_2.gamma"] - output_model["encoder.layer." + str(i) + ".output.LayerNorm.bias"] = \ - input_model["encoder.encoder_0.transformer." + str(i) + ".layer_norm_2.beta"] - - -def main(): - parser = argparse.ArgumentParser(formatter_class=argparse.ArgumentDefaultsHelpFormatter) - parser.add_argument("--input_model_path", type=str, default="models/input_model.bin", - help=".") - parser.add_argument("--output_model_path", type=str, default="models/output_model.bin", - help=".") - parser.add_argument("--layers_num", type=int, default=12, help=".") - - args = parser.parse_args() - - input_model = torch.load(args.input_model_path, map_location='cpu') - - output_model = collections.OrderedDict() - - output_model["embeddings.word_embeddings.weight"] = \ - input_model["embedding.embedding_0.word.embedding.weight"] - output_model["embeddings.position_embeddings.weight"] = \ - input_model["embedding.embedding_0.pos.embedding.weight"] - output_model["embeddings.token_type_embeddings.weight"] = \ - input_model["embedding.embedding_0.seg.embedding.weight"][1:, :] - output_model["embeddings.LayerNorm.weight"] = \ - input_model["embedding.embedding_0.layer_norm.gamma"] - output_model["embeddings.LayerNorm.bias"] = \ - input_model["embedding.embedding_0.layer_norm.beta"] - - convert_sbert_transformer_encoder_from_tencentpretrain_to_huggingface(input_model, output_model, args.layers_num) - torch.save(output_model, args.output_model_path) - -if __name__ == "__main__": - main() diff --git a/spaces/teragron/docuchat-webui/README.md b/spaces/teragron/docuchat-webui/README.md deleted file mode 100644 index 85aded85ecbdd6a1fd5dd914d4cc624881ec9575..0000000000000000000000000000000000000000 --- a/spaces/teragron/docuchat-webui/README.md +++ /dev/null @@ -1,17 +0,0 @@ ---- -title: Docuchat Webui -emoji: 📈 -colorFrom: pink -colorTo: green -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit ---- - -I'm currently trying to solve the text storing issue along with model load/reload. -Right now, huggingfacae isn't the best way to test this space. -I'd recommend git cloning the space and using an earlier commit. 
- -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/thapasushil/Multiverse/inpainting.py b/spaces/thapasushil/Multiverse/inpainting.py deleted file mode 100644 index 798c3fd252f826762aee6970f867eee537249db8..0000000000000000000000000000000000000000 --- a/spaces/thapasushil/Multiverse/inpainting.py +++ /dev/null @@ -1,194 +0,0 @@ -import inspect -from typing import List, Optional, Union - -import numpy as np -import torch - -import PIL -from diffusers import AutoencoderKL, DDIMScheduler, DiffusionPipeline, PNDMScheduler, UNet2DConditionModel -from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker -from tqdm.auto import tqdm -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer - - -def preprocess_image(image): - w, h = image.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - image = image.resize((w, h), resample=PIL.Image.LANCZOS) - image = np.array(image).astype(np.float32) / 255.0 - image = image[None].transpose(0, 3, 1, 2) - image = torch.from_numpy(image) - return 2.0 * image - 1.0 - - -def preprocess_mask(mask): - mask = mask.convert("L") - w, h = mask.size - w, h = map(lambda x: x - x % 32, (w, h)) # resize to integer multiple of 32 - mask = mask.resize((w // 8, h // 8), resample=PIL.Image.NEAREST) - mask = np.array(mask).astype(np.float32) / 255.0 - mask = np.tile(mask, (4, 1, 1)) - mask = mask[None].transpose(0, 1, 2, 3) # what does this step do? - mask = 1 - mask # repaint white, keep black - mask = torch.from_numpy(mask) - return mask - -class StableDiffusionInpaintingPipeline(DiffusionPipeline): - def __init__( - self, - vae: AutoencoderKL, - text_encoder: CLIPTextModel, - tokenizer: CLIPTokenizer, - unet: UNet2DConditionModel, - scheduler: Union[DDIMScheduler, PNDMScheduler], - safety_checker: StableDiffusionSafetyChecker, - feature_extractor: CLIPFeatureExtractor, - ): - super().__init__() - scheduler = scheduler.set_format("pt") - self.register_modules( - vae=vae, - text_encoder=text_encoder, - tokenizer=tokenizer, - unet=unet, - scheduler=scheduler, - safety_checker=safety_checker, - feature_extractor=feature_extractor, - ) - - @torch.no_grad() - def __call__( - self, - prompt: Union[str, List[str]], - init_image: torch.FloatTensor, - mask_image: torch.FloatTensor, - strength: float = 0.8, - num_inference_steps: Optional[int] = 50, - guidance_scale: Optional[float] = 7.5, - eta: Optional[float] = 0.0, - generator: Optional[torch.Generator] = None, - output_type: Optional[str] = "pil", - ): - - if isinstance(prompt, str): - batch_size = 1 - elif isinstance(prompt, list): - batch_size = len(prompt) - else: - raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}") - - if strength < 0 or strength > 1: - raise ValueError(f"The value of strength should in [0.0, 1.0] but is {strength}") - - # set timesteps - accepts_offset = "offset" in set(inspect.signature(self.scheduler.set_timesteps).parameters.keys()) - extra_set_kwargs = {} - offset = 0 - if accepts_offset: - offset = 1 - extra_set_kwargs["offset"] = 1 - - self.scheduler.set_timesteps(num_inference_steps, **extra_set_kwargs) - - # preprocess image - init_image = preprocess_image(init_image).to(self.device) - - # encode the init image into latents and scale the latents - init_latent_dist = self.vae.encode(init_image).latent_dist - init_latents = init_latent_dist.sample(generator=generator) - init_latents = 0.18215 * init_latents - - # prepare 
init_latents noise to latents - init_latents = torch.cat([init_latents] * batch_size) - init_latents_orig = init_latents - - # preprocess mask - mask = preprocess_mask(mask_image).to(self.device) - mask = torch.cat([mask] * batch_size) - - # check sizes - if not mask.shape == init_latents.shape: - raise ValueError(f"The mask and init_image should be the same size!") - - # get the original timestep using init_timestep - init_timestep = int(num_inference_steps * strength) + offset - init_timestep = min(init_timestep, num_inference_steps) - timesteps = self.scheduler.timesteps[-init_timestep] - timesteps = torch.tensor([timesteps] * batch_size, dtype=torch.long, device=self.device) - - # add noise to latents using the timesteps - noise = torch.randn(init_latents.shape, generator=generator, device=self.device) - init_latents = self.scheduler.add_noise(init_latents, noise, timesteps) - - # get prompt text embeddings - text_input = self.tokenizer( - prompt, - padding="max_length", - max_length=self.tokenizer.model_max_length, - truncation=True, - return_tensors="pt", - ) - text_embeddings = self.text_encoder(text_input.input_ids.to(self.device))[0] - - # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2) - # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1` - # corresponds to doing no classifier free guidance. - do_classifier_free_guidance = guidance_scale > 1.0 - # get unconditional embeddings for classifier free guidance - if do_classifier_free_guidance: - max_length = text_input.input_ids.shape[-1] - uncond_input = self.tokenizer( - [""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt" - ) - uncond_embeddings = self.text_encoder(uncond_input.input_ids.to(self.device))[0] - - # For classifier free guidance, we need to do two forward passes. - # Here we concatenate the unconditional and text embeddings into a single batch - # to avoid doing two forward passes - text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) - - # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature - # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers. 
- # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502 - # and should be between [0, 1] - accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys()) - extra_step_kwargs = {} - if accepts_eta: - extra_step_kwargs["eta"] = eta - - latents = init_latents - t_start = max(num_inference_steps - init_timestep + offset, 0) - for i, t in tqdm(enumerate(self.scheduler.timesteps[t_start:])): - # expand the latents if we are doing classifier free guidance - latent_model_input = torch.cat([latents] * 2) if do_classifier_free_guidance else latents - - # predict the noise residual - noise_pred = self.unet(latent_model_input, t, encoder_hidden_states=text_embeddings)["sample"] - - # perform guidance - if do_classifier_free_guidance: - noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) - noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - - # compute the previous noisy sample x_t -> x_t-1 - latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs)["prev_sample"] - - # masking - init_latents_proper = self.scheduler.add_noise(init_latents_orig, noise, t) - latents = (init_latents_proper * mask) + (latents * (1 - mask)) - - # scale and decode the image latents with vae - latents = 1 / 0.18215 * latents - image = self.vae.decode(latents).sample - - image = (image / 2 + 0.5).clamp(0, 1) - image = image.cpu().permute(0, 2, 3, 1).numpy() - - # run safety checker - safety_cheker_input = self.feature_extractor(self.numpy_to_pil(image), return_tensors="pt").to(self.device) - image, has_nsfw_concept = self.safety_checker(images=image, clip_input=safety_cheker_input.pixel_values) - - if output_type == "pil": - image = self.numpy_to_pil(image) - - return {"sample": image, "nsfw_content_detected": has_nsfw_concept} \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Buena Vista Social Club - Discografia Completa (Brunokito).rar Listen to the Full Album of the Grammy-Winning Cuban Group.md b/spaces/tialenAdioni/chat-gpt-api/logs/Buena Vista Social Club - Discografia Completa (Brunokito).rar Listen to the Full Album of the Grammy-Winning Cuban Group.md deleted file mode 100644 index f1fbc1b62d58c8212a816e06a11ec3f5e0abf1a7..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Buena Vista Social Club - Discografia Completa (Brunokito).rar Listen to the Full Album of the Grammy-Winning Cuban Group.md +++ /dev/null @@ -1,74 +0,0 @@ - -

      How to Download and Install OMSI 2 Add-On Wuppertal in 13GB

      -

      If you are a fan of bus simulation games, you might have heard of OMSI 2, the realistic and immersive game that lets you drive various buses in different cities and scenarios. One of the most popular add-ons for OMSI 2 is Wuppertal, which recreates the German city and its famous suspension railway.

      -

      However, downloading and installing OMSI 2 Add-On Wuppertal can be a challenge for some players, as the original file size is over 20GB. Fortunately, there is a way to reduce the file size to 13GB without losing any quality or features. In this article, we will show you how to do it step by step.

      -

      OMSI 2 Add-On Wuppertal download 13gb


      Download ——— https://urlcod.com/2uK8ZB



      -

      Step 1: Download OMSI 2 Add-On Wuppertal from Steam

      -

      The first thing you need to do is purchase and download OMSI 2 Add-On Wuppertal from Steam. You can find it on the add-on's Steam store page. The price is $29.99, but you can wait for a sale or use a coupon to get a discount.

      -

      Once you have bought the add-on, you can download it from your Steam library. The download size is about 21GB, so make sure you have enough space and a stable internet connection. The download time will vary depending on your speed and bandwidth.

      -

      Step 2: Extract the files from the downloaded folder

      -

      After the download is complete, you will find a folder named "OMSI 2 Add-On Wuppertal" in your Steam directory. The default location is C:\Program Files (x86)\Steam\steamapps\common\OMSI 2\dlc. You need to extract the files from this folder using a program like WinRAR or 7-Zip.

      -

      To do this, right-click on the folder and select "Extract here" or "Extract to OMSI 2 Add-On Wuppertal". This will create a new folder with the same name that contains all the files of the add-on. The extraction process may take a few minutes.

      -

      Step 3: Delete unnecessary files from the extracted folder

      -

      Now that you have extracted the files, you can delete some of them that are not needed for the game. These files are mainly videos and sounds that are not used in the add-on or are duplicated in other folders. By deleting them, you can save about 8GB of space.

      -

      The files that you can delete are:

      -
        -
      • Videos: Delete all the files in the Videos folder except for "Wuppertal_Intro.mp4". These videos are trailers and tutorials that you can watch online or in other add-ons.
      • -
      • Sounds: Delete all the files in the Sounds folder except for "Wuppertal_Sound.cfg". These sounds are generic bus sounds that are already included in OMSI 2 or other add-ons.
      • -
      • Fonts: Delete all the files in the Fonts folder except for "Wuppertal_Fonts.cfg". These fonts are not used in the add-on or are already installed on your system.
      • -
      • Sceneryobjects: Delete all the subfolders in the Sceneryobjects folder except for "Wuppertal". These subfolders contain scenery objects that are not used in the add-on or are duplicated in other folders.
      • -
      • Splines: Delete all the subfolders in the Splines folder except for "Wuppertal". These subfolders contain splines that are not used in the add-on or are duplicated in other folders.
      • -
      -

      After deleting these files, you should have a folder named "OMSI 2 Add-On Wuppertal" that is about 13GB in size.
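
      -

      If you prefer to script this cleanup instead of deleting the files by hand, the following Python sketch mirrors the keep-lists above. The folder names and keep-files are taken from this article; the add-on path is an assumption you should adjust to your own setup, and it is worth backing up the folder first in case your copy is laid out differently.

```python
import shutil
from pathlib import Path

# Assumed location of the extracted add-on folder; adjust to your own setup.
ADDON_DIR = Path(r"C:\Program Files (x86)\Steam\steamapps\common\OMSI 2\dlc\OMSI 2 Add-On Wuppertal")

# For each folder, the files or subfolders the article says to keep.
KEEP = {
    "Videos": {"Wuppertal_Intro.mp4"},
    "Sounds": {"Wuppertal_Sound.cfg"},
    "Fonts": {"Wuppertal_Fonts.cfg"},
    "Sceneryobjects": {"Wuppertal"},
    "Splines": {"Wuppertal"},
}

def clean(addon_dir: Path, keep: dict) -> None:
    """Delete everything in each listed folder except the entries to keep."""
    for folder, keep_names in keep.items():
        target = addon_dir / folder
        if not target.is_dir():
            continue  # folder missing in this copy, skip it
        for entry in target.iterdir():
            if entry.name in keep_names:
                continue
            if entry.is_dir():
                shutil.rmtree(entry)
            else:
                entry.unlink()
            print(f"removed {entry}")

if __name__ == "__main__":
    clean(ADDON_DIR, KEEP)
```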

      -

      Step 4: Copy and paste the extracted folder into your OMSI 2 directory

      -

      The final step is to copy and paste the extracted folder into your OMSI 2 directory. This is where you installed OMSI 2 on your computer. The default location is C:\Program Files (x86)\Steam\steamapps\common\OMSI

      -

      OMSI 2 Wuppertal bus simulator free download
      -How to install OMSI 2 Add-On Wuppertal on PC
      -OMSI 2 Add-On Wuppertal full version torrent
      -OMSI 2 Wuppertal DLC review and gameplay
      -OMSI 2 Add-On Wuppertal system requirements and compatibility
      -OMSI 2 Wuppertal mod pack download and installation
      -OMSI 2 Add-On Wuppertal steam key generator
      -OMSI 2 Wuppertal update and patch notes
      -OMSI 2 Add-On Wuppertal best routes and maps
      -OMSI 2 Wuppertal multiplayer online mode
      -OMSI 2 Add-On Wuppertal crack and activation code
      -OMSI 2 Wuppertal realistic graphics and sound effects
      -OMSI 2 Add-On Wuppertal tips and tricks for beginners
      -OMSI 2 Wuppertal custom skins and liveries download
      -OMSI 2 Add-On Wuppertal cheats and hacks
      -OMSI 2 Wuppertal comparison with real life buses
      -OMSI 2 Add-On Wuppertal official trailer and screenshots
      -OMSI 2 Wuppertal new features and improvements
      -OMSI 2 Add-On Wuppertal price and discount offers
      -OMSI 2 Wuppertal support and customer service
      -OMSI 2 Add-On Wuppertal error and bug fixes
      -OMSI 2 Wuppertal minimum and recommended specs
      -OMSI 2 Add-On Wuppertal download speed and time
      -OMSI 2 Wuppertal controller and keyboard settings
      -OMSI 2 Add-On Wuppertal VR and motion simulation compatibility
      -OMSI 2 Wuppertal history and development
      -OMSI 2 Add-On Wuppertal ratings and reviews from users
      -OMSI 2 Wuppertal alternative download links and sources
      -OMSI 2 Add-On Wuppertal FAQ and troubleshooting guide
      -OMSI 2 Wuppertal achievements and rewards
      -OMSI 2 Add-On Wuppertal editor and modding tools
      -OMSI 2 Wuppertal expansion packs and add-ons list
      -OMSI 2 Add-On Wuppertal license key verification and activation
      -OMSI 2 Wuppertal demo and trial version download
      -OMSI 2 Add-On Wuppertal refund policy and terms of service
      -OMSI 2 Wuppertal forum and community discussions
      -OMSI 2 Add-On Wuppertal walkthrough and tutorial videos
      -OMSI 2 Wuppertal challenges and missions list
      -OMSI 2 Add-On Wuppertal secrets and easter eggs
      -OMSI 2 Wuppertal statistics and performance analysis
      -OMSI 2 Add-On Wuppertal backup and restore options
      -OMSI 2 Wuppertal languages and subtitles settings
      -OMSI 2 Add-On Wuppertal credits and acknowledgements
      -OMSI 2 Wuppertal merchandise and collectibles
      -OMSI 2 Add-On Wuppertal news and updates
      -OMSI 2 Wuppertal feedback and suggestions
      -OMSI 2 Add-On Wuppertal fun facts and trivia
      -OMSI 2 Wuppertal memes and jokes

      e753bf7129
      -
      -
      \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/Conflict Denied Ops Highly Compressed A Game That Pushes the Limits of Your PC and Your Skills.md b/spaces/tialenAdioni/chat-gpt-api/logs/Conflict Denied Ops Highly Compressed A Game That Pushes the Limits of Your PC and Your Skills.md deleted file mode 100644 index 22c0be785c1e230af0267940d829f81843e94f4c..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/Conflict Denied Ops Highly Compressed A Game That Pushes the Limits of Your PC and Your Skills.md +++ /dev/null @@ -1,130 +0,0 @@ -
      -

      Conflict Denied Ops Highly Compressed: A Review

      -

      If you are a fan of shooter games, you might have heard of Conflict Denied Ops, a game that was released in 2008 for Windows, PlayStation 3 and Xbox 360. The game features two playable characters, Graves and Lang, who are part of a covert CIA paramilitary team. The game allows players to switch between the two characters at any time and use their unique weapons and abilities to complete various missions around the world.

      -

      Conflict Denied Ops Highly Compressed


      Download Zip >>>>> https://urlcod.com/2uK8EY



      -

      However, you might also have heard that the game has a large file size of 7 GB, which can be a problem for some players who have limited storage space or slow internet connection. That's why some websites offer a highly compressed version of the game, which reduces the file size to about 1.5 GB without compromising the quality or performance. In this article, we will review Conflict Denied Ops highly compressed and tell you how to get it and play it.

      -

      How to Download and Install Conflict Denied Ops Highly Compressed

      -

      The first step to play Conflict Denied Ops highly compressed is to download the file from a reliable source. There are many websites that offer this file, but some of them may contain viruses or malware that can harm your computer. To avoid this, we recommend that you use this link, which is safe and verified: Download Conflict Denied Ops Highly Compressed.

      -

      This link will take you to a page where you can choose between different download options. You can either download the file directly from the website or use a torrent client to download it faster. The file size is about 1.5 GB, which is much smaller than the original size of 7 GB.

      -

      Once you have downloaded the file, you need to extract it using a program like WinRAR or 7-Zip. To do this, right-click on the file and select "Extract Here" or "Extract to Conflict Denied Ops". This will create a folder with the same name as the file. Open the folder and look for a file called "Setup.exe". Double-click on it and follow the instructions on the screen to install the game. You may need to enter a serial key or a password during the installation process. You can find this information in a text file inside the folder.
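
      -

      As an aside, if the download happens to be a .zip archive, the extraction step can also be scripted with Python's standard library, as in the hedged sketch below. The archive name and location are assumptions for illustration, and a .rar archive would still need WinRAR or 7-Zip.

```python
import subprocess
import zipfile
from pathlib import Path

# Assumed file name and location of the downloaded archive; adjust as needed.
archive = Path.home() / "Downloads" / "Conflict_Denied_Ops_Highly_Compressed.zip"
dest = archive.with_suffix("")          # extract into a folder next to the archive

with zipfile.ZipFile(archive) as zf:
    zf.extractall(dest)                 # same idea as "Extract to <folder>" in WinRAR

setup = dest / "Setup.exe"
if setup.exists():
    subprocess.run([str(setup)], check=True)   # launch the installer (Windows only)
else:
    print("Setup.exe not found - check the extracted folder")
```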

      -

      How to Run and Play Conflict Denied Ops Highly Compressed

      -

      After installing the game, you can run it from your desktop or start menu. You may need to adjust some settings like resolution, graphics and sound before playing. You can also change the language of the game from English to other options like French, German or Spanish.

      -

      Now you are ready to play Conflict Denied Ops highly compressed and enjoy its thrilling action and co-op gameplay. You can either play solo or with a friend online or offline. The game has 10 missions that will take you to different locations like Venezuela, Russia and Myanmar. You can also unlock new weapons and upgrades as you progress through the game.

      -

      Conclusion

      -

      Conflict Denied Ops is a fun and exciting shooter game that you can download and install in a highly compressed format. The game has a great co-op mode that allows you to switch between two characters with different skills and weapons. The game also has a good story and graphics that will keep you engaged for hours.

      -

      If you want to play Conflict Denied Ops highly compressed, you can use this link to download it safely and easily: Download Conflict Denied Ops Highly Compressed. We hope you enjoy this game as much as we did.

      -

      Conflict Denied Ops PC game highly compressed download
      -How to install Conflict Denied Ops in low size
      -Conflict Denied Ops full version highly compressed free
      -Download Conflict Denied Ops highly compressed for Windows 10
      -Conflict Denied Ops highly compressed 100MB
      -Conflict Denied Ops highly compressed rar file
      -Conflict Denied Ops highly compressed torrent link
      -Conflict Denied Ops highly compressed gameplay
      -Conflict Denied Ops highly compressed system requirements
      -Conflict Denied Ops highly compressed cheats and codes
      -Conflict Denied Ops highly compressed crack and patch
      -Conflict Denied Ops highly compressed iso file
      -Conflict Denied Ops highly compressed direct download
      -Conflict Denied Ops highly compressed no survey
      -Conflict Denied Ops highly compressed no password
      -Conflict Denied Ops highly compressed online multiplayer
      -Conflict Denied Ops highly compressed single link
      -Conflict Denied Ops highly compressed repack by fitgirl
      -Conflict Denied Ops highly compressed google drive
      -Conflict Denied Ops highly compressed mega link
      -Conflict Denied Ops highly compressed zip file
      -Conflict Denied Ops highly compressed utorrent
      -Conflict Denied Ops highly compressed skidrow
      -Conflict Denied Ops highly compressed ocean of games
      -Conflict Denied Ops highly compressed igg games
      -Conflict Denied Ops highly compressed apunkagames
      -Conflict Denied Ops highly compressed softonic
      -Conflict Denied Ops highly compressed pcgames88
      -Conflict Denied Ops highly compressed kgb archiver
      -Conflict Denied Ops highly compressed mediafire link
      -Conflict Denied Ops highly compressed review and rating
      -Conflict Denied Ops highly compressed trainer and mods
      -Conflict Denied Ops highly compressed save game file
      -Conflict Denied Ops highly compressed steam key
      -Conflict Denied Ops highly compressed cd key generator
      -Conflict Denied Ops highly compressed ps2 emulator
      -Conflict Denied Ops highly compressed xbox 360 controller support
      -Conflict Denied Ops highly compressed keyboard and mouse settings
      -Conflict Denied Ops highly compressed graphics and sound settings
      -Conflict Denied Ops highly compressed split screen co-op mode
      -Conflict Denied Ops highly compressed mission list and walkthrough
      -Conflict Denied Ops highly compressed weapons and gadgets list
      -Conflict Denied Ops highly compressed characters and voice actors list
      -Conflict Denied Ops highly compressed tips and tricks for beginners
      -Conflict Denied Ops highly compressed best settings for low end pc
      -Conflict Denied Ops highly compressed error fix and solution guide
      -Conflict Denied Ops high quality vs low quality comparison video
      -Download all parts of Conflict Denied Ops in high compression
      -How to play Conflict Denied Ops without installation
      -How to get unlimited ammo and health in Conflict Denied Ops

      -

      What are the Benefits of Playing Conflict Denied Ops Highly Compressed

      -

      Playing Conflict Denied Ops highly compressed has many benefits for players who want to enjoy a high-quality shooter game without spending too much time or space on downloading and installing it. Some of the benefits are:

      -
        -
      • You can save storage space on your computer or console by downloading a smaller file size of 1.5 GB instead of 7 GB.
      • -
      • You can download the game faster and easier by using a torrent client or a direct link from a verified website.
      • -
      • You can play the game smoothly and without any lag or glitches by adjusting some settings like resolution, graphics and sound.
      • -
      • You can experience the same gameplay and features as the original version of the game, such as switching between two characters, using different weapons and abilities, completing 10 missions, and unlocking new items.
      • -
      • You can have fun and challenge yourself by playing with a friend online or offline in the co-op mode.
      • -
      -

      What are the Drawbacks of Playing Conflict Denied Ops Highly Compressed

      -

      Playing Conflict Denied Ops highly compressed also has some drawbacks that players should be aware of before downloading and installing it. Some of the drawbacks are:

      -
        -
      • You may encounter some compatibility issues with your computer or console if you have an older or lower-end system.
      • -
      • You may need to enter a serial key or a password during the installation process, which can be found in a text file inside the folder.
      • -
      • You may not be able to access some online features or updates of the game if you play the highly compressed version.
      • -
      • You may risk getting viruses or malware on your computer if you download the file from an untrusted source.
      • -
      • You may violate some copyright laws or terms of service if you download and play the game illegally.
      • -
      -

      How to Play Conflict Denied Ops Highly Compressed Safely and Legally

      -

      If you want to play Conflict Denied Ops highly compressed safely and legally, you should follow some tips and precautions. Some of them are:

      -
        -
      • Make sure your computer or console meets the minimum system requirements for the game.
      • -
      • Download the file from a trusted and verified source, such as this link: Download Conflict Denied Ops Highly Compressed.
      • -
      • Use a software like WinRAR or 7-Zip to extract the file and install the game.
      • -
      • Enter the serial key or password correctly during the installation process.
      • -
      • Buy the original version of the game if you like it and want to support the developers.
      • -
      -

      What are the Features of Conflict Denied Ops Highly Compressed

      -

      Conflict Denied Ops highly compressed has many features that make it an enjoyable and immersive shooter game. Some of the features are:

      -
        -
      • You can play as two different characters, Graves and Lang, who have different weapons and abilities. Graves is a sniper who can use a rifle, a pistol and a rocket launcher. Lang is a demolitions expert who can use a machine gun, a shotgun and a grenade launcher.
      • -
      • You can switch between the two characters at any time during the game. You can also control the other character by giving orders or using the co-op mode.
      • -
      • You can experience a realistic and dynamic environment that reacts to your actions. You can destroy walls, doors, vehicles and other objects to create new paths or cover.
      • -
      • You can explore different locations around the world, such as Venezuela, Russia and Myanmar. Each location has its own challenges and objectives.
      • -
      • You can customize your weapons and abilities by collecting intel points throughout the game. You can use these points to upgrade your accuracy, damage, reload speed and more.
      • -
      -

      What are the Reviews of Conflict Denied Ops Highly Compressed

      -

      Conflict Denied Ops highly compressed has received mixed reviews from critics and players. Some of the reviews are:

      -
        -
      • "Conflict Denied Ops is a decent shooter game that offers some fun co-op action and destructible environments. However, it also suffers from some technical issues, repetitive gameplay and bland graphics." - IGN
      • -
      • "Conflict Denied Ops is a game that tries to do too many things at once and fails at most of them. The game has a weak story, poor AI, glitchy controls and boring missions. The only redeeming factor is the co-op mode, which can be fun with a friend." - GameSpot
      • -
      • "Conflict Denied Ops is a game that delivers what it promises: a fast-paced and explosive shooter game that you can play with a friend. The game has a great co-op mode, varied weapons and abilities, and destructible environments. The game may not be very original or innovative, but it is fun and entertaining." - Steam
      • -
      -

      What are the Tips and Tricks for Playing Conflict Denied Ops Highly Compressed

      -

      Playing Conflict Denied Ops highly compressed can be challenging and rewarding at the same time. If you want to master the game and complete all the missions, you should follow some tips and tricks that can help you improve your skills and strategies. Some of them are:

      -
        -
      • Use the cover system to protect yourself from enemy fire. You can hide behind walls, doors, vehicles and other objects and peek out to shoot. You can also destroy some objects to create new cover or expose enemies.
      • -
      • Switch between the two characters frequently to use their different weapons and abilities. Graves can snipe enemies from a distance or use a rocket launcher to cause massive damage. Lang can use a machine gun or a shotgun to deal with close-range enemies or a grenade launcher to clear out groups of enemies.
      • -
      • Use the co-op mode to play with a friend online or offline. You can communicate with your partner using voice chat or text chat and coordinate your actions. You can also revive your partner if they get downed by an enemy.
      • -
      • Collect intel points throughout the game by finding laptops, documents, radios and other items. You can use these points to upgrade your weapons and abilities by accessing the loadout menu. You can also replay missions to collect more intel points.
      • -
      • Try different difficulty levels to challenge yourself and earn more rewards. The game has four difficulty levels: easy, normal, hard and extreme. The higher the difficulty level, the more enemies, damage and objectives you will face.
      • -
      -

      What are the Alternatives to Conflict Denied Ops Highly Compressed

      -

      If you like Conflict Denied Ops highly compressed, you might also like some other shooter games that have similar features or themes. Some of the alternatives are:

      -
        -
      • Army of Two: A game that focuses on co-op gameplay and features two mercenaries who work for a private military company. The game allows players to customize their weapons, armor and appearance and use a variety of tactics and strategies.
      • -
      • Call of Duty: Modern Warfare: A game that is set in a modern-day conflict involving terrorists, insurgents and rogue states. The game has a cinematic and immersive campaign mode and a popular multiplayer mode that offers various modes, maps and weapons.
      • -
      • Gears of War: A game that is set in a post-apocalyptic world where humans fight against a race of creatures called Locusts. The game has a cover-based combat system and features chainsaw bayonets, grenades and other weapons.
      • -
      • Tom Clancy's Rainbow Six Vegas: A game that is set in Las Vegas where a team of counter-terrorism operatives must stop a terrorist plot. The game has a realistic and tactical gameplay and features various modes, maps and weapons.
      • -
      • Battlefield: Bad Company: A game that follows a group of renegade soldiers who go rogue and pursue a fortune in gold. The game has a destructible environment system and features various vehicles, weapons and gadgets.
      • -
      -

      Conclusion

      -

      Conflict Denied Ops is a shooter game that offers a fun and exciting co-op gameplay and a destructible environment. The game can be downloaded and installed in a highly compressed format that reduces the file size to 1.5 GB without affecting the quality or performance. The game can be played on Windows, PlayStation 3 and Xbox 360 and has 10 missions that take place in different locations around the world. The game also has a variety of weapons and abilities that can be upgraded using intel points. The game has received mixed reviews from critics and players and has some alternatives that have similar features or themes. If you want to play Conflict Denied Ops highly compressed, you can use this link to download it safely and easily: Download Conflict Denied Ops Highly Compressed. We hope you enjoy this game as much as we did.

      679dcb208e
      -
      -
      \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/DaVinci Resolve 18 Features Limitations and Alternatives.md b/spaces/tialenAdioni/chat-gpt-api/logs/DaVinci Resolve 18 Features Limitations and Alternatives.md deleted file mode 100644 index 58b3bc50d9459ea50f6716622e69ada11f5d9e01..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/DaVinci Resolve 18 Features Limitations and Alternatives.md +++ /dev/null @@ -1,49 +0,0 @@ -
      -

      How to Download and Install DaVinci Resolve 18 for Free

      -

      DaVinci Resolve 18 is professional video editing software that offers a complete solution for editing, color grading, visual effects, audio post-production and more. It is used by many filmmakers, broadcasters and content creators around the world. DaVinci Resolve 18 is available in two versions: a paid version that includes all the features and a free version that has some limitations but still offers a lot of functionality. In this article, we will show you how to download and install DaVinci Resolve 18 for free and what features it offers.

      -

      davinci resolve 18 descargar


      Download File » https://urlcod.com/2uK1QL



      -

      How to download DaVinci Resolve 18 for free

      -

      To download DaVinci Resolve 18 for free, you need to have a computer that meets the minimum system requirements. The minimum system requirements are:

      -
        -
      • Windows 10 64-bit, macOS 10.15.7 or later, or Linux CentOS 7.3 or later
      • -
      • 16 GB of RAM or more
      • -
      • 4 GB of VRAM or more
      • -
      • A dedicated GPU that supports OpenCL 1.2 or CUDA 11
      • -
      • A monitor that supports at least 1366x768 resolution
      • -
      • A fast internet connection
      • -
      -

      If your computer meets the requirements, follow these steps:

      -
        -
      1. Go to https://www.blackmagicdesign.com/products/davinciresolve/.
      2. -
      3. Click on the "Download" button and choose your operating system.
      4. -
      5. Fill out the form with your name, email address and country and click on "Register and Download".
      6. -
      7. Save the file to your computer and run it to install DaVinci Resolve 18.
      8. -
      9. Follow the instructions on the screen and agree to the terms and conditions.
      10. -
      11. Launch DaVinci Resolve 18 and enjoy using it for free.
      12. -
      -

      What features does DaVinci Resolve 18 offer?

      -

      DaVinci Resolve 18 offers many features that can help you create stunning videos with ease. Some of the features are:

      -

      -
        -
      • A new cut page that lets you edit videos faster and more efficiently with smart tools and shortcuts.
      • -
      • A new neural engine that uses artificial intelligence to enhance your videos with features such as face refinement, speed warp, object removal and more.
      • -
      • A new color page that lets you adjust colors with advanced tools such as color warper, magic mask, HDR grading and more.
      • -
      • A new fusion page that lets you add visual effects with node-based compositing and 3D tools.
      • -
      • A new fairlight page that lets you edit audio with professional tools such as ADR, Foley, sound library and more.
      • -
      • A new media page that lets you manage your media files with features such as smart bins, metadata editing and transcoding.
      • -
      • A new deliver page that lets you export your videos with presets for various platforms and formats.
      • -
      -

      What are the limitations of DaVinci Resolve 18 free version?

      -

      DaVinci Resolve 18 free version is not a full version of the software. It has some limitations that you should be aware of before using it. Some of the limitations are:

      -
        -
      • It does not support collaboration features that let you work with other users on the same project.
      • -
      • It does not support stereoscopic 3D editing and grading.
      • -
      • It does not support advanced noise reduction and motion blur effects.
      • -
      • It does not support HDR Dolby Vision and HDR10+ grading.
      • -
      • It does not support remote rendering and encoding.
      • -
      • It does not include technical support or updates from Blackmagic Design.
      • -
      -

      Conclusion

      -

      DaVinci Resolve 18 is powerful video editing software that offers a complete solution for editing, color grading, visual effects, audio post-production and more. It is available in two versions: a paid version that includes all the features and a free version that has some limitations but still offers a lot of functionality. If you want to download DaVinci Resolve 18 for free, you can follow the steps above and start editing your videos right away.

      ddb901b051
      -
      -
      \ No newline at end of file diff --git a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download Office 365 with Product Key and Install it on Your PC or Mac.md b/spaces/tialenAdioni/chat-gpt-api/logs/How to Download Office 365 with Product Key and Install it on Your PC or Mac.md deleted file mode 100644 index 5c8133a2746252d23d252fbac9ae37c5e7c401cf..0000000000000000000000000000000000000000 --- a/spaces/tialenAdioni/chat-gpt-api/logs/How to Download Office 365 with Product Key and Install it on Your PC or Mac.md +++ /dev/null @@ -1,23 +0,0 @@ -
      -

      How to Download Office 365 with Product Key

      -

      If you have purchased Office 365 or Office 2021 and want to download it to your PC or Mac, you will need a product key to activate it. A product key is a 25-character code that comes with your Microsoft 365 purchase. You can find it in your order confirmation email, on a sticker on your physical copy of Office, or on a card that came with your digital download.
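
      -

      As a quick sanity check before typing the key in, you can verify that what you copied really looks like a 25-character code. The small Python sketch below assumes the usual layout of five groups of five letters and digits separated by hyphens; that grouping is an assumption made here for illustration, not an official validation rule.

```python
import re

# Assumed layout: 5 groups of 5 letters/digits, e.g. XXXXX-XXXXX-XXXXX-XXXXX-XXXXX.
KEY_PATTERN = re.compile(r"^([A-Z0-9]{5}-){4}[A-Z0-9]{5}$")

def looks_like_product_key(key: str) -> bool:
    """Cheap format check: 25 alphanumeric characters in 5 hyphenated groups."""
    return bool(KEY_PATTERN.match(key.strip().upper()))

print(looks_like_product_key("ABCDE-12345-FGHIJ-67890-KLMNO"))  # True
print(looks_like_product_key("not-a-key"))                      # False
```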

      -

      In this article, we will show you how to download Office 365 with product key in a few simple steps.

      -

      how to download office 365 with product key


      Download Zip ★★★★★ https://urlcod.com/2uK9ti



      -

      Step 1: Sign in to your Microsoft account

      -

      Go to www.office.com and sign in with the Microsoft account that you used to buy Office. This can be a personal Microsoft account, a work or school account, or an account associated with Microsoft 365 operated by 21 Vianet or Microsoft 365 Germany.

      -

      If you don't have a Microsoft account, you can create one for free using your email address.

      -

      Step 2: Install Office 365

      -

      After signing in, you will see the Microsoft 365 home page. From here, select Install Office. Depending on your version of Office, you may see Install Office or Install Office >.

      -

      You will be taken to the download page where you can choose the language and version of Office that you want to install. The 64-bit version is installed by default unless Microsoft 365 detects that you already have a 32-bit version of Office installed. In that case, the 32-bit version will be installed instead.

      -

      To change from a 32-bit version to a 64-bit version or vice versa, you need to uninstall Office first and then reinstall it. You can also choose to install specific apps such as Visio or Project if you have them.

      -

      Once you have selected your options, click Install. This will start the download of the Office setup file to your device.

      -

      Step 3: Activate Office with your product key

      -

      After the download is complete, open the setup file and follow the prompts to install Office on your device. You may need to enter your administrator password or confirm your choice if prompted.

      -

      During the installation process, you will be asked to enter your product key. Type or paste the 25-character code that you received when you purchased Office and click Next.

      -

      -

      You may also need to sign in with your Microsoft account again to link it with your product key. This will allow you to manage your subscription and access your Office apps across multiple devices.

      -

      Once you have entered your product key and signed in, your installation will be complete and you can start using Office on your device.

      -

      Conclusion

      -

      In this article, we showed you how to download Office 365 with product key in three easy steps. You just need to sign in to your Microsoft account, install Office from the download page, and activate it with your product key. Now you can enjoy all the benefits of Microsoft 365 on your PC or Mac.

      ddb901b051
      -
      -
      \ No newline at end of file diff --git a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CarX Highway Racing Hack APK The Best Way to Experience the Game on Your Android Phone.md b/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CarX Highway Racing Hack APK The Best Way to Experience the Game on Your Android Phone.md deleted file mode 100644 index e52913cc604e0cbfc30cfabb04ef28d0431f34f9..0000000000000000000000000000000000000000 --- a/spaces/ticomspire/turkey-syria-earthquake-tweets/logs/CarX Highway Racing Hack APK The Best Way to Experience the Game on Your Android Phone.md +++ /dev/null @@ -1,92 +0,0 @@ -
      -

      How to Hack CarX Highway Racing APK

      -

      CarX Highway Racing APK is an Android game that lets you race on busy highways with realistic physics and graphics. You can choose from different cars, modes, and locations to enjoy the thrill of speed and adrenaline. But what if you want to get more coins, gold, fuel, or score in the game? What if you want to unlock all the cars and tracks without spending real money? What if you want to have unlimited health and never lose a race?

      -

      hack carx highway racing apk


      Download Zip »»» https://bltlly.com/2uOmN9



      -

      In this article, we will show you how to hack CarX Highway Racing APK using a powerful tool called Game Guardian. Game Guardian is an app that allows you to modify the values of any game or app on your Android device. You can use it to change your coins, gold, fuel, score, and more in CarX Highway Racing APK. You can also use it to hack other games and apps on your Android device.

      -

      But before we start hacking CarX Highway Racing APK, let's talk about some benefits and risks of hacking games. Hacking games can be fun and rewarding, as you can get more resources, unlock more features, and enjoy more gameplay options. However, hacking games can also be risky and unethical, as you can get banned from online servers, violate the terms of service of the game developers, and ruin the game experience for yourself and others. Therefore, we advise you to hack games only for educational purposes and at your own risk.

      -

      Preparing Your Android Device

      -

      To hack CarX Highway Racing APK with Game Guardian, you will need a rooted Android device. Rooting is a process that gives you full control over your Android device's system settings and files. Rooting can also void your warranty and expose your device to security risks. Therefore, we recommend you to backup your data before rooting your device.

      -

      Downloading Game Guardian

      -

Game Guardian works by letting you find and rewrite the values a running game keeps in memory, such as your coins, gold, fuel, or score in CarX Highway Racing APK. The same approach works on many other games and apps on your Android device.

      -

      To download Game Guardian, you need to visit its official website or its forum and download the latest APK file. You can also find the APK file on other sources, but make sure they are trustworthy and virus-free. Do not download Game Guardian from the Google Play Store, as it is not available there.

      -

      Once you have downloaded the APK file, you need to install it on your Android device. To do that, you need to enable downloads from unknown sources in your device settings. This will allow you to install apps that are not from the Google Play Store. To enable downloads from unknown sources, follow these steps:

      -

      -
        -
      • Go to Settings > Security > Unknown sources and toggle it on.
      • -
      • A warning message will pop up. Tap OK to confirm.
      • -
      • Now you can install Game Guardian by tapping on the APK file and following the instructions.
      • -
      -

After installing Game Guardian, you need to grant it root access. This allows Game Guardian to read and modify the memory of other running apps, which is what makes the value edits below possible. To grant root access to Game Guardian, follow these steps:

      -
        -
      • Open Game Guardian and tap Start. A floating icon will appear on your screen.
      • -
      • Tap on the floating icon and select Grant root permission.
      • -
      • A pop-up window will ask you to grant root access to Game Guardian. Tap Allow or Grant.
      • -
      • Now Game Guardian has root access and is ready to use.
      • -

        Using Game Guardian to Hack CarX Highway Racing APK

        -

        Now that you have installed and granted root access to Game Guardian, you can use it to hack CarX Highway Racing APK. To do that, you need to follow these steps:

        -

        Scanning for Values

        -

The first step is to scan for the values that you want to hack in CarX Highway Racing APK, such as coins, gold, fuel, or score. To scan for values, follow these steps (a short conceptual sketch of this search-and-refine idea appears after the list):

        -
          -
        • Open CarX Highway Racing APK and start playing. Note down the values that you want to hack, such as your coins, gold, fuel, score, etc.
        • -
        • Open Game Guardian by tapping on the floating icon and select CarX Highway Racing APK from the list of processes.
        • -
        • Tap on the search icon and enter the value that you want to hack, such as your coins. Select the data type, such as Dword or Float. Tap on New Scan.
        • -
        • Game Guardian will scan for the value that you entered and show you the results. If there are too many results, you need to refine your search by changing the value in the game and scanning again.
        • -
        • For example, if you want to hack your coins, you can spend some coins in the game and scan for the new value. Repeat this process until you have a few results left.
        • -
        • Select the results that match your value and add them to your list by tapping on them.
        • -
        -
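To make the logic of these steps concrete, here is a minimal Python sketch of the same filter-and-refine search, written only for illustration. The memory snapshots and the idea of reading them as address/value pairs are assumptions for the example; Game Guardian does the real memory access on the device, and this is not its actual code or API.

```python
# Illustrative only: the "New Scan, change the value in-game, Refine" idea
# expressed as plain Python over assumed {address: value} snapshots.

def new_scan(snapshot: dict, target: int) -> set:
    """First scan: keep every address whose current value equals the target."""
    return {addr for addr, value in snapshot.items() if value == target}

def refine(snapshot: dict, candidates: set, new_target: int) -> set:
    """Refine: after the value changed in the game, keep only addresses that now match."""
    return {addr for addr in candidates if snapshot.get(addr) == new_target}

# Example: you start with 500 coins, spend some, and now have 350.
snapshot_before = {0x1000: 500, 0x1004: 500, 0x2000: 77}   # assumed memory snapshot
candidates = new_scan(snapshot_before, 500)                # {0x1000, 0x1004} both look right

snapshot_after = {0x1000: 350, 0x1004: 500, 0x2000: 77}    # after spending coins in-game
candidates = refine(snapshot_after, candidates, 350)       # only 0x1000 survives: the real coin value
```

Each extra refine pass throws away addresses that merely happened to hold the same number, which is why changing the value in the game between scans matters.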

        Modifying Values

        -

        The next step is to modify the values that you scanned for in CarX Highway Racing APK, such as increasing or decreasing them. To modify values, follow these steps:

        -
          -
        • Open your list of results by tapping on the list icon in Game Guardian.
        • -
        • Select the values that you want to modify and tap on Edit.
        • -
        • Enter the new value that you want, such as 999999 for coins. Tap on Yes.
        • -
        • The values will be changed in the game. You can check them by switching back to CarX Highway Racing APK.
        • -

          Freezing Values

          -

The final step is to freeze the values that you modified in CarX Highway Racing APK, so that the game cannot change them back while you play. To freeze values, follow these steps (a rough sketch of what freezing amounts to appears after the list):

          -
            -
          • Open your list of results by tapping on the list icon in Game Guardian.
          • -
          • Select the values that you want to freeze and tap on the lock icon.
          • -
          • The values will be frozen in the game. You can check them by switching back to CarX Highway Racing APK.
          • -
          • To unfreeze the values, tap on the lock icon again.
          • -
          -
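As a rough mental model, freezing is nothing more than writing your chosen number back to the address over and over, so the game's own updates are immediately overwritten. The sketch below uses a plain dictionary and a stand-in write_memory() helper purely for illustration; it is not how Game Guardian is implemented.

```python
import time

memory = {0x1000: 350}                # assumed address found with the earlier scans

def write_memory(address, value):
    memory[address] = value           # stand-in for a real memory write on the device

def freeze(address, value, duration=2.0, interval=0.1):
    """Keep rewriting `value` so any in-game change is immediately overwritten."""
    deadline = time.time() + duration
    while time.time() < deadline:
        write_memory(address, value)
        time.sleep(interval)          # rewrite roughly ten times per second

freeze(0x1000, 999999)                # the value stays pinned at 999999 while the loop runs
```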

          Hacking Unknown or Encrypted Values

          -

          Sometimes, you may not be able to find the values that you want to hack in CarX Highway Racing APK, because they are unknown or encrypted. In that case, you can use Game Guardian's advanced features to hack them. Here are some methods that you can try:

          -
            -
• Fuzzy search: This method lets you find values that you cannot see or don't know exactly, by tracking how they change between scans. For example, if you want to hack your health but don't know the exact number, you can use fuzzy search to narrow it down (a small sketch of this refine-by-direction idea appears after this list). To use fuzzy search, follow these steps:
          • -
              -
            • Open Game Guardian and select CarX Highway Racing APK from the list of processes.
            • -
            • Tap on the search icon and select Fuzzy from the data type menu.
            • -
            • Tap on New Scan and wait for Game Guardian to scan for all possible values.
            • -
            • Change the value in the game by losing or gaining some health.
            • -
            • Tap on Refine and select either Increased, Decreased, or Changed from the menu.
            • -
            • Game Guardian will scan for the values that match your criteria and show you the results. Repeat this process until you have a few results left.
            • -
            • Select the results that match your value and add them to your list by tapping on them.
            • -
            -
          • Range of float: This method allows you to search for values that are floating-point numbers, such as decimals. For example, if you want to hack your speed, but you don't know the exact value, you can use range of float to find it. To use range of float, follow these steps:
          • -
              -
            • Open Game Guardian and select CarX Highway Racing APK from the list of processes.
            • -
            • Tap on the search icon and select Float from the data type menu.
            • -
            • Enter a range of values that could be your speed, such as 0.1~10. Tap on New Scan.
            • -
            • Game Guardian will scan for the values that are within your range and show you the results. If there are too many results, you need to refine your search by changing the range or changing the value in the game and scanning again.
            • -
            • Select the results that match your value and add them to your list by tapping on them.
            • -
            -
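Here is the small sketch promised above. A fuzzy refine step does not match an exact number; it compares two snapshots and keeps only the addresses whose values moved the way you observed in the game (increased, decreased, or simply changed). The snapshots and helper below are made-up illustrations, not Game Guardian internals.

```python
# Illustrative fuzzy refine: keep addresses whose values changed in the expected direction.

def fuzzy_refine(before: dict, after: dict, direction: str) -> set:
    keep = set()
    for addr, old in before.items():
        new = after.get(addr)
        if new is None:
            continue
        if direction == "increased" and new > old:
            keep.add(addr)
        elif direction == "decreased" and new < old:
            keep.add(addr)
        elif direction == "changed" and new != old:
            keep.add(addr)
    return keep

snapshot_1 = {0x10: 100.0, 0x14: 3.5, 0x18: 42.0}   # the health value is hiding somewhere in here
snapshot_2 = {0x10: 80.0, 0x14: 3.5, 0x18: 42.0}    # after taking damage in the game
candidates = fuzzy_refine(snapshot_1, snapshot_2, "decreased")   # only 0x10 is kept
```

The "range of float" search works the same way, except that the first pass keeps every address whose value falls inside the range you typed instead of keeping everything.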

            Conclusion

            -

In this article, we have shown you how to hack CarX Highway Racing APK using Game Guardian. You have learned how to prepare a rooted Android device, download and install Game Guardian, scan for values, modify values, freeze values, and handle unknown or encrypted values. You have also learned some benefits and risks of hacking games.

            -

            Now you can enjoy CarX Highway Racing APK with unlimited coins, gold, fuel, score, and more. You can also use Game Guardian to hack other games and apps on your Android device. However, remember to hack games only for educational purposes and at your own risk. Do not use Game Guardian to cheat in online games or violate the terms of service of the game developers. Also, be careful when rooting your device or downloading apps from unknown sources.

            -

            We hope you have found this article helpful and informative. If you have any questions or feedback, please let us know in the comments below. Happy hacking!

            -

            FAQs

            -

            Here are some frequently asked questions and answers about hacking CarX Highway Racing APK or Game Guardian.

            -
              -
            • Q: Is Game Guardian safe to use?
            • -
            • A: Game Guardian is safe to use as long as you download it from its official website or its forum. However, you should be careful when granting root access to Game Guardian or any other app, as it can potentially harm your device or data. You should also scan the apps that you download from unknown sources with an antivirus app before installing them.
            • -
            • Q: Is Game Guardian legal to use?
            • -
            • A: Game Guardian is legal to use as long as you use it for educational purposes and not for cheating in online games or violating the terms of service of the game developers. However, different countries may have different laws regarding hacking games or apps, so you should check your local laws before using Game Guardian.
            • -
            • Q: Can I hack CarX Highway Racing APK without rooting my device?
            • -
• A: No, you cannot hack CarX Highway Racing APK with this method without rooting your device. Root access is what lets Game Guardian read and modify the memory of other running apps. Without it, the steps in this article will not work.
            • -
            • Q: Can I hack CarX Highway Racing APK on iOS devices?
            • -
            • A: No, you cannot hack CarX Highway Racing APK on iOS devices. Game Guardian is only available for Android devices. There is no iOS version of Game Guardian or any other hacking tool that can hack CarX Highway Racing APK.
            • -
            • Q: Can I get banned from CarX Highway Racing APK for hacking it?
            • -
            • A: Yes, you can get banned from CarX Highway Racing APK for hacking it. The game developers may detect your hacking activities and ban your account or device from accessing their servers. Therefore, you should hack CarX Highway Racing APK at your own risk and avoid using it in online modes.
            • -

            401be4b1e0
            -
            -
            \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Edius 6.2 Free Download Full Version.md b/spaces/tioseFevbu/cartoon-converter/scripts/Edius 6.2 Free Download Full Version.md deleted file mode 100644 index d03976b30dbe29fa1f93a95cbc04b243006a9a74..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Edius 6.2 Free Download Full Version.md +++ /dev/null @@ -1,95 +0,0 @@ - -
| … | - Provides real-time editing and fast rendering<br>- Offers advanced tools and effects<br>- Compatible with Windows XP, Vista, and 7 |
| H2: How to Download Edius 6.2 for Free | - Step 1: Visit the official website of Edius or a trusted third-party site<br>- Step 2: Choose the version and options you want to download<br>- Step 3: Enter your valid Edius license number or request a free 30-day trial version<br>- Step 4: Follow the instructions to install and activate Edius 6.2 on your computer |
| H2: How to Use Edius 6.2 for Video Editing | - Step 1: Launch Edius 6.2 and create a new project<br>- Step 2: Import your media files and add them to the timeline<br>- Step 3: Edit your clips using the trim, split, crop, rotate, and other tools<br>- Step 4: Apply transitions, filters, titles, and other effects to enhance your video<br>- Step 5: Export your video in your desired format and quality |
| H2: Tips and Tricks for Edius 6.2 | - How to optimize your system performance and avoid crashes<br>- How to customize your workspace and keyboard shortcuts<br>- How to use the multicam editing and proxy editing features<br>- How to use the audio mixer and audio filters<br>- How to use the chroma key and color correction tools |
| H2: Conclusion | Summary of the main points and benefits of Edius 6.2 |
| H3: FAQs | - What are the system requirements for Edius 6.2?<br>- Is Edius 6.2 compatible with Windows 10?<br>- How can I update Edius 6.2 to the latest version?<br>- What are the differences between Edius Pro and Edius Workgroup?<br>- How can I get support for Edius 6.2? |

Table 2: Article with HTML formatting

            Edius 6.2 Free Download Full Version: A Comprehensive Guide

            -

            If you are looking for a powerful and versatile video editing software that can handle any format and resolution, you might want to consider Edius 6.2. Edius is a professional video editing software developed by Grass Valley, a leading company in the field of broadcast and media technology. Edius is used by many video editors around the world for various purposes, such as weddings, events, documentaries, news, sports, music videos, and more.

            -

            In this article, we will show you how to download Edius 6.2 for free, how to use it for video editing, and some tips and tricks to make the most of it. We will also answer some frequently asked questions about Edius 6.2 at the end of this article.

            -

            Edius 6.2 Free Download Full Version


            Download Filehttps://urlcod.com/2uHyiO



            -

            Features of Edius 6.2

            -

            Edius 6.2 is one of the most popular versions of Edius software, as it offers many features that make it stand out from other video editing software. Here are some of the main features of Edius 6.2:

            -
              -
            • Supports various formats and resolutions: Edius 6.2 can edit any type of video file, such as AVCHD, AVI, DivX, DV, HDV, MP4, MPG, Quicktime, WMV, XDCAM, TS, etc. It can also handle different resolutions, from SD to HD to 4K digital cinema resolution.
            • -
            • Provides real-time editing and fast rendering: Edius 6.2 can edit multiple layers of video and audio in real-time without rendering or transcoding. It also has a fast rendering engine that can export your video in minutes.
            • -
• Offers advanced tools and effects: Edius 6.2 comes with a wide range of editing tools and effects, such as transitions, filters, titles, chroma key, and color correction, which you can apply from the effect palette window or from the toolbar. You can drag and drop them onto your clips or onto the information palette to adjust their parameters.
            • -
            • Step 5: Export your video in your desired format and quality: You need to export your video once you are done with editing. You can do this by clicking on the "Print to File" button on the toolbar or by choosing the "File" menu and then "Export". You can choose the format, quality, and destination of your video file. You can also preview your video before exporting it.
            • - -

              That's it! You have just created a video using Edius 6.2. You can now share it with your friends, family, or clients.

              -

              Tips and Tricks for Edius 6.2

              -

              Edius 6.2 is a powerful and versatile video editing software, but it also has some hidden features and tricks that can make your editing experience easier and more efficient. Here are some tips and tricks for Edius 6.2 that you might find useful:

              -
                -
              • How to optimize your system performance and avoid crashes: Edius 6.2 can consume a lot of memory and CPU resources, especially when dealing with high-resolution videos and multiple effects. To avoid lagging or crashing, you should optimize your system performance by doing the following:
                  -
                • Close any unnecessary programs or background processes that are running on your computer.
                • -
                • Defragment your hard drive regularly to improve its speed and efficiency.
                • -
                • Update your drivers and software to the latest versions.
                • -
                • Clean up your registry and disk space using a reliable cleaner tool.
                • -
                • Increase your RAM and disk space if possible.
                • -
                -
              • -
              • How to customize your workspace and keyboard shortcuts: Edius 6.2 allows you to customize your workspace and keyboard shortcuts according to your preferences and needs. You can do this by doing the following:
                  -
                • To customize your workspace, you can drag and drop the windows, palettes, and toolbars to arrange them in any way you like. You can also save and load different layouts for different projects.
                • -
                • To customize your keyboard shortcuts, you can go to the "Settings" menu and then "User Settings". You can then choose the "Keyboard" tab and assign any key or combination of keys to any function or command.
                • -
                -
              • -
              • How to use the multicam editing and proxy editing features: Edius 6.2 has some advanced features that can help you edit multiple camera angles and large files more easily. These features are multicam editing and proxy editing. You can use them by doing the following:
                  -
                • To use the multicam editing feature, you need to sync your clips from different cameras using the timecode, audio waveform, or in/out points. You can then create a multicam sequence and switch between the angles using the multicam monitor window or the keyboard shortcuts.
                • -
                • To use the proxy editing feature, you need to create low-resolution copies of your high-resolution clips using the proxy mode option. You can then edit these proxies faster and smoother than the original clips. When you export your video, Edius 6.2 will automatically replace the proxies with the original clips.
                • -
                -
              • -
              • How to use the audio mixer and audio filters: Edius 6.2 has a built-in audio mixer and audio filters that can help you adjust and enhance the sound of your video. You can use them by doing the following:
                  -
                • To use the audio mixer, you need to open it from the "View" menu or by clicking on the "Audio Mixer" button on the toolbar. You can then adjust the volume, pan, mute, solo, and other parameters of each audio track using the faders, knobs, buttons, and meters.
                • -
                • To use the audio filters, you need to drag and drop them from the effect palette window or from the toolbar onto your audio clips or onto the information palette. You can then adjust their parameters using the sliders, checkboxes, buttons, and other controls.
                • -
                -
              • -
              • How to use the chroma key and color correction tools: Edius 6.2 has some powerful tools that can help you change the background and color of your video. These tools are chroma key and color correction. You can use them by doing the following:
                  -
                • To use the chroma key tool, you need to apply it from the effect palette window or from the toolbar onto your clip that has a green or blue background. You can then adjust the key color, tolerance, edge, and other parameters to remove the background and replace it with another image or video.
                • -
                • To use the color correction tool, you need to apply it from the effect palette window or from the toolbar onto your clip that has a poor or undesired color. You can then adjust the hue, saturation, brightness, contrast, and other parameters to improve or change the color of your video.
                • -
                -
              • -
              -

These are just some of the tips and tricks for Edius 6.2 that can help you edit your video faster and better. If you want to learn more about Edius 6.2, you can check out the user manual or the online help. A generic sketch of the chroma-key idea follows.
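The chroma key tip is easier to picture with a tiny, generic example of what any keyer does: pick a key color, build a mask from the pixels that sit within some tolerance of that color, and pull those pixels from the new background instead. The NumPy sketch below is purely illustrative and is in no way Edius's implementation; the key color and tolerance values are arbitrary assumptions.

```python
import numpy as np

def chroma_key(frame: np.ndarray, background: np.ndarray,
               key_rgb=(0, 255, 0), tolerance=80.0) -> np.ndarray:
    """Replace pixels close to `key_rgb` (e.g. a green screen) with `background`."""
    key = np.array(key_rgb, dtype=np.float32)
    distance = np.linalg.norm(frame.astype(np.float32) - key, axis=-1)
    mask = distance < tolerance           # True where the pixel is "green enough"
    result = frame.copy()
    result[mask] = background[mask]       # take those pixels from the background instead
    return result

# Toy 2x2 "frames": the green-ish pixels get swapped for white, the rest stay.
frame = np.array([[[0, 255, 0], [200, 30, 40]],
                  [[10, 240, 15], [90, 90, 90]]], dtype=np.uint8)
background = np.full((2, 2, 3), 255, dtype=np.uint8)
keyed = chroma_key(frame, background)
```

Color correction belongs to the same family of per-pixel math (offsets and gains on the color channels); in Edius you reach for the built-in filters rather than writing any of this yourself.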

              -

              Conclusion

              -

              Edius 6.2 is a powerful and versatile video editing software that can handle any format and resolution, provide real-time editing and fast rendering, offer advanced tools and effects, and be compatible with Windows XP, Vista, and 7. It is a great choice for video editors who want to create professional and high-quality videos for various purposes.

              -

              In this article, we have shown you how to download Edius 6.2 for free, how to use it for video editing, and some tips and tricks to make the most of it. We hope you have found this article helpful and informative. If you have any questions or feedback, please feel free to leave a comment below.

              -

              -

              FAQs

              -

              Here are some frequently asked questions about Edius 6.2 that you might find useful:

              -
                -
              • What are the system requirements for Edius 6.2?
                -The minimum system requirements for Edius 6.2 are as follows:
                  -
                • Operating system: Windows XP (SP3), Vista (SP2), or 7 (SP1)
                • -
                • CPU: Intel Core 2 Duo 3 GHz or faster
                • -
                • RAM: 1 GB or more
                • -
                • HDD: 6 GB or more of free space
                • -
                • Graphics card: 256 MB or more of VRAM
                • -
                • Sound card: DirectSound compatible
                • -
                • Optical drive: DVD-ROM drive
                • -
                • Internet connection: Required for activation and updates
                • -
                -The recommended system requirements for Edius 6.2 are as follows:
                  -
                • Operating system: Windows 7 (SP1) 64-bit
                • -
                • CPU: Intel Core i7 or faster
                • -
                • RAM: 4 GB or more
                • -
                • HDD: 10 GB or more of free space
                • -
                • Graphics card: 512 MB or more of VRAM
                • -
                • Sound card: DirectSound compatible with ASIO support
                • -
                • Optical drive: Blu-ray disc writer
                • -
                • Internet connection: Required for activation and updates
                • -
              • -
              • Is Edius 6.2 compatible with Windows 10?
-No, Edius 6.2 is not compatible with Windows 10. The latest version of Edius that is compatible with Windows 10 is Edius X. You can upgrade your Edius license to Edius X from the official website of Edius.
              • -
              • How can I update Edius 6.2 to the latest version?
-You can update Edius 6.2 to the latest version by downloading and installing the update file from the official website of Edius. The latest version of Edius 6.2 is Edius 6.55. You need to have a valid Edius license number to update your software.
              • -
              • What are the differences between Edius Pro and Edius Workgroup?
                -Edius Pro and Edius Workgroup are two different versions of Edius software that have different features and functions. The main differences between them are as follows:
                  -
                • Edius Pro is designed for individual users who work on their own projects.
                • -
                • Edius Workgroup is designed for team users who work on collaborative projects.
                • -
                • Edius Pro requires online activation and registration.
                • -
                • Edius Workgroup does not require online activation and registration.
                • -
                • Edius Pro has a limit of four video tracks and two audio tracks per sequence.
                • -
• Edius Workgroup has no limit on the number of video and audio tracks per sequence.
                • -
                • Edius Pro has some features that Edius Workgroup does not have, such as proxy editing, multicam editing, and 3D editing.
                • -
                • Edius Workgroup has some features that Edius Pro does not have, such as background rendering, watch folder, and network support.
                • -
              • -
              • How can I get support for Edius 6.2?
-You can get support for Edius 6.2 by visiting the official website of Edius or by contacting the Grass Valley customer service. You can also find helpful resources and tutorials on the Edius forum, the Edius blog, and the Edius YouTube channel.
              • -

              b2dd77e56b
              -
              -
              \ No newline at end of file diff --git a/spaces/tioseFevbu/cartoon-converter/scripts/Microsoft Office 2013 Profession ((BETTER)).md b/spaces/tioseFevbu/cartoon-converter/scripts/Microsoft Office 2013 Profession ((BETTER)).md deleted file mode 100644 index d354a35e48dd005ff0d1117edf58799fae904d7e..0000000000000000000000000000000000000000 --- a/spaces/tioseFevbu/cartoon-converter/scripts/Microsoft Office 2013 Profession ((BETTER)).md +++ /dev/null @@ -1,31 +0,0 @@ -
              -

              Microsoft Office 2013 Professional: A Complete Guide

              -

              Microsoft Office 2013 Professional is a suite of productivity applications that includes Word, Excel, PowerPoint, Outlook, Access, and Publisher. It is designed for professionals who need to create, edit, and share documents, spreadsheets, presentations, emails, databases, and publications. In this article, we will provide a complete guide on how to use Microsoft Office 2013 Professional, including its features, benefits, and tips.

              -

              Microsoft Office 2013 Profession


              Download File 🆓 https://urlcod.com/2uHylB



              -

              Features of Microsoft Office 2013 Professional

              -

              Microsoft Office 2013 Professional offers many features that can help you work more efficiently and effectively. Some of the main features are:

              -
                -
              • Cloud integration: You can save your files to OneDrive or SharePoint and access them from any device. You can also collaborate with others in real time using co-authoring and commenting tools.
              • -
              • Touch-optimized interface: You can use touch gestures to navigate and interact with your documents on touch-enabled devices. You can also use a stylus or a mouse and keyboard if you prefer.
              • -
              • New templates and design tools: You can choose from a variety of professional-looking templates and themes to create stunning documents. You can also use new design tools such as alignment guides, color picker, and SmartArt graphics to enhance your layout and visuals.
              • -
              • Improved performance and compatibility: You can open and edit PDF files directly in Word, Excel, and PowerPoint. You can also work faster with improved performance and stability. You can also open and save files in different formats such as ODF, CSV, and XML.
              • -
              -

              Benefits of Microsoft Office 2013 Professional

              -

              Microsoft Office 2013 Professional can help you achieve your goals and improve your productivity. Some of the benefits are:

              -
                -
              • Boost your productivity: You can work more efficiently with the familiar and intuitive interface of Microsoft Office. You can also use the built-in help and online resources to learn new skills and tips.
              • -
              • Enhance your creativity: You can express your ideas and vision with the powerful and flexible tools of Microsoft Office. You can also customize your documents to suit your needs and preferences.
              • -
              • Increase your collaboration: You can share your files and work with others online or offline using Microsoft Office. You can also communicate and stay connected with your colleagues and clients using Outlook and Skype.
              • -
              • Secure your data: You can protect your files and information with the advanced security features of Microsoft Office. You can also control who can access and edit your files using permissions and encryption.
              • -
              -

              Tips for Using Microsoft Office 2013 Professional

              -

              To make the most of Microsoft Office 2013 Professional, here are some tips that you can follow:

              -

              -
                -
              • Update your software: To get the latest features, improvements, and fixes, you should update your software regularly. You can check for updates manually or enable automatic updates in the settings.
              • -
              • Use keyboard shortcuts: To save time and effort, you should learn and use keyboard shortcuts for common tasks. You can find a list of keyboard shortcuts for each application in the help menu or online.
              • -
              • Use online services: To access more features and benefits, you should sign in to your Microsoft account and use online services such as OneDrive, SharePoint, Skype, and Office Online. You can also download additional apps and add-ins from the Microsoft Store.
              • -
              • Use feedback tools: To provide feedback or report issues, you should use the feedback tools in each application. You can also join the Microsoft Office Insider program to get early access to new features and updates.
              • -

              cec2833e83
              -
              -
              \ No newline at end of file diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/locations/_sysconfig.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/locations/_sysconfig.py deleted file mode 100644 index 0bbc9283db7fa93f9e2ed75101ea2cde41bc8fb0..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/locations/_sysconfig.py +++ /dev/null @@ -1,218 +0,0 @@ -import logging -import os -import sys -import sysconfig -import typing - -from pip._internal.exceptions import InvalidSchemeCombination, UserInstallationInvalid -from pip._internal.models.scheme import SCHEME_KEYS, Scheme -from pip._internal.utils.virtualenv import running_under_virtualenv - -from .base import change_root, get_major_minor_version, is_osx_framework - -logger = logging.getLogger(__name__) - - -# Notes on _infer_* functions. -# Unfortunately ``get_default_scheme()`` didn't exist before 3.10, so there's no -# way to ask things like "what is the '_prefix' scheme on this platform". These -# functions try to answer that with some heuristics while accounting for ad-hoc -# platforms not covered by CPython's default sysconfig implementation. If the -# ad-hoc implementation does not fully implement sysconfig, we'll fall back to -# a POSIX scheme. - -_AVAILABLE_SCHEMES = set(sysconfig.get_scheme_names()) - -_PREFERRED_SCHEME_API = getattr(sysconfig, "get_preferred_scheme", None) - - -def _should_use_osx_framework_prefix() -> bool: - """Check for Apple's ``osx_framework_library`` scheme. - - Python distributed by Apple's Command Line Tools has this special scheme - that's used when: - - * This is a framework build. - * We are installing into the system prefix. - - This does not account for ``pip install --prefix`` (also means we're not - installing to the system prefix), which should use ``posix_prefix``, but - logic here means ``_infer_prefix()`` outputs ``osx_framework_library``. But - since ``prefix`` is not available for ``sysconfig.get_default_scheme()``, - which is the stdlib replacement for ``_infer_prefix()``, presumably Apple - wouldn't be able to magically switch between ``osx_framework_library`` and - ``posix_prefix``. ``_infer_prefix()`` returning ``osx_framework_library`` - means its behavior is consistent whether we use the stdlib implementation - or our own, and we deal with this special case in ``get_scheme()`` instead. - """ - return ( - "osx_framework_library" in _AVAILABLE_SCHEMES - and not running_under_virtualenv() - and is_osx_framework() - ) - - -def _infer_prefix() -> str: - """Try to find a prefix scheme for the current platform. - - This tries: - - * A special ``osx_framework_library`` for Python distributed by Apple's - Command Line Tools, when not running in a virtual environment. - * Implementation + OS, used by PyPy on Windows (``pypy_nt``). - * Implementation without OS, used by PyPy on POSIX (``pypy``). - * OS + "prefix", used by CPython on POSIX (``posix_prefix``). - * Just the OS name, used by CPython on Windows (``nt``). - - If none of the above works, fall back to ``posix_prefix``. 
- """ - if _PREFERRED_SCHEME_API: - return _PREFERRED_SCHEME_API("prefix") - if _should_use_osx_framework_prefix(): - return "osx_framework_library" - implementation_suffixed = f"{sys.implementation.name}_{os.name}" - if implementation_suffixed in _AVAILABLE_SCHEMES: - return implementation_suffixed - if sys.implementation.name in _AVAILABLE_SCHEMES: - return sys.implementation.name - suffixed = f"{os.name}_prefix" - if suffixed in _AVAILABLE_SCHEMES: - return suffixed - if os.name in _AVAILABLE_SCHEMES: # On Windows, prefx is just called "nt". - return os.name - return "posix_prefix" - - -def _infer_user() -> str: - """Try to find a user scheme for the current platform.""" - if _PREFERRED_SCHEME_API: - return _PREFERRED_SCHEME_API("user") - if is_osx_framework() and not running_under_virtualenv(): - suffixed = "osx_framework_user" - else: - suffixed = f"{os.name}_user" - if suffixed in _AVAILABLE_SCHEMES: - return suffixed - if "posix_user" not in _AVAILABLE_SCHEMES: # User scheme unavailable. - raise UserInstallationInvalid() - return "posix_user" - - -def _infer_home() -> str: - """Try to find a home for the current platform.""" - if _PREFERRED_SCHEME_API: - return _PREFERRED_SCHEME_API("home") - suffixed = f"{os.name}_home" - if suffixed in _AVAILABLE_SCHEMES: - return suffixed - return "posix_home" - - -# Update these keys if the user sets a custom home. -_HOME_KEYS = [ - "installed_base", - "base", - "installed_platbase", - "platbase", - "prefix", - "exec_prefix", -] -if sysconfig.get_config_var("userbase") is not None: - _HOME_KEYS.append("userbase") - - -def get_scheme( - dist_name: str, - user: bool = False, - home: typing.Optional[str] = None, - root: typing.Optional[str] = None, - isolated: bool = False, - prefix: typing.Optional[str] = None, -) -> Scheme: - """ - Get the "scheme" corresponding to the input parameters. - - :param dist_name: the name of the package to retrieve the scheme for, used - in the headers scheme path - :param user: indicates to use the "user" scheme - :param home: indicates to use the "home" scheme - :param root: root under which other directories are re-based - :param isolated: ignored, but kept for distutils compatibility (where - this controls whether the user-site pydistutils.cfg is honored) - :param prefix: indicates to use the "prefix" scheme and provides the - base directory for the same - """ - if user and prefix: - raise InvalidSchemeCombination("--user", "--prefix") - if home and prefix: - raise InvalidSchemeCombination("--home", "--prefix") - - if home is not None: - scheme_name = _infer_home() - elif user: - scheme_name = _infer_user() - else: - scheme_name = _infer_prefix() - - # Special case: When installing into a custom prefix, use posix_prefix - # instead of osx_framework_library. See _should_use_osx_framework_prefix() - # docstring for details. - if prefix is not None and scheme_name == "osx_framework_library": - scheme_name = "posix_prefix" - - if home is not None: - variables = {k: home for k in _HOME_KEYS} - elif prefix is not None: - variables = {k: prefix for k in _HOME_KEYS} - else: - variables = {} - - paths = sysconfig.get_paths(scheme=scheme_name, vars=variables) - - # Logic here is very arbitrary, we're doing it for compatibility, don't ask. - # 1. Pip historically uses a special header path in virtual environments. - # 2. If the distribution name is not known, distutils uses 'UNKNOWN'. 
We - # only do the same when not running in a virtual environment because - # pip's historical header path logic (see point 1) did not do this. - if running_under_virtualenv(): - if user: - base = variables.get("userbase", sys.prefix) - else: - base = variables.get("base", sys.prefix) - python_xy = f"python{get_major_minor_version()}" - paths["include"] = os.path.join(base, "include", "site", python_xy) - elif not dist_name: - dist_name = "UNKNOWN" - - scheme = Scheme( - platlib=paths["platlib"], - purelib=paths["purelib"], - headers=os.path.join(paths["include"], dist_name), - scripts=paths["scripts"], - data=paths["data"], - ) - if root is not None: - for key in SCHEME_KEYS: - value = change_root(root, getattr(scheme, key)) - setattr(scheme, key, value) - return scheme - - -def get_bin_prefix() -> str: - # Forcing to use /usr/local/bin for standard macOS framework installs. - if sys.platform[:6] == "darwin" and sys.prefix[:16] == "/System/Library/": - return "/usr/local/bin" - return sysconfig.get_paths()["scripts"] - - -def get_purelib() -> str: - return sysconfig.get_paths()["purelib"] - - -def get_platlib() -> str: - return sysconfig.get_paths()["platlib"] - - -def get_prefixed_libs(prefix: str) -> typing.Tuple[str, str]: - paths = sysconfig.get_paths(vars={"base": prefix, "platbase": prefix}) - return (paths["purelib"], paths["platlib"]) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/reporter.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/reporter.py deleted file mode 100644 index 6ced5329b814e0fd3c397bb11dc69a82b06eb1f4..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/pip/_internal/resolution/resolvelib/reporter.py +++ /dev/null @@ -1,68 +0,0 @@ -from collections import defaultdict -from logging import getLogger -from typing import Any, DefaultDict - -from pip._vendor.resolvelib.reporters import BaseReporter - -from .base import Candidate, Requirement - -logger = getLogger(__name__) - - -class PipReporter(BaseReporter): - def __init__(self) -> None: - self.backtracks_by_package: DefaultDict[str, int] = defaultdict(int) - - self._messages_at_backtrack = { - 1: ( - "pip is looking at multiple versions of {package_name} to " - "determine which version is compatible with other " - "requirements. This could take a while." - ), - 8: ( - "pip is looking at multiple versions of {package_name} to " - "determine which version is compatible with other " - "requirements. This could take a while." - ), - 13: ( - "This is taking longer than usual. You might need to provide " - "the dependency resolver with stricter constraints to reduce " - "runtime. See https://pip.pypa.io/warnings/backtracking for " - "guidance. If you want to abort this run, press Ctrl + C." 
- ), - } - - def backtracking(self, candidate: Candidate) -> None: - self.backtracks_by_package[candidate.name] += 1 - - count = self.backtracks_by_package[candidate.name] - if count not in self._messages_at_backtrack: - return - - message = self._messages_at_backtrack[count] - logger.info("INFO: %s", message.format(package_name=candidate.name)) - - -class PipDebuggingReporter(BaseReporter): - """A reporter that does an info log for every event it sees.""" - - def starting(self) -> None: - logger.info("Reporter.starting()") - - def starting_round(self, index: int) -> None: - logger.info("Reporter.starting_round(%r)", index) - - def ending_round(self, index: int, state: Any) -> None: - logger.info("Reporter.ending_round(%r, state)", index) - - def ending(self, state: Any) -> None: - logger.info("Reporter.ending(%r)", state) - - def adding_requirement(self, requirement: Requirement, parent: Candidate) -> None: - logger.info("Reporter.adding_requirement(%r, %r)", requirement, parent) - - def backtracking(self, candidate: Candidate) -> None: - logger.info("Reporter.backtracking(%r)", candidate) - - def pinning(self, candidate: Candidate) -> None: - logger.info("Reporter.pinning(%r)", candidate) diff --git a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/archive_util.py b/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/archive_util.py deleted file mode 100644 index 5a70c32c2f84c139c18d88dbc263a2fd8f5b53fc..0000000000000000000000000000000000000000 --- a/spaces/tjburns/ask_marcus_aurelius/.venv/lib/python3.10/site-packages/setuptools/_distutils/archive_util.py +++ /dev/null @@ -1,280 +0,0 @@ -"""distutils.archive_util - -Utility functions for creating archive files (tarballs, zip files, -that sort of thing).""" - -import os -from warnings import warn -import sys - -try: - import zipfile -except ImportError: - zipfile = None - - -from distutils.errors import DistutilsExecError -from distutils.spawn import spawn -from distutils.dir_util import mkpath -from distutils import log - -try: - from pwd import getpwnam -except ImportError: - getpwnam = None - -try: - from grp import getgrnam -except ImportError: - getgrnam = None - - -def _get_gid(name): - """Returns a gid, given a group name.""" - if getgrnam is None or name is None: - return None - try: - result = getgrnam(name) - except KeyError: - result = None - if result is not None: - return result[2] - return None - - -def _get_uid(name): - """Returns an uid, given a user name.""" - if getpwnam is None or name is None: - return None - try: - result = getpwnam(name) - except KeyError: - result = None - if result is not None: - return result[2] - return None - - -def make_tarball( - base_name, base_dir, compress="gzip", verbose=0, dry_run=0, owner=None, group=None -): - """Create a (possibly compressed) tar file from all the files under - 'base_dir'. - - 'compress' must be "gzip" (the default), "bzip2", "xz", "compress", or - None. ("compress" will be deprecated in Python 3.2) - - 'owner' and 'group' can be used to define an owner and a group for the - archive that is being built. If not provided, the current owner and group - will be used. - - The output tar file will be named 'base_dir' + ".tar", possibly plus - the appropriate compression extension (".gz", ".bz2", ".xz" or ".Z"). - - Returns the output filename. 
- """ - tar_compression = { - 'gzip': 'gz', - 'bzip2': 'bz2', - 'xz': 'xz', - None: '', - 'compress': '', - } - compress_ext = {'gzip': '.gz', 'bzip2': '.bz2', 'xz': '.xz', 'compress': '.Z'} - - # flags for compression program, each element of list will be an argument - if compress is not None and compress not in compress_ext.keys(): - raise ValueError( - "bad value for 'compress': must be None, 'gzip', 'bzip2', " - "'xz' or 'compress'" - ) - - archive_name = base_name + '.tar' - if compress != 'compress': - archive_name += compress_ext.get(compress, '') - - mkpath(os.path.dirname(archive_name), dry_run=dry_run) - - # creating the tarball - import tarfile # late import so Python build itself doesn't break - - log.info('Creating tar archive') - - uid = _get_uid(owner) - gid = _get_gid(group) - - def _set_uid_gid(tarinfo): - if gid is not None: - tarinfo.gid = gid - tarinfo.gname = group - if uid is not None: - tarinfo.uid = uid - tarinfo.uname = owner - return tarinfo - - if not dry_run: - tar = tarfile.open(archive_name, 'w|%s' % tar_compression[compress]) - try: - tar.add(base_dir, filter=_set_uid_gid) - finally: - tar.close() - - # compression using `compress` - if compress == 'compress': - warn("'compress' will be deprecated.", PendingDeprecationWarning) - # the option varies depending on the platform - compressed_name = archive_name + compress_ext[compress] - if sys.platform == 'win32': - cmd = [compress, archive_name, compressed_name] - else: - cmd = [compress, '-f', archive_name] - spawn(cmd, dry_run=dry_run) - return compressed_name - - return archive_name - - -def make_zipfile(base_name, base_dir, verbose=0, dry_run=0): - """Create a zip file from all the files under 'base_dir'. - - The output zip file will be named 'base_name' + ".zip". Uses either the - "zipfile" Python module (if available) or the InfoZIP "zip" utility - (if installed and found on the default search path). If neither tool is - available, raises DistutilsExecError. Returns the name of the output zip - file. - """ - zip_filename = base_name + ".zip" - mkpath(os.path.dirname(zip_filename), dry_run=dry_run) - - # If zipfile module is not available, try spawning an external - # 'zip' command. - if zipfile is None: - if verbose: - zipoptions = "-r" - else: - zipoptions = "-rq" - - try: - spawn(["zip", zipoptions, zip_filename, base_dir], dry_run=dry_run) - except DistutilsExecError: - # XXX really should distinguish between "couldn't find - # external 'zip' command" and "zip failed". 
- raise DistutilsExecError( - ( - "unable to create zip file '%s': " - "could neither import the 'zipfile' module nor " - "find a standalone zip utility" - ) - % zip_filename - ) - - else: - log.info("creating '%s' and adding '%s' to it", zip_filename, base_dir) - - if not dry_run: - try: - zip = zipfile.ZipFile( - zip_filename, "w", compression=zipfile.ZIP_DEFLATED - ) - except RuntimeError: - zip = zipfile.ZipFile(zip_filename, "w", compression=zipfile.ZIP_STORED) - - with zip: - if base_dir != os.curdir: - path = os.path.normpath(os.path.join(base_dir, '')) - zip.write(path, path) - log.info("adding '%s'", path) - for dirpath, dirnames, filenames in os.walk(base_dir): - for name in dirnames: - path = os.path.normpath(os.path.join(dirpath, name, '')) - zip.write(path, path) - log.info("adding '%s'", path) - for name in filenames: - path = os.path.normpath(os.path.join(dirpath, name)) - if os.path.isfile(path): - zip.write(path, path) - log.info("adding '%s'", path) - - return zip_filename - - -ARCHIVE_FORMATS = { - 'gztar': (make_tarball, [('compress', 'gzip')], "gzip'ed tar-file"), - 'bztar': (make_tarball, [('compress', 'bzip2')], "bzip2'ed tar-file"), - 'xztar': (make_tarball, [('compress', 'xz')], "xz'ed tar-file"), - 'ztar': (make_tarball, [('compress', 'compress')], "compressed tar file"), - 'tar': (make_tarball, [('compress', None)], "uncompressed tar file"), - 'zip': (make_zipfile, [], "ZIP file"), -} - - -def check_archive_formats(formats): - """Returns the first format from the 'format' list that is unknown. - - If all formats are known, returns None - """ - for format in formats: - if format not in ARCHIVE_FORMATS: - return format - return None - - -def make_archive( - base_name, - format, - root_dir=None, - base_dir=None, - verbose=0, - dry_run=0, - owner=None, - group=None, -): - """Create an archive file (eg. zip or tar). - - 'base_name' is the name of the file to create, minus any format-specific - extension; 'format' is the archive format: one of "zip", "tar", "gztar", - "bztar", "xztar", or "ztar". - - 'root_dir' is a directory that will be the root directory of the - archive; ie. we typically chdir into 'root_dir' before creating the - archive. 'base_dir' is the directory where we start archiving from; - ie. 'base_dir' will be the common prefix of all files and - directories in the archive. 'root_dir' and 'base_dir' both default - to the current directory. Returns the name of the archive file. - - 'owner' and 'group' are used when creating a tar archive. By default, - uses the current owner and group. 
- """ - save_cwd = os.getcwd() - if root_dir is not None: - log.debug("changing into '%s'", root_dir) - base_name = os.path.abspath(base_name) - if not dry_run: - os.chdir(root_dir) - - if base_dir is None: - base_dir = os.curdir - - kwargs = {'dry_run': dry_run} - - try: - format_info = ARCHIVE_FORMATS[format] - except KeyError: - raise ValueError("unknown archive format '%s'" % format) - - func = format_info[0] - for arg, val in format_info[1]: - kwargs[arg] = val - - if format != 'zip': - kwargs['owner'] = owner - kwargs['group'] = group - - try: - filename = func(base_name, base_dir, **kwargs) - finally: - if root_dir is not None: - log.debug("changing back to '%s'", save_cwd) - os.chdir(save_cwd) - - return filename diff --git a/spaces/tmaham/DS-Fusion-Express/ldm/models/diffusion/ddpm.py b/spaces/tmaham/DS-Fusion-Express/ldm/models/diffusion/ddpm.py deleted file mode 100644 index eb2e72d492966d9b781e54ab06e3c1a7a8ac69eb..0000000000000000000000000000000000000000 --- a/spaces/tmaham/DS-Fusion-Express/ldm/models/diffusion/ddpm.py +++ /dev/null @@ -1,1602 +0,0 @@ -""" -wild mixture of -https://github.com/lucidrains/denoising-diffusion-pytorch/blob/7706bdfc6f527f58d33f84b7b522e61e6e3164b3/denoising_diffusion_pytorch/denoising_diffusion_pytorch.py -https://github.com/openai/improved-diffusion/blob/e94489283bb876ac1477d5dd7709bbbd2d9902ce/improved_diffusion/gaussian_diffusion.py -https://github.com/CompVis/taming-transformers --- merci -""" - -import torch -import torch.nn as nn -import numpy as np -import pytorch_lightning as pl -from torch.optim.lr_scheduler import LambdaLR -from einops import rearrange, repeat -from contextlib import contextmanager -from functools import partial -from tqdm import tqdm -from torchvision.utils import make_grid -from pytorch_lightning.utilities.distributed import rank_zero_only -import torchvision.transforms as T -from tqdm import tqdm, trange -from torchvision.utils import save_image -import pandas as pd - -from ldm.util import log_txt_as_img, exists, default, ismap, isimage, mean_flat, count_params, instantiate_from_config -from ldm.modules.ema import LitEma -from ldm.modules.distributions.distributions import normal_kl, DiagonalGaussianDistribution -from ldm.models.autoencoder import VQModelInterface, IdentityFirstStage, AutoencoderKL -from ldm.modules.diffusionmodules.util import make_beta_schedule, extract_into_tensor, noise_like -from ldm.models.diffusion.ddim import DDIMSampler -from PIL import Image -import os -import pdb -from ldm.modules.encoders.modules import FrozenClipImageEmbedder as clipimg -from ldm.modules.encoders.modules import FrozenCLIPTextEmbedder as cliptxt -import clip -import torch.nn.functional as F -import copy -import clip -from random import randrange - -__conditioning_keys__ = {'concat': 'c_concat', - 'crossattn': 'c_crossattn', - 'adm': 'y'} - - -def disabled_train(self, mode=True): - """Overwrite model.train with this function to make sure train/eval mode - does not change anymore.""" - return self - - -def uniform_on_device(r1, r2, shape, device): - return (r1 - r2) * torch.rand(*shape, device=device) + r2 - - -class DDPM(pl.LightningModule): - # classic DDPM with Gaussian diffusion, in image space - def __init__(self, - unet_config, - timesteps=1000, - beta_schedule="linear", - loss_type="l2", - ckpt_path=None, - ignore_keys=["model_ema.decay", "model_ema.num_updates"], - load_only_unet=False, - monitor="val/loss", - use_ema=True, - first_stage_key="image", - image_size=512, - channels=3, - log_every_t=100, - 
clip_denoised=True, - linear_start=1e-4, - linear_end=2e-2, - cosine_s=8e-3, - given_betas=None, - original_elbo_weight=0., - v_posterior=0., # weight for choosing posterior variance as sigma = (1-v) * beta_tilde + v * beta - l_simple_weight=1., - conditioning_key=None, - parameterization="eps", # all assuming fixed variance schedules - scheduler_config=None, - use_positional_encodings=False, - learn_logvar=False, - logvar_init=0., - ): - super().__init__() - assert parameterization in ["eps", "x0"], 'currently only supporting "eps" and "x0"' - self.parameterization = parameterization - print(f"{self.__class__.__name__}: Running in {self.parameterization}-prediction mode") - self.cond_stage_model = None - self.clip_denoised = clip_denoised - self.log_every_t = log_every_t - self.first_stage_key = first_stage_key - self.image_size = image_size # try conv? - self.channels = channels - self.use_positional_encodings = use_positional_encodings - self.model = DiffusionWrapper(unet_config, conditioning_key) - - count_params(self.model, verbose=True) - self.use_ema = use_ema - if self.use_ema: - self.model_ema = LitEma(self.model) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - self.use_scheduler = scheduler_config is not None - if self.use_scheduler: - self.scheduler_config = scheduler_config - - self.v_posterior = v_posterior - self.original_elbo_weight = original_elbo_weight - self.l_simple_weight = l_simple_weight - - if monitor is not None: - self.monitor = monitor - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys, only_model=load_only_unet) - - self.register_schedule(given_betas=given_betas, beta_schedule=beta_schedule, timesteps=timesteps, - linear_start=linear_start, linear_end=linear_end, cosine_s=cosine_s) - - self.loss_type = loss_type - - self.learn_logvar = learn_logvar - self.logvar = torch.full(fill_value=logvar_init, size=(self.num_timesteps,)) - if self.learn_logvar: - self.logvar = nn.Parameter(self.logvar, requires_grad=True) - - def register_schedule(self, given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - if exists(given_betas): - betas = given_betas - else: - betas = make_beta_schedule(beta_schedule, timesteps, linear_start=linear_start, linear_end=linear_end, - cosine_s=cosine_s) - alphas = 1. - betas - alphas_cumprod = np.cumprod(alphas, axis=0) - alphas_cumprod_prev = np.append(1., alphas_cumprod[:-1]) - - timesteps, = betas.shape - self.num_timesteps = int(timesteps) - self.linear_start = linear_start - self.linear_end = linear_end - assert alphas_cumprod.shape[0] == self.num_timesteps, 'alphas have to be defined for each timestep' - - to_torch = partial(torch.tensor, dtype=torch.float32) - - self.register_buffer('betas', to_torch(betas)) - self.register_buffer('alphas_cumprod', to_torch(alphas_cumprod)) - self.register_buffer('alphas_cumprod_prev', to_torch(alphas_cumprod_prev)) - - # calculations for diffusion q(x_t | x_{t-1}) and others - self.register_buffer('sqrt_alphas_cumprod', to_torch(np.sqrt(alphas_cumprod))) - self.register_buffer('sqrt_one_minus_alphas_cumprod', to_torch(np.sqrt(1. - alphas_cumprod))) - self.register_buffer('log_one_minus_alphas_cumprod', to_torch(np.log(1. - alphas_cumprod))) - self.register_buffer('sqrt_recip_alphas_cumprod', to_torch(np.sqrt(1. / alphas_cumprod))) - self.register_buffer('sqrt_recipm1_alphas_cumprod', to_torch(np.sqrt(1. 
/ alphas_cumprod - 1))) - - # calculations for posterior q(x_{t-1} | x_t, x_0) - posterior_variance = (1 - self.v_posterior) * betas * (1. - alphas_cumprod_prev) / ( - 1. - alphas_cumprod) + self.v_posterior * betas - # above: equal to 1. / (1. / (1. - alpha_cumprod_tm1) + alpha_t / beta_t) - self.register_buffer('posterior_variance', to_torch(posterior_variance)) - # below: log calculation clipped because the posterior variance is 0 at the beginning of the diffusion chain - self.register_buffer('posterior_log_variance_clipped', to_torch(np.log(np.maximum(posterior_variance, 1e-20)))) - self.register_buffer('posterior_mean_coef1', to_torch( - betas * np.sqrt(alphas_cumprod_prev) / (1. - alphas_cumprod))) - self.register_buffer('posterior_mean_coef2', to_torch( - (1. - alphas_cumprod_prev) * np.sqrt(alphas) / (1. - alphas_cumprod))) - - if self.parameterization == "eps": - lvlb_weights = self.betas ** 2 / ( - 2 * self.posterior_variance * to_torch(alphas) * (1 - self.alphas_cumprod)) - elif self.parameterization == "x0": - lvlb_weights = 0.5 * np.sqrt(torch.Tensor(alphas_cumprod)) / (2. * 1 - torch.Tensor(alphas_cumprod)) - else: - raise NotImplementedError("mu not supported") - # TODO how to choose this term - lvlb_weights[0] = lvlb_weights[1] - self.register_buffer('lvlb_weights', lvlb_weights, persistent=False) - assert not torch.isnan(self.lvlb_weights).all() - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.model.parameters()) - self.model_ema.copy_to(self.model) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.model.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - def init_from_ckpt(self, path, ignore_keys=list(), only_model=False): - sd = torch.load(path, map_location="cpu") - if "state_dict" in list(sd.keys()): - sd = sd["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) if not only_model else self.model.load_state_dict( - sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - if len(unexpected) > 0: - print(f"Unexpected Keys: {unexpected}") - - def q_mean_variance(self, x_start, t): - """ - Get the distribution q(x_t | x_0). - :param x_start: the [N x C x ...] tensor of noiseless inputs. - :param t: the number of diffusion steps (minus 1). Here, 0 means one step. - :return: A tuple (mean, variance, log_variance), all of x_start's shape. 
- """ - mean = (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start) - variance = extract_into_tensor(1.0 - self.alphas_cumprod, t, x_start.shape) - log_variance = extract_into_tensor(self.log_one_minus_alphas_cumprod, t, x_start.shape) - return mean, variance, log_variance - - def predict_start_from_noise(self, x_t, t, noise): - return ( - extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) * noise - ) - - def q_posterior(self, x_start, x_t, t): - posterior_mean = ( - extract_into_tensor(self.posterior_mean_coef1, t, x_t.shape) * x_start + - extract_into_tensor(self.posterior_mean_coef2, t, x_t.shape) * x_t - ) - posterior_variance = extract_into_tensor(self.posterior_variance, t, x_t.shape) - posterior_log_variance_clipped = extract_into_tensor(self.posterior_log_variance_clipped, t, x_t.shape) - return posterior_mean, posterior_variance, posterior_log_variance_clipped - - def p_mean_variance(self, x, t, clip_denoised: bool): - model_out = self.model(x, t) - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - if clip_denoised: - x_recon.clamp_(-1., 1.) - - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, t, clip_denoised=True, repeat_noise=False): - b, *_, device = *x.shape, x.device - model_mean, _, model_log_variance = self.p_mean_variance(x=x, t=t, clip_denoised=clip_denoised) - noise = noise_like(x.shape, device, repeat_noise) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def p_sample_loop(self, shape, return_intermediates=False): - device = self.betas.device - b = shape[0] - img = torch.randn(shape, device=device) - intermediates = [img] - for i in tqdm(reversed(range(0, self.num_timesteps)), desc='Sampling t', total=self.num_timesteps): - img = self.p_sample(img, torch.full((b,), i, device=device, dtype=torch.long), - clip_denoised=self.clip_denoised) - if i % self.log_every_t == 0 or i == self.num_timesteps - 1: - intermediates.append(img) - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, batch_size=16, return_intermediates=False): - image_size = self.image_size - channels = self.channels - return self.p_sample_loop((batch_size, channels, image_size, image_size), - return_intermediates=return_intermediates) - - def q_sample(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - return (extract_into_tensor(self.sqrt_alphas_cumprod, t, x_start.shape) * x_start + - extract_into_tensor(self.sqrt_one_minus_alphas_cumprod, t, x_start.shape) * noise) - - def get_loss(self, pred, target, mean=True): - if self.loss_type == 'l1': - loss = (target - pred).abs() - if mean: - loss = loss.mean() - elif self.loss_type == 'l2': - if mean: - loss = torch.nn.functional.mse_loss(target, pred) - else: - loss = torch.nn.functional.mse_loss(target, pred, reduction='none') - else: - raise NotImplementedError("unknown loss type '{loss_type}'") - - return loss - - def p_losses(self, x_start, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - 
x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_out = self.model(x_noisy, t) - - loss_dict = {} - if self.parameterization == "eps": - target = noise - elif self.parameterization == "x0": - target = x_start - else: - raise NotImplementedError(f"Paramterization {self.parameterization} not yet supported") - - loss = self.get_loss(model_out, target, mean=False).mean(dim=[1, 2, 3]) - - log_prefix = 'train' if self.training else 'val' - - loss_dict.update({f'{log_prefix}/loss_simple': loss.mean()}) - loss_simple = loss.mean() * self.l_simple_weight - - loss_vlb = (self.lvlb_weights[t] * loss).mean() - loss_dict.update({f'{log_prefix}/loss_vlb': loss_vlb}) - - loss = loss_simple + self.original_elbo_weight * loss_vlb - - loss_dict.update({f'{log_prefix}/loss': loss}) - - return loss, loss_dict - - def forward(self, x, *args, **kwargs): - # b, c, h, w, device, img_size, = *x.shape, x.device, self.image_size - # assert h == img_size and w == img_size, f'height and width of image must be {img_size}' - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - return self.p_losses(x, t, *args, **kwargs) - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = rearrange(x, 'b h w c -> b c h w') - x = x.to(memory_format=torch.contiguous_format).float() - return x - - def shared_step(self, batch): - x = self.get_input(batch, self.first_stage_key) - loss, loss_dict = self(x) - return loss, loss_dict - - def training_step(self, batch, batch_idx): - loss, loss_dict = self.shared_step(batch) - - self.log_dict(loss_dict, prog_bar=True, - logger=True, on_step=True, on_epoch=True) - - self.log("global_step", self.global_step, - prog_bar=True, logger=True, on_step=True, on_epoch=False) - - if self.use_scheduler: - lr = self.optimizers().param_groups[0]['lr'] - self.log('lr_abs', lr, prog_bar=True, logger=True, on_step=True, on_epoch=False) - - return loss - - @torch.no_grad() - def validation_step(self, batch, batch_idx): - _, loss_dict_no_ema = self.shared_step(batch) - with self.ema_scope(): - _, loss_dict_ema = self.shared_step(batch) - loss_dict_ema = {key + '_ema': loss_dict_ema[key] for key in loss_dict_ema} - self.log_dict(loss_dict_no_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - self.log_dict(loss_dict_ema, prog_bar=False, logger=True, on_step=False, on_epoch=True) - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self.model) - - def _get_rows_from_list(self, samples): - n_imgs_per_row = len(samples) - denoise_grid = rearrange(samples, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=2, sample=True, return_keys=None, **kwargs): - log = dict() - x = self.get_input(batch, self.first_stage_key) - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - x = x.to(self.device)[:N] - log["inputs"] = x - - # get diffusion row - diffusion_row = list() - x_start = x[:n_row] - - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(x_start) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - diffusion_row.append(x_noisy) - - log["diffusion_row"] = self._get_rows_from_list(diffusion_row) - - if sample: - # get denoise 
row - with self.ema_scope("Plotting"): - samples, denoise_row = self.sample(batch_size=N, return_intermediates=True) - - log["samples"] = samples - log["denoise_row"] = self._get_rows_from_list(denoise_row) - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - params = list(self.model.parameters()) - if self.learn_logvar: - params = params + [self.logvar] - opt = torch.optim.AdamW(params, lr=lr) - return opt - - -class LatentDiffusion(DDPM): - """main class""" - def __init__(self, - first_stage_config, - cond_stage_config=None, - num_timesteps_cond=None, - cond_stage_key="image", - cond_stage_trainable=False, - concat_mode=True, - cond_stage_forward=None, - conditioning_key=None, - scale_factor=1.0, - scale_by_std=False, - discriminator_config=None, - masker_config=None, - weight_disc=0.01, - *args, **kwargs): - self.num_timesteps_cond = default(num_timesteps_cond, 1) - self.scale_by_std = scale_by_std - assert self.num_timesteps_cond <= kwargs['timesteps'] - # for backwards compatibility after implementation of DiffusionWrapper - if conditioning_key is None: - conditioning_key = 'concat' if concat_mode else 'crossattn' - if cond_stage_config == '__is_unconditional__': - conditioning_key = None - ckpt_path = kwargs.pop("ckpt_path", None) - ignore_keys = kwargs.pop("ignore_keys", []) - ignore_keys = ["model_ema.decay", "model_ema.num_updates"] - super().__init__(conditioning_key=conditioning_key, *args, **kwargs) - self.concat_mode = concat_mode - self.cond_stage_trainable = cond_stage_trainable - self.cond_stage_key = cond_stage_key - try: - self.num_downs = len(first_stage_config.params.ddconfig.ch_mult) - 1 - except: - self.num_downs = 0 - if not scale_by_std: - self.scale_factor = scale_factor - else: - self.register_buffer('scale_factor', torch.tensor(scale_factor)) - self.instantiate_first_stage(first_stage_config) - self.instantiate_cond_stage(cond_stage_config) - self.cond_stage_forward = cond_stage_forward - self.clip_denoised = False - self.bbox_tokenizer = None - - self.restarted_from_ckpt = False - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys) - self.restarted_from_ckpt = True - - # self.discriminator = instantiate_from_config(discriminator_config) - self.weight_disc = weight_disc - self.iter = 0 - - def make_cond_schedule(self, ): - self.cond_ids = torch.full(size=(self.num_timesteps,), fill_value=self.num_timesteps - 1, dtype=torch.long) - ids = torch.round(torch.linspace(0, self.num_timesteps - 1, self.num_timesteps_cond)).long() - self.cond_ids[:self.num_timesteps_cond] = ids - - @rank_zero_only - @torch.no_grad() - def on_train_batch_start(self, batch, batch_idx, dataloader_idx): - # only for very first batch - if self.scale_by_std and self.current_epoch == 0 and self.global_step == 0 and batch_idx == 0 and not self.restarted_from_ckpt: - assert self.scale_factor == 1., 'rather not use custom rescaling and std-rescaling simultaneously' - # set rescale weight to 1./std of encodings - print("### USING STD-RESCALING ###") - x = super().get_input(batch, self.first_stage_key) - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = self.get_first_stage_encoding(encoder_posterior).detach() - del self.scale_factor - self.register_buffer('scale_factor', 1. 
/ z.flatten().std()) - print(f"setting self.scale_factor to {self.scale_factor}") - print("### USING STD-RESCALING ###") - - def register_schedule(self, - given_betas=None, beta_schedule="linear", timesteps=1000, - linear_start=1e-4, linear_end=2e-2, cosine_s=8e-3): - super().register_schedule(given_betas, beta_schedule, timesteps, linear_start, linear_end, cosine_s) - - self.shorten_cond_schedule = self.num_timesteps_cond > 1 - if self.shorten_cond_schedule: - self.make_cond_schedule() - - def instantiate_first_stage(self, config): - model = instantiate_from_config(config) - model.to(self.device) - self.first_stage_model = model.eval() - self.first_stage_model.train = disabled_train - for param in self.first_stage_model.parameters(): - param.requires_grad = False - - def instantiate_cond_stage(self, config): - if not self.cond_stage_trainable: - if config == "__is_first_stage__": - print("Using first stage also as cond stage.") - self.cond_stage_model = self.first_stage_model - elif config == "__is_unconditional__": - print(f"Training {self.__class__.__name__} as an unconditional model.") - self.cond_stage_model = None - # self.be_unconditional = True - else: - model = instantiate_from_config(config) - model.to(self.device) - self.cond_stage_model = model.eval() - self.cond_stage_model.train = disabled_train - for param in self.cond_stage_model.parameters(): - param.requires_grad = False - else: - assert config != '__is_first_stage__' - assert config != '__is_unconditional__' - model = instantiate_from_config(config) - model.to(self.device) - self.cond_stage_model = model - - def _get_denoise_row_from_list(self, samples, desc='', force_no_decoder_quantization=False): - denoise_row = [] - for zd in tqdm(samples, desc=desc): - denoise_row.append(self.decode_first_stage(zd.to(self.device), - force_not_quantize=force_no_decoder_quantization)) - n_imgs_per_row = len(denoise_row) - denoise_row = torch.stack(denoise_row) # n_log_step, n_row, C, H, W - denoise_grid = rearrange(denoise_row, 'n b c h w -> b n c h w') - denoise_grid = rearrange(denoise_grid, 'b n c h w -> (b n) c h w') - denoise_grid = make_grid(denoise_grid, nrow=n_imgs_per_row) - return denoise_grid - - def get_first_stage_encoding(self, encoder_posterior): - if isinstance(encoder_posterior, DiagonalGaussianDistribution): - z = encoder_posterior.sample() - elif isinstance(encoder_posterior, torch.Tensor): - z = encoder_posterior - else: - raise NotImplementedError(f"encoder_posterior of type '{type(encoder_posterior)}' not yet implemented") - return self.scale_factor * z - - @torch.no_grad() - def get_learned_conditioning(self, c): - # print("ddpm", self.device) - if self.cond_stage_forward is None: - if hasattr(self.cond_stage_model, 'encode') and callable(self.cond_stage_model.encode): - c = self.cond_stage_model.encode(c) - if isinstance(c, DiagonalGaussianDistribution): - c = c.mode() - else: - c = self.cond_stage_model(c) - else: - assert hasattr(self.cond_stage_model, self.cond_stage_forward) - c = getattr(self.cond_stage_model, self.cond_stage_forward)(c) - return c - - def meshgrid(self, h, w): - y = torch.arange(0, h).view(h, 1, 1).repeat(1, w, 1) - x = torch.arange(0, w).view(1, w, 1).repeat(h, 1, 1) - - arr = torch.cat([y, x], dim=-1) - return arr - - def delta_border(self, h, w): - """ - :param h: height - :param w: width - :return: normalized distance to image border, - wtith min distance = 0 at border and max dist = 0.5 at image center - """ - lower_right_corner = torch.tensor([h - 1, w - 1]).view(1, 1, 2) - arr = 
self.meshgrid(h, w) / lower_right_corner - dist_left_up = torch.min(arr, dim=-1, keepdims=True)[0] - dist_right_down = torch.min(1 - arr, dim=-1, keepdims=True)[0] - edge_dist = torch.min(torch.cat([dist_left_up, dist_right_down], dim=-1), dim=-1)[0] - return edge_dist - - def get_weighting(self, h, w, Ly, Lx, device): - weighting = self.delta_border(h, w) - weighting = torch.clip(weighting, self.split_input_params["clip_min_weight"], - self.split_input_params["clip_max_weight"], ) - weighting = weighting.view(1, h * w, 1).repeat(1, 1, Ly * Lx).to(device) - - if self.split_input_params["tie_braker"]: - L_weighting = self.delta_border(Ly, Lx) - L_weighting = torch.clip(L_weighting, - self.split_input_params["clip_min_tie_weight"], - self.split_input_params["clip_max_tie_weight"]) - - L_weighting = L_weighting.view(1, 1, Ly * Lx).to(device) - weighting = weighting * L_weighting - return weighting - - def get_fold_unfold(self, x, kernel_size, stride, uf=1, df=1): # todo load once not every time, shorten code - """ - :param x: img of size (bs, c, h, w) - :return: n img crops of size (n, bs, c, kernel_size[0], kernel_size[1]) - """ - bs, nc, h, w = x.shape - - # number of crops in image - Ly = (h - kernel_size[0]) // stride[0] + 1 - Lx = (w - kernel_size[1]) // stride[1] + 1 - - if uf == 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold = torch.nn.Fold(output_size=x.shape[2:], **fold_params) - - weighting = self.get_weighting(kernel_size[0], kernel_size[1], Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h, w) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0], kernel_size[1], Ly * Lx)) - - elif uf > 1 and df == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] * uf, kernel_size[0] * uf), - dilation=1, padding=0, - stride=(stride[0] * uf, stride[1] * uf)) - fold = torch.nn.Fold(output_size=(x.shape[2] * uf, x.shape[3] * uf), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] * uf, kernel_size[1] * uf, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h * uf, w * uf) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] * uf, kernel_size[1] * uf, Ly * Lx)) - - elif df > 1 and uf == 1: - fold_params = dict(kernel_size=kernel_size, dilation=1, padding=0, stride=stride) - unfold = torch.nn.Unfold(**fold_params) - - fold_params2 = dict(kernel_size=(kernel_size[0] // df, kernel_size[0] // df), - dilation=1, padding=0, - stride=(stride[0] // df, stride[1] // df)) - fold = torch.nn.Fold(output_size=(x.shape[2] // df, x.shape[3] // df), **fold_params2) - - weighting = self.get_weighting(kernel_size[0] // df, kernel_size[1] // df, Ly, Lx, x.device).to(x.dtype) - normalization = fold(weighting).view(1, 1, h // df, w // df) # normalizes the overlap - weighting = weighting.view((1, 1, kernel_size[0] // df, kernel_size[1] // df, Ly * Lx)) - - else: - raise NotImplementedError - - return fold, unfold, normalization, weighting - - @torch.no_grad() - def get_input(self, batch, k, return_first_stage_outputs=False, force_c_encode=False, - cond_key=None, return_original_cond=False, bs=None): - - x = super().get_input(batch, k) - if bs is not None: - x = x[:bs] - x = x.to(self.device) - encoder_posterior = self.encode_first_stage(x) - z = 
self.get_first_stage_encoding(encoder_posterior).detach() - - if self.model.conditioning_key is not None: - if cond_key is None: - cond_key = self.cond_stage_key - if cond_key != self.first_stage_key: - if cond_key in ['caption', 'coordinates_bbox']: - xc = batch[cond_key] - elif cond_key == 'class_label': - xc = batch - else: - xc = super().get_input(batch, cond_key).to(self.device) - else: - xc = x - if not self.cond_stage_trainable or force_c_encode: - if isinstance(xc, dict) or isinstance(xc, list): - # import pudb; pudb.set_trace() - c = self.get_learned_conditioning(xc) - else: - c = self.get_learned_conditioning(xc.to(self.device)) - else: - c = xc - if bs is not None: - c = c[:bs] - - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - ckey = __conditioning_keys__[self.model.conditioning_key] - c = {ckey: c, 'pos_x': pos_x, 'pos_y': pos_y} - - else: - c = None - xc = None - if self.use_positional_encodings: - pos_x, pos_y = self.compute_latent_shifts(batch) - c = {'pos_x': pos_x, 'pos_y': pos_y} - out = [z, c] - if return_first_stage_outputs: - xrec = self.decode_first_stage(z) - out.extend([x, xrec]) - if return_original_cond: - out.append(xc) - return out - - @torch.no_grad() - def decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. 
reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - # same as above but without decorator - def differentiable_decode_first_stage(self, z, predict_cids=False, force_not_quantize=False): - if predict_cids: - if z.dim() == 4: - z = torch.argmax(z.exp(), dim=1).long() - z = self.first_stage_model.quantize.get_codebook_entry(z, shape=None) - z = rearrange(z, 'b h w c -> b c h w').contiguous() - - z = 1. / self.scale_factor * z - - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. (64, 64) - uf = self.split_input_params["vqf"] - bs, nc, h, w = z.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(z, ks, stride, uf=uf) - - z = unfold(z) # (bn, nc * prod(**ks), L) - # 1. Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - # 2. apply model loop over last dim - if isinstance(self.first_stage_model, VQModelInterface): - output_list = [self.first_stage_model.decode(z[:, :, :, :, i], - force_not_quantize=predict_cids or force_not_quantize) - for i in range(z.shape[-1])] - else: - - output_list = [self.first_stage_model.decode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) # # (bn, nc, ks[0], ks[1], L) - o = o * weighting - # Reverse 1. reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization # norm is shape (1, 1, h, w) - return decoded - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - else: - if isinstance(self.first_stage_model, VQModelInterface): - return self.first_stage_model.decode(z, force_not_quantize=predict_cids or force_not_quantize) - else: - return self.first_stage_model.decode(z) - - @torch.no_grad() - def encode_first_stage(self, x): - if hasattr(self, "split_input_params"): - if self.split_input_params["patch_distributed_vq"]: - ks = self.split_input_params["ks"] # eg. (128, 128) - stride = self.split_input_params["stride"] # eg. 
(64, 64) - df = self.split_input_params["vqf"] - self.split_input_params['original_image_size'] = x.shape[-2:] - bs, nc, h, w = x.shape - if ks[0] > h or ks[1] > w: - ks = (min(ks[0], h), min(ks[1], w)) - print("reducing Kernel") - - if stride[0] > h or stride[1] > w: - stride = (min(stride[0], h), min(stride[1], w)) - print("reducing stride") - - fold, unfold, normalization, weighting = self.get_fold_unfold(x, ks, stride, df=df) - z = unfold(x) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - output_list = [self.first_stage_model.encode(z[:, :, :, :, i]) - for i in range(z.shape[-1])] - - o = torch.stack(output_list, axis=-1) - o = o * weighting - - # Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - decoded = fold(o) - decoded = decoded / normalization - return decoded - - else: - return self.first_stage_model.encode(x) - else: - return self.first_stage_model.encode(x) - - #mt1 - def shared_step(self, batch, **kwargs): - x, c = self.get_input(batch, self.first_stage_key) - loss = self(x, c) - return loss - - def ExpWeight(self, step, gamma=1, max_iter=1500, reverse=False): - step = max_iter-step - ans = 1.0 * (np.exp(- gamma * step * 1.0 / max_iter)) - return float(ans) - - def make_centre(self, batch): - i = randrange(100) - alpha = i/100 - - img1 = rearrange(batch["style"]["image"], 'b h w c -> b c h w') - img1 = img1.to(memory_format=torch.contiguous_format).float() - img1 = self.encode_first_stage(img1) - z_S = self.get_first_stage_encoding(img1).detach() - - img2 = rearrange(batch["base"]["image"], 'b h w c -> b c h w') - img2 = img2.to(memory_format=torch.contiguous_format).float() - img2 = self.encode_first_stage(img2) - z_R = self.get_first_stage_encoding(img2).detach() - - z_C = (alpha)*z_S + (1-alpha)*z_R - - return z_C, alpha - - - def make_images(self, batch): - batch = batch["base"] - use_ddim = 50 - z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, - return_first_stage_outputs=True, - force_c_encode=True, - return_original_cond=True, - bs=1) - - uc = self.get_learned_conditioning(len(c) * [""]) - sample_scaled, _ = self.sample_log(cond=c, - batch_size=1, - ddim=False, - ddim_steps=50, - eta=1.0, - unconditional_guidance_scale=5.0, - unconditional_conditioning=uc) - - - images = self.decode_first_stage(sample_scaled) - images = torch.clamp((images+1.0)/2.0, min=0.0, max=1.0) - - img = 0 - for x_sample in images: - x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c') - name = "output_log/image"+str(self.iter)+"_"+str(img)+".jpg" - Image.fromarray(x_sample.astype(np.uint8)).save(name) - img += 1 - self.iter+=1 - - def last_step_run(self, batch): - base_count = 0 - uc = self.get_learned_conditioning(3 * [""]) - cond = self.get_learned_conditioning(3 * batch["cond"]) - # command = "mkdir -p out/regular/" - # os.system(command) - with self.ema_scope(): - command = "rm -r out_cur/*" - os.system(command) - for i in range(3): - samples, z_denoise_row = self.sample_log(cond=cond,batch_size=3,ddim=True,ddim_steps=50,eta=1.0,uc=uc) - x_samples_ddim = self.decode_first_stage(samples) - x_samples_ddim = torch.clamp((x_samples_ddim+1.0)/2.0, min=0.0, max=1.0) - - for x_sample in x_samples_ddim: - x_sample = 255. 
* rearrange(x_sample.cpu().numpy(), 'c h w -> h w c') - Image.fromarray(x_sample.astype(np.uint8)).save(os.path.join("out_cur/", f"{base_count:04}.png")) - base_count += 1 - - @torch.no_grad() - def log_view(self, batch): - base_count = 0 - with self.ema_scope(): - uc = self.get_learned_conditioning(3 * [""]) - cond = self.get_learned_conditioning(3 * batch["cond"]) - sampler = DDIMSampler(self) - - for i in range(3): - # samples, z_denoise_row = self.sample_log(cond=cond,batch_size=3,ddim=True,ddim_steps=50,eta=1.0,uc=uc) - shape = [4, 32, 32] - samples_ddim, _ = sampler.sample(S=50, - conditioning=cond, - batch_size=3, - shape=shape, - verbose=False, - unconditional_guidance_scale=5, - unconditional_conditioning=uc, - eta=1.0) - x_samples_ddim = self.decode_first_stage(samples_ddim) - x_samples_ddim = torch.clamp((x_samples_ddim+1.0)/2.0, min=0.0, max=1.0) - - for x_sample in x_samples_ddim: - x_sample = 255. * rearrange(x_sample.cpu().numpy(), 'c h w -> h w c') - Image.fromarray(x_sample.astype(np.uint8)).save(os.path.join("out_cur/", f"{base_count:04}.png")) - base_count += 1 - - def forward(self, x, c, *args, **kwargs): - t = torch.randint(0, self.num_timesteps, (x.shape[0],), device=self.device).long() - if self.model.conditioning_key is not None: - assert c is not None - if self.cond_stage_trainable: - c = self.get_learned_conditioning(c) - if self.shorten_cond_schedule: # TODO: drop this option - tc = self.cond_ids[t].to(self.device) - c = self.q_sample(x_start=c, t=tc, noise=torch.randn_like(c.float())) - return self.p_losses(x, c, t, *args, **kwargs) - - def _rescale_annotations(self, bboxes, crop_coordinates): # TODO: move to dataset - def rescale_bbox(bbox): - x0 = clamp((bbox[0] - crop_coordinates[0]) / crop_coordinates[2]) - y0 = clamp((bbox[1] - crop_coordinates[1]) / crop_coordinates[3]) - w = min(bbox[2] / crop_coordinates[2], 1 - x0) - h = min(bbox[3] / crop_coordinates[3], 1 - y0) - return x0, y0, w, h - - return [rescale_bbox(b) for b in bboxes] - - def apply_model(self, x_noisy, t, cond, return_ids=False): - - if isinstance(cond, dict): - # hybrid case, cond is exptected to be a dict - pass - else: - if not isinstance(cond, list): - cond = [cond] - key = 'c_concat' if self.model.conditioning_key == 'concat' else 'c_crossattn' - cond = {key: cond} - - if hasattr(self, "split_input_params"): - assert len(cond) == 1 # todo can only deal with one conditioning atm - assert not return_ids - ks = self.split_input_params["ks"] # eg. 
(128, 128) - - h, w = x_noisy.shape[-2:] - - fold, unfold, normalization, weighting = self.get_fold_unfold(x_noisy, ks, stride) - - z = unfold(x_noisy) # (bn, nc * prod(**ks), L) - # Reshape to img shape - z = z.view((z.shape[0], -1, ks[0], ks[1], z.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - z_list = [z[:, :, :, :, i] for i in range(z.shape[-1])] - - if self.cond_stage_key in ["image", "LR_image", "segmentation", - 'bbox_img'] and self.model.conditioning_key: # todo check for completeness - c_key = next(iter(cond.keys())) # get key - c = next(iter(cond.values())) # get value - assert (len(c) == 1) # todo extend to list with more than one elem - c = c[0] # get element - - c = unfold(c) - c = c.view((c.shape[0], -1, ks[0], ks[1], c.shape[-1])) # (bn, nc, ks[0], ks[1], L ) - - cond_list = [{c_key: [c[:, :, :, :, i]]} for i in range(c.shape[-1])] - - elif self.cond_stage_key == 'coordinates_bbox': - assert 'original_image_size' in self.split_input_params, 'BoudingBoxRescaling is missing original_image_size' - - # assuming padding of unfold is always 0 and its dilation is always 1 - n_patches_per_row = int((w - ks[0]) / stride[0] + 1) - full_img_h, full_img_w = self.split_input_params['original_image_size'] - # as we are operating on latents, we need the factor from the original image size to the - # spatial latent size to properly rescale the crops for regenerating the bbox annotations - num_downs = self.first_stage_model.encoder.num_resolutions - 1 - rescale_latent = 2 ** (num_downs) - - # get top left postions of patches as conforming for the bbbox tokenizer, therefore we - # need to rescale the tl patch coordinates to be in between (0,1) - tl_patch_coordinates = [(rescale_latent * stride[0] * (patch_nr % n_patches_per_row) / full_img_w, - rescale_latent * stride[1] * (patch_nr // n_patches_per_row) / full_img_h) - for patch_nr in range(z.shape[-1])] - - # patch_limits are tl_coord, width and height coordinates as (x_tl, y_tl, h, w) - patch_limits = [(x_tl, y_tl, - rescale_latent * ks[0] / full_img_w, - rescale_latent * ks[1] / full_img_h) for x_tl, y_tl in tl_patch_coordinates] - # patch_values = [(np.arange(x_tl,min(x_tl+ks, 1.)),np.arange(y_tl,min(y_tl+ks, 1.))) for x_tl, y_tl in tl_patch_coordinates] - - # tokenize crop coordinates for the bounding boxes of the respective patches - patch_limits_tknzd = [torch.LongTensor(self.bbox_tokenizer._crop_encoder(bbox))[None].to(self.device) - for bbox in patch_limits] # list of length l with tensors of shape (1, 2) - print(patch_limits_tknzd[0].shape) - # cut tknzd crop position from conditioning - assert isinstance(cond, dict), 'cond must be dict to be fed into model' - cut_cond = cond['c_crossattn'][0][..., :-2].to(self.device) - print(cut_cond.shape) - - adapted_cond = torch.stack([torch.cat([cut_cond, p], dim=1) for p in patch_limits_tknzd]) - adapted_cond = rearrange(adapted_cond, 'l b n -> (l b) n') - print(adapted_cond.shape) - adapted_cond = self.get_learned_conditioning(adapted_cond) - print(adapted_cond.shape) - adapted_cond = rearrange(adapted_cond, '(l b) n d -> l b n d', l=z.shape[-1]) - print(adapted_cond.shape) - - cond_list = [{'c_crossattn': [e]} for e in adapted_cond] - - else: - cond_list = [cond for i in range(z.shape[-1])] # Todo make this more efficient - - # apply model by loop over crops - output_list = [self.model(z_list[i], t, **cond_list[i]) for i in range(z.shape[-1])] - assert not isinstance(output_list[0], - tuple) # todo cant deal with multiple model outputs check this never happens - - o = torch.stack(output_list, 
axis=-1) - o = o * weighting - # Reverse reshape to img shape - o = o.view((o.shape[0], -1, o.shape[-1])) # (bn, nc * ks[0] * ks[1], L) - # stitch crops together - x_recon = fold(o) / normalization - - else: - x_recon = self.model(x_noisy, t, **cond) - - if isinstance(x_recon, tuple) and not return_ids: - return x_recon[0] - else: - return x_recon - - def _predict_eps_from_xstart(self, x_t, t, pred_xstart): - return (extract_into_tensor(self.sqrt_recip_alphas_cumprod, t, x_t.shape) * x_t - pred_xstart) / \ - extract_into_tensor(self.sqrt_recipm1_alphas_cumprod, t, x_t.shape) - - def _prior_bpd(self, x_start): - """ - Get the prior KL term for the variational lower-bound, measured in - bits-per-dim. - This term can't be optimized, as it only depends on the encoder. - :param x_start: the [N x C x ...] tensor of inputs. - :return: a batch of [N] KL values (in bits), one per batch element. - """ - batch_size = x_start.shape[0] - t = torch.tensor([self.num_timesteps - 1] * batch_size, device=x_start.device) - qt_mean, _, qt_log_variance = self.q_mean_variance(x_start, t) - kl_prior = normal_kl(mean1=qt_mean, logvar1=qt_log_variance, mean2=0.0, logvar2=0.0) - return mean_flat(kl_prior) / np.log(2.0) - - def p_losses(self, x_start, cond, t, noise=None): - noise = default(noise, lambda: torch.randn_like(x_start)) - x_noisy = self.q_sample(x_start=x_start, t=t, noise=noise) - model_output = self.apply_model(x_noisy, t, cond) - - loss_dict = {} - prefix = 'train' if self.training else 'val' - - if self.parameterization == "x0": - target = x_start - elif self.parameterization == "eps": - target = noise - else: - raise NotImplementedError() - - loss_simple = self.get_loss(model_output, target, mean=False).mean([1, 2, 3]) - loss_dict.update({f'{prefix}/loss_simple': loss_simple.mean()}) - - logvar_t = self.logvar[t.cpu()].to(self.device) - loss = loss_simple / torch.exp(logvar_t) + logvar_t - # loss = loss_simple / torch.exp(self.logvar) + self.logvar - if self.learn_logvar: - loss_dict.update({f'{prefix}/loss_gamma': loss.mean()}) - loss_dict.update({'logvar': self.logvar.data.mean()}) - - loss = self.l_simple_weight * loss.mean() - - loss_vlb = self.get_loss(model_output, target, mean=False).mean(dim=(1, 2, 3)) - loss_vlb = (self.lvlb_weights[t] * loss_vlb).mean() - loss_dict.update({f'{prefix}/loss_vlb': loss_vlb}) - loss += (self.original_elbo_weight * loss_vlb) - loss_dict.update({f'{prefix}/loss': loss}) - - return loss, loss_dict - - def p_mean_variance(self, x, c, t, clip_denoised: bool, return_codebook_ids=False, quantize_denoised=False, - return_x0=False, score_corrector=None, corrector_kwargs=None): - t_in = t - model_out = self.apply_model(x, t_in, c, return_ids=return_codebook_ids) - - if score_corrector is not None: - assert self.parameterization == "eps" - model_out = score_corrector.modify_score(self, model_out, x, t, c, **corrector_kwargs) - - if return_codebook_ids: - model_out, logits = model_out - - if self.parameterization == "eps": - x_recon = self.predict_start_from_noise(x, t=t, noise=model_out) - elif self.parameterization == "x0": - x_recon = model_out - else: - raise NotImplementedError() - - if clip_denoised: - x_recon.clamp_(-1., 1.) 
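# --- Illustrative sketch (not part of the diffed ldm file) ----------------
# predict_start_from_noise, called just above, inverts the forward process:
# given x_t and a predicted eps it recovers
#     x_0 = (x_t - sqrt(1 - abar_t) * eps) / sqrt(abar_t),
# which is algebraically identical to the sqrt_recip_alphas_cumprod /
# sqrt_recipm1_alphas_cumprod form used in the file. Toy version with
# hypothetical names:
import torch

def toy_predict_x0(x_t: torch.Tensor, eps: torch.Tensor,
                   alphas_cumprod: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
    abar = alphas_cumprod[t].view(-1, 1, 1, 1)
    return (x_t - (1.0 - abar).sqrt() * eps) / abar.sqrt()
# ---------------------------------------------------------------------------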
- if quantize_denoised: - x_recon, _, [_, _, indices] = self.first_stage_model.quantize(x_recon) - model_mean, posterior_variance, posterior_log_variance = self.q_posterior(x_start=x_recon, x_t=x, t=t) - if return_codebook_ids: - return model_mean, posterior_variance, posterior_log_variance, logits - elif return_x0: - return model_mean, posterior_variance, posterior_log_variance, x_recon - else: - return model_mean, posterior_variance, posterior_log_variance - - @torch.no_grad() - def p_sample(self, x, c, t, clip_denoised=False, repeat_noise=False, - return_codebook_ids=False, quantize_denoised=False, return_x0=False, - temperature=1., noise_dropout=0., score_corrector=None, corrector_kwargs=None): - b, *_, device = *x.shape, x.device - outputs = self.p_mean_variance(x=x, c=c, t=t, clip_denoised=clip_denoised, - return_codebook_ids=return_codebook_ids, - quantize_denoised=quantize_denoised, - return_x0=return_x0, - score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if return_codebook_ids: - raise DeprecationWarning("Support dropped.") - model_mean, _, model_log_variance, logits = outputs - elif return_x0: - model_mean, _, model_log_variance, x0 = outputs - else: - model_mean, _, model_log_variance = outputs - - noise = noise_like(x.shape, device, repeat_noise) * temperature - if noise_dropout > 0.: - noise = torch.nn.functional.dropout(noise, p=noise_dropout) - # no noise when t == 0 - nonzero_mask = (1 - (t == 0).float()).reshape(b, *((1,) * (len(x.shape) - 1))) - - if return_codebook_ids: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, logits.argmax(dim=1) - if return_x0: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise, x0 - else: - return model_mean + nonzero_mask * (0.5 * model_log_variance).exp() * noise - - @torch.no_grad() - def progressive_denoising(self, cond, shape, verbose=True, callback=None, quantize_denoised=False, - img_callback=None, mask=None, x0=None, temperature=1., noise_dropout=0., - score_corrector=None, corrector_kwargs=None, batch_size=None, x_T=None, start_T=None, - log_every_t=None): - if not log_every_t: - log_every_t = self.log_every_t - timesteps = self.num_timesteps - if batch_size is not None: - b = batch_size if batch_size is not None else shape[0] - shape = [batch_size] + list(shape) - else: - b = batch_size = shape[0] - if x_T is None: - img = torch.randn(shape, device=self.device) - else: - img = x_T - intermediates = [] - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Progressive Generation', - total=timesteps) if verbose else reversed( - range(0, timesteps)) - if type(temperature) == float: - temperature = [temperature] * timesteps - - for i in iterator: - ts = torch.full((b,), i, device=self.device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img, x0_partial = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised, return_x0=True, - temperature=temperature[i], noise_dropout=noise_dropout, - 
score_corrector=score_corrector, corrector_kwargs=corrector_kwargs) - if mask is not None: - assert x0 is not None - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(x0_partial) - if callback: callback(i) - if img_callback: img_callback(img, i) - return img, intermediates - - @torch.no_grad() - def p_sample_loop(self, cond, shape, return_intermediates=False, - x_T=None, verbose=True, callback=None, timesteps=None, quantize_denoised=False, - mask=None, x0=None, img_callback=None, start_T=None, - log_every_t=None): - - if not log_every_t: - log_every_t = self.log_every_t - device = self.betas.device - b = shape[0] - if x_T is None: - img = torch.randn(shape, device=device) - else: - img = x_T - - intermediates = [img] - if timesteps is None: - timesteps = self.num_timesteps - - if start_T is not None: - timesteps = min(timesteps, start_T) - iterator = tqdm(reversed(range(0, timesteps)), desc='Sampling t', total=timesteps) if verbose else reversed( - range(0, timesteps)) - - if mask is not None: - assert x0 is not None - assert x0.shape[2:3] == mask.shape[2:3] # spatial size has to match - - for i in iterator: - ts = torch.full((b,), i, device=device, dtype=torch.long) - if self.shorten_cond_schedule: - assert self.model.conditioning_key != 'hybrid' - tc = self.cond_ids[ts].to(cond.device) - cond = self.q_sample(x_start=cond, t=tc, noise=torch.randn_like(cond)) - - img = self.p_sample(img, cond, ts, - clip_denoised=self.clip_denoised, - quantize_denoised=quantize_denoised) - if mask is not None: - img_orig = self.q_sample(x0, ts) - img = img_orig * mask + (1. - mask) * img - - if i % log_every_t == 0 or i == timesteps - 1: - intermediates.append(img) - if callback: callback(i) - if img_callback: img_callback(img, i) - - if return_intermediates: - return img, intermediates - return img - - @torch.no_grad() - def sample(self, cond, batch_size=16, return_intermediates=False, x_T=None, - verbose=True, timesteps=100, quantize_denoised=False, - mask=None, x0=None, shape=None,**kwargs): - if shape is None: - shape = (batch_size, self.channels, self.image_size, self.image_size) - if cond is not None: - if isinstance(cond, dict): - cond = {key: cond[key][:batch_size] if not isinstance(cond[key], list) else - list(map(lambda x: x[:batch_size], cond[key])) for key in cond} - else: - cond = [c[:batch_size] for c in cond] if isinstance(cond, list) else cond[:batch_size] - return self.p_sample_loop(cond, - shape, - return_intermediates=return_intermediates, x_T=x_T, - verbose=verbose, timesteps=timesteps, quantize_denoised=quantize_denoised, - mask=mask, x0=x0) - - @torch.no_grad() - def sample_log(self,cond,batch_size,ddim, ddim_steps,uc=None,**kwargs): - - if ddim: - ddim_sampler = DDIMSampler(self) - shape = (self.channels, self.image_size, self.image_size) - samples, intermediates =ddim_sampler.sample(ddim_steps,batch_size, - shape,cond,uc=uc,verbose=False,**kwargs) - - else: - samples, intermediates = self.sample(cond=cond, batch_size=batch_size, - return_intermediates=True,**kwargs) - - return samples, intermediates - - - @torch.no_grad() - def log_images(self, batch, N=8, n_row=4, sample=True, ddim_steps=50, ddim_eta=1., return_keys=None, - quantize_denoised=True, inpaint=False, plot_denoise_rows=False, plot_progressive_rows=False, - plot_diffusion_rows=False, **kwargs): - - - cap = batch["cond"] - batch = batch["base"] - batch["caption"] = cap - - use_ddim = ddim_steps is not None - log = dict() - 
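# --- Illustrative sketch (not part of the diffed ldm file) ----------------
# The sampling calls below pass unconditional_guidance_scale and an empty-
# prompt unconditional_conditioning to sample_log. The blending of the two
# predictions is presumably done classifier-free-guidance style inside
# DDIMSampler (not part of this file); a minimal sketch of that combination,
# with hypothetical names:
import torch

def toy_cfg_eps(eps_cond: torch.Tensor, eps_uncond: torch.Tensor, scale: float) -> torch.Tensor:
    """Push the conditional prediction away from the unconditional one."""
    return eps_uncond + scale * (eps_cond - eps_uncond)
# ---------------------------------------------------------------------------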
z, c, x, xrec, xc = self.get_input(batch, self.first_stage_key, - return_first_stage_outputs=True, - force_c_encode=True, - return_original_cond=True, - bs=N) - - # c = c * 0 - # with torch.no_grad(): - # c = c.detach()*0 - # c = torch.tensor(c).cuda() - - N = min(x.shape[0], N) - n_row = min(x.shape[0], n_row) - log["inputs"] = x - log["reconstruction"] = xrec - if self.model.conditioning_key is not None: - if hasattr(self.cond_stage_model, "decode"): - xc = self.cond_stage_model.decode(c) - log["conditioning"] = xc - elif self.cond_stage_key in ["caption"]: - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["caption"]) - log["conditioning"] = xc - elif self.cond_stage_key == 'class_label': - xc = log_txt_as_img((x.shape[2], x.shape[3]), batch["human_label"]) - log['conditioning'] = xc - elif isimage(xc): - log["conditioning"] = xc - if ismap(xc): - log["original_conditioning"] = self.to_rgb(xc) - - if plot_diffusion_rows: - # get diffusion row - diffusion_row = list() - z_start = z[:n_row] - for t in range(self.num_timesteps): - if t % self.log_every_t == 0 or t == self.num_timesteps - 1: - t = repeat(torch.tensor([t]), '1 -> b', b=n_row) - t = t.to(self.device).long() - noise = torch.randn_like(z_start) - z_noisy = self.q_sample(x_start=z_start, t=t, noise=noise) - # samples = z + samples - diffusion_row.append(self.decode_first_stage(z_noisy)) - - diffusion_row = torch.stack(diffusion_row) # n_log_step, n_row, C, H, W - diffusion_grid = rearrange(diffusion_row, 'n b c h w -> b n c h w') - diffusion_grid = rearrange(diffusion_grid, 'b n c h w -> (b n) c h w') - diffusion_grid = make_grid(diffusion_grid, nrow=diffusion_row.shape[0]) - log["diffusion_row"] = diffusion_grid - - if sample: - # get denoise row - with self.ema_scope("Plotting"): - samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, - ddim_steps=ddim_steps,eta=ddim_eta) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True) - # samples = z + samples - x_samples = self.decode_first_stage(samples) - log["samples"] = x_samples - if plot_denoise_rows: - denoise_grid = self._get_denoise_row_from_list(z_denoise_row) - log["denoise_row"] = denoise_grid - - uc = self.get_learned_conditioning(len(c) * [""]) - sample_scaled, _ = self.sample_log(cond=c, - batch_size=N, - ddim=use_ddim, - ddim_steps=ddim_steps, - eta=ddim_eta, - unconditional_guidance_scale=5.0, - unconditional_conditioning=uc) - # samples = z + samples - log["samples_scaled"] = self.decode_first_stage(sample_scaled) - - # save_image(sample_scaled[0], "visualization/output"+str(self.iter)+".png") - - if quantize_denoised and not isinstance(self.first_stage_model, AutoencoderKL) and not isinstance( - self.first_stage_model, IdentityFirstStage): - # also display when quantizing x0 while sampling - with self.ema_scope("Plotting Quantized Denoised"): - samples, z_denoise_row = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, - ddim_steps=ddim_steps,eta=ddim_eta, - quantize_denoised=True) - # samples, z_denoise_row = self.sample(cond=c, batch_size=N, return_intermediates=True, - # quantize_denoised=True) - # samples = z + samples - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_x0_quantized"] = x_samples - - if inpaint: - # make a simple center square - b, h, w = z.shape[0], z.shape[2], z.shape[3] - mask = torch.ones(N, h, w).to(self.device) - # zeros will be filled in - mask[:, h // 4:3 * h // 4, w // 4:3 * w // 4] = 0. - mask = mask[:, None, ...] 
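# --- Illustrative sketch (not part of the diffed ldm file) ----------------
# The inpainting/outpainting logging below reuses the mask mixing already
# shown in p_sample_loop: at every step the known region is re-noised from
# x0 and pasted back, and only the zeroed region of the mask is generated.
# A toy version of that single blending step, with hypothetical names:
import torch

def toy_inpaint_blend(img: torch.Tensor, img_orig_noisy: torch.Tensor,
                      mask: torch.Tensor) -> torch.Tensor:
    """mask == 1 keeps the (re-noised) original, mask == 0 keeps the sample."""
    return img_orig_noisy * mask + (1.0 - mask) * img
# ---------------------------------------------------------------------------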
- with self.ema_scope("Plotting Inpaint"): - - samples, _ = self.sample_log(cond=c,batch_size=N,ddim=use_ddim, eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - # samples = z + samples - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_inpainting"] = x_samples - log["mask"] = mask - - # outpaint - with self.ema_scope("Plotting Outpaint"): - samples, _ = self.sample_log(cond=c, batch_size=N, ddim=use_ddim,eta=ddim_eta, - ddim_steps=ddim_steps, x0=z[:N], mask=mask) - # samples = z + samples - x_samples = self.decode_first_stage(samples.to(self.device)) - log["samples_outpainting"] = x_samples - - if plot_progressive_rows: - with self.ema_scope("Plotting Progressives"): - img, progressives = self.progressive_denoising(c, - shape=(self.channels, self.image_size, self.image_size), - batch_size=N) - prog_row = self._get_denoise_row_from_list(progressives, desc="Progressive Generation") - log["progressive_row"] = prog_row - - if return_keys: - if np.intersect1d(list(log.keys()), return_keys).shape[0] == 0: - return log - else: - return {key: log[key] for key in return_keys} - return log - - def configure_optimizers(self): - lr = self.learning_rate - - params = list(self.model.parameters()) - opt = torch.optim.AdamW(params, lr=lr) - - params = list(self.discriminator.parameters()) - opt2 = torch.optim.AdamW(params, lr=lr*10) - - if self.use_scheduler: - scheduler = instantiate_from_config(self.scheduler_config) - scheduler = [ - { - 'scheduler': LambdaLR(opt, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }] - return [opt], scheduler - - return [opt, opt2], [] - - - @torch.no_grad() - def to_rgb(self, x): - x = x.float() - if not hasattr(self, "colorize"): - self.colorize = torch.randn(3, x.shape[1], 1, 1).to(x) - x = nn.functional.conv2d(x, weight=self.colorize) - x = 2. * (x - x.min()) / (x.max() - x.min()) - 1. 
- return x - - # def on_save_checkpoint(self, checkpoint): - - # checkpoint.clear() - -class DiffusionWrapper(pl.LightningModule): - def __init__(self, diff_model_config, conditioning_key): - super().__init__() - self.diffusion_model = instantiate_from_config(diff_model_config) - self.conditioning_key = conditioning_key - assert self.conditioning_key in [None, 'concat', 'crossattn', 'hybrid', 'adm'] - - def forward(self, x, t, c_concat: list = None, c_crossattn: list = None): - - if self.conditioning_key is None: - out = self.diffusion_model(x, t) - elif self.conditioning_key == 'concat': - xc = torch.cat([x] + c_concat, dim=1) - out = self.diffusion_model(xc, t) - elif self.conditioning_key == 'crossattn': - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(x, t, context=cc) - elif self.conditioning_key == 'hybrid': - xc = torch.cat([x] + c_concat, dim=1) - cc = torch.cat(c_crossattn, 1) - out = self.diffusion_model(xc, t, context=cc) - elif self.conditioning_key == 'adm': - cc = c_crossattn[0] - out = self.diffusion_model(x, t, y=cc) - else: - raise NotImplementedError() - - return out - - -class Layout2ImgDiffusion(LatentDiffusion): - # TODO: move all layout-specific hacks to this class - def __init__(self, cond_stage_key, *args, **kwargs): - assert cond_stage_key == 'coordinates_bbox', 'Layout2ImgDiffusion only for cond_stage_key="coordinates_bbox"' - super().__init__(cond_stage_key=cond_stage_key, *args, **kwargs) - - def log_images(self, batch, N=8, *args, **kwargs): - logs = super().log_images(batch=batch, N=N, *args, **kwargs) - - key = 'train' if self.training else 'validation' - dset = self.trainer.datamodule.datasets[key] - mapper = dset.conditional_builders[self.cond_stage_key] - - bbox_imgs = [] - map_fn = lambda catno: dset.get_textual_label(dset.get_category_id(catno)) - for tknzd_bbox in batch[self.cond_stage_key][:N]: - bboximg = mapper.plot(tknzd_bbox.detach().cpu(), map_fn, (256, 256)) - bbox_imgs.append(bboximg) - - cond_img = torch.stack(bbox_imgs, dim=0) - logs['bbox_image'] = cond_img - return logs diff --git a/spaces/tomaseo2022/Text-a-Voz/README.md b/spaces/tomaseo2022/Text-a-Voz/README.md deleted file mode 100644 index 288655099f2b83d7e4de183efd5a8af8560de4e7..0000000000000000000000000000000000000000 --- a/spaces/tomaseo2022/Text-a-Voz/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Text A Voz -emoji: 📊 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.4.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/tomofi/MMOCR/mmocr/datasets/uniform_concat_dataset.py b/spaces/tomofi/MMOCR/mmocr/datasets/uniform_concat_dataset.py deleted file mode 100644 index 286119ba6bcc7cf160f921e16cb62408cfd95657..0000000000000000000000000000000000000000 --- a/spaces/tomofi/MMOCR/mmocr/datasets/uniform_concat_dataset.py +++ /dev/null @@ -1,67 +0,0 @@ -# Copyright (c) OpenMMLab. All rights reserved. -import copy - -from mmdet.datasets import DATASETS, ConcatDataset, build_dataset - -from mmocr.utils import is_2dlist, is_type_list - - -@DATASETS.register_module() -class UniformConcatDataset(ConcatDataset): - """A wrapper of ConcatDataset which support dataset pipeline assignment and - replacement. - - Args: - datasets (list[dict] | list[list[dict]]): A list of datasets cfgs. - separate_eval (bool): Whether to evaluate the results - separately if it is used as validation dataset. - Defaults to True. 
- pipeline (None | list[dict] | list[list[dict]]): If ``None``, - each dataset in datasets use its own pipeline; - If ``list[dict]``, it will be assigned to the dataset whose - pipeline is None in datasets; - If ``list[list[dict]]``, pipeline of dataset which is None - in datasets will be replaced by the corresponding pipeline - in the list. - force_apply (bool): If True, apply pipeline above to each dataset - even if it have its own pipeline. Default: False. - """ - - def __init__(self, - datasets, - separate_eval=True, - pipeline=None, - force_apply=False, - **kwargs): - new_datasets = [] - if pipeline is not None: - assert isinstance( - pipeline, - list), 'pipeline must be list[dict] or list[list[dict]].' - if is_type_list(pipeline, dict): - self._apply_pipeline(datasets, pipeline, force_apply) - new_datasets = datasets - elif is_2dlist(pipeline): - assert is_2dlist(datasets) - assert len(datasets) == len(pipeline) - for sub_datasets, tmp_pipeline in zip(datasets, pipeline): - self._apply_pipeline(sub_datasets, tmp_pipeline, - force_apply) - new_datasets.extend(sub_datasets) - else: - if is_2dlist(datasets): - for sub_datasets in datasets: - new_datasets.extend(sub_datasets) - else: - new_datasets = datasets - datasets = [build_dataset(c, kwargs) for c in new_datasets] - super().__init__(datasets, separate_eval) - - @staticmethod - def _apply_pipeline(datasets, pipeline, force_apply=False): - from_cfg = all(isinstance(x, dict) for x in datasets) - assert from_cfg, 'datasets should be config dicts' - assert all(isinstance(x, dict) for x in pipeline) - for dataset in datasets: - if dataset['pipeline'] is None or force_apply: - dataset['pipeline'] = copy.deepcopy(pipeline) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/mask_rcnn.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/mask_rcnn.py deleted file mode 100644 index 29ea62d32e31ed4b4a7c5050cdbcd3b4e553b9b4..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/detectors/mask_rcnn.py +++ /dev/null @@ -1,26 +0,0 @@ -from ..builder import DETECTORS -from .two_stage import TwoStageDetector - - -@DETECTORS.register_module() -class MaskRCNN(TwoStageDetector): - """Implementation of `Mask R-CNN `_""" - - def __init__(self, - backbone, - rpn_head, - roi_head, - train_cfg, - test_cfg, - neck=None, - pretrained=None, - init_cfg=None): - super(MaskRCNN, self).__init__( - backbone=backbone, - neck=neck, - rpn_head=rpn_head, - roi_head=roi_head, - train_cfg=train_cfg, - test_cfg=test_cfg, - pretrained=pretrained, - init_cfg=init_cfg) diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/mask_heads/maskiou_head.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/mask_heads/maskiou_head.py deleted file mode 100644 index fc117ff7e86cefab14b52de8f006d4193eb4c964..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/models/roi_heads/mask_heads/maskiou_head.py +++ /dev/null @@ -1,182 +0,0 @@ -import numpy as np -import torch -import torch.nn as nn -from mmcv.cnn import Conv2d, Linear, MaxPool2d -from mmcv.runner import BaseModule, force_fp32 -from torch.nn.modules.utils import _pair - -from mmdet.models.builder import HEADS, build_loss - - -@HEADS.register_module() -class MaskIoUHead(BaseModule): - """Mask IoU Head. - - This head predicts the IoU of predicted masks and corresponding gt masks. 
- """ - - def __init__(self, - num_convs=4, - num_fcs=2, - roi_feat_size=14, - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - num_classes=80, - loss_iou=dict(type='MSELoss', loss_weight=0.5), - init_cfg=[ - dict(type='Kaiming', override=dict(name='convs')), - dict(type='Caffe2Xavier', override=dict(name='fcs')), - dict( - type='Normal', - std=0.01, - override=dict(name='fc_mask_iou')) - ]): - super(MaskIoUHead, self).__init__(init_cfg) - self.in_channels = in_channels - self.conv_out_channels = conv_out_channels - self.fc_out_channels = fc_out_channels - self.num_classes = num_classes - self.fp16_enabled = False - - self.convs = nn.ModuleList() - for i in range(num_convs): - if i == 0: - # concatenation of mask feature and mask prediction - in_channels = self.in_channels + 1 - else: - in_channels = self.conv_out_channels - stride = 2 if i == num_convs - 1 else 1 - self.convs.append( - Conv2d( - in_channels, - self.conv_out_channels, - 3, - stride=stride, - padding=1)) - - roi_feat_size = _pair(roi_feat_size) - pooled_area = (roi_feat_size[0] // 2) * (roi_feat_size[1] // 2) - self.fcs = nn.ModuleList() - for i in range(num_fcs): - in_channels = ( - self.conv_out_channels * - pooled_area if i == 0 else self.fc_out_channels) - self.fcs.append(Linear(in_channels, self.fc_out_channels)) - - self.fc_mask_iou = Linear(self.fc_out_channels, self.num_classes) - self.relu = nn.ReLU() - self.max_pool = MaxPool2d(2, 2) - self.loss_iou = build_loss(loss_iou) - - def forward(self, mask_feat, mask_pred): - mask_pred = mask_pred.sigmoid() - mask_pred_pooled = self.max_pool(mask_pred.unsqueeze(1)) - - x = torch.cat((mask_feat, mask_pred_pooled), 1) - - for conv in self.convs: - x = self.relu(conv(x)) - x = x.flatten(1) - for fc in self.fcs: - x = self.relu(fc(x)) - mask_iou = self.fc_mask_iou(x) - return mask_iou - - @force_fp32(apply_to=('mask_iou_pred', )) - def loss(self, mask_iou_pred, mask_iou_targets): - pos_inds = mask_iou_targets > 0 - if pos_inds.sum() > 0: - loss_mask_iou = self.loss_iou(mask_iou_pred[pos_inds], - mask_iou_targets[pos_inds]) - else: - loss_mask_iou = mask_iou_pred.sum() * 0 - return dict(loss_mask_iou=loss_mask_iou) - - @force_fp32(apply_to=('mask_pred', )) - def get_targets(self, sampling_results, gt_masks, mask_pred, mask_targets, - rcnn_train_cfg): - """Compute target of mask IoU. - - Mask IoU target is the IoU of the predicted mask (inside a bbox) and - the gt mask of corresponding gt mask (the whole instance). - The intersection area is computed inside the bbox, and the gt mask area - is computed with two steps, firstly we compute the gt area inside the - bbox, then divide it by the area ratio of gt area inside the bbox and - the gt area of the whole instance. - - Args: - sampling_results (list[:obj:`SamplingResult`]): sampling results. - gt_masks (BitmapMask | PolygonMask): Gt masks (the whole instance) - of each image, with the same shape of the input image. - mask_pred (Tensor): Predicted masks of each positive proposal, - shape (num_pos, h, w). - mask_targets (Tensor): Gt mask of each positive proposal, - binary map of the shape (num_pos, h, w). - rcnn_train_cfg (dict): Training config for R-CNN part. - - Returns: - Tensor: mask iou target (length == num positive). 
- """ - pos_proposals = [res.pos_bboxes for res in sampling_results] - pos_assigned_gt_inds = [ - res.pos_assigned_gt_inds for res in sampling_results - ] - - # compute the area ratio of gt areas inside the proposals and - # the whole instance - area_ratios = map(self._get_area_ratio, pos_proposals, - pos_assigned_gt_inds, gt_masks) - area_ratios = torch.cat(list(area_ratios)) - assert mask_targets.size(0) == area_ratios.size(0) - - mask_pred = (mask_pred > rcnn_train_cfg.mask_thr_binary).float() - mask_pred_areas = mask_pred.sum((-1, -2)) - - # mask_pred and mask_targets are binary maps - overlap_areas = (mask_pred * mask_targets).sum((-1, -2)) - - # compute the mask area of the whole instance - gt_full_areas = mask_targets.sum((-1, -2)) / (area_ratios + 1e-7) - - mask_iou_targets = overlap_areas / ( - mask_pred_areas + gt_full_areas - overlap_areas) - return mask_iou_targets - - def _get_area_ratio(self, pos_proposals, pos_assigned_gt_inds, gt_masks): - """Compute area ratio of the gt mask inside the proposal and the gt - mask of the corresponding instance.""" - num_pos = pos_proposals.size(0) - if num_pos > 0: - area_ratios = [] - proposals_np = pos_proposals.cpu().numpy() - pos_assigned_gt_inds = pos_assigned_gt_inds.cpu().numpy() - # compute mask areas of gt instances (batch processing for speedup) - gt_instance_mask_area = gt_masks.areas - for i in range(num_pos): - gt_mask = gt_masks[pos_assigned_gt_inds[i]] - - # crop the gt mask inside the proposal - bbox = proposals_np[i, :].astype(np.int32) - gt_mask_in_proposal = gt_mask.crop(bbox) - - ratio = gt_mask_in_proposal.areas[0] / ( - gt_instance_mask_area[pos_assigned_gt_inds[i]] + 1e-7) - area_ratios.append(ratio) - area_ratios = torch.from_numpy(np.stack(area_ratios)).float().to( - pos_proposals.device) - else: - area_ratios = pos_proposals.new_zeros((0, )) - return area_ratios - - @force_fp32(apply_to=('mask_iou_pred', )) - def get_mask_scores(self, mask_iou_pred, det_bboxes, det_labels): - """Get the mask scores. - - mask_score = bbox_score * mask_iou - """ - inds = range(det_labels.size(0)) - mask_scores = mask_iou_pred[inds, det_labels] * det_bboxes[inds, -1] - mask_scores = mask_scores.cpu().numpy() - det_labels = det_labels.cpu().numpy() - return [mask_scores[det_labels == i] for i in range(self.num_classes)] diff --git a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/utils/util_random.py b/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/utils/util_random.py deleted file mode 100644 index e313e9947bb3232a9458878fd219e1594ab93d57..0000000000000000000000000000000000000000 --- a/spaces/tomofi/NDLOCR/src/ndl_layout/mmdetection/mmdet/utils/util_random.py +++ /dev/null @@ -1,33 +0,0 @@ -"""Helpers for random number generators.""" -import numpy as np - - -def ensure_rng(rng=None): - """Coerces input into a random number generator. - - If the input is None, then a global random state is returned. - - If the input is a numeric value, then that is used as a seed to construct a - random state. Otherwise the input is returned as-is. - - Adapted from [1]_. - - Args: - rng (int | numpy.random.RandomState | None): - if None, then defaults to the global rng. Otherwise this can be an - integer or a RandomState class - Returns: - (numpy.random.RandomState) : rng - - a numpy random number generator - - References: - .. 
[1] https://gitlab.kitware.com/computer-vision/kwarray/blob/master/kwarray/util_random.py#L270 # noqa: E501 - """ - - if rng is None: - rng = np.random.mtrand._rand - elif isinstance(rng, int): - rng = np.random.RandomState(rng) - else: - rng = rng - return rng diff --git a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/modules/image_degradation/bsrgan.py b/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/modules/image_degradation/bsrgan.py deleted file mode 100644 index 32ef56169978e550090261cddbcf5eb611a6173b..0000000000000000000000000000000000000000 --- a/spaces/tornadoslims/instruct-pix2pix/stable_diffusion/ldm/modules/image_degradation/bsrgan.py +++ /dev/null @@ -1,730 +0,0 @@ -# -*- coding: utf-8 -*- -""" -# -------------------------------------------- -# Super-Resolution -# -------------------------------------------- -# -# Kai Zhang (cskaizhang@gmail.com) -# https://github.com/cszn -# From 2019/03--2021/08 -# -------------------------------------------- -""" - -import numpy as np -import cv2 -import torch - -from functools import partial -import random -from scipy import ndimage -import scipy -import scipy.stats as ss -from scipy.interpolate import interp2d -from scipy.linalg import orth -import albumentations - -import ldm.modules.image_degradation.utils_image as util - - -def modcrop_np(img, sf): - ''' - Args: - img: numpy image, WxH or WxHxC - sf: scale factor - Return: - cropped image - ''' - w, h = img.shape[:2] - im = np.copy(img) - return im[:w - w % sf, :h - h % sf, ...] - - -""" -# -------------------------------------------- -# anisotropic Gaussian kernels -# -------------------------------------------- -""" - - -def analytic_kernel(k): - """Calculate the X4 kernel from the X2 kernel (for proof see appendix in paper)""" - k_size = k.shape[0] - # Calculate the big kernels size - big_k = np.zeros((3 * k_size - 2, 3 * k_size - 2)) - # Loop over the small kernel to fill the big one - for r in range(k_size): - for c in range(k_size): - big_k[2 * r:2 * r + k_size, 2 * c:2 * c + k_size] += k[r, c] * k - # Crop the edges of the big kernel to ignore very small values and increase run time of SR - crop = k_size // 2 - cropped_big_k = big_k[crop:-crop, crop:-crop] - # Normalize to 1 - return cropped_big_k / cropped_big_k.sum() - - -def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6): - """ generate an anisotropic Gaussian kernel - Args: - ksize : e.g., 15, kernel size - theta : [0, pi], rotation angle range - l1 : [0.1,50], scaling of eigenvalues - l2 : [0.1,l1], scaling of eigenvalues - If l1 = l2, will get an isotropic Gaussian kernel. 
- Returns: - k : kernel - """ - - v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.])) - V = np.array([[v[0], v[1]], [v[1], -v[0]]]) - D = np.array([[l1, 0], [0, l2]]) - Sigma = np.dot(np.dot(V, D), np.linalg.inv(V)) - k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize) - - return k - - -def gm_blur_kernel(mean, cov, size=15): - center = size / 2.0 + 0.5 - k = np.zeros([size, size]) - for y in range(size): - for x in range(size): - cy = y - center + 1 - cx = x - center + 1 - k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov) - - k = k / np.sum(k) - return k - - -def shift_pixel(x, sf, upper_left=True): - """shift pixel for super-resolution with different scale factors - Args: - x: WxHxC or WxH - sf: scale factor - upper_left: shift direction - """ - h, w = x.shape[:2] - shift = (sf - 1) * 0.5 - xv, yv = np.arange(0, w, 1.0), np.arange(0, h, 1.0) - if upper_left: - x1 = xv + shift - y1 = yv + shift - else: - x1 = xv - shift - y1 = yv - shift - - x1 = np.clip(x1, 0, w - 1) - y1 = np.clip(y1, 0, h - 1) - - if x.ndim == 2: - x = interp2d(xv, yv, x)(x1, y1) - if x.ndim == 3: - for i in range(x.shape[-1]): - x[:, :, i] = interp2d(xv, yv, x[:, :, i])(x1, y1) - - return x - - -def blur(x, k): - ''' - x: image, NxcxHxW - k: kernel, Nx1xhxw - ''' - n, c = x.shape[:2] - p1, p2 = (k.shape[-2] - 1) // 2, (k.shape[-1] - 1) // 2 - x = torch.nn.functional.pad(x, pad=(p1, p2, p1, p2), mode='replicate') - k = k.repeat(1, c, 1, 1) - k = k.view(-1, 1, k.shape[2], k.shape[3]) - x = x.view(1, -1, x.shape[2], x.shape[3]) - x = torch.nn.functional.conv2d(x, k, bias=None, stride=1, padding=0, groups=n * c) - x = x.view(n, c, x.shape[2], x.shape[3]) - - return x - - -def gen_kernel(k_size=np.array([15, 15]), scale_factor=np.array([4, 4]), min_var=0.6, max_var=10., noise_level=0): - """" - # modified version of https://github.com/assafshocher/BlindSR_dataset_generator - # Kai Zhang - # min_var = 0.175 * sf # variance of the gaussian kernel will be sampled between min_var and max_var - # max_var = 2.5 * sf - """ - # Set random eigen-vals (lambdas) and angle (theta) for COV matrix - lambda_1 = min_var + np.random.rand() * (max_var - min_var) - lambda_2 = min_var + np.random.rand() * (max_var - min_var) - theta = np.random.rand() * np.pi # random theta - noise = -noise_level + np.random.rand(*k_size) * noise_level * 2 - - # Set COV matrix using Lambdas and Theta - LAMBDA = np.diag([lambda_1, lambda_2]) - Q = np.array([[np.cos(theta), -np.sin(theta)], - [np.sin(theta), np.cos(theta)]]) - SIGMA = Q @ LAMBDA @ Q.T - INV_SIGMA = np.linalg.inv(SIGMA)[None, None, :, :] - - # Set expectation position (shifting kernel for aligned image) - MU = k_size // 2 - 0.5 * (scale_factor - 1) # - 0.5 * (scale_factor - k_size % 2) - MU = MU[None, None, :, None] - - # Create meshgrid for Gaussian - [X, Y] = np.meshgrid(range(k_size[0]), range(k_size[1])) - Z = np.stack([X, Y], 2)[:, :, :, None] - - # Calcualte Gaussian for every pixel of the kernel - ZZ = Z - MU - ZZ_t = ZZ.transpose(0, 1, 3, 2) - raw_kernel = np.exp(-0.5 * np.squeeze(ZZ_t @ INV_SIGMA @ ZZ)) * (1 + noise) - - # shift the kernel so it will be centered - # raw_kernel_centered = kernel_shift(raw_kernel, scale_factor) - - # Normalize the kernel and return - # kernel = raw_kernel_centered / np.sum(raw_kernel_centered) - kernel = raw_kernel / np.sum(raw_kernel) - return kernel - - -def fspecial_gaussian(hsize, sigma): - hsize = [hsize, hsize] - siz = [(hsize[0] - 1.0) / 2.0, (hsize[1] - 1.0) / 2.0] - std = 
sigma - [x, y] = np.meshgrid(np.arange(-siz[1], siz[1] + 1), np.arange(-siz[0], siz[0] + 1)) - arg = -(x * x + y * y) / (2 * std * std) - h = np.exp(arg) - h[h < scipy.finfo(float).eps * h.max()] = 0 - sumh = h.sum() - if sumh != 0: - h = h / sumh - return h - - -def fspecial_laplacian(alpha): - alpha = max([0, min([alpha, 1])]) - h1 = alpha / (alpha + 1) - h2 = (1 - alpha) / (alpha + 1) - h = [[h1, h2, h1], [h2, -4 / (alpha + 1), h2], [h1, h2, h1]] - h = np.array(h) - return h - - -def fspecial(filter_type, *args, **kwargs): - ''' - python code from: - https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py - ''' - if filter_type == 'gaussian': - return fspecial_gaussian(*args, **kwargs) - if filter_type == 'laplacian': - return fspecial_laplacian(*args, **kwargs) - - -""" -# -------------------------------------------- -# degradation models -# -------------------------------------------- -""" - - -def bicubic_degradation(x, sf=3): - ''' - Args: - x: HxWxC image, [0, 1] - sf: down-scale factor - Return: - bicubicly downsampled LR image - ''' - x = util.imresize_np(x, scale=1 / sf) - return x - - -def srmd_degradation(x, k, sf=3): - ''' blur + bicubic downsampling - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2018learning, - title={Learning a single convolutional super-resolution network for multiple degradations}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={3262--3271}, - year={2018} - } - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') # 'nearest' | 'mirror' - x = bicubic_degradation(x, sf=sf) - return x - - -def dpsr_degradation(x, k, sf=3): - ''' bicubic downsampling + blur - Args: - x: HxWxC image, [0, 1] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - Reference: - @inproceedings{zhang2019deep, - title={Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels}, - author={Zhang, Kai and Zuo, Wangmeng and Zhang, Lei}, - booktitle={IEEE Conference on Computer Vision and Pattern Recognition}, - pages={1671--1681}, - year={2019} - } - ''' - x = bicubic_degradation(x, sf=sf) - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - return x - - -def classical_degradation(x, k, sf=3): - ''' blur + downsampling - Args: - x: HxWxC image, [0, 1]/[0, 255] - k: hxw, double - sf: down-scale factor - Return: - downsampled LR image - ''' - x = ndimage.filters.convolve(x, np.expand_dims(k, axis=2), mode='wrap') - # x = filters.correlate(x, np.expand_dims(np.flip(k), axis=2)) - st = 0 - return x[st::sf, st::sf, ...] - - -def add_sharpening(img, weight=0.5, radius=50, threshold=10): - """USM sharpening. borrowed from real-ESRGAN - Input image: I; Blurry image: B. - 1. K = I + weight * (I - B) - 2. Mask = 1 if abs(I - B) > threshold, else: 0 - 3. Blur mask: - 4. Out = Mask * K + (1 - Mask) * I - Args: - img (Numpy array): Input image, HWC, BGR; float32, [0, 1]. - weight (float): Sharp weight. Default: 1. - radius (float): Kernel size of Gaussian blur. Default: 50. 
- threshold (int): - """ - if radius % 2 == 0: - radius += 1 - blur = cv2.GaussianBlur(img, (radius, radius), 0) - residual = img - blur - mask = np.abs(residual) * 255 > threshold - mask = mask.astype('float32') - soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0) - - K = img + weight * residual - K = np.clip(K, 0, 1) - return soft_mask * K + (1 - soft_mask) * img - - -def add_blur(img, sf=4): - wd2 = 4.0 + sf - wd = 2.0 + 0.2 * sf - if random.random() < 0.5: - l1 = wd2 * random.random() - l2 = wd2 * random.random() - k = anisotropic_Gaussian(ksize=2 * random.randint(2, 11) + 3, theta=random.random() * np.pi, l1=l1, l2=l2) - else: - k = fspecial('gaussian', 2 * random.randint(2, 11) + 3, wd * random.random()) - img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror') - - return img - - -def add_resize(img, sf=4): - rnum = np.random.rand() - if rnum > 0.8: # up - sf1 = random.uniform(1, 2) - elif rnum < 0.7: # down - sf1 = random.uniform(0.5 / sf, 1) - else: - sf1 = 1.0 - img = cv2.resize(img, (int(sf1 * img.shape[1]), int(sf1 * img.shape[0])), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - return img - - -# def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): -# noise_level = random.randint(noise_level1, noise_level2) -# rnum = np.random.rand() -# if rnum > 0.6: # add color Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) -# elif rnum < 0.4: # add grayscale Gaussian noise -# img += np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) -# else: # add noise -# L = noise_level2 / 255. -# D = np.diag(np.random.rand(3)) -# U = orth(np.random.rand(3, 3)) -# conv = np.dot(np.dot(np.transpose(U), D), U) -# img += np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) -# img = np.clip(img, 0.0, 1.0) -# return img - -def add_Gaussian_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - rnum = np.random.rand() - if rnum > 0.6: # add color Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: # add grayscale Gaussian noise - img = img + np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: # add noise - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img = img + np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_speckle_noise(img, noise_level1=2, noise_level2=25): - noise_level = random.randint(noise_level1, noise_level2) - img = np.clip(img, 0.0, 1.0) - rnum = random.random() - if rnum > 0.6: - img += img * np.random.normal(0, noise_level / 255.0, img.shape).astype(np.float32) - elif rnum < 0.4: - img += img * np.random.normal(0, noise_level / 255.0, (*img.shape[:2], 1)).astype(np.float32) - else: - L = noise_level2 / 255. - D = np.diag(np.random.rand(3)) - U = orth(np.random.rand(3, 3)) - conv = np.dot(np.dot(np.transpose(U), D), U) - img += img * np.random.multivariate_normal([0, 0, 0], np.abs(L ** 2 * conv), img.shape[:2]).astype(np.float32) - img = np.clip(img, 0.0, 1.0) - return img - - -def add_Poisson_noise(img): - img = np.clip((img * 255.0).round(), 0, 255) / 255. 
- vals = 10 ** (2 * random.random() + 2.0) # [2, 4] - if random.random() < 0.5: - img = np.random.poisson(img * vals).astype(np.float32) / vals - else: - img_gray = np.dot(img[..., :3], [0.299, 0.587, 0.114]) - img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255. - noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray - img += noise_gray[:, :, np.newaxis] - img = np.clip(img, 0.0, 1.0) - return img - - -def add_JPEG_noise(img): - quality_factor = random.randint(30, 95) - img = cv2.cvtColor(util.single2uint(img), cv2.COLOR_RGB2BGR) - result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor]) - img = cv2.imdecode(encimg, 1) - img = cv2.cvtColor(util.uint2single(img), cv2.COLOR_BGR2RGB) - return img - - -def random_crop(lq, hq, sf=4, lq_patchsize=64): - h, w = lq.shape[:2] - rnd_h = random.randint(0, h - lq_patchsize) - rnd_w = random.randint(0, w - lq_patchsize) - lq = lq[rnd_h:rnd_h + lq_patchsize, rnd_w:rnd_w + lq_patchsize, :] - - rnd_h_H, rnd_w_H = int(rnd_h * sf), int(rnd_w * sf) - hq = hq[rnd_h_H:rnd_h_H + lq_patchsize * sf, rnd_w_H:rnd_w_H + lq_patchsize * sf, :] - return lq, hq - - -def degradation_bsrgan(img, sf=4, lq_patchsize=72, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - hq = img.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - img = cv2.resize(img, (int(1 / 2 * img.shape[1]), int(1 / 2 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - img = util.imresize_np(img, 1 / 2, True) - img = np.clip(img, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - img = add_blur(img, sf=sf) - - elif i == 1: - img = add_blur(img, sf=sf) - - elif i == 2: - a, b = img.shape[1], img.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - img = cv2.resize(img, (int(1 / sf1 * img.shape[1]), int(1 / sf1 * img.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - img = ndimage.filters.convolve(img, np.expand_dims(k_shifted, axis=2), mode='mirror') - img = img[0::sf, 0::sf, ...] 
# nearest downsampling - img = np.clip(img, 0.0, 1.0) - - elif i == 3: - # downsample3 - img = cv2.resize(img, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - img = np.clip(img, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - img = add_JPEG_noise(img) - - elif i == 6: - # add processed camera sensor noise - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf_ori, lq_patchsize) - - return img, hq - - -# todo no isp_model? -def degradation_bsrgan_variant(image, sf=4, isp_model=None): - """ - This is the degradation model of BSRGAN from the paper - "Designing a Practical Degradation Model for Deep Blind Image Super-Resolution" - ---------- - sf: scale factor - isp_model: camera ISP model - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - image = util.uint2single(image) - isp_prob, jpeg_prob, scale2_prob = 0.25, 0.9, 0.25 - sf_ori = sf - - h1, w1 = image.shape[:2] - image = image.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = image.shape[:2] - - hq = image.copy() - - if sf == 4 and random.random() < scale2_prob: # downsample1 - if np.random.rand() < 0.5: - image = cv2.resize(image, (int(1 / 2 * image.shape[1]), int(1 / 2 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - image = util.imresize_np(image, 1 / 2, True) - image = np.clip(image, 0.0, 1.0) - sf = 2 - - shuffle_order = random.sample(range(7), 7) - idx1, idx2 = shuffle_order.index(2), shuffle_order.index(3) - if idx1 > idx2: # keep downsample3 last - shuffle_order[idx1], shuffle_order[idx2] = shuffle_order[idx2], shuffle_order[idx1] - - for i in shuffle_order: - - if i == 0: - image = add_blur(image, sf=sf) - - elif i == 1: - image = add_blur(image, sf=sf) - - elif i == 2: - a, b = image.shape[1], image.shape[0] - # downsample2 - if random.random() < 0.75: - sf1 = random.uniform(1, 2 * sf) - image = cv2.resize(image, (int(1 / sf1 * image.shape[1]), int(1 / sf1 * image.shape[0])), - interpolation=random.choice([1, 2, 3])) - else: - k = fspecial('gaussian', 25, random.uniform(0.1, 0.6 * sf)) - k_shifted = shift_pixel(k, sf) - k_shifted = k_shifted / k_shifted.sum() # blur with shifted kernel - image = ndimage.filters.convolve(image, np.expand_dims(k_shifted, axis=2), mode='mirror') - image = image[0::sf, 0::sf, ...] 
# nearest downsampling - image = np.clip(image, 0.0, 1.0) - - elif i == 3: - # downsample3 - image = cv2.resize(image, (int(1 / sf * a), int(1 / sf * b)), interpolation=random.choice([1, 2, 3])) - image = np.clip(image, 0.0, 1.0) - - elif i == 4: - # add Gaussian noise - image = add_Gaussian_noise(image, noise_level1=2, noise_level2=25) - - elif i == 5: - # add JPEG noise - if random.random() < jpeg_prob: - image = add_JPEG_noise(image) - - # elif i == 6: - # # add processed camera sensor noise - # if random.random() < isp_prob and isp_model is not None: - # with torch.no_grad(): - # img, hq = isp_model.forward(img.copy(), hq) - - # add final JPEG compression noise - image = add_JPEG_noise(image) - image = util.single2uint(image) - example = {"image":image} - return example - - -# TODO incase there is a pickle error one needs to replace a += x with a = a + x in add_speckle_noise etc... -def degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.5, use_sharp=True, lq_patchsize=64, isp_model=None): - """ - This is an extended degradation model by combining - the degradation models of BSRGAN and Real-ESRGAN - ---------- - img: HXWXC, [0, 1], its size should be large than (lq_patchsizexsf)x(lq_patchsizexsf) - sf: scale factor - use_shuffle: the degradation shuffle - use_sharp: sharpening the img - Returns - ------- - img: low-quality patch, size: lq_patchsizeXlq_patchsizeXC, range: [0, 1] - hq: corresponding high-quality patch, size: (lq_patchsizexsf)X(lq_patchsizexsf)XC, range: [0, 1] - """ - - h1, w1 = img.shape[:2] - img = img.copy()[:w1 - w1 % sf, :h1 - h1 % sf, ...] # mod crop - h, w = img.shape[:2] - - if h < lq_patchsize * sf or w < lq_patchsize * sf: - raise ValueError(f'img size ({h1}X{w1}) is too small!') - - if use_sharp: - img = add_sharpening(img) - hq = img.copy() - - if random.random() < shuffle_prob: - shuffle_order = random.sample(range(13), 13) - else: - shuffle_order = list(range(13)) - # local shuffle for noise, JPEG is always the last one - shuffle_order[2:6] = random.sample(shuffle_order[2:6], len(range(2, 6))) - shuffle_order[9:13] = random.sample(shuffle_order[9:13], len(range(9, 13))) - - poisson_prob, speckle_prob, isp_prob = 0.1, 0.1, 0.1 - - for i in shuffle_order: - if i == 0: - img = add_blur(img, sf=sf) - elif i == 1: - img = add_resize(img, sf=sf) - elif i == 2: - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - elif i == 3: - if random.random() < poisson_prob: - img = add_Poisson_noise(img) - elif i == 4: - if random.random() < speckle_prob: - img = add_speckle_noise(img) - elif i == 5: - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - elif i == 6: - img = add_JPEG_noise(img) - elif i == 7: - img = add_blur(img, sf=sf) - elif i == 8: - img = add_resize(img, sf=sf) - elif i == 9: - img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25) - elif i == 10: - if random.random() < poisson_prob: - img = add_Poisson_noise(img) - elif i == 11: - if random.random() < speckle_prob: - img = add_speckle_noise(img) - elif i == 12: - if random.random() < isp_prob and isp_model is not None: - with torch.no_grad(): - img, hq = isp_model.forward(img.copy(), hq) - else: - print('check the shuffle!') - - # resize to desired size - img = cv2.resize(img, (int(1 / sf * hq.shape[1]), int(1 / sf * hq.shape[0])), - interpolation=random.choice([1, 2, 3])) - - # add final JPEG compression noise - img = add_JPEG_noise(img) - - # random crop - img, hq = random_crop(img, hq, sf, lq_patchsize) - 
- return img, hq - - -if __name__ == '__main__': - print("hey") - img = util.imread_uint('utils/test.png', 3) - print(img) - img = util.uint2single(img) - print(img) - img = img[:448, :448] - h = img.shape[0] // 4 - print("resizing to", h) - sf = 4 - deg_fn = partial(degradation_bsrgan_variant, sf=sf) - for i in range(20): - print(i) - img_lq = deg_fn(img) - print(img_lq) - img_lq_bicubic = albumentations.SmallestMaxSize(max_size=h, interpolation=cv2.INTER_CUBIC)(image=img)["image"] - print(img_lq.shape) - print("bicubic", img_lq_bicubic.shape) - print(img_hq.shape) - lq_nearest = cv2.resize(util.single2uint(img_lq), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - lq_bicubic_nearest = cv2.resize(util.single2uint(img_lq_bicubic), (int(sf * img_lq.shape[1]), int(sf * img_lq.shape[0])), - interpolation=0) - img_concat = np.concatenate([lq_bicubic_nearest, lq_nearest, util.single2uint(img_hq)], axis=1) - util.imsave(img_concat, str(i) + '.png') - - diff --git a/spaces/tracinginsights/F1-analysis/pages/Pit_Stops.py b/spaces/tracinginsights/F1-analysis/pages/Pit_Stops.py deleted file mode 100644 index ad98a4b7dd8346fac16a31e0e1fa493a501c051a..0000000000000000000000000000000000000000 --- a/spaces/tracinginsights/F1-analysis/pages/Pit_Stops.py +++ /dev/null @@ -1,35 +0,0 @@ -import streamlit as st -from repo_directory import PitStops -from repo_directory import button -import datetime - -YEAR_SELECTED = st.selectbox( - 'Select Year', - (2023, 2022, 2021, 2020, 2019, 2018)) - - - -season_events, events_list = PitStops.get_season_events(YEAR_SELECTED) - -RACE_SELECTED = st.selectbox( - 'Select Race', - events_list) - -event_id = PitStops.get_event_id(season_events, RACE_SELECTED) -df = PitStops.get_pitstops(event_id) #dhl pitstops - -race_names_df, pit_stops_df, drivers_df = PitStops.load_data() - -event_date = PitStops.get_event_date(season_events, RACE_SELECTED) - -ergast_pitstops, grandprix = PitStops.get_pitstops_by_date(pit_stops_df,drivers_df,race_names_df,event_date) - -df_agg = PitStops.combine_dfs(ergast_pitstops, df) - -PitStops.plot_event_ratings(df_agg, grandprix) - -PitStops.plot_event_pitstops(df, RACE_SELECTED) - -PitStops.plot_full_season_median(YEAR_SELECTED) -PitStops.plot_event_ratings(df_agg, grandprix) - diff --git a/spaces/trttung1610/musicgen/scripts/templates/base.html b/spaces/trttung1610/musicgen/scripts/templates/base.html deleted file mode 100644 index f74668c19ecb83090a8a2d82c026bf417190ec6d..0000000000000000000000000000000000000000 --- a/spaces/trttung1610/musicgen/scripts/templates/base.html +++ /dev/null @@ -1,16 +0,0 @@ - - - - {% block head %} - - - AudioCraft — MOS - {% endblock %} - - -
              -

              AudioCraft — MOS

              - {% block content %}{% endblock %} -
              - - diff --git a/spaces/vaibhavarduino/better-autogpt/style.css b/spaces/vaibhavarduino/better-autogpt/style.css deleted file mode 100644 index 114adf441e9032febb46bc056b2a8bb651075f0d..0000000000000000000000000000000000000000 --- a/spaces/vaibhavarduino/better-autogpt/style.css +++ /dev/null @@ -1,28 +0,0 @@ -body { - padding: 2rem; - font-family: -apple-system, BlinkMacSystemFont, "Arial", sans-serif; -} - -h1 { - font-size: 16px; - margin-top: 0; -} - -p { - color: rgb(107, 114, 128); - font-size: 15px; - margin-bottom: 10px; - margin-top: 5px; -} - -.card { - max-width: 620px; - margin: 0 auto; - padding: 16px; - border: 1px solid lightgray; - border-radius: 16px; -} - -.card p:last-child { - margin-bottom: 0; -} diff --git a/spaces/vincentclaes/emoji-predictor/Makefile b/spaces/vincentclaes/emoji-predictor/Makefile deleted file mode 100644 index 30500ef74a38a2b9f4bff78bfc53f1f5ccf70b48..0000000000000000000000000000000000000000 --- a/spaces/vincentclaes/emoji-predictor/Makefile +++ /dev/null @@ -1,3 +0,0 @@ -install: - poetry install - poetry run pip list --format=freeze > requirements.txt \ No newline at end of file diff --git a/spaces/vonbarnekowa/stable-diffusion/ldm/modules/midas/midas/midas_net.py b/spaces/vonbarnekowa/stable-diffusion/ldm/modules/midas/midas/midas_net.py deleted file mode 100644 index 8a954977800b0a0f48807e80fa63041910e33c1f..0000000000000000000000000000000000000000 --- a/spaces/vonbarnekowa/stable-diffusion/ldm/modules/midas/midas/midas_net.py +++ /dev/null @@ -1,76 +0,0 @@ -"""MidashNet: Network for monocular depth estimation trained by mixing several datasets. -This file contains code that is adapted from -https://github.com/thomasjpfan/pytorch_refinenet/blob/master/pytorch_refinenet/refinenet/refinenet_4cascade.py -""" -import torch -import torch.nn as nn - -from .base_model import BaseModel -from .blocks import FeatureFusionBlock, Interpolate, _make_encoder - - -class MidasNet(BaseModel): - """Network for monocular depth estimation. - """ - - def __init__(self, path=None, features=256, non_negative=True): - """Init. - - Args: - path (str, optional): Path to saved model. Defaults to None. - features (int, optional): Number of features. Defaults to 256. - backbone (str, optional): Backbone network for encoder. Defaults to resnet50 - """ - print("Loading weights: ", path) - - super(MidasNet, self).__init__() - - use_pretrained = False if path is None else True - - self.pretrained, self.scratch = _make_encoder(backbone="resnext101_wsl", features=features, use_pretrained=use_pretrained) - - self.scratch.refinenet4 = FeatureFusionBlock(features) - self.scratch.refinenet3 = FeatureFusionBlock(features) - self.scratch.refinenet2 = FeatureFusionBlock(features) - self.scratch.refinenet1 = FeatureFusionBlock(features) - - self.scratch.output_conv = nn.Sequential( - nn.Conv2d(features, 128, kernel_size=3, stride=1, padding=1), - Interpolate(scale_factor=2, mode="bilinear"), - nn.Conv2d(128, 32, kernel_size=3, stride=1, padding=1), - nn.ReLU(True), - nn.Conv2d(32, 1, kernel_size=1, stride=1, padding=0), - nn.ReLU(True) if non_negative else nn.Identity(), - ) - - if path: - self.load(path) - - def forward(self, x): - """Forward pass. 
- - Args: - x (tensor): input data (image) - - Returns: - tensor: depth - """ - - layer_1 = self.pretrained.layer1(x) - layer_2 = self.pretrained.layer2(layer_1) - layer_3 = self.pretrained.layer3(layer_2) - layer_4 = self.pretrained.layer4(layer_3) - - layer_1_rn = self.scratch.layer1_rn(layer_1) - layer_2_rn = self.scratch.layer2_rn(layer_2) - layer_3_rn = self.scratch.layer3_rn(layer_3) - layer_4_rn = self.scratch.layer4_rn(layer_4) - - path_4 = self.scratch.refinenet4(layer_4_rn) - path_3 = self.scratch.refinenet3(path_4, layer_3_rn) - path_2 = self.scratch.refinenet2(path_3, layer_2_rn) - path_1 = self.scratch.refinenet1(path_2, layer_1_rn) - - out = self.scratch.output_conv(path_1) - - return torch.squeeze(out, dim=1) diff --git a/spaces/wadhwani-ai/KKMS-Smart-Search-Demo/src/data_loader.py b/spaces/wadhwani-ai/KKMS-Smart-Search-Demo/src/data_loader.py deleted file mode 100644 index 6a5ec34825d5c1d1ac6842f3e6a4d655f8385c8d..0000000000000000000000000000000000000000 --- a/spaces/wadhwani-ai/KKMS-Smart-Search-Demo/src/data_loader.py +++ /dev/null @@ -1,230 +0,0 @@ -import os -import re -import pandas as pd -from pathlib import Path -import glob - -from llama_index import GPTVectorStoreIndex, download_loader, SimpleDirectoryReader, SimpleWebPageReader -from langchain.document_loaders import PyPDFLoader, TextLoader -from langchain.agents import initialize_agent, Tool -from langchain.llms import OpenAI -from langchain.chains.conversation.memory import ConversationBufferMemory -from langchain.docstore.document import Document - -import src.utils as utils - -import logging -logging.basicConfig( - format="%(asctime)s %(levelname)s [%(name)s] %(message)s", level=logging.INFO, datefmt="%Y-%m-%d %H:%M:%S" -) -logger = logging.getLogger(__name__) - -import warnings -warnings.filterwarnings('ignore') - - - -class DATA_LOADER: - def __init__(self): - # Instantiate UTILS class object - self.utils_obj = utils.UTILS() - - - def load_documents_from_urls(self, urls=[], doc_type='urls'): - url_documents = self.load_document(doc_type=doc_type, urls=urls) - return url_documents - - - def load_documents_from_pdf(self, doc_filepath='', urls=[], doc_type='pdf'): - if doc_type == 'pdf': - pdf_documents = self.load_document(doc_type=doc_type, doc_filepath=doc_filepath) - elif doc_type == 'online_pdf': - pdf_documents = self.load_document(doc_type=doc_type, urls=urls) - return pdf_documents - - - def load_documents_from_directory(self, doc_filepath='', doc_type='directory'): - doc_documents = self.load_document(doc_type=doc_type, doc_filepath=doc_filepath) - return doc_documents - - - def load_documents_from_text(self, doc_filepath='', doc_type='textfile'): - text_documents = self.load_document(doc_type=doc_type, doc_filepath=doc_filepath) - return text_documents - - - def pdf_loader(self, filepath): - loader = PyPDFLoader(filepath) - return loader.load_and_split() - - - def text_loader(self, filepath): - loader = TextLoader(filepath) - return loader.load() - - - def load_document(self, - doc_type='pdf', - doc_filepath='', - urls=[] - ): - logger.info(f'Loading {doc_type} in raw format from: {doc_filepath}') - - documents = [] - - # Validation checks - if doc_type in ['directory', 'pdf', 'textfile']: - if not os.path.exists(doc_filepath): - logger.warning(f"{doc_filepath} does not exist, nothing can be loaded!") - return documents - - elif doc_type in ['online_pdf', 'urls']: - if len(urls) == 0: - logger.warning(f"URLs list empty, nothing can be loaded!") - return documents - - - ######### Load documents 
######### - # Load PDF - if doc_type == 'pdf': - # Load multiple PDFs from directory - if os.path.isdir(doc_filepath): - pdfs = glob.glob(f"{doc_filepath}/*.pdf") - logger.info(f'Total PDF files to load: {len(pdfs)}') - for pdf in pdfs: - documents.extend(self.pdf_loader(pdf)) - - # Loading from a single PDF file - elif os.path.isfile(doc_filepath) and doc_filepath.endswith('.pdf'): - documents.extend(self.pdf_loader(doc_filepath)) - - # Load PDFs from online (urls). Can read multiple PDFs from multiple URLs in one-shot - elif doc_type == 'online_pdf': - logger.info(f'URLs to load Online PDFs are from: {urls}') - valid_urls = self.utils_obj.validate_url_format( - urls=urls, - url_type=doc_type - ) - for url in valid_urls: - # Load and split PDF pages per document - documents.extend(self.pdf_loader(url)) - - # Load data from URLs (can load data from multiple URLs) - elif doc_type == 'urls': - logger.info(f'URLs to load data from are: {urls}') - valid_urls = self.utils_obj.validate_url_format( - urls=urls, - url_type=doc_type - ) - # Load data from URLs - docs = SimpleWebPageReader(html_to_text=True).load_data(valid_urls) - docs = [Document(page_content=doc.text) for doc in docs] - documents.extend(docs) - - # Load data from text file(s) - elif doc_type == 'textfile': - # Load multiple text files from directory - if os.path.isdir(doc_filepath): - text_files = glob.glob(f"{doc_filepath}/*.txt") - logger.info(f'Total text files to load: {len(text_files)}') - for tf in text_files: - documents.extend(self.text_loader(tf)) - - # Loading from a single text file - elif os.path.isfile(doc_filepath) and doc_filepath.endswith('.txt'): - documents.extend(self.text_loader(doc_filepath)) - - # Load data from files on the local directory (files may be of type .pdf, .txt, .doc, etc.) 
- elif doc_type == 'directory': - # Load multiple PDFs from directory - if os.path.isdir(doc_filepath): - documents = SimpleDirectoryReader( - input_dir=doc_filepath - ).load_data() - - # Loading from a file - elif os.path.isfile(doc_filepath): - documents.extend(SimpleDirectoryReader( - input_files=[doc_filepath] - ).load_data()) - - # Load data from URLs in Knowledge Base format - elif doc_type == 'url-kb': - KnowledgeBaseWebReader = download_loader("KnowledgeBaseWebReader") - loader = KnowledgeBaseWebReader() - for url in urls: - doc = loader.load_data( - root_url=url, - link_selectors=['.article-list a', '.article-list a'], - article_path='/articles', - body_selector='.article-body', - title_selector='.article-title', - subtitle_selector='.article-subtitle', - ) - documents.extend(doc) - - # Load data from URLs and create an agent chain using ChatGPT - elif doc_type == 'url-chatgpt': - BeautifulSoupWebReader = download_loader("BeautifulSoupWebReader") - loader = BeautifulSoupWebReader() - # Load data from URLs - documents = loader.load_data(urls=urls) - # Build the Vector database - index = GPTVectorStoreIndex(documents) - tools = [ - Tool( - name="Website Index", - func=lambda q: index.query(q), - description=f"Useful when you want answer questions about the text retrieved from websites.", - ), - ] - - # Call ChatGPT API - llm = OpenAI(temperature=0) # Keep temperature=0 to search from the given urls only - memory = ConversationBufferMemory(memory_key="chat_history") - agent_chain = initialize_agent( - tools, llm, agent="zero-shot-react-description", memory=memory - ) - - output = agent_chain.run(input="What language is on this website?") - - - # Clean documents - documents = self.clean_documents(documents) - logger.info(f'{doc_type} in raw format from: {doc_filepath} loaded successfully!') - return documents - - - def clean_documents( - self, - documents - ): - cleaned_documents = [] - for document in documents: - if hasattr(document, 'page_content'): - document.page_content = self.utils_obj.replace_newlines_and_spaces(document.page_content) - elif hasattr(document, 'text'): - document.text = self.utils_obj.replace_newlines_and_spaces(document.text) - else: - document = self.utils_obj.replace_newlines_and_spaces(document) - cleaned_documents.append(document) - return cleaned_documents - - - def load_external_links_used_by_FTAs(self, - sheet_filepath='./data/urls_used_by_ftas/external_links_used_by_FTAs.xlsx' - ): - xls = pd.ExcelFile(sheet_filepath) - df = pd.DataFrame(columns=['S.No.', 'Link used for', 'Link type', 'Link']) - for sheet_name in xls.sheet_names: - sheet = pd.read_excel(xls, sheet_name) - if sheet.shape[0] > 0: - df = pd.concat([df, sheet]) - else: - logger.info(f'{sheet_name} has no content.') - - df = df[['Link used for', 'Link type', 'Link']] - # Clean df - df = self.utils_obj.clean_df(df) - logger.info(f'Total links available across all cities: {df.shape[0]}') - return df diff --git a/spaces/wallezen/so-vits-svc/data_utils.py b/spaces/wallezen/so-vits-svc/data_utils.py deleted file mode 100644 index 7c76fd1c3a45b8304d916161718c7763874f3e35..0000000000000000000000000000000000000000 --- a/spaces/wallezen/so-vits-svc/data_utils.py +++ /dev/null @@ -1,155 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data - -import modules.commons as commons -import utils -from modules.mel_processing import spectrogram_torch, spec_to_mel_torch -from utils import load_wav_to_torch, load_filepaths_and_text - -# import h5py - - 
-"""Multi speaker version""" - - -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - - def __init__(self, audiopaths, hparams, all_in_mem: bool = False): - self.audiopaths = load_filepaths_and_text(audiopaths) - self.max_wav_value = hparams.data.max_wav_value - self.sampling_rate = hparams.data.sampling_rate - self.filter_length = hparams.data.filter_length - self.hop_length = hparams.data.hop_length - self.win_length = hparams.data.win_length - self.sampling_rate = hparams.data.sampling_rate - self.use_sr = hparams.train.use_sr - self.spec_len = hparams.train.max_speclen - self.spk_map = hparams.spk - - random.seed(1234) - random.shuffle(self.audiopaths) - - self.all_in_mem = all_in_mem - if self.all_in_mem: - self.cache = [self.get_audio(p[0]) for p in self.audiopaths] - - def get_audio(self, filename): - filename = filename.replace("\\", "/") - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - - # Ideally, all data generated after Mar 25 should have .spec.pt - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - - spk = filename.split("/")[-2] - spk = torch.LongTensor([self.spk_map[spk]]) - - f0 = np.load(filename + ".f0.npy") - f0, uv = utils.interpolate_f0(f0) - f0 = torch.FloatTensor(f0) - uv = torch.FloatTensor(uv) - - c = torch.load(filename+ ".soft.pt") - c = utils.repeat_expand_2d(c.squeeze(0), f0.shape[0]) - - - lmin = min(c.size(-1), spec.size(-1)) - assert abs(c.size(-1) - spec.size(-1)) < 3, (c.size(-1), spec.size(-1), f0.shape, filename) - assert abs(audio_norm.shape[1]-lmin * self.hop_length) < 3 * self.hop_length - spec, c, f0, uv = spec[:, :lmin], c[:, :lmin], f0[:lmin], uv[:lmin] - audio_norm = audio_norm[:, :lmin * self.hop_length] - - return c, f0, spec, audio_norm, spk, uv - - def random_slice(self, c, f0, spec, audio_norm, spk, uv): - # if spec.shape[1] < 30: - # print("skip too short audio:", filename) - # return None - if spec.shape[1] > 800: - start = random.randint(0, spec.shape[1]-800) - end = start + 790 - spec, c, f0, uv = spec[:, start:end], c[:, start:end], f0[start:end], uv[start:end] - audio_norm = audio_norm[:, start * self.hop_length : end * self.hop_length] - - return c, f0, spec, audio_norm, spk, uv - - def __getitem__(self, index): - if self.all_in_mem: - return self.random_slice(*self.cache[index]) - else: - return self.random_slice(*self.get_audio(self.audiopaths[index][0])) - - def __len__(self): - return len(self.audiopaths) - - -class TextAudioCollate: - - def __call__(self, batch): - batch = [b for b in batch if b is not None] - - input_lengths, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[0].shape[1] for x in batch]), - dim=0, descending=True) - - max_c_len = max([x[0].size(1) for x in batch]) - max_wav_len = max([x[3].size(1) for x in batch]) - - lengths = torch.LongTensor(len(batch)) - - c_padded = torch.FloatTensor(len(batch), batch[0][0].shape[0], max_c_len) - 
f0_padded = torch.FloatTensor(len(batch), max_c_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][2].shape[0], max_c_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - spkids = torch.LongTensor(len(batch), 1) - uv_padded = torch.FloatTensor(len(batch), max_c_len) - - c_padded.zero_() - spec_padded.zero_() - f0_padded.zero_() - wav_padded.zero_() - uv_padded.zero_() - - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - c = row[0] - c_padded[i, :, :c.size(1)] = c - lengths[i] = c.size(1) - - f0 = row[1] - f0_padded[i, :f0.size(0)] = f0 - - spec = row[2] - spec_padded[i, :, :spec.size(1)] = spec - - wav = row[3] - wav_padded[i, :, :wav.size(1)] = wav - - spkids[i, 0] = row[4] - - uv = row[5] - uv_padded[i, :uv.size(0)] = uv - - return c_padded, f0_padded, spec_padded, wav_padded, spkids, lengths, uv_padded diff --git a/spaces/wangrongsheng/CareLlama/model.py b/spaces/wangrongsheng/CareLlama/model.py deleted file mode 100644 index 1d4bf04dafdb29d4a9c09fb00cb4480dc75471d5..0000000000000000000000000000000000000000 --- a/spaces/wangrongsheng/CareLlama/model.py +++ /dev/null @@ -1,74 +0,0 @@ -from threading import Thread -from typing import Iterator - -import torch -from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer - -model_id = 'wangrongsheng/CareLlama2-7b-super-mix' - -if torch.cuda.is_available(): - config = AutoConfig.from_pretrained(model_id) - config.pretraining_tp = 1 - model = AutoModelForCausalLM.from_pretrained( - model_id, - config=config, - torch_dtype=torch.float16, - load_in_4bit=True, - device_map='auto' - ) -else: - model = None -tokenizer = AutoTokenizer.from_pretrained(model_id) - - -def get_prompt(message: str, chat_history: list[tuple[str, str]], - system_prompt: str) -> str: - texts = [f'[INST] <>\n{system_prompt}\n<>\n\n'] - # The first user input is _not_ stripped - do_strip = False - for user_input, response in chat_history: - user_input = user_input.strip() if do_strip else user_input - do_strip = True - texts.append(f'{user_input} [/INST] {response.strip()} [INST] ') - message = message.strip() if do_strip else message - texts.append(f'{message} [/INST]') - return ''.join(texts) - - -def get_input_token_length(message: str, chat_history: list[tuple[str, str]], system_prompt: str) -> int: - prompt = get_prompt(message, chat_history, system_prompt) - input_ids = tokenizer([prompt], return_tensors='np', add_special_tokens=False)['input_ids'] - return input_ids.shape[-1] - - -def run(message: str, - chat_history: list[tuple[str, str]], - system_prompt: str, - max_new_tokens: int = 1024, - temperature: float = 0.8, - top_p: float = 0.95, - top_k: int = 50) -> Iterator[str]: - prompt = get_prompt(message, chat_history, system_prompt) - inputs = tokenizer([prompt], return_tensors='pt', add_special_tokens=False).to('cuda') - - streamer = TextIteratorStreamer(tokenizer, - timeout=10., - skip_prompt=True, - skip_special_tokens=True) - generate_kwargs = dict( - inputs, - streamer=streamer, - max_new_tokens=max_new_tokens, - do_sample=True, - top_p=top_p, - top_k=top_k, - temperature=temperature, - num_beams=1, - ) - t = Thread(target=model.generate, kwargs=generate_kwargs) - t.start() - - outputs = [] - for text in streamer: - outputs.append(text) - yield ''.join(outputs) diff --git a/spaces/weiwandaixu/ChatGPT3.5/modules/utils.py b/spaces/weiwandaixu/ChatGPT3.5/modules/utils.py deleted file mode 100644 index 
e1516e1fad4761787070d24e867bea57d86ac9ed..0000000000000000000000000000000000000000 --- a/spaces/weiwandaixu/ChatGPT3.5/modules/utils.py +++ /dev/null @@ -1,548 +0,0 @@ -# -*- coding:utf-8 -*- -from __future__ import annotations -from typing import TYPE_CHECKING, Any, Callable, Dict, List, Tuple, Type -import logging -import json -import os -import datetime -import hashlib -import csv -import requests -import re -import html -import sys -import subprocess - -import gradio as gr -from pypinyin import lazy_pinyin -import tiktoken -import mdtex2html -from markdown import markdown -from pygments import highlight -from pygments.lexers import get_lexer_by_name -from pygments.formatters import HtmlFormatter -import pandas as pd - -from modules.presets import * -from . import shared -from modules.config import retrieve_proxy - -if TYPE_CHECKING: - from typing import TypedDict - - class DataframeData(TypedDict): - headers: List[str] - data: List[List[str | int | bool]] - -def predict(current_model, *args): - iter = current_model.predict(*args) - for i in iter: - yield i - -def billing_info(current_model): - return current_model.billing_info() - -def set_key(current_model, *args): - return current_model.set_key(*args) - -def load_chat_history(current_model, *args): - return current_model.load_chat_history(*args) - -def interrupt(current_model, *args): - return current_model.interrupt(*args) - -def reset(current_model, *args): - return current_model.reset(*args) - -def retry(current_model, *args): - iter = current_model.retry(*args) - for i in iter: - yield i - -def delete_first_conversation(current_model, *args): - return current_model.delete_first_conversation(*args) - -def delete_last_conversation(current_model, *args): - return current_model.delete_last_conversation(*args) - -def set_system_prompt(current_model, *args): - return current_model.set_system_prompt(*args) - -def save_chat_history(current_model, *args): - return current_model.save_chat_history(*args) - -def export_markdown(current_model, *args): - return current_model.export_markdown(*args) - -def load_chat_history(current_model, *args): - return current_model.load_chat_history(*args) - -def set_token_upper_limit(current_model, *args): - return current_model.set_token_upper_limit(*args) - -def set_temperature(current_model, *args): - current_model.set_temperature(*args) - -def set_top_p(current_model, *args): - current_model.set_top_p(*args) - -def set_n_choices(current_model, *args): - current_model.set_n_choices(*args) - -def set_stop_sequence(current_model, *args): - current_model.set_stop_sequence(*args) - -def set_max_tokens(current_model, *args): - current_model.set_max_tokens(*args) - -def set_presence_penalty(current_model, *args): - current_model.set_presence_penalty(*args) - -def set_frequency_penalty(current_model, *args): - current_model.set_frequency_penalty(*args) - -def set_logit_bias(current_model, *args): - current_model.set_logit_bias(*args) - -def set_user_identifier(current_model, *args): - current_model.set_user_identifier(*args) - -def set_single_turn(current_model, *args): - current_model.set_single_turn(*args) - -def handle_file_upload(current_model, *args): - return current_model.handle_file_upload(*args) - -def like(current_model, *args): - return current_model.like(*args) - -def dislike(current_model, *args): - return current_model.dislike(*args) - - -def count_token(message): - encoding = tiktoken.get_encoding("cl100k_base") - input_str = f"role: {message['role']}, content: {message['content']}" - length = 
len(encoding.encode(input_str)) - return length - - -def markdown_to_html_with_syntax_highlight(md_str): - def replacer(match): - lang = match.group(1) or "text" - code = match.group(2) - - try: - lexer = get_lexer_by_name(lang, stripall=True) - except ValueError: - lexer = get_lexer_by_name("text", stripall=True) - - formatter = HtmlFormatter() - highlighted_code = highlight(code, lexer, formatter) - - return f'
<pre><code class="{lang}">{highlighted_code}</code></pre>
              ' - - code_block_pattern = r"```(\w+)?\n([\s\S]+?)\n```" - md_str = re.sub(code_block_pattern, replacer, md_str, flags=re.MULTILINE) - - html_str = markdown(md_str) - return html_str - - -def normalize_markdown(md_text: str) -> str: - lines = md_text.split("\n") - normalized_lines = [] - inside_list = False - - for i, line in enumerate(lines): - if re.match(r"^(\d+\.|-|\*|\+)\s", line.strip()): - if not inside_list and i > 0 and lines[i - 1].strip() != "": - normalized_lines.append("") - inside_list = True - normalized_lines.append(line) - elif inside_list and line.strip() == "": - if i < len(lines) - 1 and not re.match( - r"^(\d+\.|-|\*|\+)\s", lines[i + 1].strip() - ): - normalized_lines.append(line) - continue - else: - inside_list = False - normalized_lines.append(line) - - return "\n".join(normalized_lines) - - -def convert_mdtext(md_text): - code_block_pattern = re.compile(r"```(.*?)(?:```|$)", re.DOTALL) - inline_code_pattern = re.compile(r"`(.*?)`", re.DOTALL) - code_blocks = code_block_pattern.findall(md_text) - non_code_parts = code_block_pattern.split(md_text)[::2] - - result = [] - for non_code, code in zip(non_code_parts, code_blocks + [""]): - if non_code.strip(): - non_code = normalize_markdown(non_code) - if inline_code_pattern.search(non_code): - result.append(markdown(non_code, extensions=["tables"])) - else: - result.append(mdtex2html.convert(non_code, extensions=["tables"])) - if code.strip(): - # _, code = detect_language(code) # 暂时去除代码高亮功能,因为在大段代码的情况下会出现问题 - # code = code.replace("\n\n", "\n") # 暂时去除代码中的空行,因为在大段代码的情况下会出现问题 - code = f"\n```{code}\n\n```" - code = markdown_to_html_with_syntax_highlight(code) - result.append(code) - result = "".join(result) - result += ALREADY_CONVERTED_MARK - return result - - -def convert_asis(userinput): - return ( - f'

<p style="white-space:pre-wrap;">{html.escape(userinput)}</p>

              ' - + ALREADY_CONVERTED_MARK - ) - - -def detect_converted_mark(userinput): - try: - if userinput.endswith(ALREADY_CONVERTED_MARK): - return True - else: - return False - except: - return True - - -def detect_language(code): - if code.startswith("\n"): - first_line = "" - else: - first_line = code.strip().split("\n", 1)[0] - language = first_line.lower() if first_line else "" - code_without_language = code[len(first_line) :].lstrip() if first_line else code - return language, code_without_language - - -def construct_text(role, text): - return {"role": role, "content": text} - - -def construct_user(text): - return construct_text("user", text) - - -def construct_system(text): - return construct_text("system", text) - - -def construct_assistant(text): - return construct_text("assistant", text) - - -def save_file(filename, system, history, chatbot, user_name): - logging.debug(f"{user_name} 保存对话历史中……") - os.makedirs(os.path.join(HISTORY_DIR, user_name), exist_ok=True) - if filename.endswith(".json"): - json_s = {"system": system, "history": history, "chatbot": chatbot} - print(json_s) - with open(os.path.join(HISTORY_DIR, user_name, filename), "w") as f: - json.dump(json_s, f) - elif filename.endswith(".md"): - md_s = f"system: \n- {system} \n" - for data in history: - md_s += f"\n{data['role']}: \n- {data['content']} \n" - with open(os.path.join(HISTORY_DIR, user_name, filename), "w", encoding="utf8") as f: - f.write(md_s) - logging.debug(f"{user_name} 保存对话历史完毕") - return os.path.join(HISTORY_DIR, user_name, filename) - - -def sorted_by_pinyin(list): - return sorted(list, key=lambda char: lazy_pinyin(char)[0][0]) - - -def get_file_names(dir, plain=False, filetypes=[".json"]): - logging.debug(f"获取文件名列表,目录为{dir},文件类型为{filetypes},是否为纯文本列表{plain}") - files = [] - try: - for type in filetypes: - files += [f for f in os.listdir(dir) if f.endswith(type)] - except FileNotFoundError: - files = [] - files = sorted_by_pinyin(files) - if files == []: - files = [""] - logging.debug(f"files are:{files}") - if plain: - return files - else: - return gr.Dropdown.update(choices=files) - - -def get_history_names(plain=False, user_name=""): - logging.debug(f"从用户 {user_name} 中获取历史记录文件名列表") - return get_file_names(os.path.join(HISTORY_DIR, user_name), plain) - - -def load_template(filename, mode=0): - logging.debug(f"加载模板文件{filename},模式为{mode}(0为返回字典和下拉菜单,1为返回下拉菜单,2为返回字典)") - lines = [] - if filename.endswith(".json"): - with open(os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8") as f: - lines = json.load(f) - lines = [[i["act"], i["prompt"]] for i in lines] - else: - with open( - os.path.join(TEMPLATES_DIR, filename), "r", encoding="utf8" - ) as csvfile: - reader = csv.reader(csvfile) - lines = list(reader) - lines = lines[1:] - if mode == 1: - return sorted_by_pinyin([row[0] for row in lines]) - elif mode == 2: - return {row[0]: row[1] for row in lines} - else: - choices = sorted_by_pinyin([row[0] for row in lines]) - return {row[0]: row[1] for row in lines}, gr.Dropdown.update( - choices=choices - ) - - -def get_template_names(plain=False): - logging.debug("获取模板文件名列表") - return get_file_names(TEMPLATES_DIR, plain, filetypes=[".csv", "json"]) - - -def get_template_content(templates, selection, original_system_prompt): - logging.debug(f"应用模板中,选择为{selection},原始系统提示为{original_system_prompt}") - try: - return templates[selection] - except: - return original_system_prompt - - -def reset_textbox(): - logging.debug("重置文本框") - return gr.update(value="") - - -def reset_default(): - default_host = 
shared.state.reset_api_host() - retrieve_proxy("") - return gr.update(value=default_host), gr.update(value=""), "API-Host 和代理已重置" - - -def change_api_host(host): - shared.state.set_api_host(host) - msg = f"API-Host更改为了{host}" - logging.info(msg) - return msg - - -def change_proxy(proxy): - retrieve_proxy(proxy) - os.environ["HTTPS_PROXY"] = proxy - msg = f"代理更改为了{proxy}" - logging.info(msg) - return msg - - -def hide_middle_chars(s): - if s is None: - return "" - if len(s) <= 8: - return s - else: - head = s[:4] - tail = s[-4:] - hidden = "*" * (len(s) - 8) - return head + hidden + tail - - -def submit_key(key): - key = key.strip() - msg = f"API密钥更改为了{hide_middle_chars(key)}" - logging.info(msg) - return key, msg - - -def replace_today(prompt): - today = datetime.datetime.today().strftime("%Y-%m-%d") - return prompt.replace("{current_date}", today) - - -def get_geoip(): - try: - with retrieve_proxy(): - response = requests.get("https://ipapi.co/json/", timeout=5) - data = response.json() - except: - data = {"error": True, "reason": "连接ipapi失败"} - if "error" in data.keys(): - logging.warning(f"无法获取IP地址信息。\n{data}") - if data["reason"] == "RateLimited": - return ( - i18n("您的IP区域:未知。") - ) - else: - return i18n("获取IP地理位置失败。原因:") + f"{data['reason']}" + i18n("。你仍然可以使用聊天功能。") - else: - country = data["country_name"] - if country == "China": - text = "**您的IP区域:中国。请立即检查代理设置,在不受支持的地区使用API可能导致账号被封禁。**" - else: - text = i18n("您的IP区域:") + f"{country}。" - logging.info(text) - return text - - -def find_n(lst, max_num): - n = len(lst) - total = sum(lst) - - if total < max_num: - return n - - for i in range(len(lst)): - if total - lst[i] < max_num: - return n - i - 1 - total = total - lst[i] - return 1 - - -def start_outputing(): - logging.debug("显示取消按钮,隐藏发送按钮") - return gr.Button.update(visible=False), gr.Button.update(visible=True) - - -def end_outputing(): - return ( - gr.Button.update(visible=True), - gr.Button.update(visible=False), - ) - - -def cancel_outputing(): - logging.info("中止输出……") - shared.state.interrupt() - - -def transfer_input(inputs): - # 一次性返回,降低延迟 - textbox = reset_textbox() - outputing = start_outputing() - return ( - inputs, - gr.update(value=""), - gr.Button.update(visible=False), - gr.Button.update(visible=True), - ) - - - -def run(command, desc=None, errdesc=None, custom_env=None, live=False): - if desc is not None: - print(desc) - if live: - result = subprocess.run(command, shell=True, env=os.environ if custom_env is None else custom_env) - if result.returncode != 0: - raise RuntimeError(f"""{errdesc or 'Error running command'}. -Command: {command} -Error code: {result.returncode}""") - - return "" - result = subprocess.run(command, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True, env=os.environ if custom_env is None else custom_env) - if result.returncode != 0: - message = f"""{errdesc or 'Error running command'}. 
- Command: {command} - Error code: {result.returncode} - stdout: {result.stdout.decode(encoding="utf8", errors="ignore") if len(result.stdout)>0 else ''} - stderr: {result.stderr.decode(encoding="utf8", errors="ignore") if len(result.stderr)>0 else ''} - """ - raise RuntimeError(message) - return result.stdout.decode(encoding="utf8", errors="ignore") - -def versions_html(): - git = os.environ.get('GIT', "git") - python_version = ".".join([str(x) for x in sys.version_info[0:3]]) - try: - commit_hash = run(f"{git} rev-parse HEAD").strip() - except Exception: - commit_hash = "" - if commit_hash != "": - short_commit = commit_hash[0:7] - commit_info = f"{short_commit}" - else: - commit_info = "unknown \U0001F615" - return f""" - Python: {python_version} -  •  - Gradio: {gr.__version__} -  •  - Commit: {commit_info} - """ - -def add_source_numbers(lst, source_name = "Source", use_source = True): - if use_source: - return [f'[{idx+1}]\t "{item[0]}"\n{source_name}: {item[1]}' for idx, item in enumerate(lst)] - else: - return [f'[{idx+1}]\t "{item}"' for idx, item in enumerate(lst)] - -def add_details(lst): - nodes = [] - for index, txt in enumerate(lst): - brief = txt[:25].replace("\n", "") - nodes.append( - f"
              <details><summary>{brief}...</summary><p>{txt}</p></details>
              " - ) - return nodes - - -def sheet_to_string(sheet, sheet_name = None): - result = [] - for index, row in sheet.iterrows(): - row_string = "" - for column in sheet.columns: - row_string += f"{column}: {row[column]}, " - row_string = row_string.rstrip(", ") - row_string += "." - result.append(row_string) - return result - -def excel_to_string(file_path): - # 读取Excel文件中的所有工作表 - excel_file = pd.read_excel(file_path, engine='openpyxl', sheet_name=None) - - # 初始化结果字符串 - result = [] - - # 遍历每一个工作表 - for sheet_name, sheet_data in excel_file.items(): - - # 处理当前工作表并添加到结果字符串 - result += sheet_to_string(sheet_data, sheet_name=sheet_name) - - - return result - -def get_last_day_of_month(any_day): - # The day 28 exists in every month. 4 days later, it's always next month - next_month = any_day.replace(day=28) + datetime.timedelta(days=4) - # subtracting the number of the current day brings us back one month - return next_month - datetime.timedelta(days=next_month.day) - -def get_model_source(model_name, alternative_source): - if model_name == "gpt2-medium": - return "https://huggingface.co/gpt2-medium" - -def refresh_ui_elements_on_load(current_model, selected_model_name): - return toggle_like_btn_visibility(selected_model_name) - -def toggle_like_btn_visibility(selected_model_name): - if selected_model_name == "xmchat": - return gr.update(visible=True) - else: - return gr.update(visible=False) diff --git a/spaces/wffcyrus/MetaGPT-v1/examples/search_kb.py b/spaces/wffcyrus/MetaGPT-v1/examples/search_kb.py deleted file mode 100644 index 449099380b4f8c1704fbd9358ef45c80f218d02f..0000000000000000000000000000000000000000 --- a/spaces/wffcyrus/MetaGPT-v1/examples/search_kb.py +++ /dev/null @@ -1,29 +0,0 @@ -#!/usr/bin/env python -# -*- coding: utf-8 -*- -""" -@File : search_kb.py -@Modified By: mashenquan, 2023-8-9, fix-bug: cannot find metagpt module. -""" -import asyncio -from pathlib import Path -import sys -sys.path.append(str(Path(__file__).resolve().parent.parent)) -from metagpt.const import DATA_PATH -from metagpt.document_store import FaissStore -from metagpt.logs import logger -from metagpt.roles import Sales - - -async def search(): - store = FaissStore(DATA_PATH / 'example.json') - role = Sales(profile="Sales", store=store) - - queries = ["Which facial cleanser is good for oily skin?", "Is L'Oreal good to use?"] - for query in queries: - logger.info(f"User: {query}") - result = await role.run(query) - logger.info(result) - - -if __name__ == '__main__': - asyncio.run(search()) diff --git a/spaces/whxxiaojiang/bingai/Dockerfile b/spaces/whxxiaojiang/bingai/Dockerfile deleted file mode 100644 index 7b77fccb3333b3516261a14b6663de8d7e8f9434..0000000000000000000000000000000000000000 --- a/spaces/whxxiaojiang/bingai/Dockerfile +++ /dev/null @@ -1,33 +0,0 @@ -# Build Stage -# 使用 golang:alpine 作为构建阶段的基础镜像 -FROM golang:alpine AS builder - -# 添加 git,以便之后能从GitHub克隆项目 -RUN apk --no-cache add git - -# 从 GitHub 克隆 go-proxy-bingai 项目到 /workspace/app 目录下 -RUN git clone https://github.com/Harry-zklcdc/go-proxy-bingai.git /workspace/app - -# 设置工作目录为之前克隆的项目目录 -WORKDIR /workspace/app - -# 编译 go 项目。-ldflags="-s -w" 是为了减少编译后的二进制大小 -RUN go build -ldflags="-s -w" -tags netgo -trimpath -o go-proxy-bingai main.go - -# Runtime Stage -# 使用轻量级的 alpine 镜像作为运行时的基础镜像 -FROM alpine - -# 设置工作目录 -WORKDIR /workspace/app - -# 从构建阶段复制编译后的二进制文件到运行时镜像中 -COPY --from=builder /workspace/app/go-proxy-bingai . 
- -# 设置环境变量,此处为随机字符-仅可进行对话,如需绘画,需要修改为自己的token -ENV Go_Proxy_BingAI_USER_TOKEN_1="5kxiDYDA3TsDAURSzJA3TsDAeKUrk168a" -# 暴露8080端口 -EXPOSE 8080 - -# 容器启动时运行的命令 -CMD ["/workspace/app/go-proxy-bingai"] diff --git a/spaces/whyu/MM-Vet_Evaluator/README.md b/spaces/whyu/MM-Vet_Evaluator/README.md deleted file mode 100644 index b2faa66732d5bdf3ac92e9899dbb678cfe67a715..0000000000000000000000000000000000000000 --- a/spaces/whyu/MM-Vet_Evaluator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: MM-Vet Evaluator -emoji: 🐨 -colorFrom: gray -colorTo: gray -sdk: gradio -sdk_version: 3.39.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/wilson1/bingo/src/components/chat-header.tsx b/spaces/wilson1/bingo/src/components/chat-header.tsx deleted file mode 100644 index c6664b8dee61179f844d45c5bd650518fc2cb4c2..0000000000000000000000000000000000000000 --- a/spaces/wilson1/bingo/src/components/chat-header.tsx +++ /dev/null @@ -1,12 +0,0 @@ -import LogoIcon from '@/assets/images/logo.svg' -import Image from 'next/image' - -export function ChatHeader() { - return ( -
              - logo -
              Welcome to the new Bing
              -
              Your AI-powered Copilot for the web
              -
              - ) -} diff --git a/spaces/wtarit/nllb-th-en-translation/app.py b/spaces/wtarit/nllb-th-en-translation/app.py deleted file mode 100644 index c7016d81d25c59949cdcd290b82ef35d92925682..0000000000000000000000000000000000000000 --- a/spaces/wtarit/nllb-th-en-translation/app.py +++ /dev/null @@ -1,26 +0,0 @@ -import gradio as gr -from transformers import AutoModelForSeq2SeqLM, NllbTokenizerFast - -model_repo = "wtarit/nllb-600M-th-en" - -model = AutoModelForSeq2SeqLM.from_pretrained(model_repo) -tokenizer = NllbTokenizerFast.from_pretrained(model_repo, src_lang="tha_Thai", tgt_lang="eng_Latn") - -def translate(Text): - inputs = tokenizer(Text, return_tensors="pt") - translated_tokens = model.generate( - **inputs, forced_bos_token_id=tokenizer.lang_code_to_id["eng_Latn"], max_length=64 - ) - return tokenizer.batch_decode(translated_tokens, skip_special_tokens=True)[0] - -demo = gr.Interface( - fn=translate, - inputs=[ - gr.components.Textbox(placeholder="Enter Thai text here...") - ], - outputs=["text"], - title="NLLB TH-EN Translation", - allow_flagging="never", - ) - -demo.launch() \ No newline at end of file diff --git a/spaces/wy213/213a/src/pages/api/create.ts b/spaces/wy213/213a/src/pages/api/create.ts deleted file mode 100644 index 430bb2d53431e6a2c7608234f512f2d9f577daee..0000000000000000000000000000000000000000 --- a/spaces/wy213/213a/src/pages/api/create.ts +++ /dev/null @@ -1,31 +0,0 @@ -'use server' - -import { NextApiRequest, NextApiResponse } from 'next' -import { fetch, debug } from '@/lib/isomorphic' -import { createHeaders } from '@/lib/utils' - -const API_ENDPOINT = 'https://www.bing.com/turing/conversation/create' -// const API_ENDPOINT = 'https://edgeservices.bing.com/edgesvc/turing/conversation/create'; - -export default async function handler(req: NextApiRequest, res: NextApiResponse) { - try { - const headers = createHeaders(req.cookies) - - res.writeHead(200, { - 'Content-Type': 'application/json', - }) - - debug('headers', headers) - const response = await fetch(API_ENDPOINT, { method: 'GET', headers }) - .then((res) => res.text()) - - res.end(response) - } catch (e) { - return res.end(JSON.stringify({ - result: { - value: 'UnauthorizedRequest', - message: `${e}` - } - })) - } -} diff --git a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/models/resnetmid.py b/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/models/resnetmid.py deleted file mode 100644 index 017f6c62653535a7b04566227d893cb4dfa2a34c..0000000000000000000000000000000000000000 --- a/spaces/xfys/yolov5_tracking/trackers/strong_sort/deep/models/resnetmid.py +++ /dev/null @@ -1,307 +0,0 @@ -from __future__ import division, absolute_import -import torch -import torch.utils.model_zoo as model_zoo -from torch import nn - -__all__ = ['resnet50mid'] - -model_urls = { - 'resnet18': 'https://download.pytorch.org/models/resnet18-5c106cde.pth', - 'resnet34': 'https://download.pytorch.org/models/resnet34-333f7ec4.pth', - 'resnet50': 'https://download.pytorch.org/models/resnet50-19c8e357.pth', - 'resnet101': 'https://download.pytorch.org/models/resnet101-5d3b4d8f.pth', - 'resnet152': 'https://download.pytorch.org/models/resnet152-b121ed2d.pth', -} - - -def conv3x3(in_planes, out_planes, stride=1): - """3x3 convolution with padding""" - return nn.Conv2d( - in_planes, - out_planes, - kernel_size=3, - stride=stride, - padding=1, - bias=False - ) - - -class BasicBlock(nn.Module): - expansion = 1 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(BasicBlock, self).__init__() 
- self.conv1 = conv3x3(inplanes, planes, stride) - self.bn1 = nn.BatchNorm2d(planes) - self.relu = nn.ReLU(inplace=True) - self.conv2 = conv3x3(planes, planes) - self.bn2 = nn.BatchNorm2d(planes) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class Bottleneck(nn.Module): - expansion = 4 - - def __init__(self, inplanes, planes, stride=1, downsample=None): - super(Bottleneck, self).__init__() - self.conv1 = nn.Conv2d(inplanes, planes, kernel_size=1, bias=False) - self.bn1 = nn.BatchNorm2d(planes) - self.conv2 = nn.Conv2d( - planes, - planes, - kernel_size=3, - stride=stride, - padding=1, - bias=False - ) - self.bn2 = nn.BatchNorm2d(planes) - self.conv3 = nn.Conv2d( - planes, planes * self.expansion, kernel_size=1, bias=False - ) - self.bn3 = nn.BatchNorm2d(planes * self.expansion) - self.relu = nn.ReLU(inplace=True) - self.downsample = downsample - self.stride = stride - - def forward(self, x): - residual = x - - out = self.conv1(x) - out = self.bn1(out) - out = self.relu(out) - - out = self.conv2(out) - out = self.bn2(out) - out = self.relu(out) - - out = self.conv3(out) - out = self.bn3(out) - - if self.downsample is not None: - residual = self.downsample(x) - - out += residual - out = self.relu(out) - - return out - - -class ResNetMid(nn.Module): - """Residual network + mid-level features. - - Reference: - Yu et al. The Devil is in the Middle: Exploiting Mid-level Representations for - Cross-Domain Instance Matching. arXiv:1711.08106. - - Public keys: - - ``resnet50mid``: ResNet50 + mid-level feature fusion. 
- """ - - def __init__( - self, - num_classes, - loss, - block, - layers, - last_stride=2, - fc_dims=None, - **kwargs - ): - self.inplanes = 64 - super(ResNetMid, self).__init__() - self.loss = loss - self.feature_dim = 512 * block.expansion - - # backbone network - self.conv1 = nn.Conv2d( - 3, 64, kernel_size=7, stride=2, padding=3, bias=False - ) - self.bn1 = nn.BatchNorm2d(64) - self.relu = nn.ReLU(inplace=True) - self.maxpool = nn.MaxPool2d(kernel_size=3, stride=2, padding=1) - self.layer1 = self._make_layer(block, 64, layers[0]) - self.layer2 = self._make_layer(block, 128, layers[1], stride=2) - self.layer3 = self._make_layer(block, 256, layers[2], stride=2) - self.layer4 = self._make_layer( - block, 512, layers[3], stride=last_stride - ) - - self.global_avgpool = nn.AdaptiveAvgPool2d(1) - assert fc_dims is not None - self.fc_fusion = self._construct_fc_layer( - fc_dims, 512 * block.expansion * 2 - ) - self.feature_dim += 512 * block.expansion - self.classifier = nn.Linear(self.feature_dim, num_classes) - - self._init_params() - - def _make_layer(self, block, planes, blocks, stride=1): - downsample = None - if stride != 1 or self.inplanes != planes * block.expansion: - downsample = nn.Sequential( - nn.Conv2d( - self.inplanes, - planes * block.expansion, - kernel_size=1, - stride=stride, - bias=False - ), - nn.BatchNorm2d(planes * block.expansion), - ) - - layers = [] - layers.append(block(self.inplanes, planes, stride, downsample)) - self.inplanes = planes * block.expansion - for i in range(1, blocks): - layers.append(block(self.inplanes, planes)) - - return nn.Sequential(*layers) - - def _construct_fc_layer(self, fc_dims, input_dim, dropout_p=None): - """Constructs fully connected layer - - Args: - fc_dims (list or tuple): dimensions of fc layers, if None, no fc layers are constructed - input_dim (int): input dimension - dropout_p (float): dropout probability, if None, dropout is unused - """ - if fc_dims is None: - self.feature_dim = input_dim - return None - - assert isinstance( - fc_dims, (list, tuple) - ), 'fc_dims must be either list or tuple, but got {}'.format( - type(fc_dims) - ) - - layers = [] - for dim in fc_dims: - layers.append(nn.Linear(input_dim, dim)) - layers.append(nn.BatchNorm1d(dim)) - layers.append(nn.ReLU(inplace=True)) - if dropout_p is not None: - layers.append(nn.Dropout(p=dropout_p)) - input_dim = dim - - self.feature_dim = fc_dims[-1] - - return nn.Sequential(*layers) - - def _init_params(self): - for m in self.modules(): - if isinstance(m, nn.Conv2d): - nn.init.kaiming_normal_( - m.weight, mode='fan_out', nonlinearity='relu' - ) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm2d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.BatchNorm1d): - nn.init.constant_(m.weight, 1) - nn.init.constant_(m.bias, 0) - elif isinstance(m, nn.Linear): - nn.init.normal_(m.weight, 0, 0.01) - if m.bias is not None: - nn.init.constant_(m.bias, 0) - - def featuremaps(self, x): - x = self.conv1(x) - x = self.bn1(x) - x = self.relu(x) - x = self.maxpool(x) - x = self.layer1(x) - x = self.layer2(x) - x = self.layer3(x) - x4a = self.layer4[0](x) - x4b = self.layer4[1](x4a) - x4c = self.layer4[2](x4b) - return x4a, x4b, x4c - - def forward(self, x): - x4a, x4b, x4c = self.featuremaps(x) - - v4a = self.global_avgpool(x4a) - v4b = self.global_avgpool(x4b) - v4c = self.global_avgpool(x4c) - v4ab = torch.cat([v4a, v4b], 1) - v4ab = v4ab.view(v4ab.size(0), -1) - v4ab = self.fc_fusion(v4ab) - v4c = 
v4c.view(v4c.size(0), -1) - v = torch.cat([v4ab, v4c], 1) - - if not self.training: - return v - - y = self.classifier(v) - - if self.loss == 'softmax': - return y - elif self.loss == 'triplet': - return y, v - else: - raise KeyError('Unsupported loss: {}'.format(self.loss)) - - -def init_pretrained_weights(model, model_url): - """Initializes model with pretrained weights. - - Layers that don't match with pretrained layers in name or size are kept unchanged. - """ - pretrain_dict = model_zoo.load_url(model_url) - model_dict = model.state_dict() - pretrain_dict = { - k: v - for k, v in pretrain_dict.items() - if k in model_dict and model_dict[k].size() == v.size() - } - model_dict.update(pretrain_dict) - model.load_state_dict(model_dict) - - -""" -Residual network configurations: --- -resnet18: block=BasicBlock, layers=[2, 2, 2, 2] -resnet34: block=BasicBlock, layers=[3, 4, 6, 3] -resnet50: block=Bottleneck, layers=[3, 4, 6, 3] -resnet101: block=Bottleneck, layers=[3, 4, 23, 3] -resnet152: block=Bottleneck, layers=[3, 8, 36, 3] -""" - - -def resnet50mid(num_classes, loss='softmax', pretrained=True, **kwargs): - model = ResNetMid( - num_classes=num_classes, - loss=loss, - block=Bottleneck, - layers=[3, 4, 6, 3], - last_stride=2, - fc_dims=[1024], - **kwargs - ) - if pretrained: - init_pretrained_weights(model, model_urls['resnet50']) - return model diff --git a/spaces/xnetba/MMS/uroman/lib/NLP/UTF8.pm b/spaces/xnetba/MMS/uroman/lib/NLP/UTF8.pm deleted file mode 100644 index b28cb4dede3b84f45aeade2e24f240e3a39e7cc1..0000000000000000000000000000000000000000 --- a/spaces/xnetba/MMS/uroman/lib/NLP/UTF8.pm +++ /dev/null @@ -1,1404 +0,0 @@ -################################################################ -# # -# UTF8 # -# # -################################################################ - -package NLP::UTF8; - -use NLP::utilities; -$util = NLP::utilities; - -%empty_ht = (); - -sub new { - local($caller) = @_; - - my $object = {}; - my $class = ref( $caller ) || $caller; - bless($object, $class); - return $object; -} - -sub unicode_string2string { -# input: string that might contain unicode sequences such as "U+0627" -# output: string in pure utf-8 - local($caller,$s) = @_; - - my $pre; - my $unicode; - my $post; - my $r1; - my $r2; - my $r3; - - ($pre,$unicode,$post) = ($s =~ /^(.*)(?:U\+|\\u)([0-9A-Fa-f][0-9A-Fa-f][0-9A-Fa-f][0-9A-Fa-f])(.*)$/); - return $s unless defined($post); - $r1 = $caller->unicode_string2string($pre); - $r2 = $caller->unicode_hex_string2string($unicode); - $r3 = $caller->unicode_string2string($post); - $result = $r1 . $r2 . $r3; - return $result; -} - -sub unicode_hex_string2string { -# input: "0627" (interpreted as hex code) -# output: utf-8 string for Arabic letter alef - local($caller,$unicode) = @_; - return "" unless defined($unicode); - my $d = hex($unicode); - return $caller->unicode2string($d); -} - -sub unicode2string { -# input: non-neg integer, e.g. 
0x627 -# output: utf-8 string for Arabic letter alef - local($caller,$d) = @_; - return "" unless defined($d) && $d >= 0; - return sprintf("%c",$d) if $d <= 0x7F; - - my $lastbyte1 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c",$d | 0xC0, $lastbyte1) if $d <= 0x1F; - - my $lastbyte2 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c%c",$d | 0xE0, $lastbyte2, $lastbyte1) if $d <= 0xF; - - my $lastbyte3 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c%c%c",$d | 0xF0, $lastbyte3, $lastbyte2, $lastbyte1) if $d <= 0x7; - - my $lastbyte4 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c%c%c%c",$d | 0xF8, $lastbyte4, $lastbyte3, $lastbyte2, $lastbyte1) if $d <= 0x3; - - my $lastbyte5 = ($d & 0x3F) | 0x80; - $d >>= 6; - return sprintf("%c%c%c%c%c%c",$d | 0xFC, $lastbyte5, $lastbyte4, $lastbyte3, $lastbyte2, $lastbyte1) if $d <= 0x1; - return ""; # bad input -} - -sub html2utf8 { - local($caller, $string) = @_; - - return $string unless $string =~ /\&\#\d{3,5};/; - - my $prev = ""; - my $s = $string; - while ($s ne $prev) { - $prev = $s; - ($pre,$d,$post) = ($s =~ /^(.*)\&\#(\d+);(.*)$/); - if (defined($d) && ((($d >= 160) && ($d <= 255)) - || (($d >= 1500) && ($d <= 1699)) - || (($d >= 19968) && ($d <= 40879)))) { - $html_code = "\&\#" . $d . ";"; - $utf8_code = $caller->unicode2string($d); - $s =~ s/$html_code/$utf8_code/; - } - } - return $s; -} - -sub xhtml2utf8 { - local($caller, $string) = @_; - - return $string unless $string =~ /\&\#x[0-9a-fA-F]{2,5};/; - - my $prev = ""; - my $s = $string; - while ($s ne $prev) { - $prev = $s; - if (($pre, $html_code, $x, $post) = ($s =~ /^(.*)(\&\#x([0-9a-fA-F]{2,5});)(.*)$/)) { - $utf8_code = $caller->unicode_hex_string2string($x); - $s =~ s/$html_code/$utf8_code/; - } - } - return $s; -} - -sub utf8_marker { - return sprintf("%c%c%c\n", 0xEF, 0xBB, 0xBF); -} - -sub enforcer { -# input: string that might not conform to utf-8 -# output: string in pure utf-8, with a few "smart replacements" and possibly "?" - local($caller,$s,$no_repair) = @_; - - my $ascii; - my $utf8; - my $rest; - - return $s if $s =~ /^[\x00-\x7F]*$/; - - $no_repair = 0 unless defined($no_repair); - $orig = $s; - $result = ""; - - while ($s ne "") { - ($ascii,$rest) = ($s =~ /^([\x00-\x7F]+)(.*)$/); - if (defined($ascii)) { - $result .= $ascii; - $s = $rest; - next; - } - ($utf8,$rest) = ($s =~ /^([\xC0-\xDF][\x80-\xBF])(.*)$/); - ($utf8,$rest) = ($s =~ /^([\xE0-\xEF][\x80-\xBF][\x80-\xBF])(.*)$/) - unless defined($rest); - ($utf8,$rest) = ($s =~ /^([\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF])(.*)$/) - unless defined($rest); - ($utf8,$rest) = ($s =~ /^([\xF8-\xFB][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF])(.*)$/) - unless defined($rest); - if (defined($utf8)) { - $result .= $utf8; - $s = $rest; - next; - } - ($c,$rest) = ($s =~ /^(.)(.*)$/); - if (defined($c)) { - if ($no_repair) { $result .= "?"; } - elsif ($c =~ /\x85/) { $result .= "..."; } - elsif ($c =~ /\x91/) { $result .= "'"; } - elsif ($c =~ /\x92/) { $result .= "'"; } - elsif ($c =~ /\x93/) { $result .= $caller->unicode2string(0x201C); } - elsif ($c =~ /\x94/) { $result .= $caller->unicode2string(0x201D); } - elsif ($c =~ /[\xC0-\xFF]/) { - $c2 = $c; - $c2 =~ tr/[\xC0-\xFF]/[\x80-\xBF]/; - $result .= "\xC3$c2"; - } else { - $result .= "?"; - } - $s = $rest; - next; - } - $s = ""; - } - $result .= "\n" if ($orig =~ /\n$/) && ! 
($result =~ /\n$/); - return $result; -} - -sub split_into_utf8_characters { -# input: utf8 string -# output: list of sub-strings, each representing a utf8 character - local($caller,$string,$group_control, *ht) = @_; - - @characters = (); - $end_of_token_p_string = ""; - $skipped_bytes = ""; - $group_control = "" unless defined($group_control); - $group_ascii_numbers = ($group_control =~ /ASCII numbers/); - $group_ascii_spaces = ($group_control =~ /ASCII spaces/); - $group_ascii_punct = ($group_control =~ /ASCII punct/); - $group_ascii_chars = ($group_control =~ /ASCII chars/); - $group_xml_chars = ($group_control =~ /XML chars/); - $group_xml_tags = ($group_control =~ /XML tags/); - $return_only_chars = ($group_control =~ /return only chars/); - $return_trailing_whitespaces = ($group_control =~ /return trailing whitespaces/); - if ($group_control =~ /ASCII all/) { - $group_ascii_numbers = 1; - $group_ascii_spaces = 1; - $group_ascii_chars = 1; - $group_ascii_punct = 1; - } - if ($group_control =~ /(XML chars and tags|XML tags and chars)/) { - $group_xml_chars = 1; - $group_xml_tags = 1; - } - $orig_string = $string; - $string .= " "; - while ($string =~ /\S/) { - # one-character UTF-8 = ASCII - if ($string =~ /^[\x00-\x7F]/) { - if ($group_xml_chars - && (($dec_unicode, $rest) = ($string =~ /^&#(\d+);(.*)$/s)) - && ($utf8_char = $caller->unicode2string($dec_unicode))) { - push(@characters, $utf8_char); - $string = $rest; - } elsif ($group_xml_chars - && (($hex_unicode, $rest) = ($string =~ /^&#x([0-9a-f]{1,6});(.*)$/is)) - && ($utf8_char = $caller->unicode_hex_string2string($hex_unicode))) { - push(@characters, $utf8_char); - $string = $rest; - } elsif ($group_xml_chars - && (($html_entity_name, $rest) = ($string =~ /^&([a-z]{1,6});(.*)$/is)) - && ($dec_unicode = $ht{HTML_ENTITY_NAME_TO_DECUNICODE}->{$html_entity_name}) - && ($utf8_char = $caller->unicode2string($dec_unicode)) - ) { - push(@characters, $utf8_char); - $string = $rest; - } elsif ($group_xml_tags - && (($tag, $rest) = ($string =~ /^(<\/?[a-zA-Z][-_:a-zA-Z0-9]*(\s+[a-zA-Z][-_:a-zA-Z0-9]*=\"[^"]*\")*\s*\/?>)(.*)$/s))) { - push(@characters, $tag); - $string = $rest; - } elsif ($group_ascii_numbers && ($string =~ /^[12]\d\d\d\.[01]?\d.[0-3]?\d([^0-9].*)?$/)) { - ($date) = ($string =~ /^(\d\d\d\d\.\d?\d.\d?\d)([^0-9].*)?$/); - push(@characters,$date); - $string = substr($string, length($date)); - } elsif ($group_ascii_numbers && ($string =~ /^\d/)) { - ($number) = ($string =~ /^(\d+(,\d\d\d)*(\.\d+)?)/); - push(@characters,$number); - $string = substr($string, length($number)); - } elsif ($group_ascii_spaces && ($string =~ /^(\s+)/)) { - ($space) = ($string =~ /^(\s+)/); - $string = substr($string, length($space)); - } elsif ($group_ascii_punct && (($punct_seq) = ($string =~ /^(-+|\.+|[:,%()"])/))) { - push(@characters,$punct_seq); - $string = substr($string, length($punct_seq)); - } elsif ($group_ascii_chars && (($word) = ($string =~ /^(\$[A-Z]*|[A-Z]{1,3}\$)/))) { - push(@characters,$word); - $string = substr($string, length($word)); - } elsif ($group_ascii_chars && (($abbrev) = ($string =~ /^((?:Jan|Feb|Febr|Mar|Apr|Jun|Jul|Aug|Sep|Sept|Oct|Nov|Dec|Mr|Mrs|Dr|a.m|p.m)\.)/))) { - push(@characters,$abbrev); - $string = substr($string, length($abbrev)); - } elsif ($group_ascii_chars && (($word) = ($string =~ /^(second|minute|hour|day|week|month|year|inch|foot|yard|meter|kilometer|mile)-(?:long|old)/i))) { - push(@characters,$word); - $string = substr($string, length($word)); - } elsif ($group_ascii_chars && (($word) = ($string =~ 
/^(zero|one|two|three|four|five|six|seven|eight|nine|ten|eleven|twelve|thirteen|fourteen|fifteen|sixteen|seventeen|eighteen|nineteen|twenty|thirty|forty|fifty|sixty|seventy|eighty|ninety|hundred|thousand|million|billion|trillion)-/i))) { - push(@characters,$word); - $string = substr($string, length($word)); - } elsif ($group_ascii_chars && (($word) = ($string =~ /^([a-zA-Z]+)(?:[ ,;%?|()"]|'s |' |\. |\d+[:hms][0-9 ])/))) { - push(@characters,$word); - $string = substr($string, length($word)); - } elsif ($group_ascii_chars && ($string =~ /^([\x21-\x27\x2A-\x7E]+)/)) { # exclude () - ($ascii) = ($string =~ /^([\x21-\x27\x2A-\x7E]+)/); # ASCII black-characters - push(@characters,$ascii); - $string = substr($string, length($ascii)); - } elsif ($group_ascii_chars && ($string =~ /^([\x21-\x7E]+)/)) { - ($ascii) = ($string =~ /^([\x21-\x7E]+)/); # ASCII black-characters - push(@characters,$ascii); - $string = substr($string, length($ascii)); - } elsif ($group_ascii_chars && ($string =~ /^([\x00-\x7F]+)/)) { - ($ascii) = ($string =~ /^([\x00-\x7F]+)/); - push(@characters,$ascii); - $string = substr($string, length($ascii)); - } else { - push(@characters,substr($string, 0, 1)); - $string = substr($string, 1); - } - - # two-character UTF-8 - } elsif ($string =~ /^[\xC0-\xDF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 2)); - $string = substr($string, 2); - - # three-character UTF-8 - } elsif ($string =~ /^[\xE0-\xEF][\x80-\xBF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 3)); - $string = substr($string, 3); - - # four-character UTF-8 - } elsif ($string =~ /^[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 4)); - $string = substr($string, 4); - - # five-character UTF-8 - } elsif ($string =~ /^[\xF8-\xFB][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 5)); - $string = substr($string, 5); - - # six-character UTF-8 - } elsif ($string =~ /^[\xFC-\xFD][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF]/) { - push(@characters,substr($string, 0, 6)); - $string = substr($string, 6); - - # not a UTF-8 character - } else { - $skipped_bytes .= substr($string, 0, 1); - $string = substr($string, 1); - } - - $end_of_token_p_string .= ($string =~ /^\S/) ? "0" : "1" - if $#characters >= length($end_of_token_p_string); - } - $string =~ s/ $//; # remove previously added space, but keep original spaces - if ($return_trailing_whitespaces) { - while ($string =~ /^[ \t]/) { - push(@characters,substr($string, 0, 1)); - $string = substr($string, 1); - } - push(@characters, "\n") if $orig_string =~ /\n$/; - } - return ($return_only_chars) ? @characters : ($skipped_bytes, $end_of_token_p_string, @characters); -} - -sub max_substring_info { - local($caller,$s1,$s2,$info_type) = @_; - - ($skipped_bytes1, $end_of_token_p_string1, @char_list1) = $caller->split_into_utf8_characters($s1, "", *empty_ht); - ($skipped_bytes2, $end_of_token_p_string2, @char_list2) = $caller->split_into_utf8_characters($s2, "", *empty_ht); - return 0 if $skipped_bytes1 || $skipped_bytes2; - - $best_substring_start1 = 0; - $best_substring_start2 = 0; - $best_substring_length = 0; - - foreach $start_pos2 ((0 .. $#char_list2)) { - last if $start_pos2 + $best_substring_length > $#char_list2; - foreach $start_pos1 ((0 .. 
$#char_list1)) { - last if $start_pos1 + $best_substring_length > $#char_list1; - $matching_length = 0; - while (($start_pos1 + $matching_length <= $#char_list1) - && ($start_pos2 + $matching_length <= $#char_list2) - && ($char_list1[$start_pos1+$matching_length] eq $char_list2[$start_pos2+$matching_length])) { - $matching_length++; - } - if ($matching_length > $best_substring_length) { - $best_substring_length = $matching_length; - $best_substring_start1 = $start_pos1; - $best_substring_start2 = $start_pos2; - } - } - } - if ($info_type =~ /^max-ratio1$/) { - $length1 = $#char_list1 + 1; - return ($length1 > 0) ? ($best_substring_length / $length1) : 0; - } elsif ($info_type =~ /^max-ratio2$/) { - $length2 = $#char_list2 + 1; - return ($length2 > 0) ? ($best_substring_length / $length2) : 0; - } elsif ($info_type =~ /^substring$/) { - return join("", @char_list1[$best_substring_start1 .. $best_substring_start1+$best_substring_length-1]); - } else { - $length1 = $#char_list1 + 1; - $length2 = $#char_list2 + 1; - $info = "s1=$s1;s2=$s2"; - $info .= ";best_substring_length=$best_substring_length"; - $info .= ";best_substring_start1=$best_substring_start1"; - $info .= ";best_substring_start2=$best_substring_start2"; - $info .= ";length1=$length1"; - $info .= ";length2=$length2"; - return $info; - } -} - -sub n_shared_chars_at_start { - local($caller,$s1,$s2) = @_; - - my $n = 0; - while (($s1 ne "") && ($s2 ne "")) { - ($c1, $rest1) = ($s1 =~ /^(.[\x80-\xBF]*)(.*)$/); - ($c2, $rest2) = ($s2 =~ /^(.[\x80-\xBF]*)(.*)$/); - if ($c1 eq $c2) { - $n++; - $s1 = $rest1; - $s2 = $rest2; - } else { - last; - } - } - return $n; -} - -sub char_length { - local($caller,$string,$byte_offset) = @_; - - my $char = ($byte_offset) ? substr($string, $byte_offset) : $string; - return 1 if $char =~ /^[\x00-\x7F]/; - return 2 if $char =~ /^[\xC0-\xDF]/; - return 3 if $char =~ /^[\xE0-\xEF]/; - return 4 if $char =~ /^[\xF0-\xF7]/; - return 5 if $char =~ /^[\xF8-\xFB]/; - return 6 if $char =~ /^[\xFC-\xFD]/; - return 0; -} - -sub length_in_utf8_chars { - local($caller,$s) = @_; - - $s =~ s/[\x80-\xBF]//g; - $s =~ s/[\x00-\x7F\xC0-\xFF]/c/g; - return length($s); -} - -sub byte_length_of_n_chars { - local($caller,$char_length,$string,$byte_offset,$undef_return_value) = @_; - - $byte_offset = 0 unless defined($byte_offset); - $undef_return_value = -1 unless defined($undef_return_value); - my $result = 0; - my $len; - foreach $i ((1 .. $char_length)) { - $len = $caller->char_length($string,($byte_offset+$result)); - return $undef_return_value unless $len; - $result += $len; - } - return $result; -} - -sub replace_non_ASCII_bytes { - local($caller,$string,$replacement) = @_; - - $replacement = "HEX" unless defined($replacement); - if ($replacement =~ /^(Unicode|U\+4|\\u|HEX)$/) { - $new_string = ""; - while (($pre,$utf8_char, $post) = ($string =~ /^([\x09\x0A\x20-\x7E]*)([\x00-\x08\x0B-\x1F\x7F]|[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF][\x80-\xBF]|[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF]|[\xF8-\xFF][\x80-\xBF]+|[\x80-\xBF])(.*)$/s)) { - if ($replacement =~ /Unicode/) { - $new_string .= $pre . "utf8_to_unicode($utf8_char)) . ">"; - } elsif ($replacement =~ /\\u/) { - $new_string .= $pre . "\\u" . (uc sprintf("%04x", $caller->utf8_to_unicode($utf8_char))); - } elsif ($replacement =~ /U\+4/) { - $new_string .= $pre . "utf8_to_4hex_unicode($utf8_char)) . ">"; - } else { - $new_string .= $pre . "utf8_to_hex($utf8_char) . 
">"; - } - $string = $post; - } - $new_string .= $string; - } else { - $new_string = $string; - $new_string =~ s/[\x80-\xFF]/$replacement/g; - } - return $new_string; -} - -sub valid_utf8_string_p { - local($caller,$string) = @_; - - return $string =~ /^(?:[\x09\x0A\x20-\x7E]|[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF][\x80-\xBF]|[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF])*$/; -} - -sub valid_utf8_string_incl_ascii_control_p { - local($caller,$string) = @_; - - return $string =~ /^(?:[\x00-\x7F]|[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF][\x80-\xBF]|[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF])*$/; -} - -sub utf8_to_hex { - local($caller,$s) = @_; - - $hex = ""; - foreach $i ((0 .. length($s)-1)) { - $hex .= uc sprintf("%2.2x",ord(substr($s, $i, 1))); - } - return $hex; -} - -sub hex_to_utf8 { - local($caller,$s) = @_; - # surface string \xE2\x80\xBA to UTF8 - - my $utf8 = ""; - while (($hex, $rest) = ($s =~ /^(?:\\x)?([0-9A-Fa-f]{2,2})(.*)$/)) { - $utf8 .= sprintf("%c", hex($hex)); - $s = $rest; - } - return $utf8; -} - -sub utf8_to_4hex_unicode { - local($caller,$s) = @_; - - return sprintf("%4.4x", $caller->utf8_to_unicode($s)); -} - -sub utf8_to_unicode { - local($caller,$s) = @_; - - $unicode = 0; - foreach $i ((0 .. length($s)-1)) { - $c = substr($s, $i, 1); - if ($c =~ /^[\x80-\xBF]$/) { - $unicode = $unicode * 64 + (ord($c) & 0x3F); - } elsif ($c =~ /^[\xC0-\xDF]$/) { - $unicode = $unicode * 32 + (ord($c) & 0x1F); - } elsif ($c =~ /^[\xE0-\xEF]$/) { - $unicode = $unicode * 16 + (ord($c) & 0x0F); - } elsif ($c =~ /^[\xF0-\xF7]$/) { - $unicode = $unicode * 8 + (ord($c) & 0x07); - } elsif ($c =~ /^[\xF8-\xFB]$/) { - $unicode = $unicode * 4 + (ord($c) & 0x03); - } elsif ($c =~ /^[\xFC-\xFD]$/) { - $unicode = $unicode * 2 + (ord($c) & 0x01); - } - } - return $unicode; -} - -sub charhex { - local($caller,$string) = @_; - - my $result = ""; - while ($string ne "") { - $char = substr($string, 0, 1); - $string = substr($string, 1); - if ($char =~ /^[ -~]$/) { - $result .= $char; - } else { - $hex = sprintf("%2.2x",ord($char)); - $hex =~ tr/a-f/A-F/; - $result .= ""; - } - } - return $result; -} - -sub windows1252_to_utf8 { - local($caller,$s, $norm_to_ascii_p, $preserve_potential_utf8s_p) = @_; - - return $s if $s =~ /^[\x00-\x7F]*$/; # all ASCII - - $norm_to_ascii_p = 1 unless defined($norm_to_ascii_p); - $preserve_potential_utf8s_p = 1 unless defined($preserve_potential_utf8s_p); - my $result = ""; - my $c = ""; - while ($s ne "") { - $n_bytes = 1; - if ($s =~ /^[\x00-\x7F]/) { - $result .= substr($s, 0, 1); # ASCII - } elsif ($preserve_potential_utf8s_p && ($s =~ /^[\xC0-\xDF][\x80-\xBF]/)) { - $result .= substr($s, 0, 2); # valid 2-byte UTF8 - $n_bytes = 2; - } elsif ($preserve_potential_utf8s_p && ($s =~ /^[\xE0-\xEF][\x80-\xBF][\x80-\xBF]/)) { - $result .= substr($s, 0, 3); # valid 3-byte UTF8 - $n_bytes = 3; - } elsif ($preserve_potential_utf8s_p && ($s =~ /^[\xF0-\xF7][\x80-\xBF][\x80-\xBF][\x80-\xBF]/)) { - $result .= substr($s, 0, 4); # valid 4-byte UTF8 - $n_bytes = 4; - } elsif ($preserve_potential_utf8s_p && ($s =~ /^[\xF8-\xFB][\x80-\xBF][\x80-\xBF][\x80-\xBF][\x80-\xBF]/)) { - $result .= substr($s, 0, 5); # valid 5-byte UTF8 - $n_bytes = 5; - } elsif ($s =~ /^[\xA0-\xBF]/) { - $c = substr($s, 0, 1); - $result .= "\xC2$c"; - } elsif ($s =~ /^[\xC0-\xFF]/) { - $c = substr($s, 0, 1); - $c =~ tr/[\xC0-\xFF]/[\x80-\xBF]/; - $result .= "\xC3$c"; - } elsif ($s =~ /^\x80/) { - $result .= "\xE2\x82\xAC"; # Euro sign - } elsif ($s =~ /^\x82/) { - $result .= "\xE2\x80\x9A"; # 
single low quotation mark - } elsif ($s =~ /^\x83/) { - $result .= "\xC6\x92"; # Latin small letter f with hook - } elsif ($s =~ /^\x84/) { - $result .= "\xE2\x80\x9E"; # double low quotation mark - } elsif ($s =~ /^\x85/) { - $result .= ($norm_to_ascii_p) ? "..." : "\xE2\x80\xA6"; # horizontal ellipsis (three dots) - } elsif ($s =~ /^\x86/) { - $result .= "\xE2\x80\xA0"; # dagger - } elsif ($s =~ /^\x87/) { - $result .= "\xE2\x80\xA1"; # double dagger - } elsif ($s =~ /^\x88/) { - $result .= "\xCB\x86"; # circumflex - } elsif ($s =~ /^\x89/) { - $result .= "\xE2\x80\xB0"; # per mille sign - } elsif ($s =~ /^\x8A/) { - $result .= "\xC5\xA0"; # Latin capital letter S with caron - } elsif ($s =~ /^\x8B/) { - $result .= "\xE2\x80\xB9"; # single left-pointing angle quotation mark - } elsif ($s =~ /^\x8C/) { - $result .= "\xC5\x92"; # OE ligature - } elsif ($s =~ /^\x8E/) { - $result .= "\xC5\xBD"; # Latin capital letter Z with caron - } elsif ($s =~ /^\x91/) { - $result .= ($norm_to_ascii_p) ? "`" : "\xE2\x80\x98"; # left single quotation mark - } elsif ($s =~ /^\x92/) { - $result .= ($norm_to_ascii_p) ? "'" : "\xE2\x80\x99"; # right single quotation mark - } elsif ($s =~ /^\x93/) { - $result .= "\xE2\x80\x9C"; # left double quotation mark - } elsif ($s =~ /^\x94/) { - $result .= "\xE2\x80\x9D"; # right double quotation mark - } elsif ($s =~ /^\x95/) { - $result .= "\xE2\x80\xA2"; # bullet - } elsif ($s =~ /^\x96/) { - $result .= ($norm_to_ascii_p) ? "-" : "\xE2\x80\x93"; # n dash - } elsif ($s =~ /^\x97/) { - $result .= ($norm_to_ascii_p) ? "-" : "\xE2\x80\x94"; # m dash - } elsif ($s =~ /^\x98/) { - $result .= ($norm_to_ascii_p) ? "~" : "\xCB\x9C"; # small tilde - } elsif ($s =~ /^\x99/) { - $result .= "\xE2\x84\xA2"; # trade mark sign - } elsif ($s =~ /^\x9A/) { - $result .= "\xC5\xA1"; # Latin small letter s with caron - } elsif ($s =~ /^\x9B/) { - $result .= "\xE2\x80\xBA"; # single right-pointing angle quotation mark - } elsif ($s =~ /^\x9C/) { - $result .= "\xC5\x93"; # oe ligature - } elsif ($s =~ /^\x9E/) { - $result .= "\xC5\xBE"; # Latin small letter z with caron - } elsif ($s =~ /^\x9F/) { - $result .= "\xC5\xB8"; # Latin capital letter Y with diaeresis - } else { - $result .= "?"; - } - $s = substr($s, $n_bytes); - } - return $result; -} - -sub delete_weird_stuff { - local($caller, $s) = @_; - - # delete control chacters (except tab and linefeed), zero-width characters, byte order mark, - # directional marks, join marks, variation selectors, Arabic tatweel - $s =~ s/([\x00-\x08\x0B-\x1F\x7F]|\xC2[\x80-\x9F]|\xD9\x80|\xE2\x80[\x8B-\x8F]|\xEF\xB8[\x80-\x8F]|\xEF\xBB\xBF|\xF3\xA0[\x84-\x87][\x80-\xBF])//g; - return $s; -} - -sub number_of_utf8_character { - local($caller, $s) = @_; - - $s2 = $s; - $s2 =~ s/[\x80-\xBF]//g; - return length($s2); -} - -sub cap_letter_reg_exp { - # includes A-Z and other Latin-based capital letters with accents, umlauts and other decorations etc. 
- return "[A-Z]|\xC3[\x80-\x96\x98-\x9E]|\xC4[\x80\x82\x84\x86\x88\x8A\x8C\x8E\x90\x94\x964\x98\x9A\x9C\x9E\xA0\xA2\xA4\xA6\xA8\xAA\xAC\xAE\xB0\xB2\xB4\xB6\xB9\xBB\xBD\xBF]|\xC5[\x81\x83\x85\x87\x8A\x8C\x8E\x90\x92\x96\x98\x9A\x9C\x9E\xA0\xA2\xA4\xA6\xA8\xAA\xAC\xB0\xB2\xB4\xB6\xB8\xB9\xBB\xBD]"; -} - -sub regex_extended_case_expansion { - local($caller, $s) = @_; - - if ($s =~ /\xC3/) { - $s =~ s/\xC3\xA0/\xC3\[\x80\xA0\]/g; - $s =~ s/\xC3\xA1/\xC3\[\x81\xA1\]/g; - $s =~ s/\xC3\xA2/\xC3\[\x82\xA2\]/g; - $s =~ s/\xC3\xA3/\xC3\[\x83\xA3\]/g; - $s =~ s/\xC3\xA4/\xC3\[\x84\xA4\]/g; - $s =~ s/\xC3\xA5/\xC3\[\x85\xA5\]/g; - $s =~ s/\xC3\xA6/\xC3\[\x86\xA6\]/g; - $s =~ s/\xC3\xA7/\xC3\[\x87\xA7\]/g; - $s =~ s/\xC3\xA8/\xC3\[\x88\xA8\]/g; - $s =~ s/\xC3\xA9/\xC3\[\x89\xA9\]/g; - $s =~ s/\xC3\xAA/\xC3\[\x8A\xAA\]/g; - $s =~ s/\xC3\xAB/\xC3\[\x8B\xAB\]/g; - $s =~ s/\xC3\xAC/\xC3\[\x8C\xAC\]/g; - $s =~ s/\xC3\xAD/\xC3\[\x8D\xAD\]/g; - $s =~ s/\xC3\xAE/\xC3\[\x8E\xAE\]/g; - $s =~ s/\xC3\xAF/\xC3\[\x8F\xAF\]/g; - $s =~ s/\xC3\xB0/\xC3\[\x90\xB0\]/g; - $s =~ s/\xC3\xB1/\xC3\[\x91\xB1\]/g; - $s =~ s/\xC3\xB2/\xC3\[\x92\xB2\]/g; - $s =~ s/\xC3\xB3/\xC3\[\x93\xB3\]/g; - $s =~ s/\xC3\xB4/\xC3\[\x94\xB4\]/g; - $s =~ s/\xC3\xB5/\xC3\[\x95\xB5\]/g; - $s =~ s/\xC3\xB6/\xC3\[\x96\xB6\]/g; - $s =~ s/\xC3\xB8/\xC3\[\x98\xB8\]/g; - $s =~ s/\xC3\xB9/\xC3\[\x99\xB9\]/g; - $s =~ s/\xC3\xBA/\xC3\[\x9A\xBA\]/g; - $s =~ s/\xC3\xBB/\xC3\[\x9B\xBB\]/g; - $s =~ s/\xC3\xBC/\xC3\[\x9C\xBC\]/g; - $s =~ s/\xC3\xBD/\xC3\[\x9D\xBD\]/g; - $s =~ s/\xC3\xBE/\xC3\[\x9E\xBE\]/g; - } - if ($s =~ /\xC5/) { - $s =~ s/\xC5\x91/\xC5\[\x90\x91\]/g; - $s =~ s/\xC5\xA1/\xC5\[\xA0\xA1\]/g; - $s =~ s/\xC5\xB1/\xC5\[\xB0\xB1\]/g; - } - - return $s; -} - -sub extended_lower_case { - local($caller, $s) = @_; - - $s =~ tr/A-Z/a-z/; - - # Latin-1 - if ($s =~ /\xC3[\x80-\x9F]/) { - $s =~ s/À/à/g; - $s =~ s/Á/á/g; - $s =~ s/Â/â/g; - $s =~ s/Ã/ã/g; - $s =~ s/Ä/ä/g; - $s =~ s/Å/å/g; - $s =~ s/Æ/æ/g; - $s =~ s/Ç/ç/g; - $s =~ s/È/è/g; - $s =~ s/É/é/g; - $s =~ s/Ê/ê/g; - $s =~ s/Ë/ë/g; - $s =~ s/Ì/ì/g; - $s =~ s/Í/í/g; - $s =~ s/Î/î/g; - $s =~ s/Ï/ï/g; - $s =~ s/Ð/ð/g; - $s =~ s/Ñ/ñ/g; - $s =~ s/Ò/ò/g; - $s =~ s/Ó/ó/g; - $s =~ s/Ô/ô/g; - $s =~ s/Õ/õ/g; - $s =~ s/Ö/ö/g; - $s =~ s/Ø/ø/g; - $s =~ s/Ù/ù/g; - $s =~ s/Ú/ú/g; - $s =~ s/Û/û/g; - $s =~ s/Ü/ü/g; - $s =~ s/Ý/ý/g; - $s =~ s/Þ/þ/g; - } - # Latin Extended-A - if ($s =~ /[\xC4-\xC5][\x80-\xBF]/) { - $s =~ s/Ā/ā/g; - $s =~ s/Ă/ă/g; - $s =~ s/Ą/ą/g; - $s =~ s/Ć/ć/g; - $s =~ s/Ĉ/ĉ/g; - $s =~ s/Ċ/ċ/g; - $s =~ s/Č/č/g; - $s =~ s/Ď/ď/g; - $s =~ s/Đ/đ/g; - $s =~ s/Ē/ē/g; - $s =~ s/Ĕ/ĕ/g; - $s =~ s/Ė/ė/g; - $s =~ s/Ę/ę/g; - $s =~ s/Ě/ě/g; - $s =~ s/Ĝ/ĝ/g; - $s =~ s/Ğ/ğ/g; - $s =~ s/Ġ/ġ/g; - $s =~ s/Ģ/ģ/g; - $s =~ s/Ĥ/ĥ/g; - $s =~ s/Ħ/ħ/g; - $s =~ s/Ĩ/ĩ/g; - $s =~ s/Ī/ī/g; - $s =~ s/Ĭ/ĭ/g; - $s =~ s/Į/į/g; - $s =~ s/İ/ı/g; - $s =~ s/IJ/ij/g; - $s =~ s/Ĵ/ĵ/g; - $s =~ s/Ķ/ķ/g; - $s =~ s/Ĺ/ĺ/g; - $s =~ s/Ļ/ļ/g; - $s =~ s/Ľ/ľ/g; - $s =~ s/Ŀ/ŀ/g; - $s =~ s/Ł/ł/g; - $s =~ s/Ń/ń/g; - $s =~ s/Ņ/ņ/g; - $s =~ s/Ň/ň/g; - $s =~ s/Ŋ/ŋ/g; - $s =~ s/Ō/ō/g; - $s =~ s/Ŏ/ŏ/g; - $s =~ s/Ő/ő/g; - $s =~ s/Œ/œ/g; - $s =~ s/Ŕ/ŕ/g; - $s =~ s/Ŗ/ŗ/g; - $s =~ s/Ř/ř/g; - $s =~ s/Ś/ś/g; - $s =~ s/Ŝ/ŝ/g; - $s =~ s/Ş/ş/g; - $s =~ s/Š/š/g; - $s =~ s/Ţ/ţ/g; - $s =~ s/Ť/ť/g; - $s =~ s/Ŧ/ŧ/g; - $s =~ s/Ũ/ũ/g; - $s =~ s/Ū/ū/g; - $s =~ s/Ŭ/ŭ/g; - $s =~ s/Ů/ů/g; - $s =~ s/Ű/ű/g; - $s =~ s/Ų/ų/g; - $s =~ s/Ŵ/ŵ/g; - $s =~ s/Ŷ/ŷ/g; - $s =~ s/Ź/ź/g; - $s =~ s/Ż/ż/g; - $s =~ s/Ž/ž/g; - } - # Greek letters - if ($s =~ /\xCE[\x86-\xAB]/) { - $s =~ s/Α/α/g; 
- $s =~ s/Β/β/g; - $s =~ s/Γ/γ/g; - $s =~ s/Δ/δ/g; - $s =~ s/Ε/ε/g; - $s =~ s/Ζ/ζ/g; - $s =~ s/Η/η/g; - $s =~ s/Θ/θ/g; - $s =~ s/Ι/ι/g; - $s =~ s/Κ/κ/g; - $s =~ s/Λ/λ/g; - $s =~ s/Μ/μ/g; - $s =~ s/Ν/ν/g; - $s =~ s/Ξ/ξ/g; - $s =~ s/Ο/ο/g; - $s =~ s/Π/π/g; - $s =~ s/Ρ/ρ/g; - $s =~ s/Σ/σ/g; - $s =~ s/Τ/τ/g; - $s =~ s/Υ/υ/g; - $s =~ s/Φ/φ/g; - $s =~ s/Χ/χ/g; - $s =~ s/Ψ/ψ/g; - $s =~ s/Ω/ω/g; - $s =~ s/Ϊ/ϊ/g; - $s =~ s/Ϋ/ϋ/g; - $s =~ s/Ά/ά/g; - $s =~ s/Έ/έ/g; - $s =~ s/Ή/ή/g; - $s =~ s/Ί/ί/g; - $s =~ s/Ό/ό/g; - $s =~ s/Ύ/ύ/g; - $s =~ s/Ώ/ώ/g; - } - # Cyrillic letters - if ($s =~ /\xD0[\x80-\xAF]/) { - $s =~ s/А/а/g; - $s =~ s/Б/б/g; - $s =~ s/В/в/g; - $s =~ s/Г/г/g; - $s =~ s/Д/д/g; - $s =~ s/Е/е/g; - $s =~ s/Ж/ж/g; - $s =~ s/З/з/g; - $s =~ s/И/и/g; - $s =~ s/Й/й/g; - $s =~ s/К/к/g; - $s =~ s/Л/л/g; - $s =~ s/М/м/g; - $s =~ s/Н/н/g; - $s =~ s/О/о/g; - $s =~ s/П/п/g; - $s =~ s/Р/р/g; - $s =~ s/С/с/g; - $s =~ s/Т/т/g; - $s =~ s/У/у/g; - $s =~ s/Ф/ф/g; - $s =~ s/Х/х/g; - $s =~ s/Ц/ц/g; - $s =~ s/Ч/ч/g; - $s =~ s/Ш/ш/g; - $s =~ s/Щ/щ/g; - $s =~ s/Ъ/ъ/g; - $s =~ s/Ы/ы/g; - $s =~ s/Ь/ь/g; - $s =~ s/Э/э/g; - $s =~ s/Ю/ю/g; - $s =~ s/Я/я/g; - $s =~ s/Ѐ/ѐ/g; - $s =~ s/Ё/ё/g; - $s =~ s/Ђ/ђ/g; - $s =~ s/Ѓ/ѓ/g; - $s =~ s/Є/є/g; - $s =~ s/Ѕ/ѕ/g; - $s =~ s/І/і/g; - $s =~ s/Ї/ї/g; - $s =~ s/Ј/ј/g; - $s =~ s/Љ/љ/g; - $s =~ s/Њ/њ/g; - $s =~ s/Ћ/ћ/g; - $s =~ s/Ќ/ќ/g; - $s =~ s/Ѝ/ѝ/g; - $s =~ s/Ў/ў/g; - $s =~ s/Џ/џ/g; - } - # Fullwidth A-Z - if ($s =~ /\xEF\xBC[\xA1-\xBA]/) { - $s =~ s/A/a/g; - $s =~ s/B/b/g; - $s =~ s/C/c/g; - $s =~ s/D/d/g; - $s =~ s/E/e/g; - $s =~ s/F/f/g; - $s =~ s/G/g/g; - $s =~ s/H/h/g; - $s =~ s/I/i/g; - $s =~ s/J/j/g; - $s =~ s/K/k/g; - $s =~ s/L/l/g; - $s =~ s/M/m/g; - $s =~ s/N/n/g; - $s =~ s/O/o/g; - $s =~ s/P/p/g; - $s =~ s/Q/q/g; - $s =~ s/R/r/g; - $s =~ s/S/s/g; - $s =~ s/T/t/g; - $s =~ s/U/u/g; - $s =~ s/V/v/g; - $s =~ s/W/w/g; - $s =~ s/X/x/g; - $s =~ s/Y/y/g; - $s =~ s/Z/z/g; - } - - return $s; -} - -sub extended_upper_case { - local($caller, $s) = @_; - - $s =~ tr/a-z/A-Z/; - return $s unless $s =~ /[\xC3-\xC5][\x80-\xBF]/; - - $s =~ s/\xC3\xA0/\xC3\x80/g; - $s =~ s/\xC3\xA1/\xC3\x81/g; - $s =~ s/\xC3\xA2/\xC3\x82/g; - $s =~ s/\xC3\xA3/\xC3\x83/g; - $s =~ s/\xC3\xA4/\xC3\x84/g; - $s =~ s/\xC3\xA5/\xC3\x85/g; - $s =~ s/\xC3\xA6/\xC3\x86/g; - $s =~ s/\xC3\xA7/\xC3\x87/g; - $s =~ s/\xC3\xA8/\xC3\x88/g; - $s =~ s/\xC3\xA9/\xC3\x89/g; - $s =~ s/\xC3\xAA/\xC3\x8A/g; - $s =~ s/\xC3\xAB/\xC3\x8B/g; - $s =~ s/\xC3\xAC/\xC3\x8C/g; - $s =~ s/\xC3\xAD/\xC3\x8D/g; - $s =~ s/\xC3\xAE/\xC3\x8E/g; - $s =~ s/\xC3\xAF/\xC3\x8F/g; - $s =~ s/\xC3\xB0/\xC3\x90/g; - $s =~ s/\xC3\xB1/\xC3\x91/g; - $s =~ s/\xC3\xB2/\xC3\x92/g; - $s =~ s/\xC3\xB3/\xC3\x93/g; - $s =~ s/\xC3\xB4/\xC3\x94/g; - $s =~ s/\xC3\xB5/\xC3\x95/g; - $s =~ s/\xC3\xB6/\xC3\x96/g; - $s =~ s/\xC3\xB8/\xC3\x98/g; - $s =~ s/\xC3\xB9/\xC3\x99/g; - $s =~ s/\xC3\xBA/\xC3\x9A/g; - $s =~ s/\xC3\xBB/\xC3\x9B/g; - $s =~ s/\xC3\xBC/\xC3\x9C/g; - $s =~ s/\xC3\xBD/\xC3\x9D/g; - $s =~ s/\xC3\xBE/\xC3\x9E/g; - - $s =~ s/\xC5\x91/\xC5\x90/g; - $s =~ s/\xC5\xA1/\xC5\xA0/g; - $s =~ s/\xC5\xB1/\xC5\xB0/g; - return $s unless $s =~ /[\xC3-\xC5][\x80-\xBF]/; - - return $s; -} - -sub extended_first_upper_case { - local($caller, $s) = @_; - - if (($first_char, $rest) = ($s =~ /^([\x00-\x7F]|[\xC0-\xDF][\x80-\xBF]|[\xE0-\xEF][\x80-\xBF][\x80-\xBF])(.*)$/)) { - return $caller->extended_upper_case($first_char) . 
$rest; - } else { - return $s; - } -} - -sub repair_doubly_converted_utf8_strings { - local($caller, $s) = @_; - - if ($s =~ /\xC3[\x82-\x85]\xC2[\x80-\xBF]/) { - $s =~ s/\xC3\x82\xC2([\x80-\xBF])/\xC2$1/g; - $s =~ s/\xC3\x83\xC2([\x80-\xBF])/\xC3$1/g; - $s =~ s/\xC3\x84\xC2([\x80-\xBF])/\xC4$1/g; - $s =~ s/\xC3\x85\xC2([\x80-\xBF])/\xC5$1/g; - } - return $s; -} - -sub repair_misconverted_windows_to_utf8_strings { - local($caller, $s) = @_; - - # correcting conversions of UTF8 using Latin1-to-UTF converter - if ($s =~ /\xC3\xA2\xC2\x80\xC2[\x90-\xEF]/) { - my $result = ""; - while (($pre,$last_c,$post) = ($s =~ /^(.*?)\xC3\xA2\xC2\x80\xC2([\x90-\xEF])(.*)$/s)) { - $result .= "$pre\xE2\x80$last_c"; - $s = $post; - } - $result .= $s; - $s = $result; - } - # correcting conversions of Windows1252-to-UTF8 using Latin1-to-UTF converter - if ($s =~ /\xC2[\x80-\x9F]/) { - my $result = ""; - while (($pre,$c_windows,$post) = ($s =~ /^(.*?)\xC2([\x80-\x9F])(.*)$/s)) { - $c_utf8 = $caller->windows1252_to_utf8($c_windows, 0); - $result .= ($c_utf8 eq "?") ? ($pre . "\xC2" . $c_windows) : "$pre$c_utf8"; - $s = $post; - } - $result .= $s; - $s = $result; - } - if ($s =~ /\xC3/) { - $s =~ s/\xC3\xA2\xE2\x80\x9A\xC2\xAC/\xE2\x82\xAC/g; # x80 -> Euro sign - # x81 codepoint undefined in Windows 1252 - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC5\xA1/\xE2\x80\x9A/g; # x82 -> single low-9 quotation mark - $s =~ s/\xC3\x86\xE2\x80\x99/\xC6\x92/g; # x83 -> Latin small letter f with hook - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC5\xBE/\xE2\x80\x9E/g; # x84 -> double low-9 quotation mark - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xA6/\xE2\x80\xA6/g; # x85 -> horizontal ellipsis - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xA0/\xE2\x80\xA0/g; # x86 -> dagger - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xA1/\xE2\x80\xA1/g; # x87 -> double dagger - $s =~ s/\xC3\x8B\xE2\x80\xA0/\xCB\x86/g; # x88 -> modifier letter circumflex accent - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xB0/\xE2\x80\xB0/g; # x89 -> per mille sign - $s =~ s/\xC3\x85\xC2\xA0/\xC5\xA0/g; # x8A -> Latin capital letter S with caron - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xB9/\xE2\x80\xB9/g; # x8B -> single left-pointing angle quotation mark - $s =~ s/\xC3\x85\xE2\x80\x99/\xC5\x92/g; # x8C -> Latin capital ligature OE - # x8D codepoint undefined in Windows 1252 - $s =~ s/\xC3\x85\xC2\xBD/\xC5\xBD/g; # x8E -> Latin capital letter Z with caron - # x8F codepoint undefined in Windows 1252 - # x90 codepoint undefined in Windows 1252 - $s =~ s/\xC3\xA2\xE2\x82\xAC\xCB\x9C/\xE2\x80\x98/g; # x91 a-circumflex+euro+small tilde -> left single quotation mark - $s =~ s/\xC3\xA2\xE2\x82\xAC\xE2\x84\xA2/\xE2\x80\x99/g; # x92 a-circumflex+euro+trademark -> right single quotation mark - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC5\x93/\xE2\x80\x9C/g; # x93 a-circumflex+euro+Latin small ligature oe -> left double quotation mark - # x94 maps through undefined intermediate code point - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xA2/\xE2\x80\xA2/g; # x95 a-circumflex+euro+cent sign -> bullet - $s =~ s/\xC3\xA2\xE2\x82\xAC\xE2\x80\x9C/\xE2\x80\x93/g; # x96 a-circumflex+euro+left double quotation mark -> en dash - $s =~ s/\xC3\xA2\xE2\x82\xAC\xE2\x80\x9D/\xE2\x80\x94/g; # x97 a-circumflex+euro+right double quotation mark -> em dash - $s =~ s/\xC3\x8B\xC5\x93/\xCB\x9C/g; # x98 Latin capital e diaeresis+Latin small ligature oe -> small tilde - $s =~ s/\xC3\xA2\xE2\x80\x9E\xC2\xA2/\xE2\x84\xA2/g; # x99 -> trade mark sign - $s =~ s/\xC3\x85\xC2\xA1/\xC5\xA1/g; # x9A -> Latin small letter s with caron - $s =~ s/\xC3\xA2\xE2\x82\xAC\xC2\xBA/\xE2\x80\xBA/g; # x9B 
-> single right-pointing angle quotation mark - $s =~ s/\xC3\x85\xE2\x80\x9C/\xC5\x93/g; # x9C -> Latin small ligature oe - # x9D codepoint undefined in Windows 1252 - $s =~ s/\xC3\x85\xC2\xBE/\xC5\xBE/g; # x9E -> Latin small letter z with caron - $s =~ s/\xC3\x85\xC2\xB8/\xC5\xB8/g; # x9F -> Latin capital letter Y with diaeresis - $s =~ s/\xC3\xAF\xC2\xBF\xC2\xBD/\xEF\xBF\xBD/g; # replacement character - } - - return $s; -} - -sub latin1_to_utf { - local($caller, $s) = @_; - - my $result = ""; - while (($pre,$c,$post) = ($s =~ /^(.*?)([\x80-\xFF])(.*)$/s)) { - $result .= $pre; - if ($c =~ /^[\x80-\xBF]$/) { - $result .= "\xC2$c"; - } elsif ($c =~ /^[\xC0-\xFF]$/) { - $c =~ tr/[\xC0-\xFF]/[\x80-\xBF]/; - $result .= "\xC3$c"; - } - $s = $post; - } - $result .= $s; - return $result; -} - -sub character_type_is_letter_type { - local($caller, $char_type) = @_; - - return ($char_type =~ /\b((CJK|hiragana|kana|katakana)\s+character|diacritic|letter|syllable)\b/); -} - -sub character_type { - local($caller, $c) = @_; - - if ($c =~ /^[\x00-\x7F]/) { - return "XML tag" if $c =~ /^<.*>$/; - return "ASCII Latin letter" if $c =~ /^[a-z]$/i; - return "ASCII digit" if $c =~ /^[0-9]$/i; - return "ASCII whitespace" if $c =~ /^[\x09-\x0D\x20]$/; - return "ASCII control-character" if $c =~ /^[\x00-\x1F\x7F]$/; - return "ASCII currency" if $c eq "\$"; - return "ASCII punctuation"; - } elsif ($c =~ /^[\xC0-\xDF]/) { - return "non-UTF8 (invalid)" unless $c =~ /^[\xC0-\xDF][\x80-\xBF]$/; - return "non-shortest-UTF8 (invalid)" if $c =~ /[\xC0-\xC1]/; - return "non-ASCII control-character" if $c =~ /\xC2[\x80-\x9F]/; - return "non-ASCII whitespace" if $c =~ /\xC2\xA0/; - return "non-ASCII currency" if $c =~ /\xC2[\xA2-\xA5]/; - return "fraction" if $c =~ /\xC2[\xBC-\xBE]/; # NEW - return "superscript digit" if $c =~ /\xC2[\xB2\xB3\xB9]/; - return "non-ASCII Latin letter" if $c =~ /\xC2\xB5/; # micro sign - return "non-ASCII punctuation" if $c =~ /\xC2[\xA0-\xBF]/; - return "non-ASCII punctuation" if $c =~ /\xC3[\x97\xB7]/; - return "non-ASCII Latin letter" if $c =~ /\xC3[\x80-\xBF]/; - return "Latin ligature letter" if $c =~ /\xC4[\xB2\xB3]/; - return "Latin ligature letter" if $c =~ /\xC5[\x92\x93]/; - return "non-ASCII Latin letter" if $c =~ /[\xC4-\xC8]/; - return "non-ASCII Latin letter" if $c =~ /\xC9[\x80-\x8F]/; - return "IPA" if $c =~ /\xC9[\x90-\xBF]/; - return "IPA" if $c =~ /\xCA[\x80-\xBF]/; - return "IPA" if $c =~ /\xCB[\x80-\xBF]/; - return "combining-diacritic" if $c =~ /\xCC[\x80-\xBF]/; - return "combining-diacritic" if $c =~ /\xCD[\x80-\xAF]/; - return "Greek punctuation" if $c =~ /\xCD[\xBE]/; # Greek question mark - return "Greek punctuation" if $c =~ /\xCE[\x87]/; # Greek semicolon - return "Greek letter" if $c =~ /\xCD[\xB0-\xBF]/; - return "Greek letter" if $c =~ /\xCE/; - return "Greek letter" if $c =~ /\xCF[\x80-\xA1\xB3\xB7\xB8\xBA\xBB]/; - return "Coptic letter" if $c =~ /\xCF[\xA2-\xAF]/; - return "Cyrillic letter" if $c =~ /[\xD0-\xD3]/; - return "Cyrillic letter" if $c =~ /\xD4[\x80-\xAF]/; - return "Armenian punctuation" if $c =~ /\xD5[\x9A-\x9F]/; - return "Armenian punctuation" if $c =~ /\xD6[\x89-\x8F]/; - return "Armenian letter" if $c =~ /\xD4[\xB0-\xBF]/; - return "Armenian letter" if $c =~ /\xD5/; - return "Armenian letter" if $c =~ /\xD6[\x80-\x8F]/; - return "Hebrew accent" if $c =~ /\xD6[\x91-\xAE]/; - return "Hebrew punctuation" if $c =~ /\xD6\xBE/; - return "Hebrew punctuation" if $c =~ /\xD7[\x80\x83\x86\xB3\xB4]/; - return "Hebrew point" if $c =~ /\xD6[\xB0-\xBF]/; - 
return "Hebrew point" if $c =~ /\xD7[\x81\x82\x87]/; - return "Hebrew letter" if $c =~ /\xD7[\x90-\xB2]/; - return "other Hebrew" if $c =~ /\xD6[\x90-\xBF]/; - return "other Hebrew" if $c =~ /\xD7/; - return "Arabic currency" if $c =~ /\xD8\x8B/; # Afghani sign - return "Arabic punctuation" if $c =~ /\xD8[\x89-\x8D\x9B\x9E\x9F]/; - return "Arabic punctuation" if $c =~ /\xD9[\xAA-\xAD]/; - return "Arabic punctuation" if $c =~ /\xDB[\x94]/; - return "Arabic tatweel" if $c =~ /\xD9\x80/; - return "Arabic letter" if $c =~ /\xD8[\xA0-\xBF]/; - return "Arabic letter" if $c =~ /\xD9[\x81-\x9F]/; - return "Arabic letter" if $c =~ /\xD9[\xAE-\xBF]/; - return "Arabic letter" if $c =~ /\xDA[\x80-\xBF]/; - return "Arabic letter" if $c =~ /\xDB[\x80-\x95]/; - return "Arabic Indic digit" if $c =~ /\xD9[\xA0-\xA9]/; - return "Arabic Indic digit" if $c =~ /\xDB[\xB0-\xB9]/; - return "other Arabic" if $c =~ /[\xD8-\xDB]/; - return "Syriac punctuation" if $c =~ /\xDC[\x80-\x8F]/; - return "Syriac letter" if $c =~ /\xDC[\x90-\xAF]/; - return "Syriac diacritic" if $c =~ /\xDC[\xB0-\xBF]/; - return "Syriac diacritic" if $c =~ /\xDD[\x80-\x8A]/; - return "Thaana letter" if $c =~ /\xDE/; - } elsif ($c =~ /^[\xE0-\xEF]/) { - return "non-UTF8 (invalid)" unless $c =~ /^[\xE0-\xEF][\x80-\xBF]{2,2}$/; - return "non-shortest-UTF8 (invalid)" if $c =~ /\xE0[\x80-\x9F]/; - return "Arabic letter" if $c =~ /\xE0\xA2[\xA0-\xBF]/; # extended letters - return "other Arabic" if $c =~ /\xE0\xA3/; # extended characters - return "Devanagari punctuation" if $c =~ /\xE0\xA5[\xA4\xA5]/; # danda, double danda - return "Devanagari digit" if $c =~ /\xE0\xA5[\xA6-\xAF]/; - return "Devanagari letter" if $c =~ /\xE0[\xA4-\xA5]/; - return "Bengali digit" if $c =~ /\xE0\xA7[\xA6-\xAF]/; - return "Bengali currency" if $c =~ /\xE0\xA7[\xB2-\xB9]/; - return "Bengali letter" if $c =~ /\xE0[\xA6-\xA7]/; - return "Gurmukhi digit" if $c =~ /\xE0\xA9[\xA6-\xAF]/; - return "Gurmukhi letter" if $c =~ /\xE0[\xA8-\xA9]/; - return "Gujarati digit" if $c =~ /\xE0\xAB[\xA6-\xAF]/; - return "Gujarati letter" if $c =~ /\xE0[\xAA-\xAB]/; - return "Oriya digit" if $c =~ /\xE0\xAD[\xA6-\xAF]/; - return "Oriya fraction" if $c =~ /\xE0\xAD[\xB2-\xB7]/; - return "Oriya letter" if $c =~ /\xE0[\xAC-\xAD]/; - return "Tamil digit" if $c =~ /\xE0\xAF[\xA6-\xAF]/; - return "Tamil number" if $c =~ /\xE0\xAF[\xB0-\xB2]/; # number (10, 100, 1000) - return "Tamil letter" if $c =~ /\xE0[\xAE-\xAF]/; - return "Telegu digit" if $c =~ /\xE0\xB1[\xA6-\xAF]/; - return "Telegu fraction" if $c =~ /\xE0\xB1[\xB8-\xBE]/; - return "Telegu letter" if $c =~ /\xE0[\xB0-\xB1]/; - return "Kannada digit" if $c =~ /\xE0\xB3[\xA6-\xAF]/; - return "Kannada letter" if $c =~ /\xE0[\xB2-\xB3]/; - return "Malayalam digit" if $c =~ /\xE0\xB5[\x98-\x9E\xA6-\xB8]/; - return "Malayalam punctuation" if $c =~ /\xE0\xB5\xB9/; # date mark - return "Malayalam letter" if $c =~ /\xE0[\xB4-\xB5]/; - return "Sinhala digit" if $c =~ /\xE0\xB7[\xA6-\xAF]/; - return "Sinhala punctuation" if $c =~ /\xE0\xB7\xB4/; - return "Sinhala letter" if $c =~ /\xE0[\xB6-\xB7]/; - return "Thai currency" if $c =~ /\xE0\xB8\xBF/; - return "Thai digit" if $c =~ /\xE0\xB9[\x90-\x99]/; - return "Thai character" if $c =~ /\xE0[\xB8-\xB9]/; - return "Lao punctuation" if $c =~ /\xE0\xBA\xAF/; # Lao ellipsis - return "Lao digit" if $c =~ /\xE0\xBB[\x90-\x99]/; - return "Lao character" if $c =~ /\xE0[\xBA-\xBB]/; - return "Tibetan punctuation" if $c =~ /\xE0\xBC[\x81-\x94]/; - return "Tibetan sign" if $c =~ /\xE0\xBC[\x95-\x9F]/; - 
return "Tibetan digit" if $c =~ /\xE0\xBC[\xA0-\xB3]/; - return "Tibetan punctuation" if $c =~ /\xE0\xBC[\xB4-\xBD]/; - return "Tibetan letter" if $c =~ /\xE0[\xBC-\xBF]/; - return "Myanmar digit" if $c =~ /\xE1\x81[\x80-\x89]/; - return "Myanmar digit" if $c =~ /\xE1\x82[\x90-\x99]/; # Myanmar Shan digits - return "Myanmar punctuation" if $c =~ /\xE1\x81[\x8A-\x8B]/; - return "Myanmar letter" if $c =~ /\xE1[\x80-\x81]/; - return "Myanmar letter" if $c =~ /\xE1\x82[\x80-\x9F]/; - return "Georgian punctuation" if $c =~ /\xE1\x83\xBB/; - return "Georgian letter" if $c =~ /\xE1\x82[\xA0-\xBF]/; - return "Georgian letter" if $c =~ /\xE1\x83/; - return "Georgian letter" if $c =~ /\xE1\xB2[\x90-\xBF]/; # Georgian Mtavruli capital letters - return "Georgian letter" if $c =~ /\xE2\xB4[\x80-\xAF]/; # Georgian small letters (Khutsuri) - return "Korean Hangul letter" if $c =~ /\xE1[\x84-\x87]/; - return "Ethiopic punctuation" if $c =~ /\xE1\x8D[\xA0-\xA8]/; - return "Ethiopic digit" if $c =~ /\xE1\x8D[\xA9-\xB1]/; - return "Ethiopic number" if $c =~ /\xE1\x8D[\xB2-\xBC]/; - return "Ethiopic syllable" if $c =~ /\xE1[\x88-\x8D]/; - return "Cherokee letter" if $c =~ /\xE1\x8E[\xA0-\xBF]/; - return "Cherokee letter" if $c =~ /\xE1\x8F/; - return "Canadian punctuation" if $c =~ /\xE1\x90\x80/; # Canadian Syllabics hyphen - return "Canadian punctuation" if $c =~ /\xE1\x99\xAE/; # Canadian Syllabics full stop - return "Canadian syllable" if $c =~ /\xE1[\x90-\x99]/; - return "Canadian syllable" if $c =~ /\xE1\xA2[\xB0-\xBF]/; - return "Canadian syllable" if $c =~ /\xE1\xA3/; - return "Ogham whitespace" if $c =~ /\xE1\x9A\x80/; - return "Ogham letter" if $c =~ /\xE1\x9A[\x81-\x9A]/; - return "Ogham punctuation" if $c =~ /\xE1\x9A[\x9B-\x9C]/; - return "Runic punctuation" if $c =~ /\xE1\x9B[\xAB-\xAD]/; - return "Runic letter" if $c =~ /\xE1\x9A[\xA0-\xBF]/; - return "Runic letter" if $c =~ /\xE1\x9B/; - return "Khmer currency" if $c =~ /\xE1\x9F\x9B/; - return "Khmer digit" if $c =~ /\xE1\x9F[\xA0-\xA9]/; - return "Khmer letter" if $c =~ /\xE1[\x9E-\x9F]/; - return "Mongolian punctuation" if $c =~ /\xE1\xA0[\x80-\x8A]/; - return "Mongolian digit" if $c =~ /\xE1\xA0[\x90-\x99]/; - return "Mongolian letter" if $c =~ /\xE1[\xA0-\xA1]/; - return "Mongolian letter" if $c =~ /\xE1\xA2[\x80-\xAF]/; - return "Buginese letter" if $c =~ /\xE1\xA8[\x80-\x9B]/; - return "Buginese punctuation" if $c =~ /\xE1\xA8[\x9E-\x9F]/; - return "Balinese letter" if $c =~ /\xE1\xAC/; - return "Balinese letter" if $c =~ /\xE1\xAD[\x80-\x8F]/; - return "Balinese digit" if $c =~ /\xE1\xAD[\x90-\x99]/; - return "Balinese puncutation" if $c =~ /\xE1\xAD[\x9A-\xA0]/; - return "Balinese symbol" if $c =~ /\xE1\xAD[\xA1-\xBF]/; - return "Sundanese digit" if $c =~ /\xE1\xAE[\xB0-\xB9]/; - return "Sundanese letter" if $c =~ /\xE1\xAE/; - return "Cyrillic letter" if $c =~ /\xE1\xB2[\x80-\x8F]/; - return "Sundanese punctuation" if $c =~ /\xE1\xB3[\x80-\x8F]/; - return "IPA" if $c =~ /\xE1[\xB4-\xB6]/; - return "non-ASCII Latin letter" if $c =~ /\xE1[\xB8-\xBB]/; - return "Greek letter" if $c =~ /\xE1[\xBC-\xBF]/; - return "non-ASCII whitespace" if $c =~ /\xE2\x80[\x80-\x8A\xAF]/; - return "zero-width space" if $c =~ /\xE2\x80\x8B/; - return "zero-width non-space" if $c =~ /\xE2\x80\x8C/; - return "zero-width joiner" if $c =~ /\xE2\x80\x8D/; - return "directional mark" if $c =~ /\xE2\x80[\x8E-\x8F\xAA-\xAE]/; - return "non-ASCII punctuation" if $c =~ /\xE2\x80[\x90-\xBF]/; - return "non-ASCII punctuation" if $c =~ /\xE2\x81[\x80-\x9E]/; - return 
"superscript letter" if $c =~ /\xE2\x81[\xB1\xBF]/; - return "superscript digit" if $c =~ /\xE2\x81[\xB0-\xB9]/; - return "superscript punctuation" if $c =~ /\xE2\x81[\xBA-\xBE]/; - return "subscript digit" if $c =~ /\xE2\x82[\x80-\x89]/; - return "subscript punctuation" if $c =~ /\xE2\x82[\x8A-\x8E]/; - return "non-ASCII currency" if $c =~ /\xE2\x82[\xA0-\xBF]/; - return "letterlike symbol" if $c =~ /\xE2\x84/; - return "letterlike symbol" if $c =~ /\xE2\x85[\x80-\x8F]/; - return "fraction" if $c =~ /\xE2\x85[\x90-\x9E]/; # NEW - return "Roman number" if $c =~ /\xE2\x85[\xA0-\xBF]/; # NEW - return "arrow symbol" if $c =~ /\xE2\x86[\x90-\xBF]/; - return "arrow symbol" if $c =~ /\xE2\x87/; - return "mathematical operator" if $c =~ /\xE2[\x88-\x8B]/; - return "technical symbol" if $c =~ /\xE2[\x8C-\x8F]/; - return "enclosed alphanumeric" if $c =~ /\xE2\x91[\xA0-\xBF]/; - return "enclosed alphanumeric" if $c =~ /\xE2[\x92-\x93]/; - return "box drawing" if $c =~ /\xE2[\x94-\x95]/; - return "geometric shape" if $c =~ /\xE2\x96[\xA0-\xBF]/; - return "geometric shape" if $c =~ /\xE2\x97/; - return "pictograph" if $c =~ /\xE2[\x98-\x9E]/; - return "arrow symbol" if $c =~ /\xE2\xAC[\x80-\x91\xB0-\xBF]/; - return "geometric shape" if $c =~ /\xE2\xAC[\x92-\xAF]/; - return "arrow symbol" if $c =~ /\xE2\xAD[\x80-\x8F\x9A-\xBF]/; - return "geometric shape" if $c =~ /\xE2\xAD[\x90-\x99]/; - return "arrow symbol" if $c =~ /\xE2\xAE[\x80-\xB9]/; - return "geometric shape" if $c =~ /\xE2\xAE[\xBA-\xBF]/; - return "geometric shape" if $c =~ /\xE2\xAF[\x80-\x88\x8A-\x8F]/; - return "symbol" if $c =~ /\xE2[\xAC-\xAF]/; - return "Coptic fraction" if $c =~ /\xE2\xB3\xBD/; - return "Coptic punctuation" if $c =~ /\xE2\xB3[\xB9-\xBF]/; - return "Coptic letter" if $c =~ /\xE2[\xB2-\xB3]/; - return "Georgian letter" if $c =~ /\xE2\xB4[\x80-\xAF]/; - return "Tifinagh punctuation" if $c =~ /\xE2\xB5\xB0/; - return "Tifinagh letter" if $c =~ /\xE2\xB4[\xB0-\xBF]/; - return "Tifinagh letter" if $c =~ /\xE2\xB5/; - return "Ethiopic syllable" if $c =~ /\xE2\xB6/; - return "Ethiopic syllable" if $c =~ /\xE2\xB7[\x80-\x9F]/; - return "non-ASCII punctuation" if $c =~ /\xE3\x80[\x80-\x91\x94-\x9F\xB0\xBB-\xBD]/; - return "symbol" if $c =~ /\xE3\x80[\x91\x92\xA0\xB6\xB7]/; - return "Japanese hiragana character" if $c =~ /\xE3\x81/; - return "Japanese hiragana character" if $c =~ /\xE3\x82[\x80-\x9F]/; - return "Japanese katakana character" if $c =~ /\xE3\x82[\xA0-\xBF]/; - return "Japanese katakana character" if $c =~ /\xE3\x83/; - return "Bopomofo letter" if $c =~ /\xE3\x84[\x80-\xAF]/; - return "Korean Hangul letter" if $c =~ /\xE3\x84[\xB0-\xBF]/; - return "Korean Hangul letter" if $c =~ /\xE3\x85/; - return "Korean Hangul letter" if $c =~ /\xE3\x86[\x80-\x8F]/; - return "Bopomofo letter" if $c =~ /\xE3\x86[\xA0-\xBF]/; - return "CJK stroke" if $c =~ /\xE3\x87[\x80-\xAF]/; - return "Japanese kana character" if $c =~ /\xE3\x87[\xB0-\xBF]/; - return "CJK symbol" if $c =~ /\xE3[\x88-\x8B]/; - return "CJK square Latin abbreviation" if $c =~ /\xE3\x8D[\xB1-\xBA]/; - return "CJK square Latin abbreviation" if $c =~ /\xE3\x8E/; - return "CJK square Latin abbreviation" if $c =~ /\xE3\x8F[\x80-\x9F\xBF]/; - return "CJK character" if $c =~ /\xE4[\xB8-\xBF]/; - return "CJK character" if $c =~ /[\xE5-\xE9]/; - return "Yi syllable" if $c =~ /\xEA[\x80-\x92]/; - return "Lisu letter" if $c =~ /\xEA\x93[\x90-\xBD]/; - return "Lisu punctuation" if $c =~ /\xEA\x93[\xBE-\xBF]/; - return "Cyrillic letter" if $c =~ /\xEA\x99/; - return "Cyrillic 
letter" if $c =~ /\xEA\x9A[\x80-\x9F]/; - return "modifier tone" if $c =~ /\xEA\x9C[\x80-\xA1]/; - return "Javanese punctuation" if $c =~ /\xEA\xA7[\x81-\x8D\x9E-\x9F]/; - return "Javanese digit" if $c =~ /\xEA\xA7[\x90-\x99]/; - return "Javanese letter" if $c =~ /\xEA\xA6/; - return "Javanese letter" if $c =~ /\xEA\xA7[\x80-\x9F]/; - return "Ethiopic syllable" if $c =~ /\xEA\xAC[\x80-\xAF]/; - return "Cherokee letter" if $c =~ /\xEA\xAD[\xB0-\xBF]/; - return "Cherokee letter" if $c =~ /\xEA\xAE/; - return "Meetai Mayek digit" if $c =~ /\xEA\xAF[\xB0-\xB9]/; - return "Meetai Mayek letter" if $c =~ /\xEA\xAF/; - return "Korean Hangul syllable" if $c =~ /\xEA[\xB0-\xBF]/; - return "Korean Hangul syllable" if $c =~ /[\xEB-\xEC]/; - return "Korean Hangul syllable" if $c =~ /\xED[\x80-\x9E]/; - return "Klingon letter" if $c =~ /\xEF\xA3[\x90-\xA9]/; - return "Klingon digit" if $c =~ /\xEF\xA3[\xB0-\xB9]/; - return "Klingon punctuation" if $c =~ /\xEF\xA3[\xBD-\xBE]/; - return "Klingon symbol" if $c =~ /\xEF\xA3\xBF/; - return "private use character" if $c =~ /\xEE/; - return "Latin typographic ligature" if $c =~ /\xEF\xAC[\x80-\x86]/; - return "Hebrew presentation letter" if $c =~ /\xEF\xAC[\x9D-\xBF]/; - return "Hebrew presentation letter" if $c =~ /\xEF\xAD[\x80-\x8F]/; - return "Arabic presentation letter" if $c =~ /\xEF\xAD[\x90-\xBF]/; - return "Arabic presentation letter" if $c =~ /\xEF[\xAE-\xB7]/; - return "non-ASCII punctuation" if $c =~ /\xEF\xB8[\x90-\x99]/; - return "non-ASCII punctuation" if $c =~ /\xEF\xB8[\xB0-\xBF]/; - return "non-ASCII punctuation" if $c =~ /\xEF\xB9[\x80-\xAB]/; - return "Arabic presentation letter" if $c =~ /\xEF\xB9[\xB0-\xBF]/; - return "Arabic presentation letter" if $c =~ /\xEF\xBA/; - return "Arabic presentation letter" if $c =~ /\xEF\xBB[\x80-\xBC]/; - return "byte-order mark/zero-width no-break space" if $c eq "\xEF\xBB\xBF"; - return "fullwidth currency" if $c =~ /\xEF\xBC\x84/; - return "fullwidth digit" if $c =~ /\xEF\xBC[\x90-\x99]/; - return "fullwidth Latin letter" if $c =~ /\xEF\xBC[\xA1-\xBA]/; - return "fullwidth Latin letter" if $c =~ /\xEF\xBD[\x81-\x9A]/; - return "fullwidth punctuation" if $c =~ /\xEF\xBC/; - return "fullwidth punctuation" if $c =~ /\xEF\xBD[\x9B-\xA4]/; - return "halfwidth Japanese punctuation" if $c =~ /\xEF\xBD[\xA1-\xA4]/; - return "halfwidth Japanese katakana character" if $c =~ /\xEF\xBD[\xA5-\xBF]/; - return "halfwidth Japanese katakana character" if $c =~ /\xEF\xBE[\x80-\x9F]/; - return "fullwidth currency" if $c =~ /\xEF\xBF[\xA0-\xA6]/; - return "replacement character" if $c eq "\xEF\xBF\xBD"; - } elsif ($c =~ /[\xF0-\xF7]/) { - return "non-UTF8 (invalid)" unless $c =~ /[\xF0-\xF7][\x80-\xBF]{3,3}$/; - return "non-shortest-UTF8 (invalid)" if $c =~ /\xF0[\x80-\x8F]/; - return "Linear B syllable" if $c =~ /\xF0\x90\x80/; - return "Linear B syllable" if $c =~ /\xF0\x90\x81[\x80-\x8F]/; - return "Linear B symbol" if $c =~ /\xF0\x90\x81[\x90-\x9F]/; - return "Linear B ideogram" if $c =~ /\xF0\x90[\x82-\x83]/; - return "Gothic letter" if $c =~ /\xF0\x90\x8C[\xB0-\xBF]/; - return "Gothic letter" if $c =~ /\xF0\x90\x8D[\x80-\x8F]/; - return "Phoenician letter" if $c =~ /\xF0\x90\xA4[\x80-\x95]/; - return "Phoenician number" if $c =~ /\xF0\x90\xA4[\x96-\x9B]/; - return "Phoenician punctuation" if $c =~ /\xF0\x90\xA4\x9F/; # word separator - return "Old Hungarian number" if $c =~ /\xF0\x90\xB3[\xBA-\xBF]/; - return "Old Hungarian letter" if $c =~ /\xF0\x90[\xB2-\xB3]/; - return "Cuneiform digit" if $c =~ /\xF0\x92\x90/; # 
numberic sign - return "Cuneiform digit" if $c =~ /\xF0\x92\x91[\x80-\xAF]/; # numberic sign - return "Cuneiform punctuation" if $c =~ /\xF0\x92\x91[\xB0-\xBF]/; - return "Cuneiform sign" if $c =~ /\xF0\x92[\x80-\x95]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x81\xA8/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x82[\xAD-\xB6]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x86[\x90\xBC-\xBF]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x87[\x80-\x84]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x8D[\xA2-\xAB]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x8E[\x86-\x92]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x8F[\xBA-\xBF]/; - return "Egyptian hieroglyph number" if $c =~ /\xF0\x93\x90[\x80-\x83]/; - return "Egyptian hieroglyph" if $c =~ /\xF0\x93[\x80-\x90]/; - return "enclosed alphanumeric" if $c =~ /\xF0\x9F[\x84-\x87]/; - return "Mahjong symbol" if $c =~ /\xF0\x9F\x80[\x80-\xAF]/; - return "Domino symbol" if $c =~ /\xF0\x9F\x80[\xB0-\xBF]/; - return "Domino symbol" if $c =~ /\xF0\x9F\x81/; - return "Domino symbol" if $c =~ /\xF0\x9F\x82[\x80-\x9F]/; - return "Playing card symbol" if $c =~ /\xF0\x9F\x82[\xA0-\xBF]/; - return "Playing card symbol" if $c =~ /\xF0\x9F\x83/; - return "CJK symbol" if $c =~ /\xF0\x9F[\x88-\x8B]/; - return "pictograph" if $c =~ /\xF0\x9F[\x8C-\x9B]/; - return "geometric shape" if $c =~ /\xF0\x9F[\x9E-\x9F]/; - return "non-ASCII punctuation" if $c =~ /\xF0\x9F[\xA0-\xA3]/; - return "pictograph" if $c =~ /\xF0\x9F[\xA4-\xAB]/; - return "CJK character" if $c =~ /\xF0[\xA0-\xAF]/; - return "tag" if $c =~ /\xF3\xA0[\x80-\x81]/; - return "variation selector" if $c =~ /\xF3\xA0[\x84-\x87]/; - return "private use character" if $c =~ /\xF3[\xB0-\xBF]/; - return "private use character" if $c =~ /\xF4[\x80-\x8F]/; - # ... - } elsif ($c =~ /[\xF8-\xFB]/) { - return "non-UTF8 (invalid)" unless $c =~ /[\xF8-\xFB][\x80-\xBF]{4,4}$/; - } elsif ($c =~ /[\xFC-\xFD]/) { - return "non-UTF8 (invalid)" unless $c =~ /[\xFC-\xFD][\x80-\xBF]{5,5}$/; - } elsif ($c =~ /\xFE/) { - return "non-UTF8 (invalid)" unless $c =~ /\xFE][\x80-\xBF]{6,6}$/; - } else { - return "non-UTF8 (invalid)"; - } - return "other character"; -} - -1; - - diff --git a/spaces/xnetba/MMS/vits/data_utils.py b/spaces/xnetba/MMS/vits/data_utils.py deleted file mode 100644 index 4855699d23d5dee36d4a12e875c7465265caac0f..0000000000000000000000000000000000000000 --- a/spaces/xnetba/MMS/vits/data_utils.py +++ /dev/null @@ -1,392 +0,0 @@ -import time -import os -import random -import numpy as np -import torch -import torch.utils.data - -import commons -from mel_processing import spectrogram_torch -from utils import load_wav_to_torch, load_filepaths_and_text -from text import text_to_sequence, cleaned_text_to_sequence - - -class TextAudioLoader(torch.utils.data.Dataset): - """ - 1) loads audio, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. 
- """ - def __init__(self, audiopaths_and_text, hparams): - self.audiopaths_and_text = load_filepaths_and_text(audiopaths_and_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_and_text) - self._filter() - - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_and_text_new = [] - lengths = [] - for audiopath, text in self.audiopaths_and_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_and_text_new.append([audiopath, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_and_text = audiopaths_and_text_new - self.lengths = lengths - - def get_audio_text_pair(self, audiopath_and_text): - # separate filename and text - audiopath, text = audiopath_and_text[0], audiopath_and_text[1] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - return (text, spec, wav) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text) - else: - text_norm = text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def __getitem__(self, index): - return self.get_audio_text_pair(self.audiopaths_and_text[index]) - - def __len__(self): - return len(self.audiopaths_and_text) - - -class TextAudioCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text and aduio - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - - text_padded = 
torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths - - -"""Multi speaker version""" -class TextAudioSpeakerLoader(torch.utils.data.Dataset): - """ - 1) loads audio, speaker_id, text pairs - 2) normalizes text and converts them to sequences of integers - 3) computes spectrograms from audio files. - """ - def __init__(self, audiopaths_sid_text, hparams): - self.audiopaths_sid_text = load_filepaths_and_text(audiopaths_sid_text) - self.text_cleaners = hparams.text_cleaners - self.max_wav_value = hparams.max_wav_value - self.sampling_rate = hparams.sampling_rate - self.filter_length = hparams.filter_length - self.hop_length = hparams.hop_length - self.win_length = hparams.win_length - self.sampling_rate = hparams.sampling_rate - - self.cleaned_text = getattr(hparams, "cleaned_text", False) - - self.add_blank = hparams.add_blank - self.min_text_len = getattr(hparams, "min_text_len", 1) - self.max_text_len = getattr(hparams, "max_text_len", 190) - - random.seed(1234) - random.shuffle(self.audiopaths_sid_text) - self._filter() - - def _filter(self): - """ - Filter text & store spec lengths - """ - # Store spectrogram lengths for Bucketing - # wav_length ~= file_size / (wav_channels * Bytes per dim) = file_size / (1 * 2) - # spec_length = wav_length // hop_length - - audiopaths_sid_text_new = [] - lengths = [] - for audiopath, sid, text in self.audiopaths_sid_text: - if self.min_text_len <= len(text) and len(text) <= self.max_text_len: - audiopaths_sid_text_new.append([audiopath, sid, text]) - lengths.append(os.path.getsize(audiopath) // (2 * self.hop_length)) - self.audiopaths_sid_text = audiopaths_sid_text_new - self.lengths = lengths - - def get_audio_text_speaker_pair(self, audiopath_sid_text): - # separate filename, speaker_id and text - audiopath, sid, text = audiopath_sid_text[0], audiopath_sid_text[1], audiopath_sid_text[2] - text = self.get_text(text) - spec, wav = self.get_audio(audiopath) - sid = self.get_sid(sid) - return (text, spec, wav, sid) - - def get_audio(self, filename): - audio, sampling_rate = load_wav_to_torch(filename) - if sampling_rate != self.sampling_rate: - raise ValueError("{} {} SR doesn't match target {} SR".format( - sampling_rate, self.sampling_rate)) - audio_norm = audio / self.max_wav_value - audio_norm = audio_norm.unsqueeze(0) - spec_filename = filename.replace(".wav", ".spec.pt") - if os.path.exists(spec_filename): - spec = torch.load(spec_filename) - else: - spec = spectrogram_torch(audio_norm, self.filter_length, - self.sampling_rate, self.hop_length, self.win_length, - center=False) - spec = torch.squeeze(spec, 0) - torch.save(spec, spec_filename) - return spec, audio_norm - - def get_text(self, text): - if self.cleaned_text: - text_norm = cleaned_text_to_sequence(text) - else: - text_norm = 
text_to_sequence(text, self.text_cleaners) - if self.add_blank: - text_norm = commons.intersperse(text_norm, 0) - text_norm = torch.LongTensor(text_norm) - return text_norm - - def get_sid(self, sid): - sid = torch.LongTensor([int(sid)]) - return sid - - def __getitem__(self, index): - return self.get_audio_text_speaker_pair(self.audiopaths_sid_text[index]) - - def __len__(self): - return len(self.audiopaths_sid_text) - - -class TextAudioSpeakerCollate(): - """ Zero-pads model inputs and targets - """ - def __init__(self, return_ids=False): - self.return_ids = return_ids - - def __call__(self, batch): - """Collate's training batch from normalized text, audio and speaker identities - PARAMS - ------ - batch: [text_normalized, spec_normalized, wav_normalized, sid] - """ - # Right zero-pad all one-hot text sequences to max input length - _, ids_sorted_decreasing = torch.sort( - torch.LongTensor([x[1].size(1) for x in batch]), - dim=0, descending=True) - - max_text_len = max([len(x[0]) for x in batch]) - max_spec_len = max([x[1].size(1) for x in batch]) - max_wav_len = max([x[2].size(1) for x in batch]) - - text_lengths = torch.LongTensor(len(batch)) - spec_lengths = torch.LongTensor(len(batch)) - wav_lengths = torch.LongTensor(len(batch)) - sid = torch.LongTensor(len(batch)) - - text_padded = torch.LongTensor(len(batch), max_text_len) - spec_padded = torch.FloatTensor(len(batch), batch[0][1].size(0), max_spec_len) - wav_padded = torch.FloatTensor(len(batch), 1, max_wav_len) - text_padded.zero_() - spec_padded.zero_() - wav_padded.zero_() - for i in range(len(ids_sorted_decreasing)): - row = batch[ids_sorted_decreasing[i]] - - text = row[0] - text_padded[i, :text.size(0)] = text - text_lengths[i] = text.size(0) - - spec = row[1] - spec_padded[i, :, :spec.size(1)] = spec - spec_lengths[i] = spec.size(1) - - wav = row[2] - wav_padded[i, :, :wav.size(1)] = wav - wav_lengths[i] = wav.size(1) - - sid[i] = row[3] - - if self.return_ids: - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid, ids_sorted_decreasing - return text_padded, text_lengths, spec_padded, spec_lengths, wav_padded, wav_lengths, sid - - -class DistributedBucketSampler(torch.utils.data.distributed.DistributedSampler): - """ - Maintain similar input lengths in a batch. - Length groups are specified by boundaries. - Ex) boundaries = [b1, b2, b3] -> any batch is included either {x | b1 < length(x) <=b2} or {x | b2 < length(x) <= b3}. - - It removes samples which are not included in the boundaries. - Ex) boundaries = [b1, b2, b3] -> any x s.t. length(x) <= b1 or length(x) > b3 are discarded. 
- """ - def __init__(self, dataset, batch_size, boundaries, num_replicas=None, rank=None, shuffle=True): - super().__init__(dataset, num_replicas=num_replicas, rank=rank, shuffle=shuffle) - self.lengths = dataset.lengths - self.batch_size = batch_size - self.boundaries = boundaries - - self.buckets, self.num_samples_per_bucket = self._create_buckets() - self.total_size = sum(self.num_samples_per_bucket) - self.num_samples = self.total_size // self.num_replicas - - def _create_buckets(self): - buckets = [[] for _ in range(len(self.boundaries) - 1)] - for i in range(len(self.lengths)): - length = self.lengths[i] - idx_bucket = self._bisect(length) - if idx_bucket != -1: - buckets[idx_bucket].append(i) - - for i in range(len(buckets) - 1, 0, -1): - if len(buckets[i]) == 0: - buckets.pop(i) - self.boundaries.pop(i+1) - - num_samples_per_bucket = [] - for i in range(len(buckets)): - len_bucket = len(buckets[i]) - total_batch_size = self.num_replicas * self.batch_size - rem = (total_batch_size - (len_bucket % total_batch_size)) % total_batch_size - num_samples_per_bucket.append(len_bucket + rem) - return buckets, num_samples_per_bucket - - def __iter__(self): - # deterministically shuffle based on epoch - g = torch.Generator() - g.manual_seed(self.epoch) - - indices = [] - if self.shuffle: - for bucket in self.buckets: - indices.append(torch.randperm(len(bucket), generator=g).tolist()) - else: - for bucket in self.buckets: - indices.append(list(range(len(bucket)))) - - batches = [] - for i in range(len(self.buckets)): - bucket = self.buckets[i] - len_bucket = len(bucket) - ids_bucket = indices[i] - num_samples_bucket = self.num_samples_per_bucket[i] - - # add extra samples to make it evenly divisible - rem = num_samples_bucket - len_bucket - ids_bucket = ids_bucket + ids_bucket * (rem // len_bucket) + ids_bucket[:(rem % len_bucket)] - - # subsample - ids_bucket = ids_bucket[self.rank::self.num_replicas] - - # batching - for j in range(len(ids_bucket) // self.batch_size): - batch = [bucket[idx] for idx in ids_bucket[j*self.batch_size:(j+1)*self.batch_size]] - batches.append(batch) - - if self.shuffle: - batch_ids = torch.randperm(len(batches), generator=g).tolist() - batches = [batches[i] for i in batch_ids] - self.batches = batches - - assert len(self.batches) * self.batch_size == self.num_samples - return iter(self.batches) - - def _bisect(self, x, lo=0, hi=None): - if hi is None: - hi = len(self.boundaries) - 1 - - if hi > lo: - mid = (hi + lo) // 2 - if self.boundaries[mid] < x and x <= self.boundaries[mid+1]: - return mid - elif x <= self.boundaries[mid]: - return self._bisect(x, lo, mid) - else: - return self._bisect(x, mid + 1, hi) - else: - return -1 - - def __len__(self): - return self.num_samples // self.batch_size diff --git a/spaces/xuetao/bingo3/src/components/chat-history.tsx b/spaces/xuetao/bingo3/src/components/chat-history.tsx deleted file mode 100644 index feb81de66562edda8f40d3c0cc717202c92b6509..0000000000000000000000000000000000000000 --- a/spaces/xuetao/bingo3/src/components/chat-history.tsx +++ /dev/null @@ -1,48 +0,0 @@ -import { IconEdit, IconTrash, IconMore, IconDownload } from "./ui/icons" - -export function ChatHistory() { - return ( -
-        [JSX markup stripped during extraction: the deleted component rendered a history sidebar headed "历史记录" (History), a chat entry titled "无标题的聊天" (Untitled chat) with the timestamp "上午1:42" (1:42 AM), and IconEdit / IconDownload / IconTrash / IconMore action buttons from the icons import above]
              - ) -} diff --git a/spaces/xuxw98/TAPA/howto/download_weights.md b/spaces/xuxw98/TAPA/howto/download_weights.md deleted file mode 100644 index 5f1c918113ad825e3ef2ff2f1c35aa782937fdc1..0000000000000000000000000000000000000000 --- a/spaces/xuxw98/TAPA/howto/download_weights.md +++ /dev/null @@ -1,130 +0,0 @@ -## Downloading pretrained weights - -Except for when you are training from scratch, you will need the pretrained weights from Meta. - -### Original Meta weights - -Download the model weights following the instructions on the official [LLaMA repository](https://github.com/facebookresearch/llama). - -Once downloaded, you should have a folder like this: - -```text -checkpoints/llama -├── 7B -│ ├── ... -│ └── consolidated.00.pth -├── 13B -│ ... -└── tokenizer.model -``` - -Convert the weights to the Lit-LLaMA format: - -```bash -python scripts/convert_checkpoint.py --model_size 7B -``` - -> **Note** -> All scripts support argument [customization](customize_paths.md) - -### OpenLLaMA - -OpenLM Research has released **Apache 2.0 licensed** weights obtained by training LLaMA on the 1.2 trillion token open-source [RedPajama](https://github.com/togethercomputer/RedPajama-Data) dataset. - -Weights were released in preview on intermediate number of tokens (1T at the time of writing). In order to get them do: - -```bash -# Make sure you have git-lfs installed (https://git-lfs.com): git lfs install -git clone https://huggingface.co/openlm-research/open_llama_7b checkpoints/open-llama/7B -``` - -Or if you don't have `git-lfs` installed: - -```bash -python scripts/download.py --repo_id openlm-research/open_llama_7b --local_dir checkpoints/open-llama/7B -``` - -Once downloaded, you should have a folder like this: - -```text -checkpoints/open-llama/ -└── 7B - ├── ... - ├── pytorch_model-00001-of-00002.bin - ├── pytorch_model-00002-of-00002.bin - ├── pytorch_model.bin.index.json - └── tokenizer.model -``` - -Convert the weights to the Lit-LLaMA format: - -```bash -python scripts/convert_hf_checkpoint.py --checkpoint_dir checkpoints/open-llama/7B --model_size 7B -``` - -> **Note** -> All scripts support argument [customization](customize_paths.md) - -Once converted, you should have a folder like this: - -```text -checkpoints/lit-llama/ -├── 7B -│ └── lit-llama.pth -└── tokenizer.model -``` - -You are all set. Now you can continue with inference or finetuning. - -Try running [`generate.py` to test the imported weights](inference.md). - - -### Alternative sources - -You might find LLaMA weights hosted online in the HuggingFace hub. Beware that this infringes the original weight's license. -You could try downloading them by running the following command with a specific repo id: - -```bash -# Make sure you have git-lfs installed (https://git-lfs.com): git lfs install -git clone REPO_ID checkpoints/hf-llama/7B -``` - -Or if you don't have `git-lfs` installed: - -```bash -python scripts/download.py --repo_id REPO_ID --local_dir checkpoints/hf-llama/7B -``` - -Once downloaded, you should have a folder like this: - -```text -checkpoints/hf-llama/ -└── 7B - ├── ... 
- ├── pytorch_model-00001-of-00002.bin - ├── pytorch_model-00002-of-00002.bin - ├── pytorch_model.bin.index.json - └── tokenizer.model -``` - -Convert the weights to the Lit-LLaMA format: - -```bash -python scripts/convert_hf_checkpoint.py --model_size 7B -``` - -> **Note** -> All scripts support argument [customization](customize_paths.md) - -Once converted, you should have a folder like this: - -```text -checkpoints/lit-llama/ -├── 7B -│ └── lit-llama.pth -└── tokenizer.model -``` - -You are all set. Now you can continue with inference or finetuning. - -Try running [`generate.py` to test the imported weights](inference.md). diff --git "a/spaces/xxccc/gpt-academic/crazy_functions/\345\233\276\347\211\207\347\224\237\346\210\220.py" "b/spaces/xxccc/gpt-academic/crazy_functions/\345\233\276\347\211\207\347\224\237\346\210\220.py" deleted file mode 100644 index 5bf8bc4ba95864dc53f98b7335e654f58c4fed54..0000000000000000000000000000000000000000 --- "a/spaces/xxccc/gpt-academic/crazy_functions/\345\233\276\347\211\207\347\224\237\346\210\220.py" +++ /dev/null @@ -1,67 +0,0 @@ -from toolbox import CatchException, update_ui, get_conf, select_api_key -from .crazy_utils import request_gpt_model_in_new_thread_with_ui_alive -import datetime - - -def gen_image(llm_kwargs, prompt, resolution="256x256"): - import requests, json, time, os - from request_llm.bridge_all import model_info - - proxies, = get_conf('proxies') - # Set up OpenAI API key and model - api_key = select_api_key(llm_kwargs['api_key'], llm_kwargs['llm_model']) - chat_endpoint = model_info[llm_kwargs['llm_model']]['endpoint'] - # 'https://api.openai.com/v1/chat/completions' - img_endpoint = chat_endpoint.replace('chat/completions','images/generations') - # # Generate the image - url = img_endpoint - headers = { - 'Authorization': f"Bearer {api_key}", - 'Content-Type': 'application/json' - } - data = { - 'prompt': prompt, - 'n': 1, - 'size': resolution, - 'response_format': 'url' - } - response = requests.post(url, headers=headers, json=data, proxies=proxies) - print(response.content) - image_url = json.loads(response.content.decode('utf8'))['data'][0]['url'] - - # 文件保存到本地 - r = requests.get(image_url, proxies=proxies) - file_path = 'gpt_log/image_gen/' - os.makedirs(file_path, exist_ok=True) - file_name = 'Image' + time.strftime("%Y-%m-%d-%H-%M-%S", time.localtime()) + '.png' - with open(file_path+file_name, 'wb+') as f: f.write(r.content) - - - return image_url, file_path+file_name - - - -@CatchException -def 图片生成(prompt, llm_kwargs, plugin_kwargs, chatbot, history, system_prompt, web_port): - """ - txt 输入栏用户输入的文本,例如需要翻译的一段话,再例如一个包含了待处理文件的路径 - llm_kwargs gpt模型参数,如温度和top_p等,一般原样传递下去就行 - plugin_kwargs 插件模型的参数,暂时没有用武之地 - chatbot 聊天显示框的句柄,用于显示给用户 - history 聊天历史,前情提要 - system_prompt 给gpt的静默提醒 - web_port 当前软件运行的端口号 - """ - history = [] # 清空历史,以免输入溢出 - chatbot.append(("这是什么功能?", "[Local Message] 生成图像, 请先把模型切换至gpt-xxxx或者api2d-xxxx。如果中文效果不理想, 尝试Prompt。正在处理中 .....")) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 由于请求gpt需要一段时间,我们先及时地做一次界面更新 - if ("advanced_arg" in plugin_kwargs) and (plugin_kwargs["advanced_arg"] == ""): plugin_kwargs.pop("advanced_arg") - resolution = plugin_kwargs.get("advanced_arg", '256x256') - image_url, image_path = gen_image(llm_kwargs, prompt, resolution) - chatbot.append([prompt, - f'图像中转网址:
              `{image_url}`
              '+ - f'中转网址预览:
              ' - f'本地文件地址:
              `{image_path}`
              '+ - f'本地文件预览:
              ' - ]) - yield from update_ui(chatbot=chatbot, history=history) # 刷新界面 # 界面更新 diff --git a/spaces/xyyyds/som/README.md b/spaces/xyyyds/som/README.md deleted file mode 100644 index fabd7e43bcbbd3717afbcde561ba4ab307b30c74..0000000000000000000000000000000000000000 --- a/spaces/xyyyds/som/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Real CUGAN -emoji: 🐢 -colorFrom: gray -colorTo: green -sdk: gradio -sdk_version: 3.6 -app_file: app.py -pinned: false -license: gpl-3.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/yaelvinker/CLIPasso/config.py b/spaces/yaelvinker/CLIPasso/config.py deleted file mode 100644 index 3cfe78467ae12b84e450d5a5932a7fb92e674acd..0000000000000000000000000000000000000000 --- a/spaces/yaelvinker/CLIPasso/config.py +++ /dev/null @@ -1,144 +0,0 @@ -import argparse -import os -import random - -import numpy as np -import pydiffvg -import torch -import wandb - - -def set_seed(seed): - random.seed(seed) - np.random.seed(seed) - os.environ['PYTHONHASHSEED'] = str(seed) - torch.manual_seed(seed) - torch.cuda.manual_seed(seed) - torch.cuda.manual_seed_all(seed) - - -def parse_arguments(): - parser = argparse.ArgumentParser() - # ================================= - # ============ general ============ - # ================================= - parser.add_argument("target", help="target image path") - parser.add_argument("--output_dir", type=str, - help="directory to save the output images and loss") - parser.add_argument("--path_svg", type=str, default="none", - help="if you want to load an svg file and train from it") - parser.add_argument("--use_gpu", type=int, default=0) - parser.add_argument("--seed", type=int, default=0) - parser.add_argument("--mask_object", type=int, default=0) - parser.add_argument("--fix_scale", type=int, default=0) - parser.add_argument("--display_logs", type=int, default=0) - parser.add_argument("--display", type=int, default=0) - - # ================================= - # ============ wandb ============ - # ================================= - parser.add_argument("--use_wandb", type=int, default=0) - parser.add_argument("--wandb_user", type=str, default="yael-vinker") - parser.add_argument("--wandb_name", type=str, default="test") - parser.add_argument("--wandb_project_name", type=str, default="none") - - # ================================= - # =========== training ============ - # ================================= - parser.add_argument("--num_iter", type=int, default=500, - help="number of optimization iterations") - parser.add_argument("--num_stages", type=int, default=1, - help="training stages, you can train x strokes, then freeze them and train another x strokes etc.") - parser.add_argument("--lr_scheduler", type=int, default=0) - parser.add_argument("--lr", type=float, default=1.0) - parser.add_argument("--color_lr", type=float, default=0.01) - parser.add_argument("--color_vars_threshold", type=float, default=0.0) - parser.add_argument("--batch_size", type=int, default=1, - help="for optimization it's only one image") - parser.add_argument("--save_interval", type=int, default=10) - parser.add_argument("--eval_interval", type=int, default=10) - parser.add_argument("--image_scale", type=int, default=224) - - # ================================= - # ======== strokes params ========= - # ================================= - parser.add_argument("--num_paths", type=int, - default=16, help="number of strokes") - parser.add_argument("--width", type=float, - 
default=1.5, help="stroke width") - parser.add_argument("--control_points_per_seg", type=int, default=4) - parser.add_argument("--num_segments", type=int, default=1, - help="number of segments for each stroke, each stroke is a bezier curve with 4 control points") - parser.add_argument("--attention_init", type=int, default=1, - help="if True, use the attention heads of Dino model to set the location of the initial strokes") - parser.add_argument("--saliency_model", type=str, default="clip") - parser.add_argument("--saliency_clip_model", type=str, default="ViT-B/32") - parser.add_argument("--xdog_intersec", type=int, default=1) - parser.add_argument("--mask_object_attention", type=int, default=0) - parser.add_argument("--softmax_temp", type=float, default=0.3) - - # ================================= - # ============= loss ============== - # ================================= - parser.add_argument("--percep_loss", type=str, default="none", - help="the type of perceptual loss to be used (L2/LPIPS/none)") - parser.add_argument("--perceptual_weight", type=float, default=0, - help="weight the perceptual loss") - parser.add_argument("--train_with_clip", type=int, default=0) - parser.add_argument("--clip_weight", type=float, default=0) - parser.add_argument("--start_clip", type=int, default=0) - parser.add_argument("--num_aug_clip", type=int, default=4) - parser.add_argument("--include_target_in_aug", type=int, default=0) - parser.add_argument("--augment_both", type=int, default=1, - help="if you want to apply the affine augmentation to both the sketch and image") - parser.add_argument("--augemntations", type=str, default="affine", - help="can be any combination of: 'affine_noise_eraserchunks_eraser_press'") - parser.add_argument("--noise_thresh", type=float, default=0.5) - parser.add_argument("--aug_scale_min", type=float, default=0.7) - parser.add_argument("--force_sparse", type=float, default=0, - help="if True, use L1 regularization on stroke's opacity to encourage small number of strokes") - parser.add_argument("--clip_conv_loss", type=float, default=1) - parser.add_argument("--clip_conv_loss_type", type=str, default="L2") - parser.add_argument("--clip_conv_layer_weights", - type=str, default="0,0,1.0,1.0,0") - parser.add_argument("--clip_model_name", type=str, default="RN101") - parser.add_argument("--clip_fc_loss_weight", type=float, default=0.1) - parser.add_argument("--clip_text_guide", type=float, default=0) - parser.add_argument("--text_target", type=str, default="none") - - args = parser.parse_args() - set_seed(args.seed) - - args.clip_conv_layer_weights = [ - float(item) for item in args.clip_conv_layer_weights.split(',')] - - args.output_dir = os.path.join(args.output_dir, args.wandb_name) - if not os.path.exists(args.output_dir): - os.mkdir(args.output_dir) - - jpg_logs_dir = f"{args.output_dir}/jpg_logs" - svg_logs_dir = f"{args.output_dir}/svg_logs" - if not os.path.exists(jpg_logs_dir): - os.mkdir(jpg_logs_dir) - if not os.path.exists(svg_logs_dir): - os.mkdir(svg_logs_dir) - - if args.use_wandb: - wandb.init(project=args.wandb_project_name, entity=args.wandb_user, - config=args, name=args.wandb_name, id=wandb.util.generate_id()) - - if args.use_gpu: - args.device = torch.device("cuda" if ( - torch.cuda.is_available() and torch.cuda.device_count() > 0) else "cpu") - else: - args.device = torch.device("cpu") - pydiffvg.set_use_gpu(torch.cuda.is_available() and args.use_gpu) - pydiffvg.set_device(args.device) - return args - - -if __name__ == "__main__": - # for cog predict - args = 
parse_arguments() - final_config = vars(args) - np.save(f"{args.output_dir}/config_init.npy", final_config) \ No newline at end of file diff --git a/spaces/yancey001/Linaqruf-anything-v3.0/app.py b/spaces/yancey001/Linaqruf-anything-v3.0/app.py deleted file mode 100644 index 16e8131a0bbf7b06956e69e2b7758fa01e4eb51f..0000000000000000000000000000000000000000 --- a/spaces/yancey001/Linaqruf-anything-v3.0/app.py +++ /dev/null @@ -1,3 +0,0 @@ -import gradio as gr - -gr.Interface.load("models/Linaqruf/anything-v3.0").launch() \ No newline at end of file diff --git a/spaces/yangheng/Multilingual-Aspect-Based-Sentiment-Analysis/README.md b/spaces/yangheng/Multilingual-Aspect-Based-Sentiment-Analysis/README.md deleted file mode 100644 index c92b051c02ddbab1f06255f8f5f87ebed33795e9..0000000000000000000000000000000000000000 --- a/spaces/yangheng/Multilingual-Aspect-Based-Sentiment-Analysis/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: PyABSA ATEPC -emoji: 📈 -colorFrom: purple -colorTo: yellow -app_file: app.py -pinned: false -sdk: gradio -sdk_version: 3.12.0 -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/yderre-aubay/midi-player-demo/src/main/actions/recording.ts b/spaces/yderre-aubay/midi-player-demo/src/main/actions/recording.ts deleted file mode 100644 index d067a71a6a1015a6fad95c1c15f25f0f4c337464..0000000000000000000000000000000000000000 --- a/spaces/yderre-aubay/midi-player-demo/src/main/actions/recording.ts +++ /dev/null @@ -1,13 +0,0 @@ -import RootStore from "../stores/RootStore" - -export const toggleRecording = - ({ midiRecorder, player }: RootStore) => - () => { - if (midiRecorder.isRecording) { - midiRecorder.isRecording = false - player.stop() - } else { - midiRecorder.isRecording = true - player.play() - } - } diff --git a/spaces/yfyangd/PictureBookUnderstanding/BLIP/transform/randaugment.py b/spaces/yfyangd/PictureBookUnderstanding/BLIP/transform/randaugment.py deleted file mode 100644 index 094d9f4cacc93146d2bab7311d9dc04feb07032c..0000000000000000000000000000000000000000 --- a/spaces/yfyangd/PictureBookUnderstanding/BLIP/transform/randaugment.py +++ /dev/null @@ -1,340 +0,0 @@ -import cv2 -import numpy as np - - -## aug functions -def identity_func(img): - return img - - -def autocontrast_func(img, cutoff=0): - ''' - same output as PIL.ImageOps.autocontrast - ''' - n_bins = 256 - - def tune_channel(ch): - n = ch.size - cut = cutoff * n // 100 - if cut == 0: - high, low = ch.max(), ch.min() - else: - hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins]) - low = np.argwhere(np.cumsum(hist) > cut) - low = 0 if low.shape[0] == 0 else low[0] - high = np.argwhere(np.cumsum(hist[::-1]) > cut) - high = n_bins - 1 if high.shape[0] == 0 else n_bins - 1 - high[0] - if high <= low: - table = np.arange(n_bins) - else: - scale = (n_bins - 1) / (high - low) - offset = -low * scale - table = np.arange(n_bins) * scale + offset - table[table < 0] = 0 - table[table > n_bins - 1] = n_bins - 1 - table = table.clip(0, 255).astype(np.uint8) - return table[ch] - - channels = [tune_channel(ch) for ch in cv2.split(img)] - out = cv2.merge(channels) - return out - - -def equalize_func(img): - ''' - same output as PIL.ImageOps.equalize - PIL's implementation is different from cv2.equalize - ''' - n_bins = 256 - - def tune_channel(ch): - hist = cv2.calcHist([ch], [0], None, [n_bins], [0, n_bins]) - non_zero_hist = hist[hist != 0].reshape(-1) - step = np.sum(non_zero_hist[:-1]) // (n_bins - 1) - if step == 0: return 
ch - n = np.empty_like(hist) - n[0] = step // 2 - n[1:] = hist[:-1] - table = (np.cumsum(n) // step).clip(0, 255).astype(np.uint8) - return table[ch] - - channels = [tune_channel(ch) for ch in cv2.split(img)] - out = cv2.merge(channels) - return out - - -def rotate_func(img, degree, fill=(0, 0, 0)): - ''' - like PIL, rotate by degree, not radians - ''' - H, W = img.shape[0], img.shape[1] - center = W / 2, H / 2 - M = cv2.getRotationMatrix2D(center, degree, 1) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill) - return out - - -def solarize_func(img, thresh=128): - ''' - same output as PIL.ImageOps.posterize - ''' - table = np.array([el if el < thresh else 255 - el for el in range(256)]) - table = table.clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def color_func(img, factor): - ''' - same output as PIL.ImageEnhance.Color - ''' - ## implementation according to PIL definition, quite slow - # degenerate = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)[:, :, np.newaxis] - # out = blend(degenerate, img, factor) - # M = ( - # np.eye(3) * factor - # + np.float32([0.114, 0.587, 0.299]).reshape(3, 1) * (1. - factor) - # )[np.newaxis, np.newaxis, :] - M = ( - np.float32([ - [0.886, -0.114, -0.114], - [-0.587, 0.413, -0.587], - [-0.299, -0.299, 0.701]]) * factor - + np.float32([[0.114], [0.587], [0.299]]) - ) - out = np.matmul(img, M).clip(0, 255).astype(np.uint8) - return out - - -def contrast_func(img, factor): - """ - same output as PIL.ImageEnhance.Contrast - """ - mean = np.sum(np.mean(img, axis=(0, 1)) * np.array([0.114, 0.587, 0.299])) - table = np.array([( - el - mean) * factor + mean - for el in range(256) - ]).clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def brightness_func(img, factor): - ''' - same output as PIL.ImageEnhance.Contrast - ''' - table = (np.arange(256, dtype=np.float32) * factor).clip(0, 255).astype(np.uint8) - out = table[img] - return out - - -def sharpness_func(img, factor): - ''' - The differences the this result and PIL are all on the 4 boundaries, the center - areas are same - ''' - kernel = np.ones((3, 3), dtype=np.float32) - kernel[1][1] = 5 - kernel /= 13 - degenerate = cv2.filter2D(img, -1, kernel) - if factor == 0.0: - out = degenerate - elif factor == 1.0: - out = img - else: - out = img.astype(np.float32) - degenerate = degenerate.astype(np.float32)[1:-1, 1:-1, :] - out[1:-1, 1:-1, :] = degenerate + factor * (out[1:-1, 1:-1, :] - degenerate) - out = out.astype(np.uint8) - return out - - -def shear_x_func(img, factor, fill=(0, 0, 0)): - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, factor, 0], [0, 1, 0]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def translate_x_func(img, offset, fill=(0, 0, 0)): - ''' - same output as PIL.Image.transform - ''' - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, -offset], [0, 1, 0]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def translate_y_func(img, offset, fill=(0, 0, 0)): - ''' - same output as PIL.Image.transform - ''' - H, W = img.shape[0], img.shape[1] - M = np.float32([[1, 0, 0], [0, 1, -offset]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def posterize_func(img, bits): - ''' - same output as PIL.ImageOps.posterize - ''' - out = np.bitwise_and(img, np.uint8(255 << (8 - bits))) - return out - - -def shear_y_func(img, factor, fill=(0, 0, 0)): - H, W = 
img.shape[0], img.shape[1] - M = np.float32([[1, 0, 0], [factor, 1, 0]]) - out = cv2.warpAffine(img, M, (W, H), borderValue=fill, flags=cv2.INTER_LINEAR).astype(np.uint8) - return out - - -def cutout_func(img, pad_size, replace=(0, 0, 0)): - replace = np.array(replace, dtype=np.uint8) - H, W = img.shape[0], img.shape[1] - rh, rw = np.random.random(2) - pad_size = pad_size // 2 - ch, cw = int(rh * H), int(rw * W) - x1, x2 = max(ch - pad_size, 0), min(ch + pad_size, H) - y1, y2 = max(cw - pad_size, 0), min(cw + pad_size, W) - out = img.copy() - out[x1:x2, y1:y2, :] = replace - return out - - -### level to args -def enhance_level_to_args(MAX_LEVEL): - def level_to_args(level): - return ((level / MAX_LEVEL) * 1.8 + 0.1,) - return level_to_args - - -def shear_level_to_args(MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * 0.3 - if np.random.random() > 0.5: level = -level - return (level, replace_value) - - return level_to_args - - -def translate_level_to_args(translate_const, MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * float(translate_const) - if np.random.random() > 0.5: level = -level - return (level, replace_value) - - return level_to_args - - -def cutout_level_to_args(cutout_const, MAX_LEVEL, replace_value): - def level_to_args(level): - level = int((level / MAX_LEVEL) * cutout_const) - return (level, replace_value) - - return level_to_args - - -def solarize_level_to_args(MAX_LEVEL): - def level_to_args(level): - level = int((level / MAX_LEVEL) * 256) - return (level, ) - return level_to_args - - -def none_level_to_args(level): - return () - - -def posterize_level_to_args(MAX_LEVEL): - def level_to_args(level): - level = int((level / MAX_LEVEL) * 4) - return (level, ) - return level_to_args - - -def rotate_level_to_args(MAX_LEVEL, replace_value): - def level_to_args(level): - level = (level / MAX_LEVEL) * 30 - if np.random.random() < 0.5: - level = -level - return (level, replace_value) - - return level_to_args - - -func_dict = { - 'Identity': identity_func, - 'AutoContrast': autocontrast_func, - 'Equalize': equalize_func, - 'Rotate': rotate_func, - 'Solarize': solarize_func, - 'Color': color_func, - 'Contrast': contrast_func, - 'Brightness': brightness_func, - 'Sharpness': sharpness_func, - 'ShearX': shear_x_func, - 'TranslateX': translate_x_func, - 'TranslateY': translate_y_func, - 'Posterize': posterize_func, - 'ShearY': shear_y_func, -} - -translate_const = 10 -MAX_LEVEL = 10 -replace_value = (128, 128, 128) -arg_dict = { - 'Identity': none_level_to_args, - 'AutoContrast': none_level_to_args, - 'Equalize': none_level_to_args, - 'Rotate': rotate_level_to_args(MAX_LEVEL, replace_value), - 'Solarize': solarize_level_to_args(MAX_LEVEL), - 'Color': enhance_level_to_args(MAX_LEVEL), - 'Contrast': enhance_level_to_args(MAX_LEVEL), - 'Brightness': enhance_level_to_args(MAX_LEVEL), - 'Sharpness': enhance_level_to_args(MAX_LEVEL), - 'ShearX': shear_level_to_args(MAX_LEVEL, replace_value), - 'TranslateX': translate_level_to_args( - translate_const, MAX_LEVEL, replace_value - ), - 'TranslateY': translate_level_to_args( - translate_const, MAX_LEVEL, replace_value - ), - 'Posterize': posterize_level_to_args(MAX_LEVEL), - 'ShearY': shear_level_to_args(MAX_LEVEL, replace_value), -} - - -class RandomAugment(object): - - def __init__(self, N=2, M=10, isPIL=False, augs=[]): - self.N = N - self.M = M - self.isPIL = isPIL - if augs: - self.augs = augs - else: - self.augs = list(arg_dict.keys()) - - def get_random_ops(self): - 
sampled_ops = np.random.choice(self.augs, self.N) - return [(op, 0.5, self.M) for op in sampled_ops] - - def __call__(self, img): - if self.isPIL: - img = np.array(img) - ops = self.get_random_ops() - for name, prob, level in ops: - if np.random.random() > prob: - continue - args = arg_dict[name](level) - img = func_dict[name](img, *args) - return img - - -if __name__ == '__main__': - a = RandomAugment() - img = np.random.randn(32, 32, 3) - a(img) \ No newline at end of file diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/byt5/tokenization_byt5.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/byt5/tokenization_byt5.py deleted file mode 100644 index 68c70db0d18d65e25bf60a672615f833bd5e504b..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/byt5/tokenization_byt5.py +++ /dev/null @@ -1,234 +0,0 @@ -# coding=utf-8 -# Copyright 2021 T5 Authors and HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" Tokenization class for model ByT5.""" - - -import warnings -from typing import List, Optional, Tuple - -from ...tokenization_utils import AddedToken, PreTrainedTokenizer -from ...utils import logging - - -logger = logging.get_logger(__name__) - - -class ByT5Tokenizer(PreTrainedTokenizer): - """ - Construct a ByT5 tokenizer. ByT5 simply uses raw bytes utf-8 encoding. - - This tokenizer inherits from [`PreTrainedTokenizer`] which contains most of the main methods. Users should refer to - this superclass for more information regarding those methods. - - Args: - eos_token (`str`, *optional*, defaults to `"
              "`): - The end of sequence token. - - - - When building a sequence using special tokens, this is not the token that is used for the end of sequence. - The token used is the `sep_token`. - - - - unk_token (`str`, *optional*, defaults to `""`): - The unknown token. A token that is not in the vocabulary cannot be converted to an ID and is set to be this - token instead. - pad_token (`str`, *optional*, defaults to `""`): - The token used for padding, for example when batching sequences of different lengths. - extra_ids (`int`, *optional*, defaults to 125): - Add a number of extra ids added to the end of the vocabulary for use as sentinels. These tokens are - accessible as "" where "{%d}" is a number between 0 and extra_ids-1. Extra tokens are - indexed from the end of the vocabulary up to beginning ("" is the last token in the vocabulary - like in ByT5 preprocessing see - [here](https://github.com/google-research/text-to-text-transfer-transformer/blob/9fd7b14a769417be33bc6c850f9598764913c833/t5/data/preprocessors.py#L2117)). - additional_special_tokens (`List[str]`, *optional*): - Additional special tokens used by the tokenizer. - """ - - model_input_names = ["input_ids", "attention_mask"] - - def __init__( - self, - eos_token="", - unk_token="", - pad_token="", - extra_ids=125, - additional_special_tokens=None, - **kwargs, - ) -> None: - # Add extra_ids to the special token list - if extra_ids > 0 and additional_special_tokens is None: - additional_special_tokens = [f"" for i in range(extra_ids)] - elif extra_ids > 0 and additional_special_tokens is not None and len(additional_special_tokens) > 0: - # Check that we have the right number of extra_id special tokens - extra_tokens = len(set(filter(lambda x: bool("extra_id" in str(x)), additional_special_tokens))) - if extra_tokens != extra_ids: - raise ValueError( - f"Both extra_ids ({extra_ids}) and additional_special_tokens ({additional_special_tokens}) are" - " provided to ByT5Tokenizer. In this case the additional_special_tokens must include the" - " extra_ids tokens" - ) - - pad_token = AddedToken(pad_token, lstrip=True, rstrip=True) if isinstance(pad_token, str) else pad_token - # we force left and right stripping for backward compatibility. The byt5tests depend on this. - eos_token = AddedToken(eos_token, lstrip=True, rstrip=True) if isinstance(eos_token, str) else eos_token - unk_token = AddedToken(unk_token, lstrip=True, rstrip=True) if isinstance(unk_token, str) else unk_token - # unk token needs to be in the vocab with correct index - self._added_tokens_decoder = {0: pad_token, 1: eos_token, 2: unk_token} - self.offset = len(self._added_tokens_decoder) - self._utf_vocab_size = 2**8 # utf is 8 bits - super().__init__( - eos_token=eos_token, - unk_token=unk_token, - pad_token=pad_token, - extra_ids=0, - additional_special_tokens=additional_special_tokens, # TODO extra ids are not used :sweatywmile: - **kwargs, - ) - - @property - def vocab_size(self): - return self._utf_vocab_size - - def get_vocab(self): - vocab = {self.convert_ids_to_tokens(i): i for i in range(self.vocab_size + self.offset)} - vocab.update(self.added_tokens_encoder) - return vocab - - def get_special_tokens_mask( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None, already_has_special_tokens: bool = False - ) -> List[int]: - """ - Retrieve sequence ids from a token list that has no special tokens added. This method is called when adding - special tokens using the tokenizer `prepare_for_model` method. 
- - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - already_has_special_tokens (`bool`, *optional*, defaults to `False`): - Whether or not the token list is already formatted with special tokens for the model. - - Returns: - `List[int]`: A list of integers in the range [0, 1]: 1 for a special token, 0 for a sequence token. - """ - if already_has_special_tokens: - return super().get_special_tokens_mask( - token_ids_0=token_ids_0, token_ids_1=token_ids_1, already_has_special_tokens=True - ) - - # normal case: some special tokens - if token_ids_1 is None: - return ([0] * len(token_ids_0)) + [1] - return ([0] * len(token_ids_0)) + [1] + ([0] * len(token_ids_1)) + [1] - - def _add_eos_if_not_present(self, token_ids: List[int]) -> List[int]: - """Do not add eos again if user already added it.""" - if len(token_ids) > 0 and token_ids[-1] == self.eos_token_id: - warnings.warn( - f"This sequence already has {self.eos_token}. In future versions this behavior may lead to duplicated" - " eos tokens being added." - ) - return token_ids - else: - return token_ids + [self.eos_token_id] - - def create_token_type_ids_from_sequences( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Create a mask from the two sequences passed to be used in a sequence-pair classification task. ByT5 does not - make use of token type ids, therefore a list of zeros is returned. - - Args: - token_ids_0 (`List[int]`): - List of IDs. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - - Returns: - `List[int]`: List of zeros. - """ - eos = [self.eos_token_id] - - if token_ids_1 is None: - return len(token_ids_0 + eos) * [0] - return len(token_ids_0 + eos + token_ids_1 + eos) * [0] - - def build_inputs_with_special_tokens( - self, token_ids_0: List[int], token_ids_1: Optional[List[int]] = None - ) -> List[int]: - """ - Build model inputs from a sequence or a pair of sequence for sequence classification tasks by concatenating and - adding special tokens. A sequence has the following format: - - - single sequence: `X </s>` - - pair of sequences: `A </s> B </s>` - - Args: - token_ids_0 (`List[int]`): - List of IDs to which the special tokens will be added. - token_ids_1 (`List[int]`, *optional*): - Optional second list of IDs for sequence pairs. - - Returns: - `List[int]`: List of [input IDs](../glossary#input-ids) with the appropriate special tokens.
- """ - token_ids_0 = self._add_eos_if_not_present(token_ids_0) - if token_ids_1 is None: - return token_ids_0 - else: - token_ids_1 = self._add_eos_if_not_present(token_ids_1) - return token_ids_0 + token_ids_1 - - def _tokenize(self, text: str) -> List[str]: - """Take as input a string and return a list of strings (tokens) for words/sub-words""" - tokens = [chr(i) for i in text.encode("utf-8")] - return tokens - - def _convert_token_to_id(self, token): - """Converts a token (str) in an id using the vocab.""" - - if len(token) != 1: - token_id = None - else: - token_id = ord(token) + self.offset - - return token_id - - def _convert_id_to_token(self, index): - """Converts an index (integer) in a token (str) using the vocab.""" - token = chr(index - self.offset) - return token - - def convert_tokens_to_string(self, tokens): - """Converts a sequence of tokens (string) in a single string.""" - bstring = b"" - for token in tokens: - if token in self.added_tokens_decoder: - tok_string = self.added_tokens_decoder[token].encode("utf-8") - elif token in self.added_tokens_encoder: - tok_string = token.encode("utf-8") - else: - tok_string = bytes([ord(token)]) - bstring += tok_string - string = bstring.decode("utf-8", errors="ignore") - return string - - # ByT5Tokenizer has no vocab file - def save_vocabulary(self, save_directory: str, filename_prefix: Optional[str] = None) -> Tuple[str]: - return () diff --git a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlmv3/processing_layoutlmv3.py b/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlmv3/processing_layoutlmv3.py deleted file mode 100644 index 31d0c5e60a548e3908e4b42c3f9687c4a5708169..0000000000000000000000000000000000000000 --- a/spaces/yizhangliu/Grounded-Segment-Anything/transformers_4_35_0/models/layoutlmv3/processing_layoutlmv3.py +++ /dev/null @@ -1,198 +0,0 @@ -# coding=utf-8 -# Copyright 2022 The HuggingFace Inc. team. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -""" -Processor class for LayoutLMv3. -""" - -import warnings -from typing import List, Optional, Union - -from ...processing_utils import ProcessorMixin -from ...tokenization_utils_base import BatchEncoding, PaddingStrategy, PreTokenizedInput, TextInput, TruncationStrategy -from ...utils import TensorType - - -class LayoutLMv3Processor(ProcessorMixin): - r""" - Constructs a LayoutLMv3 processor which combines a LayoutLMv3 image processor and a LayoutLMv3 tokenizer into a - single processor. - - [`LayoutLMv3Processor`] offers all the functionalities you need to prepare data for the model. - - It first uses [`LayoutLMv3ImageProcessor`] to resize and normalize document images, and optionally applies OCR to - get words and normalized bounding boxes. These are then provided to [`LayoutLMv3Tokenizer`] or - [`LayoutLMv3TokenizerFast`], which turns the words and bounding boxes into token-level `input_ids`, - `attention_mask`, `token_type_ids`, `bbox`. 
Optionally, one can provide integer `word_labels`, which are turned - into token-level `labels` for token classification tasks (such as FUNSD, CORD). - - Args: - image_processor (`LayoutLMv3ImageProcessor`, *optional*): - An instance of [`LayoutLMv3ImageProcessor`]. The image processor is a required input. - tokenizer (`LayoutLMv3Tokenizer` or `LayoutLMv3TokenizerFast`, *optional*): - An instance of [`LayoutLMv3Tokenizer`] or [`LayoutLMv3TokenizerFast`]. The tokenizer is a required input. - """ - attributes = ["image_processor", "tokenizer"] - image_processor_class = "LayoutLMv3ImageProcessor" - tokenizer_class = ("LayoutLMv3Tokenizer", "LayoutLMv3TokenizerFast") - - def __init__(self, image_processor=None, tokenizer=None, **kwargs): - feature_extractor = None - if "feature_extractor" in kwargs: - warnings.warn( - "The `feature_extractor` argument is deprecated and will be removed in v5, use `image_processor`" - " instead.", - FutureWarning, - ) - feature_extractor = kwargs.pop("feature_extractor") - - image_processor = image_processor if image_processor is not None else feature_extractor - if image_processor is None: - raise ValueError("You need to specify an `image_processor`.") - if tokenizer is None: - raise ValueError("You need to specify a `tokenizer`.") - - super().__init__(image_processor, tokenizer) - - def __call__( - self, - images, - text: Union[TextInput, PreTokenizedInput, List[TextInput], List[PreTokenizedInput]] = None, - text_pair: Optional[Union[PreTokenizedInput, List[PreTokenizedInput]]] = None, - boxes: Union[List[List[int]], List[List[List[int]]]] = None, - word_labels: Optional[Union[List[int], List[List[int]]]] = None, - add_special_tokens: bool = True, - padding: Union[bool, str, PaddingStrategy] = False, - truncation: Union[bool, str, TruncationStrategy] = None, - max_length: Optional[int] = None, - stride: int = 0, - pad_to_multiple_of: Optional[int] = None, - return_token_type_ids: Optional[bool] = None, - return_attention_mask: Optional[bool] = None, - return_overflowing_tokens: bool = False, - return_special_tokens_mask: bool = False, - return_offsets_mapping: bool = False, - return_length: bool = False, - verbose: bool = True, - return_tensors: Optional[Union[str, TensorType]] = None, - **kwargs, - ) -> BatchEncoding: - """ - This method first forwards the `images` argument to [`~LayoutLMv3ImageProcessor.__call__`]. In case - [`LayoutLMv3ImageProcessor`] was initialized with `apply_ocr` set to `True`, it passes the obtained words and - bounding boxes along with the additional arguments to [`~LayoutLMv3Tokenizer.__call__`] and returns the output, - together with resized and normalized `pixel_values`. In case [`LayoutLMv3ImageProcessor`] was initialized with - `apply_ocr` set to `False`, it passes the words (`text`/``text_pair`) and `boxes` specified by the user along - with the additional arguments to [`~LayoutLMv3Tokenizer.__call__`] and returns the output, together with - resized and normalized `pixel_values`. - - Please refer to the docstring of the above two methods for more information. - """ - # verify input - if self.image_processor.apply_ocr and (boxes is not None): - raise ValueError( - "You cannot provide bounding boxes if you initialized the image processor with apply_ocr set to True." - ) - - if self.image_processor.apply_ocr and (word_labels is not None): - raise ValueError( - "You cannot provide word labels if you initialized the image processor with apply_ocr set to True." 
- ) - - # first, apply the image processor - features = self.image_processor(images=images, return_tensors=return_tensors) - - # second, apply the tokenizer - if text is not None and self.image_processor.apply_ocr and text_pair is None: - if isinstance(text, str): - text = [text] # add batch dimension (as the image processor always adds a batch dimension) - text_pair = features["words"] - - encoded_inputs = self.tokenizer( - text=text if text is not None else features["words"], - text_pair=text_pair if text_pair is not None else None, - boxes=boxes if boxes is not None else features["boxes"], - word_labels=word_labels, - add_special_tokens=add_special_tokens, - padding=padding, - truncation=truncation, - max_length=max_length, - stride=stride, - pad_to_multiple_of=pad_to_multiple_of, - return_token_type_ids=return_token_type_ids, - return_attention_mask=return_attention_mask, - return_overflowing_tokens=return_overflowing_tokens, - return_special_tokens_mask=return_special_tokens_mask, - return_offsets_mapping=return_offsets_mapping, - return_length=return_length, - verbose=verbose, - return_tensors=return_tensors, - **kwargs, - ) - - # add pixel values - images = features.pop("pixel_values") - if return_overflowing_tokens is True: - images = self.get_overflowing_images(images, encoded_inputs["overflow_to_sample_mapping"]) - encoded_inputs["pixel_values"] = images - - return encoded_inputs - - def get_overflowing_images(self, images, overflow_to_sample_mapping): - # in case there's an overflow, ensure each `input_ids` sample is mapped to its corresponding image - images_with_overflow = [] - for sample_idx in overflow_to_sample_mapping: - images_with_overflow.append(images[sample_idx]) - - if len(images_with_overflow) != len(overflow_to_sample_mapping): - raise ValueError( - "Expected length of images to be the same as the length of `overflow_to_sample_mapping`, but got" - f" {len(images_with_overflow)} and {len(overflow_to_sample_mapping)}" - ) - - return images_with_overflow - - def batch_decode(self, *args, **kwargs): - """ - This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.batch_decode`]. Please - refer to the docstring of this method for more information. - """ - return self.tokenizer.batch_decode(*args, **kwargs) - - def decode(self, *args, **kwargs): - """ - This method forwards all its arguments to PreTrainedTokenizer's [`~PreTrainedTokenizer.decode`]. Please refer - to the docstring of this method for more information. - """ - return self.tokenizer.decode(*args, **kwargs) - - @property - def model_input_names(self): - return ["input_ids", "bbox", "attention_mask", "pixel_values"] - - @property - def feature_extractor_class(self): - warnings.warn( - "`feature_extractor_class` is deprecated and will be removed in v5. Use `image_processor_class` instead.", - FutureWarning, - ) - return self.image_processor_class - - @property - def feature_extractor(self): - warnings.warn( - "`feature_extractor` is deprecated and will be removed in v5. 
Use `image_processor` instead.", - FutureWarning, - ) - return self.image_processor diff --git a/spaces/yl12053/so-vits-4.1-Grass-Wonder/vdecoder/__init__.py b/spaces/yl12053/so-vits-4.1-Grass-Wonder/vdecoder/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/yl12053/so-vits-4.1-Grass-Wonder/vencoder/ContentVec256L9_Onnx.py b/spaces/yl12053/so-vits-4.1-Grass-Wonder/vencoder/ContentVec256L9_Onnx.py deleted file mode 100644 index fae2b928252801795b038f51451b234e007f6f03..0000000000000000000000000000000000000000 --- a/spaces/yl12053/so-vits-4.1-Grass-Wonder/vencoder/ContentVec256L9_Onnx.py +++ /dev/null @@ -1,28 +0,0 @@ -from vencoder.encoder import SpeechEncoder -import onnxruntime -import torch - -class ContentVec256L9_Onnx(SpeechEncoder): - def __init__(self,vec_path = "pretrain/vec-256-layer-9.onnx",device=None): - print("load model(s) from {}".format(vec_path)) - self.hidden_dim = 256 - if device is None: - self.dev = torch.device("cpu") - else: - self.dev = torch.device(device) - if device == 'cpu' or device == torch.device("cpu") or device is None: - providers = ['CPUExecutionProvider'] - elif device == 'cuda' or device == torch.device("cuda"): - providers = ['CUDAExecutionProvider', 'CPUExecutionProvider'] - self.model = onnxruntime.InferenceSession(vec_path, providers=providers) - - def encoder(self, wav): - feats = wav - if feats.dim() == 2: # double channels - feats = feats.mean(-1) - assert feats.dim() == 1, feats.dim() - feats = feats.view(1, -1) - feats = feats.unsqueeze(0).cpu().detach().numpy() - onnx_input = {self.model.get_inputs()[0].name: feats} - logits = self.model.run(None, onnx_input) - return torch.tensor(logits[0]).transpose(1, 2).to(self.dev) \ No newline at end of file diff --git a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/heatmap_focal_loss.py b/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/heatmap_focal_loss.py deleted file mode 100644 index d4693b2125217527033727ec9a82959286d180f9..0000000000000000000000000000000000000000 --- a/spaces/ynhe/AskAnything/models/grit_src/third_party/CenterNet2/projects/CenterNet2/centernet/modeling/layers/heatmap_focal_loss.py +++ /dev/null @@ -1,92 +0,0 @@ -import torch -from torch.nn import functional as F - -# TODO: merge these two function -def heatmap_focal_loss( - inputs, - targets, - pos_inds, - labels, - alpha: float = -1, - beta: float = 4, - gamma: float = 2, - reduction: str = 'sum', - sigmoid_clamp: float = 1e-4, - ignore_high_fp: float = -1., -): - """ - Loss used in RetinaNet for dense detection: https://arxiv.org/abs/1708.02002. - Args: - inputs: (sum_l N*Hl*Wl, C) - targets: (sum_l N*Hl*Wl, C) - pos_inds: N - labels: N - Returns: - Loss tensor with the reduction option applied. 
- """ - pred = torch.clamp(inputs.sigmoid_(), min=sigmoid_clamp, max=1-sigmoid_clamp) - neg_weights = torch.pow(1 - targets, beta) - pos_pred_pix = pred[pos_inds] # N x C - pos_pred = pos_pred_pix.gather(1, labels.unsqueeze(1)) - pos_loss = torch.log(pos_pred) * torch.pow(1 - pos_pred, gamma) - neg_loss = torch.log(1 - pred) * torch.pow(pred, gamma) * neg_weights - - if ignore_high_fp > 0: - not_high_fp = (pred < ignore_high_fp).float() - neg_loss = not_high_fp * neg_loss - - if reduction == "sum": - pos_loss = pos_loss.sum() - neg_loss = neg_loss.sum() - - if alpha >= 0: - pos_loss = alpha * pos_loss - neg_loss = (1 - alpha) * neg_loss - - return - pos_loss, - neg_loss - -heatmap_focal_loss_jit = torch.jit.script(heatmap_focal_loss) -# heatmap_focal_loss_jit = heatmap_focal_loss - -def binary_heatmap_focal_loss( - inputs, - targets, - pos_inds, - alpha: float = -1, - beta: float = 4, - gamma: float = 2, - sigmoid_clamp: float = 1e-4, - ignore_high_fp: float = -1., -): - """ - Args: - inputs: (sum_l N*Hl*Wl,) - targets: (sum_l N*Hl*Wl,) - pos_inds: N - Returns: - Loss tensor with the reduction option applied. - """ - pred = torch.clamp(inputs.sigmoid_(), min=sigmoid_clamp, max=1-sigmoid_clamp) - neg_weights = torch.pow(1 - targets, beta) - for i, ind in enumerate(pos_inds): - if ind >= pred.shape[0]: - print('%'*100) - print(pred.shape, ind, pos_inds) - pos_inds[i] = pred.shape[0] - 1 - pos_pred = pred[pos_inds] # N - pos_loss = torch.log(pos_pred) * torch.pow(1 - pos_pred, gamma) - neg_loss = torch.log(1 - pred) * torch.pow(pred, gamma) * neg_weights - if ignore_high_fp > 0: - not_high_fp = (pred < ignore_high_fp).float() - neg_loss = not_high_fp * neg_loss - - pos_loss = - pos_loss.sum() - neg_loss = - neg_loss.sum() - - if alpha >= 0: - pos_loss = alpha * pos_loss - neg_loss = (1 - alpha) * neg_loss - - return pos_loss, neg_loss - -# binary_heatmap_focal_loss_jit = torch.jit.script(binary_heatmap_focal_loss) \ No newline at end of file diff --git a/spaces/ysharma/text-to-image-to-video/autoencoder.py b/spaces/ysharma/text-to-image-to-video/autoencoder.py deleted file mode 100644 index dc712c713ac1b2353154125fa642693ef2096cd4..0000000000000000000000000000000000000000 --- a/spaces/ysharma/text-to-image-to-video/autoencoder.py +++ /dev/null @@ -1,443 +0,0 @@ -import torch -import pytorch_lightning as pl -import torch.nn.functional as F -from contextlib import contextmanager - -from taming.modules.vqvae.quantize import VectorQuantizer2 as VectorQuantizer - -from model import Encoder, Decoder -from distributions import DiagonalGaussianDistribution - -from util import instantiate_from_config - - -class VQModel(pl.LightningModule): - def __init__(self, - ddconfig, - lossconfig, - n_embed, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - batch_resize_range=None, - scheduler_config=None, - lr_g_factor=1.0, - remap=None, - sane_index_shape=False, # tell vector quantizer to return indices as bhw - use_ema=False - ): - super().__init__() - self.embed_dim = embed_dim - self.n_embed = n_embed - self.image_key = image_key - self.encoder = Encoder(**ddconfig) - self.decoder = Decoder(**ddconfig) - self.loss = instantiate_from_config(lossconfig) - self.quantize = VectorQuantizer(n_embed, embed_dim, beta=0.25, - remap=remap, - sane_index_shape=sane_index_shape) - self.quant_conv = torch.nn.Conv2d(ddconfig["z_channels"], embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) - if colorize_nlabels is not 
None: - assert type(colorize_nlabels)==int - self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1)) - if monitor is not None: - self.monitor = monitor - self.batch_resize_range = batch_resize_range - if self.batch_resize_range is not None: - print(f"{self.__class__.__name__}: Using per-batch resizing in range {batch_resize_range}.") - - self.use_ema = use_ema - if self.use_ema: - self.model_ema = LitEma(self) - print(f"Keeping EMAs of {len(list(self.model_ema.buffers()))}.") - - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - self.scheduler_config = scheduler_config - self.lr_g_factor = lr_g_factor - - @contextmanager - def ema_scope(self, context=None): - if self.use_ema: - self.model_ema.store(self.parameters()) - self.model_ema.copy_to(self) - if context is not None: - print(f"{context}: Switched to EMA weights") - try: - yield None - finally: - if self.use_ema: - self.model_ema.restore(self.parameters()) - if context is not None: - print(f"{context}: Restored training weights") - - def init_from_ckpt(self, path, ignore_keys=list()): - sd = torch.load(path, map_location="cpu")["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - missing, unexpected = self.load_state_dict(sd, strict=False) - print(f"Restored from {path} with {len(missing)} missing and {len(unexpected)} unexpected keys") - if len(missing) > 0: - print(f"Missing Keys: {missing}") - print(f"Unexpected Keys: {unexpected}") - - def on_train_batch_end(self, *args, **kwargs): - if self.use_ema: - self.model_ema(self) - - def encode(self, x): - h = self.encoder(x) - h = self.quant_conv(h) - quant, emb_loss, info = self.quantize(h) - return quant, emb_loss, info - - def encode_to_prequant(self, x): - h = self.encoder(x) - h = self.quant_conv(h) - return h - - def decode(self, quant): - quant = self.post_quant_conv(quant) - dec = self.decoder(quant) - return dec - - def decode_code(self, code_b): - quant_b = self.quantize.embed_code(code_b) - dec = self.decode(quant_b) - return dec - - def forward(self, input, return_pred_indices=False): - quant, diff, (_,_,ind) = self.encode(input) - dec = self.decode(quant) - if return_pred_indices: - return dec, diff, ind - return dec, diff - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float() - if self.batch_resize_range is not None: - lower_size = self.batch_resize_range[0] - upper_size = self.batch_resize_range[1] - if self.global_step <= 4: - # do the first few batches with max size to avoid later oom - new_resize = upper_size - else: - new_resize = np.random.choice(np.arange(lower_size, upper_size+16, 16)) - if new_resize != x.shape[2]: - x = F.interpolate(x, size=new_resize, mode="bicubic") - x = x.detach() - return x - - def training_step(self, batch, batch_idx, optimizer_idx): - # https://github.com/pytorch/pytorch/issues/37142 - # try not to fool the heuristics - x = self.get_input(batch, self.image_key) - xrec, qloss, ind = self(x, return_pred_indices=True) - - if optimizer_idx == 0: - # autoencode - aeloss, log_dict_ae = self.loss(qloss, x, xrec, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train", - predicted_indices=ind) - - self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=True) - return aeloss - - if optimizer_idx == 1: - # 
discriminator - discloss, log_dict_disc = self.loss(qloss, x, xrec, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=True) - return discloss - - def validation_step(self, batch, batch_idx): - log_dict = self._validation_step(batch, batch_idx) - with self.ema_scope(): - log_dict_ema = self._validation_step(batch, batch_idx, suffix="_ema") - return log_dict - - def _validation_step(self, batch, batch_idx, suffix=""): - x = self.get_input(batch, self.image_key) - xrec, qloss, ind = self(x, return_pred_indices=True) - aeloss, log_dict_ae = self.loss(qloss, x, xrec, 0, - self.global_step, - last_layer=self.get_last_layer(), - split="val"+suffix, - predicted_indices=ind - ) - - discloss, log_dict_disc = self.loss(qloss, x, xrec, 1, - self.global_step, - last_layer=self.get_last_layer(), - split="val"+suffix, - predicted_indices=ind - ) - rec_loss = log_dict_ae[f"val{suffix}/rec_loss"] - self.log(f"val{suffix}/rec_loss", rec_loss, - prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True) - self.log(f"val{suffix}/aeloss", aeloss, - prog_bar=True, logger=True, on_step=False, on_epoch=True, sync_dist=True) - if version.parse(pl.__version__) >= version.parse('1.4.0'): - del log_dict_ae[f"val{suffix}/rec_loss"] - self.log_dict(log_dict_ae) - self.log_dict(log_dict_disc) - return self.log_dict - - def configure_optimizers(self): - lr_d = self.learning_rate - lr_g = self.lr_g_factor*self.learning_rate - print("lr_d", lr_d) - print("lr_g", lr_g) - opt_ae = torch.optim.Adam(list(self.encoder.parameters())+ - list(self.decoder.parameters())+ - list(self.quantize.parameters())+ - list(self.quant_conv.parameters())+ - list(self.post_quant_conv.parameters()), - lr=lr_g, betas=(0.5, 0.9)) - opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(), - lr=lr_d, betas=(0.5, 0.9)) - - if self.scheduler_config is not None: - scheduler = instantiate_from_config(self.scheduler_config) - - print("Setting up LambdaLR scheduler...") - scheduler = [ - { - 'scheduler': LambdaLR(opt_ae, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }, - { - 'scheduler': LambdaLR(opt_disc, lr_lambda=scheduler.schedule), - 'interval': 'step', - 'frequency': 1 - }, - ] - return [opt_ae, opt_disc], scheduler - return [opt_ae, opt_disc], [] - - def get_last_layer(self): - return self.decoder.conv_out.weight - - def log_images(self, batch, only_inputs=False, plot_ema=False, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - if only_inputs: - log["inputs"] = x - return log - xrec, _ = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - x = self.to_rgb(x) - xrec = self.to_rgb(xrec) - log["inputs"] = x - log["reconstructions"] = xrec - if plot_ema: - with self.ema_scope(): - xrec_ema, _ = self(x) - if x.shape[1] > 3: xrec_ema = self.to_rgb(xrec_ema) - log["reconstructions_ema"] = xrec_ema - return log - - def to_rgb(self, x): - assert self.image_key == "segmentation" - if not hasattr(self, "colorize"): - self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) - x = F.conv2d(x, weight=self.colorize) - x = 2.*(x-x.min())/(x.max()-x.min()) - 1. 
- return x - - -class VQModelInterface(VQModel): - def __init__(self, embed_dim, *args, **kwargs): - super().__init__(embed_dim=embed_dim, *args, **kwargs) - self.embed_dim = embed_dim - - def encode(self, x): - h = self.encoder(x) - h = self.quant_conv(h) - return h - - def decode(self, h, force_not_quantize=False): - # also go through quantization layer - if not force_not_quantize: - quant, emb_loss, info = self.quantize(h) - else: - quant = h - quant = self.post_quant_conv(quant) - dec = self.decoder(quant) - return dec - - -class AutoencoderKL(pl.LightningModule): - def __init__(self, - ddconfig, - lossconfig, - embed_dim, - ckpt_path=None, - ignore_keys=[], - image_key="image", - colorize_nlabels=None, - monitor=None, - ): - super().__init__() - self.image_key = image_key - self.encoder = Encoder(**ddconfig) - self.decoder = Decoder(**ddconfig) - self.loss = instantiate_from_config(lossconfig) - assert ddconfig["double_z"] - self.quant_conv = torch.nn.Conv2d(2*ddconfig["z_channels"], 2*embed_dim, 1) - self.post_quant_conv = torch.nn.Conv2d(embed_dim, ddconfig["z_channels"], 1) - self.embed_dim = embed_dim - if colorize_nlabels is not None: - assert type(colorize_nlabels)==int - self.register_buffer("colorize", torch.randn(3, colorize_nlabels, 1, 1)) - if monitor is not None: - self.monitor = monitor - if ckpt_path is not None: - self.init_from_ckpt(ckpt_path, ignore_keys=ignore_keys) - - def init_from_ckpt(self, path, ignore_keys=list()): - sd = torch.load(path, map_location="cpu")["state_dict"] - keys = list(sd.keys()) - for k in keys: - for ik in ignore_keys: - if k.startswith(ik): - print("Deleting key {} from state_dict.".format(k)) - del sd[k] - self.load_state_dict(sd, strict=False) - print(f"Restored from {path}") - - def encode(self, x): - h = self.encoder(x) - moments = self.quant_conv(h) - posterior = DiagonalGaussianDistribution(moments) - return posterior - - def decode(self, z): - z = self.post_quant_conv(z) - dec = self.decoder(z) - return dec - - def forward(self, input, sample_posterior=True): - posterior = self.encode(input) - if sample_posterior: - z = posterior.sample() - else: - z = posterior.mode() - dec = self.decode(z) - return dec, posterior - - def get_input(self, batch, k): - x = batch[k] - if len(x.shape) == 3: - x = x[..., None] - x = x.permute(0, 3, 1, 2).to(memory_format=torch.contiguous_format).float() - return x - - def training_step(self, batch, batch_idx, optimizer_idx): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - - if optimizer_idx == 0: - # train encoder+decoder+logvar - aeloss, log_dict_ae = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - self.log("aeloss", aeloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_ae, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return aeloss - - if optimizer_idx == 1: - # train the discriminator - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, optimizer_idx, self.global_step, - last_layer=self.get_last_layer(), split="train") - - self.log("discloss", discloss, prog_bar=True, logger=True, on_step=True, on_epoch=True) - self.log_dict(log_dict_disc, prog_bar=False, logger=True, on_step=True, on_epoch=False) - return discloss - - def validation_step(self, batch, batch_idx): - inputs = self.get_input(batch, self.image_key) - reconstructions, posterior = self(inputs) - aeloss, log_dict_ae = self.loss(inputs, 
reconstructions, posterior, 0, self.global_step, - last_layer=self.get_last_layer(), split="val") - - discloss, log_dict_disc = self.loss(inputs, reconstructions, posterior, 1, self.global_step, - last_layer=self.get_last_layer(), split="val") - - self.log("val/rec_loss", log_dict_ae["val/rec_loss"]) - self.log_dict(log_dict_ae) - self.log_dict(log_dict_disc) - return self.log_dict - - def configure_optimizers(self): - lr = self.learning_rate - opt_ae = torch.optim.Adam(list(self.encoder.parameters())+ - list(self.decoder.parameters())+ - list(self.quant_conv.parameters())+ - list(self.post_quant_conv.parameters()), - lr=lr, betas=(0.5, 0.9)) - opt_disc = torch.optim.Adam(self.loss.discriminator.parameters(), - lr=lr, betas=(0.5, 0.9)) - return [opt_ae, opt_disc], [] - - def get_last_layer(self): - return self.decoder.conv_out.weight - - @torch.no_grad() - def log_images(self, batch, only_inputs=False, **kwargs): - log = dict() - x = self.get_input(batch, self.image_key) - x = x.to(self.device) - if not only_inputs: - xrec, posterior = self(x) - if x.shape[1] > 3: - # colorize with random projection - assert xrec.shape[1] > 3 - x = self.to_rgb(x) - xrec = self.to_rgb(xrec) - log["samples"] = self.decode(torch.randn_like(posterior.sample())) - log["reconstructions"] = xrec - log["inputs"] = x - return log - - def to_rgb(self, x): - assert self.image_key == "segmentation" - if not hasattr(self, "colorize"): - self.register_buffer("colorize", torch.randn(3, x.shape[1], 1, 1).to(x)) - x = F.conv2d(x, weight=self.colorize) - x = 2.*(x-x.min())/(x.max()-x.min()) - 1. - return x - - -class IdentityFirstStage(torch.nn.Module): - def __init__(self, *args, vq_interface=False, **kwargs): - self.vq_interface = vq_interface # TODO: Should be true by default but check to not break older stuff - super().__init__() - - def encode(self, x, *args, **kwargs): - return x - - def decode(self, x, *args, **kwargs): - return x - - def quantize(self, x, *args, **kwargs): - if self.vq_interface: - return x, None, [None, None, None] - return x - - def forward(self, x, *args, **kwargs): - return x diff --git a/spaces/zhongkaifu/mt_chs_enu/README.md b/spaces/zhongkaifu/mt_chs_enu/README.md deleted file mode 100644 index 4b77376b1a31ff88393ea682af9ad2641215223a..0000000000000000000000000000000000000000 --- a/spaces/zhongkaifu/mt_chs_enu/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: Translation from Chinese to English -emoji: 🐨 -colorFrom: indigo -colorTo: blue -sdk: docker -pinned: false -license: bsd-3-clause ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference