diff --git a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Videostudio 12 Activation Code Keygen How to Get the Best Video Editing Experience with Corel Videostudio 12.md b/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Videostudio 12 Activation Code Keygen How to Get the Best Video Editing Experience with Corel Videostudio 12.md deleted file mode 100644 index 07467c95cd331d06b497c98913edb80885c148a8..0000000000000000000000000000000000000000 --- a/spaces/1acneusushi/gradio-2dmoleculeeditor/data/Corel Videostudio 12 Activation Code Keygen How to Get the Best Video Editing Experience with Corel Videostudio 12.md +++ /dev/null @@ -1,86 +0,0 @@ - -

Corel VideoStudio 12 Activation Code Keygen: How to Get It for Free

-

If you are looking for powerful and easy-to-use video editing software, you may have heard of Corel VideoStudio 12. This software allows you to create stunning videos with professional-quality effects, transitions, titles, music, and more. However, to enjoy all the features and benefits of Corel VideoStudio 12, you need to activate it with a valid serial number and an activation code.

-

corel videostudio 12 activation code keygen


Downloadhttps://byltly.com/2uKzNA



-

Unfortunately, buying a license for Corel VideoStudio 12 can be quite expensive, especially if you are on a tight budget or you only need it for a short-term project. That's why some people may want to get an activation code keygen for Corel VideoStudio 12 for free.

-

But what is an activation code keygen, and how can you get one for Corel VideoStudio 12? In this article, we will explain everything you need to know about the Corel VideoStudio 12 activation code keygen and provide you with three different methods for getting it for free.

-

What is Corel VideoStudio 12?

-

Corel VideoStudio 12 is a video editing software that was released in 2008 by Corel Corporation. It is the successor of Ulead VideoStudio 11 and the predecessor of Corel VideoStudio Pro X2.

-

Corel VideoStudio 12 offers many features and tools that can help you create amazing videos with ease. Some of the features include:

- -

What is an activation code keygen?

-

An activation code keygen is a software program that can generate serial numbers and activation codes for other software programs. A serial number is a unique identifier that is required to install a software program on your computer. An activation code is a verification code that is required to activate a software program after installation.

-

corel videostudio 12 pro serial number generator
-corel videostudio 12 ultimate crack download
-corel videostudio 12 license key free
-corel videostudio 12 product activation code
-corel videostudio 12 keygen only
-corel videostudio 12 full version with crack
-corel videostudio 12 registration code online
-corel videostudio 12 activation patch
-corel videostudio 12 serial key and email
-corel videostudio 12 crack file
-corel videostudio 12 activation code generator online
-corel videostudio 12 offline activation code
-corel videostudio 12 keygen.exe download
-corel videostudio 12 activation code free download
-corel videostudio 12 crack keygen rar
-corel videostudio 12 serial number and activation code
-corel videostudio 12 license key generator online
-corel videostudio 12 crack download for windows 10
-corel videostudio 12 activation code crack
-corel videostudio 12 keygen by kaizer soze
-corel videostudio 12 serial number free download
-corel videostudio 12 activation code online free
-corel videostudio 12 license key crack
-corel videostudio 12 keygen download free
-corel videostudio 12 activation code torrent
-corel videostudio 12 serial number and email address
-corel videostudio 12 license key free download
-corel videostudio 12 crack download full version
-corel videostudio 12 activation code online generator
-corel videostudio 12 keygen by x-force
-corel videostudio 12 serial number generator online
-corel videostudio 12 activation code free online
-corel videostudio 12 license key online free
-corel videostudio 12 crack download for windows 7
-corel videostudio 12 activation code no survey
-corel videostudio 12 serial number and email generator
-corel videostudio 12 license key generator download
-corel videostudio 12 crack download for mac
-corel videostudio 12 activation code reddit
-corel videostudio 12 keygen by zwt
-corel videostudio 12 serial number online free
-corel videostudio 12 activation code youtube
-corel videostudio 12 license key online generator
-corel videostudio 12 crack download for pc
-corel videostudio 12 activation code txt file download

-

An activation code keygen works by using algorithms or formulas that can produce valid serial numbers and activation codes based on the name or version of the software program. For example, if you want to activate Corel VideoStudio 12 with an activation code keygen, you need to select Corel VideoStudio 12 as the target software program in the keygen interface. Then, the keygen will generate a serial number and an activation code for Corel VideoStudio 12 that you can use to install and activate it on your computer.

-

Why do you need an activation code keygen for Corel VideoStudio 12?

-

As mentioned earlier, activating Corel VideoStudio 12 with a valid serial number and an activation code is necessary to enjoy all its features and benefits. However, buying a license for Corel VideoStudio 12 can be quite costly. According to some online sources, the original price of Corel VideoStudio 12 was around $100 when it was released in 2008.

-

Therefore, some people may want to get an activation code keygen for Corel VideoStudio 12 for free instead of paying for a license. Some of the reasons why they may want to do so are:

- -

How to Get an Activation Code Keygen for Corel VideoStudio 12

-

If you are one of those people who want to get an activation code keygen for Corel VideoStudio 12 for free

-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/ACDSee Pro 2.5.332 Serial 64 Bit.md b/spaces/1gistliPinn/ChatGPT4/Examples/ACDSee Pro 2.5.332 Serial 64 Bit.md deleted file mode 100644 index a8dac443175d9ad9c014a9506a90017a4d0ca834..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/ACDSee Pro 2.5.332 Serial 64 Bit.md +++ /dev/null @@ -1,52 +0,0 @@ -

ACDSee Pro 2.5.332 serial 64 bit


Download File >>> https://imgfil.com/2uxZjY



-
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Basic Statistics And Probability By Shahid Jamal Pdf Downloadl REPACK.md b/spaces/1gistliPinn/ChatGPT4/Examples/Basic Statistics And Probability By Shahid Jamal Pdf Downloadl REPACK.md deleted file mode 100644 index af80ed9dc9388bf9f308cff8f7f818a859151f01..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Basic Statistics And Probability By Shahid Jamal Pdf Downloadl REPACK.md +++ /dev/null @@ -1,6 +0,0 @@ -

Basic Statistics And Probability By Shahid Jamal Pdf Downloadl


Download ››››› https://imgfil.com/2uxYaM



-
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bioexcess Plus Crack The Best Way to Enhance Your Windows Experience.md b/spaces/1gistliPinn/ChatGPT4/Examples/Bioexcess Plus Crack The Best Way to Enhance Your Windows Experience.md deleted file mode 100644 index 75c330672f947fd7a6b83a3bf7347a2a40ef32e5..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Bioexcess Plus Crack The Best Way to Enhance Your Windows Experience.md +++ /dev/null @@ -1,22 +0,0 @@ -
-

austhal 19191a764c
-embedded-workbench-for-arm-710-crack

-

yasann 19191a764c
-pino/bioexcess-plus-crack

-

Bioexcess Plus Crack


Download ✯✯✯ https://imgfil.com/2uxZsp



-

charquin 19191a764c
-vision-lifesign-with-remedies-english-cracked-free-download-torrent-file-1

-

vinfide 19191a764c
-league-live-4-crack-serial-key

-

olyzav 19191a764c
-digital-gem-pro-pro-crack-serial-keygenzip

-

oxfpai 19191a764c
-spinner-free-download-crack-idm

-

impgin 19191a764c
-pro-4-vst-crack

-

lorequa 19191a764c
-pdf-professional-24931-with-crack-latest

-

encdahy 19191a764c
-video-editor-6-crack-serial-key-2020-free-download

-

iolagodo 19191a764c
-partition-master-138-crack-serial-key-2020-free

-

-

walbvyr 19191a764c
-gaillard/solidworks-2008-software-free-download-with-crack

-

hencath 19191a764c
-accounting-software-cracked-download

-

janfest 19191a764c
-zoo-crack-serial-key

-

flavdary 19191a764c
-60-crackupdated

-

valyel 19191a764c
-10-download-crack-45

-

Thanks for your article. It is extremely unfortunate that over the last decade, the travel industry has had to tackle terrorism, SARS, tsunamis, bird flu, swine flu, along with the first ever true global tough economy. Through all of it the industry has really proven to be sturdy, resilient plus dynamic, getting new approaches to deal with trouble. There are constantly fresh problems and opportunity to which the marketplace must again adapt and behave.

-


-
-
\ No newline at end of file diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Bu Ali Sina Books In Urdu Free Download TOP.md b/spaces/1gistliPinn/ChatGPT4/Examples/Bu Ali Sina Books In Urdu Free Download TOP.md deleted file mode 100644 index b23637e9c93897144c1d8c90c8b4c2b557db08cd..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Bu Ali Sina Books In Urdu Free Download TOP.md +++ /dev/null @@ -1,6 +0,0 @@ -

bu ali sina books in urdu free download


Download Filehttps://imgfil.com/2uy0qU



-
. D-581. 1974
Tarjuma Qanoon Shaikh Bu Ali Seena. Volume-005. 1780
Meyaar-ul-Uqool. 1960
Qanoon Shaikh Bu Ali Seena. Volume-006. 1530
Meyaar-ul-Uqool. 1935
Qanoon Shaikh Bu Ali Seena. Volume-007. 1883
Tarjuma Qanoon Shaikh Bu Ali. D-581. 1975
Qanoon Shaikh Bu Ali Seena. Volume-008. 1760
Meyaar-ul-Uqool. 1970
Qanoon Shaikh Bu Ali Seena. Volume-009. 1520
Meyaar-ul-Uqool. 1939
Qanoon Shaikh Bu Ali Seena. Volume-010. 1880
Tarjuma Qanoon Shaikh Bu Ali Seena. D-581. 1976
Qanoon Shaikh Bu Ali Seena. Volume-011. 1690
Meyaar-ul-Uqool. 1961
Meyaar-ul-Uqool. 1930
Meyaar-ul-Uqool. 1946
Meyaar-ul-Uqool. 1932
Meyaar-ul-Uqool. 1926
Meyaar-ul-Uqool. 1925
Meyaar-ul-Uqool. 1934
Meyaar-ul-Uqool. 1933
Meyaar-ul-Uqool. 1934
Meyaar-ul-Uqool. 1935
Meyaar-ul-Uqool. 1936
Meyaar-ul-Uqool. 1938
Meyaar-ul-Uqool. 1939
Meyaar-ul-Uqool. 1940
Meyaar-ul-Uqool. 1941
Meyaar-ul-Uqool. 1942
Meyaar-ul-Uqool. 1947
Meyaar-ul-Uqool. 1949
Meyaar-ul-Uqool. 1952
Meyaar-ul-Uqool. 1953
Meyaar-ul-Uqool. 1954
Meyaar-ul-Uqool. 1957
Meya
-
-
-

diff --git a/spaces/1gistliPinn/ChatGPT4/Examples/Discografia Pholhas Torrent 108 !FULL!.md b/spaces/1gistliPinn/ChatGPT4/Examples/Discografia Pholhas Torrent 108 !FULL!.md deleted file mode 100644 index a3504a9688a5c415b83554687dd3fe6bc53169c9..0000000000000000000000000000000000000000 --- a/spaces/1gistliPinn/ChatGPT4/Examples/Discografia Pholhas Torrent 108 !FULL!.md +++ /dev/null @@ -1,6 +0,0 @@ -

discografia pholhas torrent 108


Download ✏ ✏ ✏ https://imgfil.com/2uy10z



-
-
-

diff --git a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Chapters Interactive Stories Mod Apk 6.3.4 The Ultimate Guide to Unlocking All Features.md b/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Chapters Interactive Stories Mod Apk 6.3.4 The Ultimate Guide to Unlocking All Features.md deleted file mode 100644 index 22e70bf1441322382983f3b6644b60daf98eb8f9..0000000000000000000000000000000000000000 --- a/spaces/1pelhydcardo/ChatGPT-prompt-generator/assets/Chapters Interactive Stories Mod Apk 6.3.4 The Ultimate Guide to Unlocking All Features.md +++ /dev/null @@ -1,123 +0,0 @@ -
-

Chapters Interactive Stories Mod Apk 6.3.4: A Guide for Gamers

-

If you are a fan of interactive and immersive storytelling, you might have heard of Chapters Interactive Stories, a popular mobile game that lets you choose your own adventure in various genres and scenarios. But did you know that there is a modded version of this game that gives you unlimited access to all the chapters, cards, and features? In this article, we will tell you everything you need to know about Chapters Interactive Stories Mod Apk 6.3.4, including what it is, how to download and install it, and some tips and tricks to make the most out of it.

-

What is Chapters Interactive Stories?

-

Chapters Interactive Stories is a mobile game developed by Crazy Maple Studio that offers users an immersive experience through interactive and engaging storylines. In this game, players get to choose their own path in various story scenarios, making decisions that affect the outcome of the story. By doing so, players can shape their own characters, relationships, and endings.

-

chapters interactive stories mod apk 6.3.4


DOWNLOAD ☆☆☆ https://urlin.us/2uSTXI



-

The game features a wide range of genres, such as romance, fantasy, drama, horror, comedy, and more. Each genre has multiple stories to choose from, each with different characters, settings, and plots. Some of the stories are original creations by the developers, while others are based on popular books, movies, or TV shows.

-

Some of the features of Chapters Interactive Stories are:

-

Features of Chapters Interactive Stories

- -

How to play Chapters Interactive Stories

-

To play Chapters Interactive Stories, players need to download the game from the Google Play Store or the App Store for free. After installing the game, players need to create an account or log in with their Facebook account. Then, players can choose a genre and a story to start playing.

-

The game interface consists of three main elements: the story text, the choices menu, and the navigation bar. The story text displays the dialogue and narration of the story. The choices menu shows the options that players can choose from at certain points in the story. The navigation bar contains buttons that allow players to access other features of the game, such as the home screen, the store, the settings, and more.

-

To progress through the story, players need to tap on the screen to read the story text and make choices when prompted. Some choices are free, while others require diamonds or tickets to unlock. Diamonds are the premium currency of the game that can be used to unlock premium choices, outfits, and cards. Tickets are the energy of the game that can be used to start a new chapter. Players can earn diamonds and tickets by completing chapters, watching ads, or purchasing them with real money.

-

What is Chapters Interactive Stories Mod Apk 6.3.4?

-

Chapters Interactive Stories Mod Apk 6.3.4 is a modified version of the original game that gives players unlimited access to all the features of the game without spending any money. This means that players can enjoy all the chapters, cards, and outfits without worrying about diamonds or tickets. Moreover, players can also get rid of annoying ads and enjoy a smoother gaming experience.

-

The modded version of the game is not available on the official app stores, but it can be downloaded from third-party websites that provide apk files. However, players should be careful when downloading and installing modded apk files, as they may contain viruses or malware that can harm their devices or compromise their personal information.

-

chapters interactive stories mod apk 6.3.4 download
-chapters interactive stories mod apk 6.3.4 unlimited diamonds
-chapters interactive stories mod apk 6.3.4 free tickets
-chapters interactive stories mod apk 6.3.4 latest version
-chapters interactive stories mod apk 6.3.4 android
-chapters interactive stories mod apk 6.3.4 ios
-chapters interactive stories mod apk 6.3.4 no root
-chapters interactive stories mod apk 6.3.4 offline
-chapters interactive stories mod apk 6.3.4 hack
-chapters interactive stories mod apk 6.3.4 cheats
-chapters interactive stories mod apk 6.3.4 premium choices
-chapters interactive stories mod apk 6.3.4 unlocked all
-chapters interactive stories mod apk 6.3.4 review
-chapters interactive stories mod apk 6.3.4 gameplay
-chapters interactive stories mod apk 6.3.4 update
-chapters interactive stories mod apk 6.3.4 install
-chapters interactive stories mod apk 6.3.4 features
-chapters interactive stories mod apk 6.3.4 tips
-chapters interactive stories mod apk 6.3.4 guide
-chapters interactive stories mod apk 6.3.4 tutorial
-chapters interactive stories mod apk 6.3.4 reddit
-chapters interactive stories mod apk 6.3.4 youtube
-chapters interactive stories mod apk 6.3.4 facebook
-chapters interactive stories mod apk 6.3.4 twitter
-chapters interactive stories mod apk 6.3.4 instagram
-chapters interactive stories mod apk 6.3.4 pinterest
-chapters interactive stories mod apk 6.3.4 quora
-chapters interactive stories mod apk 6.3.4 medium
-chapters interactive stories mod apk 6.3.4 blogspot
-chapters interactive stories mod apk 6.3.4 wordpress
-chapters interactive stories mod apk 6.3.4 tumblr
-chapters interactive stories mod apk 6.3.4 forum
-chapters interactive stories mod apk 6.3.4 discord
-chapters interactive stories mod apk 6.3.4 telegram
-chapters interactive stories mod apk 6.3.4 whatsapp
-chapters interactive stories mod apk 6.3.4 email
-chapters interactive stories mod apk 6.3.4 support
-chapters interactive stories mod apk 6.3.4 faq
-chapters interactive stories mod apk 6.3.4 wiki
-chapters interactive stories mod apk 6.

-

Benefits of Chapters Interactive Stories Mod Apk 6.3.4

- -

How to download and install Chapters Interactive Stories Mod Apk 6.3.4

-

To download and install Chapters Interactive Stories Mod Apk 6.3.4, players need to follow these steps:

-
    -
  1. Find a reliable website: Players need to find a trustworthy website that provides the modded apk file of the game. They can search for "Chapters Interactive Stories Mod Apk 6.3.4" on Google or other search engines and check the reviews and ratings of the websites before downloading.
  2. -
  3. Download the apk file: Players need to click on the download button on the website and wait for the apk file to be downloaded on their device.
  4. -
  5. Enable unknown sources: Players need to go to their device settings and enable the option of installing apps from unknown sources. This will allow them to install the modded apk file without any issues.
  6. -
  7. Install the apk file: Players need to locate the apk file on their device and tap on it to start the installation process. They need to follow the instructions on the screen and wait for the installation to be completed.
  8. -
  9. Launch the game: Players need to open the game icon on their device and enjoy playing Chapters Interactive Stories Mod Apk 6.3.4 with unlimited features.
  10. -
-

Tips and tricks for Chapters Interactive Stories Mod Apk 6.3.4

-

To make the most out of Chapters Interactive Stories Mod Apk 6.3.4, players can use these tips and tricks:

-

Choose your genre wisely

-

The game offers a variety of genres to choose from, such as romance, fantasy, drama, horror, comedy, and more. Each genre has its own style, tone, and mood, so players should choose a genre that suits their preferences and interests. For example, if they want a light-hearted and humorous story, they can choose comedy; if they want a thrilling and suspenseful story, they can choose horror; if they want a romantic and emotional story, they can choose romance; and so on.

-

Spend your diamonds and tickets carefully

-

Even though players have unlimited diamonds and tickets in Chapters Interactive Stories Mod Apk 6.3.4, they should still spend them wisely and strategically. For example, they should not waste their diamonds on choices or outfits that do not affect the story or their character development; they should save their tickets for stories that they are interested in or excited about; they should use their cards to boost their stats or unlock bonus scenes; and so on.

-

Customize your character and outfits

-

The game allows players to customize their own characters' appearance, name, gender, and personality. This gives players more control over their story and makes them feel more connected to their characters. Moreover, players can also dress up their characters in various outfits that suit their style and mood. This adds more fun and flair to their gameplay and makes their characters stand out from others.

-

-

The game also has social features that allow players to interact with other players and authors through chat rooms, comments, reviews, ratings, and more. This can help players to share their opinions, feedback, suggestions, and tips with others; to discover new stories and genres; to make friends and join communities; and to support their favorite authors and stories.

-

Conclusion

-

Chapters Interactive Stories Mod Apk 6.3.4 is a great way to enjoy interactive and immersive storytelling on your mobile device. With this modded version of the game, you can access all the features of the game without spending any money. You can choose your own adventure in various genres and scenarios, customize your character and outfits, and interact with other players and authors. If you are looking for a fun and engaging game that lets you create your own story, you should definitely try Chapters Interactive Stories Mod Apk 6.3.4.

-

FAQs

-

Here are some frequently asked questions about Chapters Interactive Stories Mod Apk 6.3.4:

- - - - - - - - - - - - - - - - - - - - - - - - - -
| Question | Answer |
| --- | --- |
| Is Chapters Interactive Stories Mod Apk 6.3.4 safe to use? | Chapters Interactive Stories Mod Apk 6.3.4 is generally safe to use, as long as you download it from a reliable website that provides virus-free apk files. However, you should always be careful when installing modded apk files, as they may contain malware or spyware that can harm your device or steal your personal information. |
| Will I get banned for using Chapters Interactive Stories Mod Apk 6.3.4? | There is a low chance of getting banned for using Chapters Interactive Stories Mod Apk 6.3.4, as the game does not have a strict anti-cheat system or detection mechanism. However, you should still be cautious when using the modded version of the game, as you may get reported by other players or authors if they notice your unlimited diamonds or tickets. |
| Can I update Chapters Interactive Stories Mod Apk 6.3.4? | You can update Chapters Interactive Stories Mod Apk 6.3.4 by downloading the latest version of the modded apk file from the same website that you downloaded it from before. However, you should always back up your game data before updating, as you may lose your progress or settings if something goes wrong during the update process. |
| Can I play Chapters Interactive Stories Mod Apk 6.3.4 offline? | No, you cannot play Chapters Interactive Stories Mod Apk 6.3.4 offline, as the game requires an internet connection to load the stories and sync your data with the server. If you try to play the game offline, you may encounter errors or glitches that prevent you from playing properly. |
| Can I play Chapters Interactive Stories Mod Apk 6.3.4 on PC? | Yes, you can play Chapters Interactive Stories Mod Apk 6.3.4 on PC by using Android emulator software that allows you to run Android apps on your computer. Some of the popular Android emulators are BlueStacks, NoxPlayer, LDPlayer, and MEmu. |

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Angry Birds 2 Mod Apk O jogo de estilingue mais viciante com dinheiro infinito e desafios ilimitados.md b/spaces/1phancelerku/anime-remove-background/Angry Birds 2 Mod Apk O jogo de estilingue mais viciante com dinheiro infinito e desafios ilimitados.md deleted file mode 100644 index e63237a7715cb34c28ee6a57ad179a8732c41989..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Angry Birds 2 Mod Apk O jogo de estilingue mais viciante com dinheiro infinito e desafios ilimitados.md +++ /dev/null @@ -1,127 +0,0 @@ - -

Angry Birds 2 Mod Apk Dinheiro Infinito: How to Download and Play the Ultimate Bird Flinging Game

-

Are you a fan of Angry Birds, the most popular physics-based game series in the world? Do you want to experience a new level of fun and challenge with Angry Birds 2, the official sequel to the original game? Do you want to enjoy unlimited money, gems, lives, cards, and spells in Angry Birds 2? If you answered yes to any of these questions, then this article is for you!

-

In this article, we will tell you everything you need to know about Angry Birds 2 mod apk dinheiro infinito, a modified version of the game that gives you access to all the premium features for free. We will explain what is Angry Birds 2, what are its main features, what are the advantages of using Angry Birds 2 mod apk dinheiro infinito, how to download and install it on your Android device, and how to play it like a pro. By the end of this article, you will be ready to download and play Angry Birds 2 mod apk dinheiro infinito and have a blast with your feathered friends!

-

angry birds 2 mod apk dinheiro infinito


Download Zip ››››› https://jinyurl.com/2uNUJM



-

What is Angry Birds 2?

-

Angry Birds 2 is a puzzle video game developed by Rovio Entertainment and released in 2015. It is the direct sequel to the original Angry Birds game, which was launched in 2009 and became a global phenomenon. Angry Birds 2 follows the same premise as the previous games: you have to use a slingshot to launch a variety of birds at the structures and pigs that are trying to steal their eggs. However, Angry Birds 2 also introduces new features and improvements that make the game more exciting and challenging.

-

What are the main features of Angry Birds 2?

-

Angry Birds 2 has many features that make it stand out from other puzzle games. Here are some of them:

- -

What are the advantages of using Angry Birds 2 mod apk dinheiro infinito?

-

Angry Birds 2 mod apk dinheiro infinito is a modified version of the game that gives you unlimited access to all the premium features for free. By using this mod apk, you can enjoy the following advantages:

- -

How to download and install Angry Birds 2 mod apk dinheiro infinito?

-

If you want to download and install Angry Birds 2 mod apk dinheiro infinito on your Android device, you need to follow these simple steps:

-

Step 1: Enable unknown sources on your device

-

Before you can install any mod apk file on your device, you need to enable unknown sources in your device settings. This will allow you to install apps from sources other than the Google Play Store. To do this, follow these steps:

-
    -
  1. Go to your device settings and tap on Security or Privacy.
  2. -
  3. Find the option that says Unknown Sources or Install Unknown Apps and toggle it on.
  4. -
  5. A warning message will pop up. Tap on OK or Allow to confirm.
  6. -
-

Enable unknown sources

-

Step 2: Download the mod apk file from a trusted source

-

Next, you need to download the mod apk file from a trusted source. There are many websites that offer mod apk files, but not all of them are safe and reliable. Some of them may contain viruses, malware, or spyware that can harm your device or steal your personal information. Therefore, you need to be careful and choose a reputable source. One of the best sources that we recommend is [Angry Birds 2 Mod Apk Dinheiro Infinito], which is a website that provides high-quality and updated mod apk files for various games and apps. To download the mod apk file from this source, follow these steps:

-
    -
  1. Go to [Angry Birds 2 Mod Apk Dinheiro Infinito] using your browser.
  2. -
  3. Scroll down and find the download button that says Download Angry Birds 2 Mod Apk Dinheiro Infinito.
  4. -
  5. Tap on the download button and wait for the download to start.
  6. -
  7. The mod apk file will be downloaded to your device in a few minutes, depending on your internet speed.
  8. -
-

Download mod apk file

-

angry birds 2 hack apk dinheiro infinito
-angry birds 2 mod apk unlimited money and gems
-angry birds 2 apk mod tudo infinito
-angry birds 2 mod apk download grátis
-angry birds 2 mod apk atualizado 2023
-angry birds 2 apk mod mega hack
-angry birds 2 mod apk offline
-angry birds 2 mod apk sem anúncios
-angry birds 2 apk mod desbloqueado
-angry birds 2 mod apk versão mais recente
-angry birds 2 hack apk android
-angry birds 2 mod apk unlimited lives
-angry birds 2 apk mod estrelas infinitas
-angry birds 2 mod apk sem root
-angry birds 2 mod apk com obb
-angry birds 2 hack apk ios
-angry birds 2 mod apk unlimited pearls
-angry birds 2 apk mod energia infinita
-angry birds 2 mod apk sem verificação
-angry birds 2 mod apk revdl
-angry birds 2 hack apk online
-angry birds 2 mod apk unlimited black pearls
-angry birds 2 apk mod nível máximo
-angry birds 2 mod apk sem banimento
-angry birds 2 mod apk rexdl
-angry birds 2 hack apk no survey
-angry birds 2 mod apk unlimited spells
-angry birds 2 apk mod plumas infinitas
-angry birds 2 mod apk sem internet
-angry birds 2 mod apk happymod

-

Step 3: Locate and install the mod apk file on your device

-

After you have downloaded the mod apk file, you need to locate and install it on your device. To do this, follow these steps:

-
    -
  1. Go to your device file manager and find the folder where you downloaded the mod apk file. It is usually in the Downloads folder.
  2. -
  3. Tap on the mod apk file and a pop-up window will appear.
  4. -
  5. Tap on Install and wait for the installation to finish.
  6. -
  7. If another pop-up window appears asking for permissions, tap on Allow or Accept to grant them.
  8. -
-

Install mod apk file

-

Step 4: Launch the game and enjoy!

-

Congratulations! You have successfully installed Angry Birds 2 mod apk dinheiro infinito on your device. Now you can launch the game and enjoy all the modded features. To do this, follow these steps:

-
    -
  1. Go to your device app drawer and find the Angry Birds 2 icon.
  2. -
  3. Tap on the icon and wait for the game to load.
  4. -
  5. You will see a message that says "Angry Birds 2 Mod Apk Dinheiro Infinito by AngryBirds2ModApkDinheiroInfinito.com". Tap on OK or Continue to proceed.
  6. -
  7. You will be taken to the main menu of the game. You can choose to play offline or online, depending on your preference.
  8. -
  9. You will notice that you have unlimited money, gems, lives, cards, and spells in the game. You can use them as you wish and have fun!
  10. -
-

Launch game and enjoy

-

How to play Angry Birds 2 like a pro?

-

Now that you have downloaded and installed Angry Birds 2 mod apk dinheiro infinito, you might be wondering how to play it like a pro. Well, don't worry, we have some tips and tricks for you that will help you master the game and beat any level with ease. Here are some of them:

-

Tip 1: Choose the right bird for the right situation

-

One of the most important aspects of playing Angry Birds 2 is choosing the right bird for the right situation. Each bird has a different ability and score multiplier that can affect the outcome of the level. For example, Red can knock down structures with his strong impact, Chuck can speed up and cut through wood and glass, Bomb can explode and cause massive damage, Matilda can drop an egg bomb and fly upwards, The Blues can split into three and hit multiple targets, Silver can loop and smash downwards, and Terence can crush anything with his huge size. You can also use the Mighty Eagle to call for a powerful airstrike that can wipe out the entire level.

-

Therefore, you need to choose the best bird for each level based on the layout, the materials, the pigs, and the spells. You can also switch the order of the birds by tapping on their cards at the bottom of the screen. You should always try to use the bird that can cause the most damage and destruction with the least number of shots. This will help you earn more points and stars, as well as fill up the Destructometer faster.

-

Tip 2: Use the environment to your advantage

-

Another important aspect of playing Angry Birds 2 is using the environment to your advantage. The game has many environmental elements that can help you or hinder you in your quest to defeat the pigs. For example, there are flowers that can bounce your birds back, portals that can teleport your birds to different locations, TNT crates that can explode and cause chain reactions, fans that can blow your birds or objects away, rocks that can fall and crush the pigs, etc.

-

Therefore, you need to pay attention to the environment and use it wisely. You can use the flowers to redirect your birds or hit hard-to-reach pigs, you can use the portals to surprise the pigs or avoid obstacles, you can use the TNT crates to create massive explosions and clear large areas, you can use the fans to push your birds or objects towards the pigs, you can use the rocks to drop them on the pigs or structures, etc. You should also be careful of the environmental hazards that can harm your birds or prevent them from reaching their targets.

-

Tip 3: Fill up the Destructometer quickly

-

The Destructometer is a meter that fills up as you destroy objects and pigs in each level. When you fill up the Destructometer completely, you will earn an extra card or spell that you can use in the same level or save for later. The Destructometer also resets after each stage, so you have multiple chances to fill it up in each level.

-

Therefore, you should try to fill up the Destructometer as quickly as possible by destroying as much as possible with each bird. You should aim for weak spots, explosive objects, large structures, multiple pigs, etc., to cause more damage and destruction. You should also use spells wisely to boost your destruction and fill up the Destructometer faster. The more cards or spells you have, the more options and flexibility you have in completing the level.

-

Tip 4: Compete with other players in multiplayer mode

-

If you want to test your skills and challenge yourself further, you can compete with other players in multiplayer mode. In multiplayer mode, you can join a clan or create your own clan with your friends. By joining a clan, you can chat with other clan members, share tips and strategies, and participate in clan events. Clan events are special competitions where you have to work together with your clan members to complete a common goal and earn rewards.

-

You can also enter tournaments in multiplayer mode. Tournaments are daily or weekly competitions where you have to compete with other players from around the world in various levels. You have to pay a fee with gems or tickets to enter a tournament, and you have a time limit to play as many levels as you want. The more levels you complete and the higher your score, the higher your rank on the leaderboard. You can win prizes such as gems, feathers, hats, and chests based on your rank.

-

Competing with other players in multiplayer mode is a great way to improve your skills, learn new tricks, have fun, and earn rewards. You can also make new friends and join a community of Angry Birds fans.

-

Conclusion

-

Angry Birds 2 is a fantastic game that offers hours of fun and entertainment. It has amazing graphics, sound effects, and animations that bring the game to life. It has a variety of levels, modes, features, and characters that keep the game fresh and exciting. It has a simple and intuitive gameplay that anyone can enjoy and master.

-

However, if you want to take your gaming experience to the next level, you should try Angry Birds 2 mod apk dinheiro infinito. This mod apk gives you unlimited access to all the premium features of the game for free. You can have unlimited money, gems, lives, cards, and spells that you can use to beat any level with ease. You can also customize your birds with different hats and accessories that give them extra abilities and bonuses. You can also compete with other players in multiplayer mode and win amazing prizes.

-

So what are you waiting for? Download Angry Birds 2 mod apk dinheiro infinito today and join the ultimate bird flinging adventure!

-

FAQs

-

Here are some frequently asked questions and answers about Angry Birds 2 mod apk dinheiro infinito:

-

Q: Is Angry Birds 2 mod apk dinheiro infinito safe to use?

-

A: Yes, Angry Birds 2 mod apk dinheiro infinito is safe to use as long as you download it from a trusted source like [Angry Birds 2 Mod Apk Dinheiro Infinito]. This website provides high-quality and updated mod apk files that are free from viruses, malware, or spyware. However, you should always be careful when downloading and installing any mod apk file on your device and enable unknown sources on your device settings.

-

Q: Do I need to root my device to use Angry Birds 2 mod apk dinheiro infinito?

-

A: No, you do not need to root your device to use Angry Birds 2 mod apk dinheiro infinito. This mod apk works on any Android device without requiring root access. However, you should always backup your data before installing any mod apk file on your device in case anything goes wrong.

-

Q: Will I get banned from the game if I use Angry Birds 2 mod apk dinheiro infinito?

-

A: No, you will not get banned from the game if you use Angry Birds 2 mod apk dinheiro infinito. This mod apk is undetectable by the game servers and does not interfere with the game's functionality or performance. However, you should always use this mod apk responsibly and not abuse it or cheat in multiplayer mode.

-

Q: Can I update Angry Birds 2 mod apk dinheiro infinito?

-

A: Yes, you can update Angry Birds 2 mod apk dinheiro infinito whenever there is a new version available. However, you should always download the latest version of the mod apk from [Angry Birds 2 Mod Apk Dinheiro Infinito] and not from the Google Play Store or other sources. This will ensure that you get the most updated and compatible version of the mod apk.

-

Q: Can I play Angry Birds 2 mod apk dinheiro infinito offline?

-

A: Yes, you can play Angry Birds 2 mod apk dinheiro infinito offline without requiring an internet connection. However, some features of the game such as multiplayer mode, tournaments, clan events, etc., require an internet connection to work properly. Therefore, you should always connect to a stable and secure internet connection when playing these features.

-
-
\ No newline at end of file diff --git a/spaces/1phancelerku/anime-remove-background/Beach Buggy Racing 2 Mod APK How to Download Aplikasi and Unlock All Features.md b/spaces/1phancelerku/anime-remove-background/Beach Buggy Racing 2 Mod APK How to Download Aplikasi and Unlock All Features.md deleted file mode 100644 index cd3fca6b8ec45b05bd982c5655370e4d0889a836..0000000000000000000000000000000000000000 --- a/spaces/1phancelerku/anime-remove-background/Beach Buggy Racing 2 Mod APK How to Download Aplikasi and Unlock All Features.md +++ /dev/null @@ -1,164 +0,0 @@ -
-

Download Aplikasi Beach Buggy Racing 2 Mod Apk

-

If you are looking for a fun and exciting kart racing game on your Android device, you should definitely check out Beach Buggy Racing 2. This is a sequel to the popular Beach Buggy Racing, which has over 100 million downloads on Google Play. In this game, you can race against drivers and cars from around the world, explore different tracks and environments, collect and upgrade power-ups, and customize your own car and driver.

-

download aplikasi beach buggy racing 2 mod apk


Download ✯✯✯ https://jinyurl.com/2uNSTW



-

But what if you want to enjoy the game without any limitations or restrictions? What if you want to have unlimited money, gems, tickets, and power-ups? What if you want to unlock all the cars, drivers, and tracks in the game? Well, there is a way to do that. You can download aplikasi Beach Buggy Racing 2 mod apk, which is a modified version of the original game that gives you access to all the features and content that you want.

-

In this article, we will tell you what is Beach Buggy Racing 2, why you should download its mod apk, how to download and install it on your device, some tips and tricks to play it better, and a review of the game. So, let's get started!

-

What is Beach Buggy Racing 2?

-

Beach Buggy Racing 2 is a kart racing game developed by Vector Unit, the same studio behind other racing games like Riptide GP and Hydro Thunder Hurricane. It was released in December 2018 for Android and iOS devices, and later for PC and consoles. It is a free-to-play game with in-app purchases.

-

Beach Buggy Racing 2 is a fully 3D off-road kart racing game with amazing physics, detailed cars and characters, and spectacular weapons. It has a variety of game modes, such as Adventure mode, Championships, Races, Drift Attacks, Firework Fury, and more. You can also create your own custom game modes with different power-ups, race rules, lap counts, and more.

-

You can choose from over 55 cars in the game, ranging from beach buggies to monster trucks to formula supercars. You can also collect over 45 power-ups in the game, such as Chain Lightning, Donut Tires, Boost Juice, Killer Bees, and more. Each power-up has its own unique effect and can be upgraded to make it more powerful.

-

How to download beach buggy racing 2 mod apk for free
-Beach buggy racing 2 mod apk unlimited money and gems
-Best tips and tricks for beach buggy racing 2 mod apk
-Beach buggy racing 2 mod apk latest version 2023
-Download beach buggy racing 2 hack mod apk
-Beach buggy racing 2 mod apk offline mode
-Beach buggy racing 2 mod apk all cars unlocked
-Beach buggy racing 2 mod apk gameplay and review
-Beach buggy racing 2 mod apk android and ios
-Download beach buggy racing 2 mod apk from APKdone[^1^]
-Beach buggy racing 2 mod apk features and benefits
-Beach buggy racing 2 mod apk vs original version
-Beach buggy racing 2 mod apk no root required
-Beach buggy racing 2 mod apk online multiplayer
-Download beach buggy racing 2 mod apk with obb data
-Beach buggy racing 2 mod apk new update and patch notes
-Beach buggy racing 2 mod apk cheats and codes
-Beach buggy racing 2 mod apk fun and addictive game
-Beach buggy racing 2 mod apk graphics and sound quality
-Download beach buggy racing 2 mod apk safely and securely
-Beach buggy racing 2 mod apk pros and cons
-Beach buggy racing 2 mod apk system requirements and compatibility
-Beach buggy racing 2 mod apk download link and instructions
-Beach buggy racing 2 mod apk ratings and feedbacks
-Download beach buggy racing 2 mod apk for PC and Mac
-Beach buggy racing 2 mod apk challenges and missions
-Beach buggy racing 2 mod apk customization and upgrades
-Beach buggy racing 2 mod apk characters and power-ups
-Download beach buggy racing 2 mod apk from Google Play Store or App Store
-Beach buggy racing 2 mod apk alternatives and similar games

-

You can also build your own team of racers in the game. You can recruit new drivers from the Beach Buggy Racing League, each with their own special ability. For example, Rez can launch beach balls that spin out other racers, Disco Jimmy can make other racers dance and stop racing, Mikka can create holograms of herself to confuse other racers, and so on.

-

You can also test your skills against other players from around the world in online competitions and tournaments. You can race against player avatars in daily races or compete in live tournaments and special events to win exclusive in-game prizes.

-

Features of Beach Buggy Racing 2

-

Here are some of the main features of Beach Buggy Racing 2 that make it a great kart racing game:

- -

Why download Beach Buggy Racing 2 mod apk?

-

Beach Buggy Racing 2 is a free-to-play game, but it also has some limitations and restrictions that can affect your gaming experience. For example, you need to spend real money to buy gems, which are the premium currency in the game. Gems are used to unlock new cars, drivers, power-ups, and tracks. You also need to spend tickets, which are the energy system in the game, to play certain game modes. Tickets are replenished over time or by watching ads.

-

If you don't want to spend money or wait for tickets, you can download aplikasi Beach Buggy Racing 2 mod apk, which is a modified version of the original game that gives you unlimited resources and features. With Beach Buggy Racing 2 mod apk, you can enjoy the following benefits:

- -

How to download and install Beach Buggy Racing 2 mod apk?

-

If you are interested in downloading aplikasi Beach Buggy Racing 2 mod apk, you need to follow these simple steps:

-

Step 1: Enable unknown sources

-

Before you can install any mod apk file on your Android device, you need to enable unknown sources in your settings. This will allow you to install apps from sources other than Google Play. To do this, go to Settings > Security > Unknown Sources and toggle it on.

-

Step 2: Download the mod apk file

-

Next, you need to download the mod apk file of Beach Buggy Racing 2 from a reliable source. You can search for it online or use this link to download it directly. The file size is about 150 MB, so make sure you have enough space on your device.

-

Step 3: Install the mod apk file

-

Once you have downloaded the mod apk file, locate it in your file manager and tap on it. You will see a prompt asking you to install the app. Tap on Install and wait for the installation process to finish.
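Alternatively, if your phone is connected to a computer with USB debugging enabled, you can sideload the same file with adb instead of tapping it on the device. This is optional and only a sketch; the file name is a placeholder.

```python
import subprocess

# Placeholder file name -- point this at the APK you actually downloaded.
apk_path = "beach-buggy-racing-2-mod.apk"

# "adb install -r" installs the APK, replacing any existing installation.
result = subprocess.run(
    ["adb", "install", "-r", apk_path],
    capture_output=True,
    text=True,
)
print(result.stdout or result.stderr)
```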

-

Step 4: Launch the game and enjoy

-

After the installation is done, you can launch the game from your app drawer or home screen. You will see that you have unlimited resources and features in the game. You can now enjoy Beach Buggy Racing 2 without any limitations or restrictions.

-

Tips and tricks for Beach Buggy Racing 2

-

To help you play Beach Buggy Racing 2 better, here are some tips and tricks that you should know:

-

Master the drift and powerslide

-

One of the most important skills in Beach Buggy Racing 2 is drifting and powersliding. Drifting is when you turn your car sharply while accelerating, causing your tires to lose traction and slide sideways. Powersliding is when you tap on the brake button while drifting, causing your car to slide even more and gain more speed. Drifting and powersliding are useful for taking sharp turns, avoiding obstacles, and gaining boost. Boost is a meter that fills up as you drift and powerslide, and when it is full, you can tap on the boost button to get a burst of speed. You can also use boost to ram into other racers and knock them out of the way.

-

Use the driver's ability at the right time

-

As mentioned earlier, each driver in Beach Buggy Racing 2 has their own special ability that can give them an edge in the race. However, these abilities have a cooldown time, so you need to use them wisely and at the right time. For example, Rez's beach ball ability can be used to block other racers behind you, Disco Jimmy's dance ability can be used to distract other racers in front of you, Mikka's hologram ability can be used to confuse other racers around you, and so on. You can also combine your driver's ability with your power-ups to create more chaos and fun.

-

Don't fall into the trap of other racers

-

Beach Buggy Racing 2 is not just about racing; it is also about sabotaging and surviving. Other racers will try to use their power-ups and abilities to slow you down, damage your car, or make you lose control, so you need to stay alert and avoid falling into their traps. For example, watch out for oil slicks, banana peels, fireballs, rockets, mines, and other hazards on the track. You can also use your own power-ups and abilities to counter or dodge their attacks: the shield power-up protects you from incoming projectiles, the jump power-up lets you leap over obstacles, the magnet power-up attracts coins and gems, and so on.

-

Build the best deck of power-ups

-

Before each race, you can choose up to three power-ups to bring with you. These power-ups are randomly assigned to you during the race, so you need to choose wisely and build the best deck of power-ups that suits your playstyle and strategy. For example, if you want to be more aggressive and offensive, you can choose power-ups like rockets, fireballs, chain lightning, killer bees, etc. If you want to be more defensive and supportive, you can choose power-ups like shields, repair kits, boost juice, donut tires, etc. If you want to be more balanced and versatile, you can choose power-ups like magnets, jumps, bubbles, etc.

-

Grab those fast bubbles and shortcuts

-

Another way to gain an advantage in Beach Buggy Racing 2 is to grab those fast bubbles and shortcuts on the track. Fast bubbles are blue spheres that give you a temporary speed boost when you drive through them. They are usually located on straight paths or ramps that can help you gain some distance or catch up with other racers. Shortcuts are hidden or alternative paths that can help you avoid obstacles or take a faster route. They are usually marked by arrows or signs that indicate where they lead. However, some shortcuts may also have risks or traps that can backfire on you if you are not careful.

-

Review of Beach Buggy Racing 2

-

Now that we have covered what Beach Buggy Racing 2 is, why you might want its mod apk, how to download and install it on your device, and some tips and tricks to play it better, let's take a look at a review of the game. We will discuss the pros and cons of Beach Buggy Racing 2, the user ratings and feedback it has received, and how it compares with other kart racing games.

-

Pros and cons of Beach Buggy Racing 2

-

Beach Buggy Racing 2 is a fun and addictive kart racing game that offers a lot of content and features for players to enjoy. However, it also has some drawbacks that may affect your gaming experience. Here are the pros and cons of Beach Buggy Racing 2:

| Pros | Cons |
| --- | --- |
| Fun, addictive kart racing with plenty of content | Can feel repetitive, and difficulty sometimes swings between too easy and too hard |
| Smooth, responsive controls and colorful graphics | Gems and tickets nudge you toward ads or in-app purchases |
| Large roster of cars, drivers, power-ups, and tracks | Occasional bugs, glitches, and performance issues on some devices |
| Online multiplayer, cross-platform play, and regular updates | Some races can feel unfair because of power-up chaos |

User ratings and feedback of Beach Buggy Racing 2

-

Beach Buggy Racing 2 has received mostly positive ratings and feedback from users who have played it. On Google Play, it has a rating of 4.4 out of 5 stars based on over 1.1 million reviews. On the App Store, it holds 4.7 out of 5 stars from more than 30,000 reviews. On Steam, it is rated 9 out of 10 across over 300 reviews.

-

Most users praise the game for its fun and addictive gameplay, its variety and quality of content and features, its smooth and responsive controls, its beautiful and colorful graphics, and its online multiplayer and cross-platform support. Some users also appreciate the game for its regular updates, its fair and balanced monetization system, and its friendly and helpful customer service.

-

However, some users also criticize the game for problems they run into while playing. Common complaints include difficulty that feels either too easy or too hard, races that become repetitive or feel unfair, occasional bugs and glitches, intrusive ads and in-app purchases, and compatibility or performance issues on some devices.

-

Comparison with other kart racing games

-

Beach Buggy Racing 2 is not the only kart racing game available on the market. There are other similar games that you can try if you are looking for more options or alternatives. Some of these games include:

- -

These are some of the most popular and well-known kart racing games that you can compare with Beach Buggy Racing 2. Each game has its own strengths and weaknesses, so you can choose the one that suits your preferences and expectations.

-

Conclusion and FAQs

-

In conclusion, Beach Buggy Racing 2 is a fun and exciting kart racing game that offers a lot of content and features for players to enjoy. It has realistic and thrilling kart racing physics, stunning and colorful graphics and animations, over 55 cars to collect and customize, over 45 power-ups to use and upgrade, over 15 drivers to recruit and team up with, over 40 tracks to explore and race on, various game modes and challenges, online multiplayer and leaderboards, daily rewards and achievements, cross-platform compatibility and cloud saving, and more.

However, Beach Buggy Racing 2 also has some limitations and restrictions that can affect your gaming experience. You need to spend real money to buy gems, the premium currency, and you need tickets, the energy system, to play certain game modes, which means watching ads or waiting for tickets to replenish.

If you don't want to deal with these issues, you can download the Beach Buggy Racing 2 mod apk, a modified version of the original game that gives you unlimited resources and features: unlimited money, gems, tickets, and power-ups, every car, driver, track, and power-up unlocked, and no ads or interruptions. To install it, enable installation from unknown sources in your settings, download the mod apk file from a reliable source, install it on your device, and launch the game.

We also shared some tips and tricks to play Beach Buggy Racing 2 better, such as mastering the drift and powerslide, using the driver's ability at the right time, avoiding the traps of other racers, building the best deck of power-ups, and grabbing those fast bubbles and shortcuts. Finally, we reviewed the game, covering its pros and cons, user ratings and feedback, and how it compares with other kart racing games.

We hope that this article has helped you learn more about Beach Buggy Racing 2 and how to download its mod apk. If you have any questions or comments about the game or the mod apk, feel free to leave them below. We will try to answer them as soon as possible. Here are some FAQs that you may find useful:

Q: Is Beach Buggy Racing 2 mod apk safe to use?

-

A: Yes, Beach Buggy Racing 2 mod apk is safe to use as long as you download it from a trusted source. However, you should always be careful when downloading any mod apk file from unknown sources. You should also scan the file with an antivirus or malware detector before installing it on your device.

-

Q: Is Beach Buggy Racing 2 mod apk compatible with my device?

-

A: Beach Buggy Racing 2 mod apk is compatible with most Android devices that have Android 4.4 or higher. However, some devices may have compatibility or performance issues depending on their specifications or settings. You should always check the requirements and compatibility of the mod apk file before downloading it.

-

Q: Can I play Beach Buggy Racing 2 mod apk online with other players?

-

A: Yes, you can play Beach Buggy Racing 2 mod apk online with other players who have the same version of the mod apk file. However, you may not be able to play online with players who have the original version of the game or a different version of the mod apk file. You should always check the version and compatibility of the mod apk file before playing online.

-

Q: Can I update Beach Buggy Racing 2 mod apk when a new version is released?

-

A: Yes, you can update Beach Buggy Racing 2 mod apk when a new version is released by downloading and installing the new version of the mod apk file from a reliable source. However, you may lose your progress or data if you update without backing up your files. You should always backup your files before updating any mod apk file.

-

Q: Can I restore my progress or data if I uninstall Beach Buggy Racing 2 mod apk?

-

A: Yes, you can restore your progress or data if you uninstall Beach Buggy Racing 2 mod apk by logging in with your Google Play account or Facebook account. However, you may lose your progress or data if you uninstall without logging in or backing up your files. You should always log in or backup your files before uninstalling any mod apk file.

-
-
\ No newline at end of file diff --git a/spaces/232labs/VToonify/vtoonify/model/stylegan/__init__.py b/spaces/232labs/VToonify/vtoonify/model/stylegan/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_clapemb_32khz.py b/spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_clapemb_32khz.py deleted file mode 100644 index 64ad3f8c77afe1ab5908e407ad14d4879e1b1ad1..0000000000000000000000000000000000000000 --- a/spaces/AIConsultant/MusicGen/audiocraft/grids/musicgen/musicgen_clapemb_32khz.py +++ /dev/null @@ -1,32 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from ._explorers import LMExplorer -from ...environment import AudioCraftEnvironment - - -@LMExplorer -def explorer(launcher): - partitions = AudioCraftEnvironment.get_slurm_partitions(['team', 'global']) - launcher.slurm_(gpus=32, partition=partitions) - launcher.bind_(solver='musicgen/musicgen_base_32khz') - # replace this by the desired music dataset - launcher.bind_(dset='internal/music_400k_32khz') - launcher.bind_(conditioner='clapemb2music') - - fsdp = {'autocast': False, 'fsdp.use': True} - cache_path = {'conditioners.description.clap.cache_path': - '/fsx-audio-craft-llm/jadecopet/experiments/audiocraft/caches/clap_embed_music'} - text_wav_training_opt = {'conditioners.description.clap.text_p': 0.5} - - launcher.bind_(fsdp) - - launcher.slurm_(gpus=32).bind_(label='32gpus') - with launcher.job_array(): - launcher() - launcher(text_wav_training_opt) - launcher(cache_path) - launcher(cache_path, text_wav_training_opt) diff --git a/spaces/ARTeLab/ARTeLab-SummIT/app.py b/spaces/ARTeLab/ARTeLab-SummIT/app.py deleted file mode 100644 index 55c0a5f84fed49ba80c5c3681e4bb26668958bf9..0000000000000000000000000000000000000000 --- a/spaces/ARTeLab/ARTeLab-SummIT/app.py +++ /dev/null @@ -1,157 +0,0 @@ -import streamlit as st -import os - -from transformers import AutoTokenizer -from transformers import AutoModelForSeq2SeqLM -from transformers import pipeline -from transformers import set_seed - -debug = False - -MODELS = [ - "ARTeLab/mbart-summarization-fanpage", - "ARTeLab/mbart-summarization-ilpost", - "ARTeLab/mbart-summarization-mlsum", - "ARTeLab/it5-summarization-mlsum", - "ARTeLab/it5-summarization-ilpost", - "ARTeLab/it5-summarization-fanpage" -] - -DEFAULT_TEXT: str = """(Fanpage) Dopo oltre mezzo secolo, il mistero della Natività di Caravaggio resta intatto. L'opera, intitolata la "Natività con i Santi Lorenzo e Francesco d'Assisi", fu trafugata la notte tra il 17 e il 18 ottobre 1969 dall'Oratorio di San Lorenzo a Palermo e tuttora non è stata ancora recuperata. L'olio su tela realizzato da Caravaggio, inserito dagli investigatori nella top ten mondiale delle opere d'arte trafugate e mai più ritrovate, ha un valore di mercato che si aggirerebbe oggi intorno ai 20 milioni di dollari secondo l'FBI. La sua storia è avvolta nel mistero e dopo cinquantuno anni ancora non è stata risolta, dopo il furto della mafia nel 1969 e forse ormai distrutta. L'unica certezza è che nemmeno questo Natale potremo ammirare l'opera raffigurante la nascita di Cristo del grande genio pittorico italiano. E forse, secondo i più pessimisti, non ci riusciremo mai più. 
Nella notte tra il 17 e il 18 ottobre, nel cuore di Palermo, i boss di Cosa Nostra si intrufolarono nell'Oratorio di San Lorenzo e arrotolarono la "Natività con i Santi Lorenzo e Francesco d'Assisi" di Caravaggio in malo modo, facendo sgranare la tela. Una delle più alte testimonianza dell'arte di ogni tempo fu distrutta così. Ma come facciamo a sapere oggi che la tela è andata distrutta? Fu il pentito Francesco Marino Mannoia, durante il processo Andreotti nel 1996 a raccontare del presunto disastro di un gioiello arrotolato in fretta e portato via in segno di sfregio. Ma questa versione stride con quella di un altro pentito che ricorda il quadro affisso ai summit di mafia, come un trofeo, mentre sui giornali si sussurrava di losche ma non provate trattative da 60 miliardi di vecchie lire fra mediatori e trafficanti. Nel 2017, il mafioso Gaetano Grado asserisce che la tela sarebbe stata nascosta, ma all'estero: nel 1970 il boss Badalamenti l'avrebbe trasferita in Svizzera in cambio di una notevole somma di franchi ad un antiquario svizzero, giunto a Palermo per definire l'affare. Grado riferisce anche che Badalamenti gli avrebbe detto che il quadro era stato scomposto per essere venduto sul mercato clandestino.""" - - -class TextSummarizer: - def __init__(self): - self.tokenizer = None - self.model = None - self.generator = None - self.model_loaded = None - set_seed(42) - - def load(self, model_name): - os.environ["TOKENIZERS_PARALLELISM"] = "false" - self.tokenizer = AutoTokenizer.from_pretrained(model_name) - self.model = AutoModelForSeq2SeqLM.from_pretrained(model_name) - self.generator = pipeline( - "text2text-generation", model=self.model, tokenizer=self.tokenizer - ) - self.model_loaded = model_name - - def summarize(self, model_name, input_text, generate_kwargs) -> str: - if not self.generator or self.model_loaded != model_name: - with st.spinner("meanwhile: downloading/loading selected model...please don't go :("): - self.load(model_name) - return self.generator( - input_text, return_tensors=False, return_text=True, **generate_kwargs - )[0].get("generated_text") - - -@st.cache(allow_output_mutation=True) -def instantiate_generator(): - summarizer = TextSummarizer() - return summarizer - - -def main(): - st.set_page_config( # Alternate names: setup_page, page, layout - page_title="ARTeLab SummIT", - layout="wide", # Can be "centered" or "wide". In the future also "dashboard", etc. - initial_sidebar_state="expanded", # Can be "auto", "expanded", "collapsed" - page_icon="📰", # String, anything supported by st.image, or None. - ) - - with open("style.css") as f: - st.markdown(f"", unsafe_allow_html=True) - - generator = instantiate_generator() - - st.markdown( - """ - - """, - unsafe_allow_html=True, - ) - st.sidebar.markdown("""# ARTeLab SummIT""") - st.sidebar.image("fl.png", width=220) - st.sidebar.markdown( - """ - * Create summaries of Italian news articles. - * Copy paste any Italian news text and press the Generate Summary botton. 
- """ - ) - st.sidebar.title("Parameters:") - - MODEL = st.sidebar.selectbox("Choose model", index=1, options=MODELS) - - min_length = st.sidebar.number_input( - "Min length", min_value=10, max_value=150, value=40 - ) - max_length = st.sidebar.number_input( - "Max length", min_value=20, max_value=250, value=142 - ) - no_repeat_ngram_size = st.sidebar.number_input( - "No repeat NGram size", min_value=1, max_value=5, value=3 - ) - - if sampling_mode := st.sidebar.selectbox( - "select a Mode", index=0, options=["Beam Search", "Top-k Sampling"] - ): - if sampling_mode == "Beam Search": - num_beams = st.sidebar.number_input( - "Num beams", min_value=1, max_value=10, value=4 - ) - length_penalty = st.sidebar.number_input( - "Length penalty", min_value=0.0, max_value=5.0, value=1.5, step=0.1 - ) - params = { - "min_length": min_length, - "max_length": max_length, - "no_repeat_ngram_size": no_repeat_ngram_size, - "num_beams": num_beams, - "early_stopping": True, - "length_penalty": length_penalty, - "num_return_sequences": 1, - } - else: - top_k = st.sidebar.number_input( - "Top K", min_value=0, max_value=100, value=50 - ) - top_p = st.sidebar.number_input( - "Top P", min_value=0.0, max_value=1.0, value=0.9, step=0.05 - ) - temperature = st.sidebar.number_input( - "Temperature", min_value=0.0, max_value=1.0, value=0.3, step=0.05 - ) - params = { - "min_length": min_length, - "max_length": max_length, - "no_repeat_ngram_size": no_repeat_ngram_size, - "do_sample": True, - "top_k": top_k, - "top_p": top_p, - "temperature": temperature, - "num_return_sequences": 1, - } - - input_text = st.text_area("Enter an Italian news text", DEFAULT_TEXT, height=450) - - if st.button("Generate summary"): - - with st.spinner("Generating summary ..."): - - response = generator.summarize(MODEL, input_text, params) - - st.header("Summary:") - st.markdown(response) - - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1d50.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1d50.py deleted file mode 100644 index 015aaa3d8182cae50f392d7103e24e8ac8a188aa..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/_base_/models/resnetv1d50.py +++ /dev/null @@ -1,17 +0,0 @@ -# model settings -model = dict( - type='ImageClassifier', - backbone=dict( - type='ResNetV1d', - depth=50, - num_stages=4, - out_indices=(3, ), - style='pytorch'), - neck=dict(type='GlobalAveragePooling'), - head=dict( - type='LinearClsHead', - num_classes=1000, - in_channels=2048, - loss=dict(type='CrossEntropyLoss', loss_weight=1.0), - topk=(1, 5), - )) diff --git a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32_in1k.py b/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32_in1k.py deleted file mode 100644 index c32f333b67c255c6101469323636bf242eebb8da..0000000000000000000000000000000000000000 --- a/spaces/ATang0729/Forecast4Muses/Model/Model6/Model6_2_ProfileRecogition/mmpretrain/configs/resnet/resnet50_8xb32_in1k.py +++ /dev/null @@ -1,4 +0,0 @@ -_base_ = [ - '../_base_/models/resnet50.py', '../_base_/datasets/imagenet_bs32.py', - '../_base_/schedules/imagenet_bs256.py', '../_base_/default_runtime.py' -] diff --git 
a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/data/audio_dataset.py b/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/data/audio_dataset.py deleted file mode 100644 index cf21422ea0059cb2d6553f93e608b8f9fa0d3a50..0000000000000000000000000000000000000000 --- a/spaces/AbandonedMuse/UnlimitedMusicGen/audiocraft/data/audio_dataset.py +++ /dev/null @@ -1,525 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import copy -from concurrent.futures import ThreadPoolExecutor, Future -from dataclasses import dataclass, fields -from contextlib import ExitStack -import gzip -import json -import logging -import os -from pathlib import Path -import random -import sys -import typing as tp - -import torch -import torch.nn.functional as F - -from .audio import audio_read, audio_info -from .audio_utils import convert_audio -from .zip import PathInZip - -try: - import dora -except ImportError: - dora = None # type: ignore - - -@dataclass(order=True) -class BaseInfo: - - @classmethod - def _dict2fields(cls, dictionary: dict): - return { - field.name: dictionary[field.name] - for field in fields(cls) if field.name in dictionary - } - - @classmethod - def from_dict(cls, dictionary: dict): - _dictionary = cls._dict2fields(dictionary) - return cls(**_dictionary) - - def to_dict(self): - return { - field.name: self.__getattribute__(field.name) - for field in fields(self) - } - - -@dataclass(order=True) -class AudioMeta(BaseInfo): - path: str - duration: float - sample_rate: int - amplitude: tp.Optional[float] = None - weight: tp.Optional[float] = None - # info_path is used to load additional information about the audio file that is stored in zip files. - info_path: tp.Optional[PathInZip] = None - - @classmethod - def from_dict(cls, dictionary: dict): - base = cls._dict2fields(dictionary) - if 'info_path' in base and base['info_path'] is not None: - base['info_path'] = PathInZip(base['info_path']) - return cls(**base) - - def to_dict(self): - d = super().to_dict() - if d['info_path'] is not None: - d['info_path'] = str(d['info_path']) - return d - - -@dataclass(order=True) -class SegmentInfo(BaseInfo): - meta: AudioMeta - seek_time: float - n_frames: int # actual number of frames without padding - total_frames: int # total number of frames, padding included - sample_rate: int # actual sample rate - - -DEFAULT_EXTS = ['.wav', '.mp3', '.flac', '.ogg', '.m4a'] - -logger = logging.getLogger(__name__) - - -def _get_audio_meta(file_path: str, minimal: bool = True) -> AudioMeta: - """AudioMeta from a path to an audio file. - - Args: - file_path (str): Resolved path of valid audio file. - minimal (bool): Whether to only load the minimal set of metadata (takes longer if not). - Returns: - AudioMeta: Audio file path and its metadata. - """ - info = audio_info(file_path) - amplitude: tp.Optional[float] = None - if not minimal: - wav, sr = audio_read(file_path) - amplitude = wav.abs().max().item() - return AudioMeta(file_path, info.duration, info.sample_rate, amplitude) - - -def _resolve_audio_meta(m: AudioMeta, fast: bool = True) -> AudioMeta: - """If Dora is available as a dependency, try to resolve potential relative paths - in list of AudioMeta. This method is expected to be used when loading meta from file. - - Args: - m (AudioMeta): Audio meta to resolve. 
- fast (bool): If True, uses a really fast check for determining if a file is already absolute or not. - Only valid on Linux/Mac. - Returns: - AudioMeta: Audio meta with resolved path. - """ - def is_abs(m): - if fast: - return str(m)[0] == '/' - else: - os.path.isabs(str(m)) - - if not dora: - return m - - if not is_abs(m.path): - m.path = dora.git_save.to_absolute_path(m.path) - if m.info_path is not None and not is_abs(m.info_path.zip_path): - m.info_path.zip_path = dora.git_save.to_absolute_path(m.path) - return m - - -def find_audio_files(path: tp.Union[Path, str], - exts: tp.List[str] = DEFAULT_EXTS, - resolve: bool = True, - minimal: bool = True, - progress: bool = False, - workers: int = 0) -> tp.List[AudioMeta]: - """Build a list of AudioMeta from a given path, - collecting relevant audio files and fetching meta info. - - Args: - path (str or Path): Path to folder containing audio files. - exts (list of str): List of file extensions to consider for audio files. - minimal (bool): Whether to only load the minimal set of metadata (takes longer if not). - progress (bool): Whether to log progress on audio files collection. - workers (int): number of parallel workers, if 0, use only the current thread. - Returns: - List[AudioMeta]: List of audio file path and its metadata. - """ - audio_files = [] - futures: tp.List[Future] = [] - pool: tp.Optional[ThreadPoolExecutor] = None - with ExitStack() as stack: - if workers > 0: - pool = ThreadPoolExecutor(workers) - stack.enter_context(pool) - - if progress: - print("Finding audio files...") - for root, folders, files in os.walk(path, followlinks=True): - for file in files: - full_path = Path(root) / file - if full_path.suffix.lower() in exts: - audio_files.append(full_path) - if pool is not None: - futures.append(pool.submit(_get_audio_meta, str(audio_files[-1]), minimal)) - if progress: - print(format(len(audio_files), " 8d"), end='\r', file=sys.stderr) - - if progress: - print("Getting audio metadata...") - meta: tp.List[AudioMeta] = [] - for idx, file_path in enumerate(audio_files): - try: - if pool is None: - m = _get_audio_meta(str(file_path), minimal) - else: - m = futures[idx].result() - if resolve: - m = _resolve_audio_meta(m) - except Exception as err: - print("Error with", str(file_path), err, file=sys.stderr) - continue - meta.append(m) - if progress: - print(format((1 + idx) / len(audio_files), " 3.1%"), end='\r', file=sys.stderr) - meta.sort() - return meta - - -def load_audio_meta(path: tp.Union[str, Path], - resolve: bool = True, fast: bool = True) -> tp.List[AudioMeta]: - """Load list of AudioMeta from an optionally compressed json file. - - Args: - path (str or Path): Path to JSON file. - resolve (bool): Whether to resolve the path from AudioMeta (default=True). - fast (bool): activates some tricks to make things faster. - Returns: - List[AudioMeta]: List of audio file path and its total duration. - """ - open_fn = gzip.open if str(path).lower().endswith('.gz') else open - with open_fn(path, 'rb') as fp: # type: ignore - lines = fp.readlines() - meta = [] - for line in lines: - d = json.loads(line) - m = AudioMeta.from_dict(d) - if resolve: - m = _resolve_audio_meta(m, fast=fast) - meta.append(m) - return meta - - -def save_audio_meta(path: tp.Union[str, Path], meta: tp.List[AudioMeta]): - """Save the audio metadata to the file pointer as json. - - Args: - path (str or Path): Path to JSON file. - metadata (list of BaseAudioMeta): List of audio meta to save. 
- """ - Path(path).parent.mkdir(exist_ok=True, parents=True) - open_fn = gzip.open if str(path).lower().endswith('.gz') else open - with open_fn(path, 'wb') as fp: # type: ignore - for m in meta: - json_str = json.dumps(m.to_dict()) + '\n' - json_bytes = json_str.encode('utf-8') - fp.write(json_bytes) - - -class AudioDataset: - """Base audio dataset. - - The dataset takes a list of AudioMeta and create a dataset composed of segments of audio - and potentially additional information, by creating random segments from the list of audio - files referenced in the metadata and applying minimal data pre-processing such as resampling, - mixing of channels, padding, etc. - - If no segment_duration value is provided, the AudioDataset will return the full wav for each - audio file. Otherwise, it will randomly sample audio files and create a segment of the specified - duration, applying padding if required. - - By default, only the torch Tensor corresponding to the waveform is returned. Setting return_info=True - allows to return a tuple containing the torch Tensor and additional metadata on the segment and the - original audio meta. - - Args: - meta (tp.List[AudioMeta]): List of audio files metadata. - segment_duration (float): Optional segment duration of audio to load. - If not specified, the dataset will load the full audio segment from the file. - shuffle (bool): Set to `True` to have the data reshuffled at every epoch. - sample_rate (int): Target sample rate of the loaded audio samples. - channels (int): Target number of channels of the loaded audio samples. - sample_on_duration (bool): Set to `True` to sample segments with probability - dependent on audio file duration. This is only used if `segment_duration` is provided. - sample_on_weight (bool): Set to `True` to sample segments using the `weight` entry of - `AudioMeta`. If `sample_on_duration` is also True, the actual weight will be the product - of the file duration and file weight. This is only used if `segment_duration` is provided. - min_segment_ratio (float): Minimum segment ratio to use when the audio file - is shorter than the desired segment. - max_read_retry (int): Maximum number of retries to sample an audio segment from the dataset. - return_info (bool): Whether to return the wav only or return wav along with segment info and metadata. - min_audio_duration (tp.Optional[float], optional): Minimum audio file duration, in seconds, if provided - audio shorter than this will be filtered out. - max_audio_duration (tp.Optional[float], optional): Maximal audio file duration in seconds, if provided - audio longer than this will be filtered out. - """ - def __init__(self, - meta: tp.List[AudioMeta], - segment_duration: tp.Optional[float] = None, - shuffle: bool = True, - num_samples: int = 10_000, - sample_rate: int = 48_000, - channels: int = 2, - pad: bool = True, - sample_on_duration: bool = True, - sample_on_weight: bool = True, - min_segment_ratio: float = 0.5, - max_read_retry: int = 10, - return_info: bool = False, - min_audio_duration: tp.Optional[float] = None, - max_audio_duration: tp.Optional[float] = None - ): - assert len(meta) > 0, 'No audio meta provided to AudioDataset. Please check loading of audio meta.' 
- assert segment_duration is None or segment_duration > 0 - assert segment_duration is None or min_segment_ratio >= 0 - logging.debug(f'sample_on_duration: {sample_on_duration}') - logging.debug(f'sample_on_weight: {sample_on_weight}') - logging.debug(f'pad: {pad}') - logging.debug(f'min_segment_ratio: {min_segment_ratio}') - - self.segment_duration = segment_duration - self.min_segment_ratio = min_segment_ratio - self.max_audio_duration = max_audio_duration - self.min_audio_duration = min_audio_duration - if self.min_audio_duration is not None and self.max_audio_duration is not None: - assert self.min_audio_duration <= self.max_audio_duration - self.meta: tp.List[AudioMeta] = self._filter_duration(meta) - assert len(self.meta) # Fail fast if all data has been filtered. - self.total_duration = sum(d.duration for d in self.meta) - - if segment_duration is None: - num_samples = len(self.meta) - self.num_samples = num_samples - self.shuffle = shuffle - self.sample_rate = sample_rate - self.channels = channels - self.pad = pad - self.sample_on_weight = sample_on_weight - self.sample_on_duration = sample_on_duration - self.sampling_probabilities = self._get_sampling_probabilities() - self.max_read_retry = max_read_retry - self.return_info = return_info - - def __len__(self): - return self.num_samples - - def _get_sampling_probabilities(self, normalized: bool = True): - """Return the sampling probabilities for each file inside `self.meta`. - """ - scores: tp.List[float] = [] - for file_meta in self.meta: - score = 1. - if self.sample_on_weight and file_meta.weight is not None: - score *= file_meta.weight - if self.sample_on_duration: - score *= file_meta.duration - scores.append(score) - probabilities = torch.tensor(scores) - if normalized: - probabilities /= probabilities.sum() - return probabilities - - def sample_file(self, rng: torch.Generator) -> AudioMeta: - """Sample a given file from `self.meta`. Can be overriden in subclasses. - This is only called if `segment_duration` is not None. - - You must use the provided random number generator `rng` for reproducibility. 
- """ - if not self.sample_on_weight and not self.sample_on_duration: - file_index = int(torch.randint(len(self.sampling_probabilities), (1,), generator=rng).item()) - else: - file_index = int(torch.multinomial(self.sampling_probabilities, 1, generator=rng).item()) - - return self.meta[file_index] - - def __getitem__(self, index: int) -> tp.Union[torch.Tensor, tp.Tuple[torch.Tensor, SegmentInfo]]: - if self.segment_duration is None: - file_meta = self.meta[index] - out, sr = audio_read(file_meta.path) - out = convert_audio(out, sr, self.sample_rate, self.channels) - n_frames = out.shape[-1] - segment_info = SegmentInfo(file_meta, seek_time=0., n_frames=n_frames, total_frames=n_frames, - sample_rate=self.sample_rate) - else: - rng = torch.Generator() - if self.shuffle: - # We use index, plus extra randomness - rng.manual_seed(index + self.num_samples * random.randint(0, 2**24)) - else: - # We only use index - rng.manual_seed(index) - - for retry in range(self.max_read_retry): - file_meta = self.sample_file(rng) - # We add some variance in the file position even if audio file is smaller than segment - # without ending up with empty segments - max_seek = max(0, file_meta.duration - self.segment_duration * self.min_segment_ratio) - seek_time = torch.rand(1, generator=rng).item() * max_seek - try: - out, sr = audio_read(file_meta.path, seek_time, self.segment_duration, pad=False) - out = convert_audio(out, sr, self.sample_rate, self.channels) - n_frames = out.shape[-1] - target_frames = int(self.segment_duration * self.sample_rate) - if self.pad: - out = F.pad(out, (0, target_frames - n_frames)) - segment_info = SegmentInfo(file_meta, seek_time, n_frames=n_frames, total_frames=target_frames, - sample_rate=self.sample_rate) - except Exception as exc: - logger.warning("Error opening file %s: %r", file_meta.path, exc) - if retry == self.max_read_retry - 1: - raise - else: - break - - if self.return_info: - # Returns the wav and additional information on the wave segment - return out, segment_info - else: - return out - - def collater(self, samples): - """The collater function has to be provided to the dataloader - if AudioDataset has return_info=True in order to properly collate - the samples of a batch. - """ - if self.segment_duration is None and len(samples) > 1: - assert self.pad, "Must allow padding when batching examples of different durations." - - # In this case the audio reaching the collater is of variable length as segment_duration=None. - to_pad = self.segment_duration is None and self.pad - if to_pad: - max_len = max([wav.shape[-1] for wav, _ in samples]) - - def _pad_wav(wav): - return F.pad(wav, (0, max_len - wav.shape[-1])) - - if self.return_info: - if len(samples) > 0: - assert len(samples[0]) == 2 - assert isinstance(samples[0][0], torch.Tensor) - assert isinstance(samples[0][1], SegmentInfo) - - wavs = [wav for wav, _ in samples] - segment_infos = [copy.deepcopy(info) for _, info in samples] - - if to_pad: - # Each wav could be of a different duration as they are not segmented. - for i in range(len(samples)): - # Determines the total legth of the signal with padding, so we update here as we pad. 
- segment_infos[i].total_frames = max_len - wavs[i] = _pad_wav(wavs[i]) - - wav = torch.stack(wavs) - return wav, segment_infos - else: - assert isinstance(samples[0], torch.Tensor) - if to_pad: - samples = [_pad_wav(s) for s in samples] - return torch.stack(samples) - - def _filter_duration(self, meta: tp.List[AudioMeta]) -> tp.List[AudioMeta]: - """Filters out audio files with short durations. - Removes from meta files that have durations that will not allow to samples examples from them. - """ - orig_len = len(meta) - - # Filter data that is too short. - if self.min_audio_duration is not None: - meta = [m for m in meta if m.duration >= self.min_audio_duration] - - # Filter data that is too long. - if self.max_audio_duration is not None: - meta = [m for m in meta if m.duration <= self.max_audio_duration] - - filtered_len = len(meta) - removed_percentage = 100*(1-float(filtered_len)/orig_len) - msg = 'Removed %.2f percent of the data because it was too short or too long.' % removed_percentage - if removed_percentage < 10: - logging.debug(msg) - else: - logging.warning(msg) - return meta - - @classmethod - def from_meta(cls, root: tp.Union[str, Path], **kwargs): - """Instantiate AudioDataset from a path to a directory containing a manifest as a jsonl file. - - Args: - root (str or Path): Path to root folder containing audio files. - kwargs: Additional keyword arguments for the AudioDataset. - """ - root = Path(root) - if root.is_dir(): - if (root / 'data.jsonl').exists(): - root = root / 'data.jsonl' - elif (root / 'data.jsonl.gz').exists(): - root = root / 'data.jsonl.gz' - else: - raise ValueError("Don't know where to read metadata from in the dir. " - "Expecting either a data.jsonl or data.jsonl.gz file but none found.") - meta = load_audio_meta(root) - return cls(meta, **kwargs) - - @classmethod - def from_path(cls, root: tp.Union[str, Path], minimal_meta: bool = True, - exts: tp.List[str] = DEFAULT_EXTS, **kwargs): - """Instantiate AudioDataset from a path containing (possibly nested) audio files. - - Args: - root (str or Path): Path to root folder containing audio files. - minimal_meta (bool): Whether to only load minimal metadata or not. - exts (list of str): Extensions for audio files. - kwargs: Additional keyword arguments for the AudioDataset. - """ - root = Path(root) - if root.is_file(): - meta = load_audio_meta(root, resolve=True) - else: - meta = find_audio_files(root, exts, minimal=minimal_meta, resolve=True) - return cls(meta, **kwargs) - - -def main(): - logging.basicConfig(stream=sys.stderr, level=logging.INFO) - parser = argparse.ArgumentParser( - prog='audio_dataset', - description='Generate .jsonl files by scanning a folder.') - parser.add_argument('root', help='Root folder with all the audio files') - parser.add_argument('output_meta_file', - help='Output file to store the metadata, ') - parser.add_argument('--complete', - action='store_false', dest='minimal', default=True, - help='Retrieve all metadata, even the one that are expansive ' - 'to compute (e.g. 
normalization).') - parser.add_argument('--resolve', - action='store_true', default=False, - help='Resolve the paths to be absolute and with no symlinks.') - parser.add_argument('--workers', - default=10, type=int, - help='Number of workers.') - args = parser.parse_args() - meta = find_audio_files(args.root, DEFAULT_EXTS, progress=True, - resolve=args.resolve, minimal=args.minimal, workers=args.workers) - save_audio_meta(args.output_meta_file, meta) - - -if __name__ == '__main__': - main() diff --git a/spaces/AchyuthGamer/OpenGPT-Chat-UI/README.md b/spaces/AchyuthGamer/OpenGPT-Chat-UI/README.md deleted file mode 100644 index 7c3c48cc05c0c007284cf384f174474f63296ffc..0000000000000000000000000000000000000000 --- a/spaces/AchyuthGamer/OpenGPT-Chat-UI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: OpenGPT Chat -emoji: 🚀 -colorFrom: gray -colorTo: pink -sdk: docker -pinned: true -app_port: 3000 -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference \ No newline at end of file diff --git a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorpicker/methods/SVPaletteCanvas.js b/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorpicker/methods/SVPaletteCanvas.js deleted file mode 100644 index 7ba907a651432e2976f58175b041d9aa4a2470ea..0000000000000000000000000000000000000000 --- a/spaces/AgentVerse/agentVerse/ui/src/phaser3-rex-plugins/templates/ui/colorinput/colorpicker/methods/SVPaletteCanvas.js +++ /dev/null @@ -1,98 +0,0 @@ -import Canvas from '../../../canvas/Canvas.js'; -import { DrawSVPalette } from '../../../../../plugins/utils/canvas/DrawHSVPalette.js'; - -const Color = Phaser.Display.Color; -const Percent = Phaser.Math.Percent; -const ColorToRGBA = Phaser.Display.Color.ColorToRGBA; -const HSVToRGB = Phaser.Display.Color.HSVToRGB; - -class SVPaletteCanvas extends Canvas { - constructor(scene, x, y, width, height, hue) { - if (x === undefined) { x = 0; } - if (y === undefined) { y = 0; } - if (width === undefined) { width = 2; } - if (height === undefined) { height = 2; } - - super(scene, x, y, width, height); - this.type = 'rexColorPicker.SVPaletteCanvas'; - - if (hue === undefined) { - hue = 1; - } - - this.colorObject = new Color(); - - this.setHue(hue); - this.setSize(width, height); - } - - get color() { - return this.colorObject.color; - } - - get hue() { - return this._hue; - } - - set hue(hue) { - if (this._hue === hue) { - return; - } - this._hue = hue; - this.colorObject.h = hue; - this.dirty = true; - } - - setHue(hue) { - this.hue = hue; - return this; - } - - updateTexture() { - DrawSVPalette(this.canvas, this.context, this.hue); - super.updateTexture(); - return this; - } - - getColor(localX, localY) { - if (localX === undefined) { - return this.colorObject.color; - } - - var s = Percent(localX, 0, this.width); - var v = 1 - Percent(localY, 0, this.height); - this.colorObject.setFromRGB(HSVToRGB(this.hue, s, v)); - return this.colorObject.color; - } - - setColor(color) { - if (this.color === color) { - return this; - } - - this.colorObject.setFromRGB(ColorToRGBA(color)); - this.setHue(this.colorObject.h); - return this; - } - - colorToLocalPosition(color, out) { - if (out === undefined) { - out = {}; - } else if (out === true) { - if (LocalXY === undefined) { - LocalXY = {}; - } - out = LocalXY; - } - - this.colorObject.setFromRGB(ColorToRGBA(color)); - out.x = this.width * this.colorObject.s; - out.y = this.height * (1 - 
this.colorObject.v); - - return out; - } -} - -var LocalXY = undefined; - -export default SVPaletteCanvas; \ No newline at end of file diff --git a/spaces/AlStable/Duke/app.py b/spaces/AlStable/Duke/app.py deleted file mode 100644 index 4ee672d7b3a35a53b9144742a8af6d83483167f9..0000000000000000000000000000000000000000 --- a/spaces/AlStable/Duke/app.py +++ /dev/null @@ -1,137 +0,0 @@ -from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler -import gradio as gr -import torch -from PIL import Image - -model_id = 'DucHaiten/DucHaitenDreamWorld' -prefix = '' - -scheduler = DPMSolverMultistepScheduler.from_pretrained(model_id, subfolder="scheduler") - -pipe = StableDiffusionPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -pipe_i2i = StableDiffusionImg2ImgPipeline.from_pretrained( - model_id, - torch_dtype=torch.float16 if torch.cuda.is_available() else torch.float32, - scheduler=scheduler) - -if torch.cuda.is_available(): - pipe = pipe.to("cuda") - pipe_i2i = pipe_i2i.to("cuda") - -def error_str(error, title="Error"): - return f"""#### {title} - {error}""" if error else "" - -def inference(prompt, guidance, steps, width=512, height=512, seed=0, img=None, strength=0.5, neg_prompt="", auto_prefix=False): - - generator = torch.Generator('cuda').manual_seed(seed) if seed != 0 else None - prompt = f"{prefix} {prompt}" if auto_prefix else prompt - - try: - if img is not None: - return img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator), None - else: - return txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator), None - except Exception as e: - return None, error_str(e) - -def txt_to_img(prompt, neg_prompt, guidance, steps, width, height, generator): - - result = pipe( - prompt, - negative_prompt = neg_prompt, - num_inference_steps = int(steps), - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -def img_to_img(prompt, neg_prompt, img, strength, guidance, steps, width, height, generator): - - ratio = min(height / img.height, width / img.width) - img = img.resize((int(img.width * ratio), int(img.height * ratio)), Image.LANCZOS) - result = pipe_i2i( - prompt, - negative_prompt = neg_prompt, - init_image = img, - num_inference_steps = int(steps), - strength = strength, - guidance_scale = guidance, - width = width, - height = height, - generator = generator) - - return result.images[0] - -css = """.main-div div{display:inline-flex;align-items:center;gap:.8rem;font-size:1.75rem}.main-div div h1{font-weight:900;margin-bottom:7px}.main-div p{margin-bottom:10px;font-size:94%}a{text-decoration:underline}.tabs{margin-top:0;margin-bottom:0}#gallery{min-height:20rem} -""" -with gr.Blocks(css=css) as demo: - gr.HTML( - f""" -
-
-

Duchaiten dreamworld

-
-

- Demo for Duchaitendreamworld Stable Diffusion model.
- {"Add the following tokens to your prompts for the model to work properly: prefix" if prefix else ""} -

- Running on {"GPU 🔥" if torch.cuda.is_available() else f"CPU 🥶. For faster inference it is recommended to upgrade to GPU in Settings"} after duplicating the space

- Duplicate Space -
- """ - ) - with gr.Row(): - - with gr.Column(scale=55): - with gr.Group(): - with gr.Row(): - prompt = gr.Textbox(label="Prompt", show_label=False, max_lines=2,placeholder=f"{prefix} [your prompt]").style(container=False) - generate = gr.Button(value="Generate").style(rounded=(False, True, True, False)) - - image_out = gr.Image(height=512) - error_output = gr.Markdown() - - with gr.Column(scale=45): - with gr.Tab("Options"): - with gr.Group(): - neg_prompt = gr.Textbox(label="Negative prompt", placeholder="What to exclude from the image") - auto_prefix = gr.Checkbox(label="Prefix styling tokens automatically ()", value=prefix, visible=prefix) - - with gr.Row(): - guidance = gr.Slider(label="Guidance scale", value=7.5, maximum=15) - steps = gr.Slider(label="Steps", value=25, minimum=2, maximum=75, step=1) - - with gr.Row(): - width = gr.Slider(label="Width", value=512, minimum=64, maximum=1024, step=8) - height = gr.Slider(label="Height", value=512, minimum=64, maximum=1024, step=8) - - seed = gr.Slider(0, 2147483647, label='Seed (0 = random)', value=0, step=1) - - with gr.Tab("Image to image"): - with gr.Group(): - image = gr.Image(label="Image", height=256, tool="editor", type="pil") - strength = gr.Slider(label="Transformation strength", minimum=0, maximum=1, step=0.01, value=0.5) - - auto_prefix.change(lambda x: gr.update(placeholder=f"{prefix} [your prompt]" if x else "[Your prompt]"), inputs=auto_prefix, outputs=prompt, queue=False) - - inputs = [prompt, guidance, steps, width, height, seed, image, strength, neg_prompt, auto_prefix] - outputs = [image_out, error_output] - prompt.submit(inference, inputs=inputs, outputs=outputs) - generate.click(inference, inputs=inputs, outputs=outputs) - - gr.HTML(""" -
-
-

This space was created using SD Space Creator.

-
- """) - -demo.queue(concurrency_count=1) -demo.launch() diff --git a/spaces/Amrrs/portfolio-github/style.css b/spaces/Amrrs/portfolio-github/style.css deleted file mode 100644 index 363d0b7bb0dd45552039e3156a6350989e327db2..0000000000000000000000000000000000000000 --- a/spaces/Amrrs/portfolio-github/style.css +++ /dev/null @@ -1,190 +0,0 @@ -html { - margin: 0; - padding: 0; -} - -body { - font-family: 'Bellota', cursive; - font-size: 26pt; - background-color: #f2f2f2; - padding: 20px; - margin: 0; -} - -h1 { - font-size: 15pt; - color: #ffffff; - text-align: center; - padding: 18px 0 18px 0; - margin: 0 0 10px 0; -} - -h1 span { - border: 8px solid #666666; - border-radius: 8px; - background-image: url("https://media.giphy.com/media/KVZWZQoS0yqfIiTAKq/giphy.gif"); - padding: 12px; -} - -p { - padding: 0; - margin: 0; - color: #000000; -} - -.img-circle { - border: 8px solid white; - border-radius: 50%; -} - -.section { - background-color: #fff; - padding: 20px; - margin-bottom: 10px; - border-radius: 30px; -} - -#header { - background-image: url("https://media.giphy.com/media/KVZWZQoS0yqfIiTAKq/giphy.gif"); - background-size: cover; -} - -#header img { - display: block; - width: 500px; - height: 500px; - margin: auto; -} - -#header p { - font-size: 60pt; - color: #ffffff; - padding-top: 8px; - margin: 0; - font-weight: bold; - text-align: center; -} - -.quote { - font-size: 12pt; - text-align: right; - margin-top: 10px; - color: grey; -} - -#res { - text-align: center; - margin: 50px auto; -} - -#res a { - margin: 20px 20px; - display: inline-block; - text-decoration: none; - color: black; -} - -.selected { - background-color: #f36f48; - font-weight: bold; - color: white; -} - -li { - margin-bottom: 15px; - font-weight: bold; -} - -progress { - width: 70%; - height: 20px; - color: #3fb6b2; - background: #efefef; -} - -progress::-webkit-progress-bar { - background: #efefef; -} - -progress::-webkit-progress-value { - background: #3fb6b2; -} - -progress::-moz-progress-bar { - color: #3fb6b2; - background: #efefef; -} - -iframe, -audio { - display: block; - margin: 0 auto; - border: 3px solid #3fb6b2; - border-radius: 10px; -} - -hr { - border: 0; - height: 1px; - background: #f36f48; -} - -input { - text-align: center; - font-size: 25pt; - border: none; - border-radius: 12px; - padding: 30px 8%; - margin: 20px 5px 10px 5px; - background-color: #d7d7d7; -} - -input:focus { - background-color: #2f2f2f; - color: white; -} - -form { - text-align: center; - font-size: 30pt; - font-family: Helvetica; - font-weight: 500; - margin: 10% 15% 8% 15%; - border-radius: 12px; -} - -#insta-image { - display: block; - width: 100px; - height: 100px; - border: 5px solid #d7d7d7; - border-radius: 50%; - margin: auto; - margin-top: -75px; -} - -#contacts img { - height: 150px; - width: 150px; - margin-left: 7px; - margin-right: 7px; -} - -#contacts a { - text-decoration: none; -} - -#contacts img:hover { - opacity: 0.8; -} - -#contacts { - text-align: center; -} - -.copyright { - font-size: 8pt; - text-align: right; - padding-bottom: 10px; - color: grey; -} \ No newline at end of file diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/write_own_pipeline.md b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/write_own_pipeline.md deleted file mode 100644 index ca5ea38b4ad26f6a4e53e73963fd2de01c1a6405..0000000000000000000000000000000000000000 --- 
a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/docs/source/en/using-diffusers/write_own_pipeline.md +++ /dev/null @@ -1,290 +0,0 @@ - - -# Understanding pipelines, models and schedulers - -[[open-in-colab]] - -🧨 Diffusers is designed to be a user-friendly and flexible toolbox for building diffusion systems tailored to your use-case. At the core of the toolbox are models and schedulers. While the [`DiffusionPipeline`] bundles these components together for convenience, you can also unbundle the pipeline and use the models and schedulers separately to create new diffusion systems. - -In this tutorial, you'll learn how to use models and schedulers to assemble a diffusion system for inference, starting with a basic pipeline and then progressing to the Stable Diffusion pipeline. - -## Deconstruct a basic pipeline - -A pipeline is a quick and easy way to run a model for inference, requiring no more than four lines of code to generate an image: - -```py ->>> from diffusers import DDPMPipeline - ->>> ddpm = DDPMPipeline.from_pretrained("google/ddpm-cat-256").to("cuda") ->>> image = ddpm(num_inference_steps=25).images[0] ->>> image -``` - -
- Image of cat created from DDPMPipeline -
- -That was super easy, but how did the pipeline do that? Let's breakdown the pipeline and take a look at what's happening under the hood. - -In the example above, the pipeline contains a [`UNet2DModel`] model and a [`DDPMScheduler`]. The pipeline denoises an image by taking random noise the size of the desired output and passing it through the model several times. At each timestep, the model predicts the *noise residual* and the scheduler uses it to predict a less noisy image. The pipeline repeats this process until it reaches the end of the specified number of inference steps. - -To recreate the pipeline with the model and scheduler separately, let's write our own denoising process. - -1. Load the model and scheduler: - -```py ->>> from diffusers import DDPMScheduler, UNet2DModel - ->>> scheduler = DDPMScheduler.from_pretrained("google/ddpm-cat-256") ->>> model = UNet2DModel.from_pretrained("google/ddpm-cat-256").to("cuda") -``` - -2. Set the number of timesteps to run the denoising process for: - -```py ->>> scheduler.set_timesteps(50) -``` - -3. Setting the scheduler timesteps creates a tensor with evenly spaced elements in it, 50 in this example. Each element corresponds to a timestep at which the model denoises an image. When you create the denoising loop later, you'll iterate over this tensor to denoise an image: - -```py ->>> scheduler.timesteps -tensor([980, 960, 940, 920, 900, 880, 860, 840, 820, 800, 780, 760, 740, 720, - 700, 680, 660, 640, 620, 600, 580, 560, 540, 520, 500, 480, 460, 440, - 420, 400, 380, 360, 340, 320, 300, 280, 260, 240, 220, 200, 180, 160, - 140, 120, 100, 80, 60, 40, 20, 0]) -``` - -4. Create some random noise with the same shape as the desired output: - -```py ->>> import torch - ->>> sample_size = model.config.sample_size ->>> noise = torch.randn((1, 3, sample_size, sample_size)).to("cuda") -``` - -5. Now write a loop to iterate over the timesteps. At each timestep, the model does a [`UNet2DModel.forward`] pass and returns the noisy residual. The scheduler's [`~DDPMScheduler.step`] method takes the noisy residual, timestep, and input and it predicts the image at the previous timestep. This output becomes the next input to the model in the denoising loop, and it'll repeat until it reaches the end of the `timesteps` array. - -```py ->>> input = noise - ->>> for t in scheduler.timesteps: -... with torch.no_grad(): -... noisy_residual = model(input, t).sample -... previous_noisy_sample = scheduler.step(noisy_residual, t, input).prev_sample -... input = previous_noisy_sample -``` - -This is the entire denoising process, and you can use this same pattern to write any diffusion system. - -6. The last step is to convert the denoised output into an image: - -```py ->>> from PIL import Image ->>> import numpy as np - ->>> image = (input / 2 + 0.5).clamp(0, 1) ->>> image = image.cpu().permute(0, 2, 3, 1).numpy()[0] ->>> image = Image.fromarray((image * 255).round().astype("uint8")) ->>> image -``` - -In the next section, you'll put your skills to the test and breakdown the more complex Stable Diffusion pipeline. The steps are more or less the same. You'll initialize the necessary components, and set the number of timesteps to create a `timestep` array. The `timestep` array is used in the denoising loop, and for each element in this array, the model predicts a less noisy image. The denoising loop iterates over the `timestep`'s, and at each timestep, it outputs a noisy residual and the scheduler uses it to predict a less noisy image at the previous timestep. 
This process is repeated until you reach the end of the `timestep` array. - -Let's try it out! - -## Deconstruct the Stable Diffusion pipeline - -Stable Diffusion is a text-to-image *latent diffusion* model. It is called a latent diffusion model because it works with a lower-dimensional representation of the image instead of the actual pixel space, which makes it more memory efficient. The encoder compresses the image into a smaller representation, and a decoder to convert the compressed representation back into an image. For text-to-image models, you'll need a tokenizer and an encoder to generate text embeddings. From the previous example, you already know you need a UNet model and a scheduler. - -As you can see, this is already more complex than the DDPM pipeline which only contains a UNet model. The Stable Diffusion model has three separate pretrained models. - - - -💡 Read the [How does Stable Diffusion work?](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work) blog for more details about how the VAE, UNet, and text encoder models. - - - -Now that you know what you need for the Stable Diffusion pipeline, load all these components with the [`~ModelMixin.from_pretrained`] method. You can find them in the pretrained [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5) checkpoint, and each component is stored in a separate subfolder: - -```py ->>> from PIL import Image ->>> import torch ->>> from transformers import CLIPTextModel, CLIPTokenizer ->>> from diffusers import AutoencoderKL, UNet2DConditionModel, PNDMScheduler - ->>> vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae") ->>> tokenizer = CLIPTokenizer.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="tokenizer") ->>> text_encoder = CLIPTextModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="text_encoder") ->>> unet = UNet2DConditionModel.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="unet") -``` - -Instead of the default [`PNDMScheduler`], exchange it for the [`UniPCMultistepScheduler`] to see how easy it is to plug a different scheduler in: - -```py ->>> from diffusers import UniPCMultistepScheduler - ->>> scheduler = UniPCMultistepScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") -``` - -To speed up inference, move the models to a GPU since, unlike the scheduler, they have trainable weights: - -```py ->>> torch_device = "cuda" ->>> vae.to(torch_device) ->>> text_encoder.to(torch_device) ->>> unet.to(torch_device) -``` - -### Create text embeddings - -The next step is to tokenize the text to generate embeddings. The text is used to condition the UNet model and steer the diffusion process towards something that resembles the input prompt. - - - -💡 The `guidance_scale` parameter determines how much weight should be given to the prompt when generating an image. - - - -Feel free to choose any prompt you like if you want to generate something else! - -```py ->>> prompt = ["a photograph of an astronaut riding a horse"] ->>> height = 512 # default height of Stable Diffusion ->>> width = 512 # default width of Stable Diffusion ->>> num_inference_steps = 25 # Number of denoising steps ->>> guidance_scale = 7.5 # Scale for classifier-free guidance ->>> generator = torch.manual_seed(0) # Seed generator to create the inital latent noise ->>> batch_size = len(prompt) -``` - -Tokenize the text and generate the embeddings from the prompt: - -```py ->>> text_input = tokenizer( -... 
prompt, padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt" -... ) - ->>> with torch.no_grad(): -... text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0] -``` - -You'll also need to generate the *unconditional text embeddings* which are the embeddings for the padding token. These need to have the same shape (`batch_size` and `seq_length`) as the conditional `text_embeddings`: - -```py ->>> max_length = text_input.input_ids.shape[-1] ->>> uncond_input = tokenizer([""] * batch_size, padding="max_length", max_length=max_length, return_tensors="pt") ->>> uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0] -``` - -Let's concatenate the conditional and unconditional embeddings into a batch to avoid doing two forward passes: - -```py ->>> text_embeddings = torch.cat([uncond_embeddings, text_embeddings]) -``` - -### Create random noise - -Next, generate some initial random noise as a starting point for the diffusion process. This is the latent representation of the image, and it'll be gradually denoised. At this point, the `latent` image is smaller than the final image size but that's okay though because the model will transform it into the final 512x512 image dimensions later. - - - -💡 The height and width are divided by 8 because the `vae` model has 3 down-sampling layers. You can check by running the following: - -```py -2 ** (len(vae.config.block_out_channels) - 1) == 8 -``` - - - -```py ->>> latents = torch.randn( -... (batch_size, unet.in_channels, height // 8, width // 8), -... generator=generator, -... ) ->>> latents = latents.to(torch_device) -``` - -### Denoise the image - -Start by scaling the input with the initial noise distribution, *sigma*, the noise scale value, which is required for improved schedulers like [`UniPCMultistepScheduler`]: - -```py ->>> latents = latents * scheduler.init_noise_sigma -``` - -The last step is to create the denoising loop that'll progressively transform the pure noise in `latents` to an image described by your prompt. Remember, the denoising loop needs to do three things: - -1. Set the scheduler's timesteps to use during denoising. -2. Iterate over the timesteps. -3. At each timestep, call the UNet model to predict the noise residual and pass it to the scheduler to compute the previous noisy sample. - -```py ->>> from tqdm.auto import tqdm - ->>> scheduler.set_timesteps(num_inference_steps) - ->>> for t in tqdm(scheduler.timesteps): -... # expand the latents if we are doing classifier-free guidance to avoid doing two forward passes. -... latent_model_input = torch.cat([latents] * 2) - -... latent_model_input = scheduler.scale_model_input(latent_model_input, timestep=t) - -... # predict the noise residual -... with torch.no_grad(): -... noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample - -... # perform guidance -... noise_pred_uncond, noise_pred_text = noise_pred.chunk(2) -... noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond) - -... # compute the previous noisy sample x_t -> x_t-1 -... 
latents = scheduler.step(noise_pred, t, latents).prev_sample -``` - -### Decode the image - -The final step is to use the `vae` to decode the latent representation into an image and get the decoded output with `sample`: - -```py -# scale and decode the image latents with vae -latents = 1 / 0.18215 * latents -with torch.no_grad(): - image = vae.decode(latents).sample -``` - -Lastly, convert the image to a `PIL.Image` to see your generated image! - -```py ->>> image = (image / 2 + 0.5).clamp(0, 1) ->>> image = image.detach().cpu().permute(0, 2, 3, 1).numpy() ->>> images = (image * 255).round().astype("uint8") ->>> pil_images = [Image.fromarray(image) for image in images] ->>> pil_images[0] -``` - -
- -
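-If you want to reuse this logic, you can wrap the steps above into a single helper. The sketch below is only illustrative: it assumes the `vae`, `tokenizer`, `text_encoder`, `unet`, `scheduler`, and `torch_device` objects created earlier in this section are already available, and the function name `generate_image` is made up for this example.
-
-```py
-def generate_image(prompt, num_inference_steps=25, guidance_scale=7.5, seed=0):
-    generator = torch.manual_seed(seed)
-
-    # conditional and unconditional embeddings for classifier-free guidance
-    text_input = tokenizer(
-        [prompt], padding="max_length", max_length=tokenizer.model_max_length, truncation=True, return_tensors="pt"
-    )
-    uncond_input = tokenizer(
-        [""], padding="max_length", max_length=text_input.input_ids.shape[-1], return_tensors="pt"
-    )
-    with torch.no_grad():
-        text_embeddings = text_encoder(text_input.input_ids.to(torch_device))[0]
-        uncond_embeddings = text_encoder(uncond_input.input_ids.to(torch_device))[0]
-    text_embeddings = torch.cat([uncond_embeddings, text_embeddings])
-
-    # start from random latent noise and denoise it over the scheduler timesteps
-    latents = torch.randn((1, unet.config.in_channels, 512 // 8, 512 // 8), generator=generator).to(torch_device)
-    latents = latents * scheduler.init_noise_sigma
-    scheduler.set_timesteps(num_inference_steps)
-    for t in scheduler.timesteps:
-        latent_model_input = scheduler.scale_model_input(torch.cat([latents] * 2), timestep=t)
-        with torch.no_grad():
-            noise_pred = unet(latent_model_input, t, encoder_hidden_states=text_embeddings).sample
-        noise_pred_uncond, noise_pred_text = noise_pred.chunk(2)
-        noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
-        latents = scheduler.step(noise_pred, t, latents).prev_sample
-
-    # decode the final latents into a PIL image
-    latents = 1 / 0.18215 * latents
-    with torch.no_grad():
-        image = vae.decode(latents).sample
-    image = (image / 2 + 0.5).clamp(0, 1)
-    image = image.detach().cpu().permute(0, 2, 3, 1).numpy()[0]
-    return Image.fromarray((image * 255).round().astype("uint8"))
-```
-
-Calling `generate_image("a photograph of an astronaut riding a horse")` should then reproduce the result above with a single call.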
- -## Next steps - -From basic to complex pipelines, you've seen that all you really need to write your own diffusion system is a denoising loop. The loop should set the scheduler's timesteps, iterate over them, and alternate between calling the UNet model to predict the noise residual and passing it to the scheduler to compute the previous noisy sample. - -This is really what 🧨 Diffusers is designed for: to make it intuitive and easy to write your own diffusion system using models and schedulers. - -For your next steps, feel free to: - -* Learn how to [build and contribute a pipeline](contribute_pipeline) to 🧨 Diffusers. We can't wait and see what you'll come up with! -* Explore [existing pipelines](../api/pipelines/overview) in the library, and see if you can deconstruct and build a pipeline from scratch using the models and schedulers separately. diff --git a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/vq_model.py b/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/vq_model.py deleted file mode 100644 index 687449e8c7557473c0af994b30ef4c7dfba9718c..0000000000000000000000000000000000000000 --- a/spaces/Androidonnxfork/CivitAi-to-Diffusers/diffusers/src/diffusers/models/vq_model.py +++ /dev/null @@ -1,167 +0,0 @@ -# Copyright 2023 The HuggingFace Team. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. -from dataclasses import dataclass -from typing import Optional, Tuple, Union - -import torch -import torch.nn as nn - -from ..configuration_utils import ConfigMixin, register_to_config -from ..utils import BaseOutput, apply_forward_hook -from .modeling_utils import ModelMixin -from .vae import Decoder, DecoderOutput, Encoder, VectorQuantizer - - -@dataclass -class VQEncoderOutput(BaseOutput): - """ - Output of VQModel encoding method. - - Args: - latents (`torch.FloatTensor` of shape `(batch_size, num_channels, height, width)`): - The encoded output sample from the last layer of the model. - """ - - latents: torch.FloatTensor - - -class VQModel(ModelMixin, ConfigMixin): - r""" - A VQ-VAE model for decoding latent representations. - - This model inherits from [`ModelMixin`]. Check the superclass documentation for it's generic methods implemented - for all models (such as downloading or saving). - - Parameters: - in_channels (int, *optional*, defaults to 3): Number of channels in the input image. - out_channels (int, *optional*, defaults to 3): Number of channels in the output. - down_block_types (`Tuple[str]`, *optional*, defaults to `("DownEncoderBlock2D",)`): - Tuple of downsample block types. - up_block_types (`Tuple[str]`, *optional*, defaults to `("UpDecoderBlock2D",)`): - Tuple of upsample block types. - block_out_channels (`Tuple[int]`, *optional*, defaults to `(64,)`): - Tuple of block output channels. - act_fn (`str`, *optional*, defaults to `"silu"`): The activation function to use. - latent_channels (`int`, *optional*, defaults to `3`): Number of channels in the latent space. 
- sample_size (`int`, *optional*, defaults to `32`): Sample input size. - num_vq_embeddings (`int`, *optional*, defaults to `256`): Number of codebook vectors in the VQ-VAE. - vq_embed_dim (`int`, *optional*): Hidden dim of codebook vectors in the VQ-VAE. - scaling_factor (`float`, *optional*, defaults to `0.18215`): - The component-wise standard deviation of the trained latent space computed using the first batch of the - training set. This is used to scale the latent space to have unit variance when training the diffusion - model. The latents are scaled with the formula `z = z * scaling_factor` before being passed to the - diffusion model. When decoding, the latents are scaled back to the original scale with the formula: `z = 1 - / scaling_factor * z`. For more details, refer to sections 4.3.2 and D.1 of the [High-Resolution Image - Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) paper. - """ - - @register_to_config - def __init__( - self, - in_channels: int = 3, - out_channels: int = 3, - down_block_types: Tuple[str] = ("DownEncoderBlock2D",), - up_block_types: Tuple[str] = ("UpDecoderBlock2D",), - block_out_channels: Tuple[int] = (64,), - layers_per_block: int = 1, - act_fn: str = "silu", - latent_channels: int = 3, - sample_size: int = 32, - num_vq_embeddings: int = 256, - norm_num_groups: int = 32, - vq_embed_dim: Optional[int] = None, - scaling_factor: float = 0.18215, - norm_type: str = "group", # group, spatial - ): - super().__init__() - - # pass init params to Encoder - self.encoder = Encoder( - in_channels=in_channels, - out_channels=latent_channels, - down_block_types=down_block_types, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - act_fn=act_fn, - norm_num_groups=norm_num_groups, - double_z=False, - ) - - vq_embed_dim = vq_embed_dim if vq_embed_dim is not None else latent_channels - - self.quant_conv = nn.Conv2d(latent_channels, vq_embed_dim, 1) - self.quantize = VectorQuantizer(num_vq_embeddings, vq_embed_dim, beta=0.25, remap=None, sane_index_shape=False) - self.post_quant_conv = nn.Conv2d(vq_embed_dim, latent_channels, 1) - - # pass init params to Decoder - self.decoder = Decoder( - in_channels=latent_channels, - out_channels=out_channels, - up_block_types=up_block_types, - block_out_channels=block_out_channels, - layers_per_block=layers_per_block, - act_fn=act_fn, - norm_num_groups=norm_num_groups, - norm_type=norm_type, - ) - - @apply_forward_hook - def encode(self, x: torch.FloatTensor, return_dict: bool = True) -> VQEncoderOutput: - h = self.encoder(x) - h = self.quant_conv(h) - - if not return_dict: - return (h,) - - return VQEncoderOutput(latents=h) - - @apply_forward_hook - def decode( - self, h: torch.FloatTensor, force_not_quantize: bool = False, return_dict: bool = True - ) -> Union[DecoderOutput, torch.FloatTensor]: - # also go through quantization layer - if not force_not_quantize: - quant, emb_loss, info = self.quantize(h) - else: - quant = h - quant2 = self.post_quant_conv(quant) - dec = self.decoder(quant2, quant if self.config.norm_type == "spatial" else None) - - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) - - def forward(self, sample: torch.FloatTensor, return_dict: bool = True) -> Union[DecoderOutput, torch.FloatTensor]: - r""" - The [`VQModel`] forward method. - - Args: - sample (`torch.FloatTensor`): Input sample. - return_dict (`bool`, *optional*, defaults to `True`): - Whether or not to return a [`models.vq_model.VQEncoderOutput`] instead of a plain tuple. 
- - Returns: - [`~models.vq_model.VQEncoderOutput`] or `tuple`: - If return_dict is True, a [`~models.vq_model.VQEncoderOutput`] is returned, otherwise a plain `tuple` - is returned. - """ - x = sample - h = self.encode(x).latents - dec = self.decode(h).sample - - if not return_dict: - return (dec,) - - return DecoderOutput(sample=dec) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x1024_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index 8c707c79d659bc544d242352bcb29686eb40b004..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/deeplabv3/deeplabv3_r101-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './deeplabv3_r50-d8_512x1024_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_40k_cityscapes.py b/spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_40k_cityscapes.py deleted file mode 100644 index 575e9d01343a4563e0d3ba89b361ea8e358d2dee..0000000000000000000000000000000000000000 --- a/spaces/Andy1621/uniformer_image_segmentation/configs/dnlnet/dnl_r101-d8_769x769_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './dnl_r50-d8_769x769_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/Anonymous-123/ImageNet-Editing/README.md b/spaces/Anonymous-123/ImageNet-Editing/README.md deleted file mode 100644 index 0bae12587c18d6b0a1cb3bbff97a25d772d90fe6..0000000000000000000000000000000000000000 --- a/spaces/Anonymous-123/ImageNet-Editing/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: ImageNet Editing -emoji: 📊 -colorFrom: gray -colorTo: pink -sdk: gradio -sdk_version: 3.3.1 -app_file: app.py -pinned: false -license: creativeml-openrail-m ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Anthony7906/MengHuiMXD_GPT/run_Windows.bat b/spaces/Anthony7906/MengHuiMXD_GPT/run_Windows.bat deleted file mode 100644 index 4c18f9ccaeea0af972301ffdf48778641221f76d..0000000000000000000000000000000000000000 --- a/spaces/Anthony7906/MengHuiMXD_GPT/run_Windows.bat +++ /dev/null @@ -1,5 +0,0 @@ -@echo off -echo Opening ChuanhuChatGPT... - -REM Open powershell via bat -start powershell.exe -NoExit -Command "python ./ChuanhuChatbot.py" diff --git a/spaces/AnthonyTruchetPoC/persistent-docker/tests/athai/test_hello.py b/spaces/AnthonyTruchetPoC/persistent-docker/tests/athai/test_hello.py deleted file mode 100644 index 709df7aa2182083690c911bd27ad6081147a0eeb..0000000000000000000000000000000000000000 --- a/spaces/AnthonyTruchetPoC/persistent-docker/tests/athai/test_hello.py +++ /dev/null @@ -1,14 +0,0 @@ -from athai import hello - - -def test_hello_default(): - assert hello.build_greetings() == "Hello, World!" - - -def test_hello_name(): - assert hello.build_greetings("Toto") == "Nice to meet you, Toto!" 
- - -# Given / Arrange -# When / Act -# Then / Assert diff --git a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/launch.py b/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/launch.py deleted file mode 100644 index 0208fdf33b640cd9791359d74673bb90cfb87f96..0000000000000000000000000000000000000000 --- a/spaces/Ataturk-Chatbot/HuggingFaceChat/venv/lib/python3.11/site-packages/setuptools/launch.py +++ /dev/null @@ -1,36 +0,0 @@ -""" -Launch the Python script on the command line after -setuptools is bootstrapped via import. -""" - -# Note that setuptools gets imported implicitly by the -# invocation of this script using python -m setuptools.launch - -import tokenize -import sys - - -def run(): - """ - Run the script in sys.argv[1] as if it had - been invoked naturally. - """ - __builtins__ - script_name = sys.argv[1] - namespace = dict( - __file__=script_name, - __name__='__main__', - __doc__=None, - ) - sys.argv[:] = sys.argv[1:] - - open_ = getattr(tokenize, 'open', open) - with open_(script_name) as fid: - script = fid.read() - norm_script = script.replace('\\r\\n', '\\n') - code = compile(norm_script, script_name, 'exec') - exec(code, namespace) - - -if __name__ == '__main__': - run() diff --git a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/compatibility.md b/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/compatibility.md deleted file mode 100644 index d9ec2ce34ea4f2e1a99c8934338c2fcbe1d19cf6..0000000000000000000000000000000000000000 --- a/spaces/CVPR/Dual-Key_Backdoor_Attacks/datagen/detectron2/docs/notes/compatibility.md +++ /dev/null @@ -1,83 +0,0 @@ -# Compatibility with Other Libraries - -## Compatibility with Detectron (and maskrcnn-benchmark) - -Detectron2 addresses some legacy issues left in Detectron. As a result, their models -are not compatible: -running inference with the same model weights will produce different results in the two code bases. - -The major differences regarding inference are: - -- The height and width of a box with corners (x1, y1) and (x2, y2) is now computed more naturally as - width = x2 - x1 and height = y2 - y1; - In Detectron, a "+ 1" was added both height and width. - - Note that the relevant ops in Caffe2 have [adopted this change of convention](https://github.com/pytorch/pytorch/pull/20550) - with an extra option. - So it is still possible to run inference with a Detectron2-trained model in Caffe2. - - The change in height/width calculations most notably changes: - - encoding/decoding in bounding box regression. - - non-maximum suppression. The effect here is very negligible, though. - -- RPN now uses simpler anchors with fewer quantization artifacts. - - In Detectron, the anchors were quantized and - [do not have accurate areas](https://github.com/facebookresearch/Detectron/issues/227). - In Detectron2, the anchors are center-aligned to feature grid points and not quantized. - -- Classification layers have a different ordering of class labels. - - This involves any trainable parameter with shape (..., num_categories + 1, ...). - In Detectron2, integer labels [0, K-1] correspond to the K = num_categories object categories - and the label "K" corresponds to the special "background" category. - In Detectron, label "0" means background, and labels [1, K] correspond to the K categories. - -- ROIAlign is implemented differently. The new implementation is [available in Caffe2](https://github.com/pytorch/pytorch/pull/23706). - - 1. 
All the ROIs are shifted by half a pixel compared to Detectron in order to create better image-feature-map alignment. - See `layers/roi_align.py` for details. - To enable the old behavior, use `ROIAlign(aligned=False)`, or `POOLER_TYPE=ROIAlign` instead of - `ROIAlignV2` (the default). - - 1. The ROIs are not required to have a minimum size of 1. - This will lead to tiny differences in the output, but should be negligible. - -- Mask inference function is different. - - In Detectron2, the "paste_mask" function is different and should be more accurate than in Detectron. This change - can improve mask AP on COCO by ~0.5% absolute. - -There are some other differences in training as well, but they won't affect -model-level compatibility. The major ones are: - -- We fixed a [bug](https://github.com/facebookresearch/Detectron/issues/459) in - Detectron, by making `RPN.POST_NMS_TOPK_TRAIN` per-image, rather than per-batch. - The fix may lead to a small accuracy drop for a few models (e.g. keypoint - detection) and will require some parameter tuning to match the Detectron results. -- For simplicity, we change the default loss in bounding box regression to L1 loss, instead of smooth L1 loss. - We have observed that this tends to slightly decrease box AP50 while improving box AP for higher - overlap thresholds (and leading to a slight overall improvement in box AP). -- We interpret the coordinates in COCO bounding box and segmentation annotations - as coordinates in range `[0, width]` or `[0, height]`. The coordinates in - COCO keypoint annotations are interpreted as pixel indices in range `[0, width - 1]` or `[0, height - 1]`. - Note that this affects how flip augmentation is implemented. - - -We will later share more details and rationale behind the above mentioned issues -about pixels, coordinates, and "+1"s. - - -## Compatibility with Caffe2 - -As mentioned above, despite the incompatibilities with Detectron, the relevant -ops have been implemented in Caffe2. -Therefore, models trained with detectron2 can be converted in Caffe2. -See [Deployment](../tutorials/deployment.html) for the tutorial. - -## Compatibility with TensorFlow - -Most ops are available in TensorFlow, although some tiny differences in -the implementation of resize / ROIAlign / padding need to be addressed. -A working conversion script is provided by [tensorpack FasterRCNN](https://github.com/tensorpack/tensorpack/tree/master/examples/FasterRCNN/convert_d2) -to run a standard detectron2 model in TensorFlow. 
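-
-To make the box size convention described at the top of this note concrete, here is a minimal illustration; the corner coordinates are arbitrary example values, not taken from either codebase:
-
-```py
-# a box with corners (x1, y1) = (10, 20) and (x2, y2) = (30, 60)
-x1, y1, x2, y2 = 10, 20, 30, 60
-
-# Detectron2 computes the size naturally
-width = x2 - x1    # 20
-height = y2 - y1   # 40
-
-# legacy Detectron added 1 to both dimensions
-legacy_width = x2 - x1 + 1    # 21
-legacy_height = y2 - y1 + 1   # 41
-```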
diff --git a/spaces/CVPR/LIVE/pybind11/docs/benchmark.py b/spaces/CVPR/LIVE/pybind11/docs/benchmark.py deleted file mode 100644 index 023477212ee3ca34353067b196e9959144444f33..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/docs/benchmark.py +++ /dev/null @@ -1,89 +0,0 @@ -# -*- coding: utf-8 -*- -import random -import os -import time -import datetime as dt - -nfns = 4 # Functions per class -nargs = 4 # Arguments per function - - -def generate_dummy_code_pybind11(nclasses=10): - decl = "" - bindings = "" - - for cl in range(nclasses): - decl += "class cl%03i;\n" % cl - decl += '\n' - - for cl in range(nclasses): - decl += "class cl%03i {\n" % cl - decl += "public:\n" - bindings += ' py::class_(m, "cl%03i")\n' % (cl, cl) - for fn in range(nfns): - ret = random.randint(0, nclasses - 1) - params = [random.randint(0, nclasses - 1) for i in range(nargs)] - decl += " cl%03i *fn_%03i(" % (ret, fn) - decl += ", ".join("cl%03i *" % p for p in params) - decl += ");\n" - bindings += ' .def("fn_%03i", &cl%03i::fn_%03i)\n' % \ - (fn, cl, fn) - decl += "};\n\n" - bindings += ' ;\n' - - result = "#include \n\n" - result += "namespace py = pybind11;\n\n" - result += decl + '\n' - result += "PYBIND11_MODULE(example, m) {\n" - result += bindings - result += "}" - return result - - -def generate_dummy_code_boost(nclasses=10): - decl = "" - bindings = "" - - for cl in range(nclasses): - decl += "class cl%03i;\n" % cl - decl += '\n' - - for cl in range(nclasses): - decl += "class cl%03i {\n" % cl - decl += "public:\n" - bindings += ' py::class_("cl%03i")\n' % (cl, cl) - for fn in range(nfns): - ret = random.randint(0, nclasses - 1) - params = [random.randint(0, nclasses - 1) for i in range(nargs)] - decl += " cl%03i *fn_%03i(" % (ret, fn) - decl += ", ".join("cl%03i *" % p for p in params) - decl += ");\n" - bindings += ' .def("fn_%03i", &cl%03i::fn_%03i, py::return_value_policy())\n' % \ - (fn, cl, fn) - decl += "};\n\n" - bindings += ' ;\n' - - result = "#include \n\n" - result += "namespace py = boost::python;\n\n" - result += decl + '\n' - result += "BOOST_PYTHON_MODULE(example) {\n" - result += bindings - result += "}" - return result - - -for codegen in [generate_dummy_code_pybind11, generate_dummy_code_boost]: - print ("{") - for i in range(0, 10): - nclasses = 2 ** i - with open("test.cpp", "w") as f: - f.write(codegen(nclasses)) - n1 = dt.datetime.now() - os.system("g++ -Os -shared -rdynamic -undefined dynamic_lookup " - "-fvisibility=hidden -std=c++14 test.cpp -I include " - "-I /System/Library/Frameworks/Python.framework/Headers -o test.so") - n2 = dt.datetime.now() - elapsed = (n2 - n1).total_seconds() - size = os.stat('test.so').st_size - print(" {%i, %f, %i}," % (nclasses * nfns, elapsed, size)) - print ("}") diff --git a/spaces/CVPR/LIVE/pybind11/tests/test_factory_constructors.cpp b/spaces/CVPR/LIVE/pybind11/tests/test_factory_constructors.cpp deleted file mode 100644 index 61cf33d16ed404563a3da803a4c2ecea4453a3b4..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/pybind11/tests/test_factory_constructors.cpp +++ /dev/null @@ -1,342 +0,0 @@ -/* - tests/test_factory_constructors.cpp -- tests construction from a factory function - via py::init_factory() - - Copyright (c) 2017 Jason Rhinelander - - All rights reserved. Use of this source code is governed by a - BSD-style license that can be found in the LICENSE file. 
-*/ - -#include "pybind11_tests.h" -#include "constructor_stats.h" -#include - -// Classes for testing python construction via C++ factory function: -// Not publicly constructible, copyable, or movable: -class TestFactory1 { - friend class TestFactoryHelper; - TestFactory1() : value("(empty)") { print_default_created(this); } - TestFactory1(int v) : value(std::to_string(v)) { print_created(this, value); } - TestFactory1(std::string v) : value(std::move(v)) { print_created(this, value); } - TestFactory1(TestFactory1 &&) = delete; - TestFactory1(const TestFactory1 &) = delete; - TestFactory1 &operator=(TestFactory1 &&) = delete; - TestFactory1 &operator=(const TestFactory1 &) = delete; -public: - std::string value; - ~TestFactory1() { print_destroyed(this); } -}; -// Non-public construction, but moveable: -class TestFactory2 { - friend class TestFactoryHelper; - TestFactory2() : value("(empty2)") { print_default_created(this); } - TestFactory2(int v) : value(std::to_string(v)) { print_created(this, value); } - TestFactory2(std::string v) : value(std::move(v)) { print_created(this, value); } -public: - TestFactory2(TestFactory2 &&m) { value = std::move(m.value); print_move_created(this); } - TestFactory2 &operator=(TestFactory2 &&m) { value = std::move(m.value); print_move_assigned(this); return *this; } - std::string value; - ~TestFactory2() { print_destroyed(this); } -}; -// Mixed direct/factory construction: -class TestFactory3 { -protected: - friend class TestFactoryHelper; - TestFactory3() : value("(empty3)") { print_default_created(this); } - TestFactory3(int v) : value(std::to_string(v)) { print_created(this, value); } -public: - TestFactory3(std::string v) : value(std::move(v)) { print_created(this, value); } - TestFactory3(TestFactory3 &&m) { value = std::move(m.value); print_move_created(this); } - TestFactory3 &operator=(TestFactory3 &&m) { value = std::move(m.value); print_move_assigned(this); return *this; } - std::string value; - virtual ~TestFactory3() { print_destroyed(this); } -}; -// Inheritance test -class TestFactory4 : public TestFactory3 { -public: - TestFactory4() : TestFactory3() { print_default_created(this); } - TestFactory4(int v) : TestFactory3(v) { print_created(this, v); } - virtual ~TestFactory4() { print_destroyed(this); } -}; -// Another class for an invalid downcast test -class TestFactory5 : public TestFactory3 { -public: - TestFactory5(int i) : TestFactory3(i) { print_created(this, i); } - virtual ~TestFactory5() { print_destroyed(this); } -}; - -class TestFactory6 { -protected: - int value; - bool alias = false; -public: - TestFactory6(int i) : value{i} { print_created(this, i); } - TestFactory6(TestFactory6 &&f) { print_move_created(this); value = f.value; alias = f.alias; } - TestFactory6(const TestFactory6 &f) { print_copy_created(this); value = f.value; alias = f.alias; } - virtual ~TestFactory6() { print_destroyed(this); } - virtual int get() { return value; } - bool has_alias() { return alias; } -}; -class PyTF6 : public TestFactory6 { -public: - // Special constructor that allows the factory to construct a PyTF6 from a TestFactory6 only - // when an alias is needed: - PyTF6(TestFactory6 &&base) : TestFactory6(std::move(base)) { alias = true; print_created(this, "move", value); } - PyTF6(int i) : TestFactory6(i) { alias = true; print_created(this, i); } - PyTF6(PyTF6 &&f) : TestFactory6(std::move(f)) { print_move_created(this); } - PyTF6(const PyTF6 &f) : TestFactory6(f) { print_copy_created(this); } - PyTF6(std::string s) : TestFactory6((int) 
s.size()) { alias = true; print_created(this, s); } - virtual ~PyTF6() { print_destroyed(this); } - int get() override { PYBIND11_OVERLOAD(int, TestFactory6, get, /*no args*/); } -}; - -class TestFactory7 { -protected: - int value; - bool alias = false; -public: - TestFactory7(int i) : value{i} { print_created(this, i); } - TestFactory7(TestFactory7 &&f) { print_move_created(this); value = f.value; alias = f.alias; } - TestFactory7(const TestFactory7 &f) { print_copy_created(this); value = f.value; alias = f.alias; } - virtual ~TestFactory7() { print_destroyed(this); } - virtual int get() { return value; } - bool has_alias() { return alias; } -}; -class PyTF7 : public TestFactory7 { -public: - PyTF7(int i) : TestFactory7(i) { alias = true; print_created(this, i); } - PyTF7(PyTF7 &&f) : TestFactory7(std::move(f)) { print_move_created(this); } - PyTF7(const PyTF7 &f) : TestFactory7(f) { print_copy_created(this); } - virtual ~PyTF7() { print_destroyed(this); } - int get() override { PYBIND11_OVERLOAD(int, TestFactory7, get, /*no args*/); } -}; - - -class TestFactoryHelper { -public: - // Non-movable, non-copyable type: - // Return via pointer: - static TestFactory1 *construct1() { return new TestFactory1(); } - // Holder: - static std::unique_ptr construct1(int a) { return std::unique_ptr(new TestFactory1(a)); } - // pointer again - static TestFactory1 *construct1_string(std::string a) { return new TestFactory1(a); } - - // Moveable type: - // pointer: - static TestFactory2 *construct2() { return new TestFactory2(); } - // holder: - static std::unique_ptr construct2(int a) { return std::unique_ptr(new TestFactory2(a)); } - // by value moving: - static TestFactory2 construct2(std::string a) { return TestFactory2(a); } - - // shared_ptr holder type: - // pointer: - static TestFactory3 *construct3() { return new TestFactory3(); } - // holder: - static std::shared_ptr construct3(int a) { return std::shared_ptr(new TestFactory3(a)); } -}; - -TEST_SUBMODULE(factory_constructors, m) { - - // Define various trivial types to allow simpler overload resolution: - py::module m_tag = m.def_submodule("tag"); -#define MAKE_TAG_TYPE(Name) \ - struct Name##_tag {}; \ - py::class_(m_tag, #Name "_tag").def(py::init<>()); \ - m_tag.attr(#Name) = py::cast(Name##_tag{}) - MAKE_TAG_TYPE(pointer); - MAKE_TAG_TYPE(unique_ptr); - MAKE_TAG_TYPE(move); - MAKE_TAG_TYPE(shared_ptr); - MAKE_TAG_TYPE(derived); - MAKE_TAG_TYPE(TF4); - MAKE_TAG_TYPE(TF5); - MAKE_TAG_TYPE(null_ptr); - MAKE_TAG_TYPE(null_unique_ptr); - MAKE_TAG_TYPE(null_shared_ptr); - MAKE_TAG_TYPE(base); - MAKE_TAG_TYPE(invalid_base); - MAKE_TAG_TYPE(alias); - MAKE_TAG_TYPE(unaliasable); - MAKE_TAG_TYPE(mixed); - - // test_init_factory_basic, test_bad_type - py::class_(m, "TestFactory1") - .def(py::init([](unique_ptr_tag, int v) { return TestFactoryHelper::construct1(v); })) - .def(py::init(&TestFactoryHelper::construct1_string)) // raw function pointer - .def(py::init([](pointer_tag) { return TestFactoryHelper::construct1(); })) - .def(py::init([](py::handle, int v, py::handle) { return TestFactoryHelper::construct1(v); })) - .def_readwrite("value", &TestFactory1::value) - ; - py::class_(m, "TestFactory2") - .def(py::init([](pointer_tag, int v) { return TestFactoryHelper::construct2(v); })) - .def(py::init([](unique_ptr_tag, std::string v) { return TestFactoryHelper::construct2(v); })) - .def(py::init([](move_tag) { return TestFactoryHelper::construct2(); })) - .def_readwrite("value", &TestFactory2::value) - ; - - // Stateful & reused: - int c = 1; - auto 
c4a = [c](pointer_tag, TF4_tag, int a) { (void) c; return new TestFactory4(a);}; - - // test_init_factory_basic, test_init_factory_casting - py::class_>(m, "TestFactory3") - .def(py::init([](pointer_tag, int v) { return TestFactoryHelper::construct3(v); })) - .def(py::init([](shared_ptr_tag) { return TestFactoryHelper::construct3(); })) - .def("__init__", [](TestFactory3 &self, std::string v) { new (&self) TestFactory3(v); }) // placement-new ctor - - // factories returning a derived type: - .def(py::init(c4a)) // derived ptr - .def(py::init([](pointer_tag, TF5_tag, int a) { return new TestFactory5(a); })) - // derived shared ptr: - .def(py::init([](shared_ptr_tag, TF4_tag, int a) { return std::make_shared(a); })) - .def(py::init([](shared_ptr_tag, TF5_tag, int a) { return std::make_shared(a); })) - - // Returns nullptr: - .def(py::init([](null_ptr_tag) { return (TestFactory3 *) nullptr; })) - .def(py::init([](null_unique_ptr_tag) { return std::unique_ptr(); })) - .def(py::init([](null_shared_ptr_tag) { return std::shared_ptr(); })) - - .def_readwrite("value", &TestFactory3::value) - ; - - // test_init_factory_casting - py::class_>(m, "TestFactory4") - .def(py::init(c4a)) // pointer - ; - - // Doesn't need to be registered, but registering makes getting ConstructorStats easier: - py::class_>(m, "TestFactory5"); - - // test_init_factory_alias - // Alias testing - py::class_(m, "TestFactory6") - .def(py::init([](base_tag, int i) { return TestFactory6(i); })) - .def(py::init([](alias_tag, int i) { return PyTF6(i); })) - .def(py::init([](alias_tag, std::string s) { return PyTF6(s); })) - .def(py::init([](alias_tag, pointer_tag, int i) { return new PyTF6(i); })) - .def(py::init([](base_tag, pointer_tag, int i) { return new TestFactory6(i); })) - .def(py::init([](base_tag, alias_tag, pointer_tag, int i) { return (TestFactory6 *) new PyTF6(i); })) - - .def("get", &TestFactory6::get) - .def("has_alias", &TestFactory6::has_alias) - - .def_static("get_cstats", &ConstructorStats::get, py::return_value_policy::reference) - .def_static("get_alias_cstats", &ConstructorStats::get, py::return_value_policy::reference) - ; - - // test_init_factory_dual - // Separate alias constructor testing - py::class_>(m, "TestFactory7") - .def(py::init( - [](int i) { return TestFactory7(i); }, - [](int i) { return PyTF7(i); })) - .def(py::init( - [](pointer_tag, int i) { return new TestFactory7(i); }, - [](pointer_tag, int i) { return new PyTF7(i); })) - .def(py::init( - [](mixed_tag, int i) { return new TestFactory7(i); }, - [](mixed_tag, int i) { return PyTF7(i); })) - .def(py::init( - [](mixed_tag, std::string s) { return TestFactory7((int) s.size()); }, - [](mixed_tag, std::string s) { return new PyTF7((int) s.size()); })) - .def(py::init( - [](base_tag, pointer_tag, int i) { return new TestFactory7(i); }, - [](base_tag, pointer_tag, int i) { return (TestFactory7 *) new PyTF7(i); })) - .def(py::init( - [](alias_tag, pointer_tag, int i) { return new PyTF7(i); }, - [](alias_tag, pointer_tag, int i) { return new PyTF7(10*i); })) - .def(py::init( - [](shared_ptr_tag, base_tag, int i) { return std::make_shared(i); }, - [](shared_ptr_tag, base_tag, int i) { auto *p = new PyTF7(i); return std::shared_ptr(p); })) - .def(py::init( - [](shared_ptr_tag, invalid_base_tag, int i) { return std::make_shared(i); }, - [](shared_ptr_tag, invalid_base_tag, int i) { return std::make_shared(i); })) // <-- invalid alias factory - - .def("get", &TestFactory7::get) - .def("has_alias", &TestFactory7::has_alias) - - .def_static("get_cstats", 
&ConstructorStats::get, py::return_value_policy::reference) - .def_static("get_alias_cstats", &ConstructorStats::get, py::return_value_policy::reference) - ; - - // test_placement_new_alternative - // Class with a custom new operator but *without* a placement new operator (issue #948) - class NoPlacementNew { - public: - NoPlacementNew(int i) : i(i) { } - static void *operator new(std::size_t s) { - auto *p = ::operator new(s); - py::print("operator new called, returning", reinterpret_cast(p)); - return p; - } - static void operator delete(void *p) { - py::print("operator delete called on", reinterpret_cast(p)); - ::operator delete(p); - } - int i; - }; - // As of 2.2, `py::init` no longer requires placement new - py::class_(m, "NoPlacementNew") - .def(py::init()) - .def(py::init([]() { return new NoPlacementNew(100); })) - .def_readwrite("i", &NoPlacementNew::i) - ; - - - // test_reallocations - // Class that has verbose operator_new/operator_delete calls - struct NoisyAlloc { - NoisyAlloc(const NoisyAlloc &) = default; - NoisyAlloc(int i) { py::print(py::str("NoisyAlloc(int {})").format(i)); } - NoisyAlloc(double d) { py::print(py::str("NoisyAlloc(double {})").format(d)); } - ~NoisyAlloc() { py::print("~NoisyAlloc()"); } - - static void *operator new(size_t s) { py::print("noisy new"); return ::operator new(s); } - static void *operator new(size_t, void *p) { py::print("noisy placement new"); return p; } - static void operator delete(void *p, size_t) { py::print("noisy delete"); ::operator delete(p); } - static void operator delete(void *, void *) { py::print("noisy placement delete"); } -#if defined(_MSC_VER) && _MSC_VER < 1910 - // MSVC 2015 bug: the above "noisy delete" isn't invoked (fixed in MSVC 2017) - static void operator delete(void *p) { py::print("noisy delete"); ::operator delete(p); } -#endif - }; - py::class_(m, "NoisyAlloc") - // Since these overloads have the same number of arguments, the dispatcher will try each of - // them until the arguments convert. Thus we can get a pre-allocation here when passing a - // single non-integer: - .def("__init__", [](NoisyAlloc *a, int i) { new (a) NoisyAlloc(i); }) // Regular constructor, runs first, requires preallocation - .def(py::init([](double d) { return new NoisyAlloc(d); })) - - // The two-argument version: first the factory pointer overload. 
- .def(py::init([](int i, int) { return new NoisyAlloc(i); })) - // Return-by-value: - .def(py::init([](double d, int) { return NoisyAlloc(d); })) - // Old-style placement new init; requires preallocation - .def("__init__", [](NoisyAlloc &a, double d, double) { new (&a) NoisyAlloc(d); }) - // Requires deallocation of previous overload preallocated value: - .def(py::init([](int i, double) { return new NoisyAlloc(i); })) - // Regular again: requires yet another preallocation - .def("__init__", [](NoisyAlloc &a, int i, std::string) { new (&a) NoisyAlloc(i); }) - ; - - - - - // static_assert testing (the following def's should all fail with appropriate compilation errors): -#if 0 - struct BadF1Base {}; - struct BadF1 : BadF1Base {}; - struct PyBadF1 : BadF1 {}; - py::class_> bf1(m, "BadF1"); - // wrapped factory function must return a compatible pointer, holder, or value - bf1.def(py::init([]() { return 3; })); - // incompatible factory function pointer return type - bf1.def(py::init([]() { static int three = 3; return &three; })); - // incompatible factory function std::shared_ptr return type: cannot convert shared_ptr to holder - // (non-polymorphic base) - bf1.def(py::init([]() { return std::shared_ptr(new BadF1()); })); -#endif -} diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/reverse.h b/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/reverse.h deleted file mode 100644 index c6ae90664ad9538e73febfde86c334011de417c8..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cpp/detail/reverse.h +++ /dev/null @@ -1,22 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. - */ - -#pragma once - -#include - -// this system has no special version of this algorithm - diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/transform_scan.h b/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/transform_scan.h deleted file mode 100644 index fbf70b0a748803f61fac623482c349feaf0be86c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/cuda/detail/transform_scan.h +++ /dev/null @@ -1,111 +0,0 @@ -/****************************************************************************** - * Copyright (c) 2016, NVIDIA CORPORATION. All rights reserved. - * - * Redistribution and use in source and binary forms, with or without - * modification, are permitted provided that the following conditions are met: - * * Redistributions of source code must retain the above copyright - * notice, this list of conditions and the following disclaimer. - * * Redistributions in binary form must reproduce the above copyright - * notice, this list of conditions and the following disclaimer in the - * documentation and/or other materials provided with the distribution. - * * Neither the name of the NVIDIA CORPORATION nor the - * names of its contributors may be used to endorse or promote products - * derived from this software without specific prior written permission. 
- * - * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" - * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE - * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE - * ARE DISCLAIMED. IN NO EVENT SHALL NVIDIA CORPORATION BE LIABLE FOR ANY - * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES - * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; - * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND - * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT - * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS - * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. - * - ******************************************************************************/ -#pragma once - - -#if THRUST_DEVICE_COMPILER == THRUST_DEVICE_COMPILER_NVCC -#include -#include -#include - -namespace thrust -{ - -namespace cuda_cub { - -template -OutputIt __host__ __device__ -transform_inclusive_scan(execution_policy &policy, - InputIt first, - InputIt last, - OutputIt result, - TransformOp transform_op, - ScanOp scan_op) -{ - // Use the input iterator's value type per https://wg21.link/P0571 - using input_type = typename thrust::iterator_value::type; -#if THRUST_CPP_DIALECT < 2017 - using result_type = typename std::result_of::type; -#else - using result_type = std::invoke_result_t; -#endif - - typedef typename iterator_traits::difference_type size_type; - size_type num_items = static_cast(thrust::distance(first, last)); - typedef transform_input_iterator_t - transformed_iterator_t; - - return cuda_cub::inclusive_scan_n(policy, - transformed_iterator_t(first, transform_op), - num_items, - result, - scan_op); -} - -template -OutputIt __host__ __device__ -transform_exclusive_scan(execution_policy &policy, - InputIt first, - InputIt last, - OutputIt result, - TransformOp transform_op, - InitialValueType init, - ScanOp scan_op) -{ - // Use the initial value type per https://wg21.link/P0571 - using result_type = InitialValueType; - - typedef typename iterator_traits::difference_type size_type; - size_type num_items = static_cast(thrust::distance(first, last)); - typedef transform_input_iterator_t - transformed_iterator_t; - - return cuda_cub::exclusive_scan_n(policy, - transformed_iterator_t(first, transform_op), - num_items, - result, - init, - scan_op); -} - -} // namespace cuda_cub - -} // end namespace thrust -#endif diff --git a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/logical.h b/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/logical.h deleted file mode 100644 index 702dbad852d9e074147368a87b28a082fcfa8242..0000000000000000000000000000000000000000 --- a/spaces/CVPR/LIVE/thrust/thrust/system/detail/generic/logical.h +++ /dev/null @@ -1,63 +0,0 @@ -/* - * Copyright 2008-2013 NVIDIA Corporation - * - * Licensed under the Apache License, Version 2.0 (the "License"); - * you may not use this file except in compliance with the License. - * You may obtain a copy of the License at - * - * http://www.apache.org/licenses/LICENSE-2.0 - * - * Unless required by applicable law or agreed to in writing, software - * distributed under the License is distributed on an "AS IS" BASIS, - * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. - * See the License for the specific language governing permissions and - * limitations under the License. 
- */ - -#pragma once - -#include -#include -#include -#include -#include - -namespace thrust -{ -namespace system -{ -namespace detail -{ -namespace generic -{ - - -template -__host__ __device__ -bool all_of(thrust::execution_policy &exec, InputIterator first, InputIterator last, Predicate pred) -{ - return thrust::find_if(exec, first, last, thrust::detail::not1(pred)) == last; -} - - -template -__host__ __device__ -bool any_of(thrust::execution_policy &exec, InputIterator first, InputIterator last, Predicate pred) -{ - return thrust::find_if(exec, first, last, pred) != last; -} - - -template -__host__ __device__ -bool none_of(thrust::execution_policy &exec, InputIterator first, InputIterator last, Predicate pred) -{ - return !thrust::any_of(exec, first, last, pred); -} - - -} // end generic -} // end detail -} // end system -} // end thrust - diff --git a/spaces/CVPR/transfiner/app.py b/spaces/CVPR/transfiner/app.py deleted file mode 100644 index dd714808424ca142bce8cbff56a58182887ecc5c..0000000000000000000000000000000000000000 --- a/spaces/CVPR/transfiner/app.py +++ /dev/null @@ -1,84 +0,0 @@ -#try: -# import detectron2 -#except: -import os -os.system('pip install git+https://github.com/SysCV/transfiner.git') - -from matplotlib.pyplot import axis -import gradio as gr -import requests -import numpy as np -from torch import nn -import requests - -import torch - -from detectron2 import model_zoo -from detectron2.engine import DefaultPredictor -from detectron2.config import get_cfg -from detectron2.utils.visualizer import Visualizer -from detectron2.data import MetadataCatalog - - -model_name='./configs/transfiner/mask_rcnn_R_101_FPN_3x_deform.yaml' - - -cfg = get_cfg() -# add project-specific config (e.g., TensorMask) here if you're not running a model in detectron2's core library -cfg.merge_from_file(model_name) -cfg.MODEL.ROI_HEADS.SCORE_THRESH_TEST = 0.5 # set threshold for this model -cfg.VIS_PERIOD = 100 -# Find a model from detectron2's model zoo. You can use the https://dl.fbaipublicfiles... url as w ell -#cfg.MODEL.WEIGHTS = './output_3x_transfiner_r50.pth' -cfg.MODEL.WEIGHTS = './output_3x_transfiner_r101_deform.pth' - -if not torch.cuda.is_available(): - cfg.MODEL.DEVICE='cpu' - -predictor = DefaultPredictor(cfg) - - -def inference(image): - width, height = image.size - if width > 1300: - ratio = float(height) / float(width) - width = 1300 - height = int(ratio * width) - image = image.resize((width, height)) - - img = np.asarray(image) - - #img = np.array(image) - outputs = predictor(img) - - v = Visualizer(img, MetadataCatalog.get(cfg.DATASETS.TRAIN[0])) - out = v.draw_instance_predictions(outputs["instances"].to("cpu")) - - return out.get_image() - - - -title = "Mask Transfiner [CVPR, 2022]" -description = "Demo for Mask Transfiner for High-Quality Instance Segmentation, CVPR 2022 based on R101-FPN. To use it, simply upload your image, or click one of the examples to load them. Note that it runs in CPU environment provided by Hugging Face so the processing speed may be slow." -article = "

Mask Transfiner for High-Quality Instance Segmentation, CVPR 2022 | Mask Transfiner Github Code

" - -gr.Interface( - inference, - [gr.inputs.Image(type="pil", label="Input")], - gr.outputs.Image(type="numpy", label="Output"), - title=title, - description=description, - article=article, - examples=[ - ["demo/sample_imgs/000000131444.jpg"], - ["demo/sample_imgs/000000157365.jpg"], - ["demo/sample_imgs/000000176037.jpg"], - ["demo/sample_imgs/000000018737.jpg"], - ["demo/sample_imgs/000000224200.jpg"], - ["demo/sample_imgs/000000558073.jpg"], - ["demo/sample_imgs/000000404922.jpg"], - ["demo/sample_imgs/000000252776.jpg"], - ["demo/sample_imgs/000000482477.jpg"], - ["demo/sample_imgs/000000344909.jpg"] - ]).launch() - diff --git a/spaces/ChenyangSi/FreeU/style.css b/spaces/ChenyangSi/FreeU/style.css deleted file mode 100644 index af4e23927a03e13fd16ebc7b4eb6eb434c42f65b..0000000000000000000000000000000000000000 --- a/spaces/ChenyangSi/FreeU/style.css +++ /dev/null @@ -1,3 +0,0 @@ -h1 { - text-align: center; -} \ No newline at end of file diff --git a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/apps/help.js b/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/apps/help.js deleted file mode 100644 index 12dca07cfaa7c976b073ff5b4c5b51601953f585..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/Yunzai/Yunzai/plugins/ws-plugin/apps/help.js +++ /dev/null @@ -1,112 +0,0 @@ -import plugin from '../../../lib/plugins/plugin.js' -import { Render, Version, Config } from '../components/index.js' -import { helpCfg, helpList } from '../config/system/help_system.js' -import { style } from '../resources/help/imgs/config.js' -import _ from 'lodash' - -export class setting extends plugin { - constructor() { - super({ - name: '[ws-plugin] 帮助', - dsc: '[ws-plugin] 帮助', - event: 'message', - priority: 1, - rule: [ - { - reg: '^#ws版本$', - fnc: 'version' - }, - { - reg: '^#ws帮助$', - fnc: 'help', - permission: 'master' - } - ] - }) - - } - async version(e) { - return await Render.render('help/version-info', { - currentVersion: Version.version, - changelogs: Version.changelogs, - elem: 'cryo' - }, { e, scale: 1.2 }) - } - - async help(e) { - let helpGroup = [] - _.forEach(helpList, (group) => { - _.forEach(group.list, (help) => { - let icon = help.icon * 1 - if (!icon) { - help.css = 'display:none' - } else { - let x = (icon - 1) % 10 - let y = (icon - x - 1) / 10 - help.css = `background-position:-${x * 50}px -${y * 50}px` - } - }) - - helpGroup.push(group) - }) - - let themeData = await getThemeData(helpCfg, helpCfg) - return await Render.render('help/index', { - helpCfg, - helpGroup, - ...themeData, - element: 'default' - }, { e, scale: 1.6 }) - } - -} - -async function getThemeCfg() { - let resPath = '{{_res_path}}/help/imgs/' - return { - main: `${resPath}/main.png`, - bg: `${resPath}/bg.jpg`, - style: style - } -} - -async function getThemeData(diyStyle, sysStyle) { - let helpConfig = _.extend({}, sysStyle, diyStyle) - let colCount = Math.min(5, Math.max(parseInt(helpConfig?.colCount) || 3, 2)) - let colWidth = Math.min(500, Math.max(100, parseInt(helpConfig?.colWidth) || 265)) - let width = Math.min(2500, Math.max(800, colCount * colWidth + 30)) - let theme = await getThemeCfg() - let themeStyle = theme.style || {} - let ret = [` - body{background-image:url(${theme.bg});width:${width}px;} - .container{background-image:url(${theme.main});width:${width}px;} - .help-table .td,.help-table .th{width:${100 / colCount}%} - `] - let css = function (sel, css, key, def, fn) { - let val = getDef(themeStyle[key], diyStyle[key], sysStyle[key], def) - if (fn) { - val = fn(val) - } - 
ret.push(`${sel}{${css}:${val}}`) - } - css('.help-title,.help-group', 'color', 'fontColor', '#ceb78b') - css('.help-title,.help-group', 'text-shadow', 'fontShadow', 'none') - css('.help-desc', 'color', 'descColor', '#eee') - css('.cont-box', 'background', 'contBgColor', 'rgba(43, 52, 61, 0.8)') - css('.cont-box', 'backdrop-filter', 'contBgBlur', 3, (n) => diyStyle.bgBlur === false ? 'none' : `blur(${n}px)`) - css('.help-group', 'background', 'headerBgColor', 'rgba(34, 41, 51, .4)') - css('.help-table .tr:nth-child(odd)', 'background', 'rowBgColor1', 'rgba(34, 41, 51, .2)') - css('.help-table .tr:nth-child(even)', 'background', 'rowBgColor2', 'rgba(34, 41, 51, .4)') - return { - style: ``, - colCount - } -} - -function getDef() { - for (let idx in arguments) { - if (!_.isUndefined(arguments[idx])) { - return arguments[idx] - } - } -} \ No newline at end of file diff --git a/spaces/CikeyQI/meme-api/meme_generator/version.py b/spaces/CikeyQI/meme-api/meme_generator/version.py deleted file mode 100644 index 6561790f155f6bfd436e5b19b2f0a1e7f20c0259..0000000000000000000000000000000000000000 --- a/spaces/CikeyQI/meme-api/meme_generator/version.py +++ /dev/null @@ -1 +0,0 @@ -__version__ = "0.0.15" diff --git a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/feature.py b/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/feature.py deleted file mode 100644 index 6dd24e0a24459b16e6032bf33d013a1654fc9f41..0000000000000000000000000000000000000000 --- a/spaces/Cyril666/ContourNet-ABI/maskrcnn_benchmark/data/datasets/evaluation/word/util/feature.py +++ /dev/null @@ -1,14 +0,0 @@ -# encoding utf-8 -def hog(img, bins =9, pixels_per_cell=(8, 8), cells_per_block=(2, 2), transform_sqrt=False, feature_vector=True): - """ - Extract hog feature from image. - See detail at https://github.com/scikit-image/scikit-image/blob/master/skimage/feature/_hog.py - """ - from skimage.feature import hog - return hog(img, - orientations = bins, - pixels_per_cell = pixels_per_cell, - cells_per_block = cells_per_block, - visualise = False, - transform_sqrt=False, - feature_vector=True) diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/__init__.py b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/__init__.py deleted file mode 100644 index 006d5f5598fbeea4278c60fd5c4be44de19d5e00..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/contourpy/__init__.py +++ /dev/null @@ -1,253 +0,0 @@ -from __future__ import annotations - -from typing import TYPE_CHECKING - -import numpy as np - -from contourpy._contourpy import ( - ContourGenerator, FillType, LineType, Mpl2005ContourGenerator, Mpl2014ContourGenerator, - SerialContourGenerator, ThreadedContourGenerator, ZInterp, max_threads, -) -from contourpy._version import __version__ -from contourpy.chunk import calc_chunk_sizes -from contourpy.enum_util import as_fill_type, as_line_type, as_z_interp - -if TYPE_CHECKING: - from typing import Any - - from numpy.typing import ArrayLike - - from ._contourpy import CoordinateArray, MaskArray - -__all__ = [ - "__version__", - "contour_generator", - "max_threads", - "FillType", - "LineType", - "ContourGenerator", - "Mpl2005ContourGenerator", - "Mpl2014ContourGenerator", - "SerialContourGenerator", - "ThreadedContourGenerator", - "ZInterp", -] - - -# Simple mapping of algorithm name to class name. 
-_class_lookup: dict[str, type[ContourGenerator]] = dict( - mpl2005=Mpl2005ContourGenerator, - mpl2014=Mpl2014ContourGenerator, - serial=SerialContourGenerator, - threaded=ThreadedContourGenerator, -) - - -def _remove_z_mask( - z: ArrayLike | np.ma.MaskedArray[Any, Any] | None, -) -> tuple[CoordinateArray, MaskArray | None]: - # Preserve mask if present. - z_array = np.ma.asarray(z, dtype=np.float64) # type: ignore[no-untyped-call] - z_masked = np.ma.masked_invalid(z_array, copy=False) # type: ignore[no-untyped-call] - - if np.ma.is_masked(z_masked): # type: ignore[no-untyped-call] - mask = np.ma.getmask(z_masked) # type: ignore[no-untyped-call] - else: - mask = None - - return np.ma.getdata(z_masked), mask # type: ignore[no-untyped-call] - - -def contour_generator( - x: ArrayLike | None = None, - y: ArrayLike | None = None, - z: ArrayLike | np.ma.MaskedArray[Any, Any] | None = None, - *, - name: str = "serial", - corner_mask: bool | None = None, - line_type: LineType | str | None = None, - fill_type: FillType | str | None = None, - chunk_size: int | tuple[int, int] | None = None, - chunk_count: int | tuple[int, int] | None = None, - total_chunk_count: int | None = None, - quad_as_tri: bool = False, - z_interp: ZInterp | str | None = ZInterp.Linear, - thread_count: int = 0, -) -> ContourGenerator: - """Create and return a contour generator object. - - The class and properties of the contour generator are determined by the function arguments, - with sensible defaults. - - Args: - x (array-like of shape (ny, nx) or (nx,), optional): The x-coordinates of the ``z`` values. - May be 2D with the same shape as ``z.shape``, or 1D with length ``nx = z.shape[1]``. - If not specified are assumed to be ``np.arange(nx)``. Must be ordered monotonically. - y (array-like of shape (ny, nx) or (ny,), optional): The y-coordinates of the ``z`` values. - May be 2D with the same shape as ``z.shape``, or 1D with length ``ny = z.shape[0]``. - If not specified are assumed to be ``np.arange(ny)``. Must be ordered monotonically. - z (array-like of shape (ny, nx), may be a masked array): The 2D gridded values to calculate - the contours of. May be a masked array, and any invalid values (``np.inf`` or - ``np.nan``) will also be masked out. - name (str): Algorithm name, one of ``"serial"``, ``"threaded"``, ``"mpl2005"`` or - ``"mpl2014"``, default ``"serial"``. - corner_mask (bool, optional): Enable/disable corner masking, which only has an effect if - ``z`` is a masked array. If ``False``, any quad touching a masked point is masked out. - If ``True``, only the triangular corners of quads nearest these points are always masked - out, other triangular corners comprising three unmasked points are contoured as usual. - If not specified, uses the default provided by the algorithm ``name``. - line_type (LineType, optional): The format of contour line data returned from calls to - :meth:`~contourpy.ContourGenerator.lines`. If not specified, uses the default provided - by the algorithm ``name``. - fill_type (FillType, optional): The format of filled contour data returned from calls to - :meth:`~contourpy.ContourGenerator.filled`. If not specified, uses the default provided - by the algorithm ``name``. - chunk_size (int or tuple(int, int), optional): Chunk size in (y, x) directions, or the same - size in both directions if only one value is specified. - chunk_count (int or tuple(int, int), optional): Chunk count in (y, x) directions, or the - same count in both directions if only one value is specified. 
- total_chunk_count (int, optional): Total number of chunks. - quad_as_tri (bool): Enable/disable treating quads as 4 triangles, default ``False``. - If ``False``, a contour line within a quad is a straight line between points on two of - its edges. If ``True``, each full quad is divided into 4 triangles using a virtual point - at the centre (mean x, y of the corner points) and a contour line is piecewise linear - within those triangles. Corner-masked triangles are not affected by this setting, only - full unmasked quads. - z_interp (ZInterp): How to interpolate ``z`` values when determining where contour lines - intersect the edges of quads and the ``z`` values of the central points of quads, - default ``ZInterp.Linear``. - thread_count (int): Number of threads to use for contour calculation, default 0. Threads can - only be used with an algorithm ``name`` that supports threads (currently only - ``name="threaded"``) and there must be at least the same number of chunks as threads. - If ``thread_count=0`` and ``name="threaded"`` then it uses the maximum number of threads - as determined by the C++11 call ``std::thread::hardware_concurrency()``. If ``name`` is - something other than ``"threaded"`` then the ``thread_count`` will be set to ``1``. - - Return: - :class:`~contourpy._contourpy.ContourGenerator`. - - Note: - A maximum of one of ``chunk_size``, ``chunk_count`` and ``total_chunk_count`` may be - specified. - - Warning: - The ``name="mpl2005"`` algorithm does not implement chunking for contour lines. - """ - x = np.asarray(x, dtype=np.float64) - y = np.asarray(y, dtype=np.float64) - z, mask = _remove_z_mask(z) - - # Check arguments: z. - if z.ndim != 2: - raise TypeError(f"Input z must be 2D, not {z.ndim}D") - - if z.shape[0] < 2 or z.shape[1] < 2: - raise TypeError(f"Input z must be at least a (2, 2) shaped array, but has shape {z.shape}") - - ny, nx = z.shape - - # Check arguments: x and y. - if x.ndim != y.ndim: - raise TypeError(f"Number of dimensions of x ({x.ndim}) and y ({y.ndim}) do not match") - - if x.ndim == 0: - x = np.arange(nx, dtype=np.float64) - y = np.arange(ny, dtype=np.float64) - x, y = np.meshgrid(x, y) - elif x.ndim == 1: - if len(x) != nx: - raise TypeError(f"Length of x ({len(x)}) must match number of columns in z ({nx})") - if len(y) != ny: - raise TypeError(f"Length of y ({len(y)}) must match number of rows in z ({ny})") - x, y = np.meshgrid(x, y) - elif x.ndim == 2: - if x.shape != z.shape: - raise TypeError(f"Shapes of x {x.shape} and z {z.shape} do not match") - if y.shape != z.shape: - raise TypeError(f"Shapes of y {y.shape} and z {z.shape} do not match") - else: - raise TypeError(f"Inputs x and y must be None, 1D or 2D, not {x.ndim}D") - - # Check mask shape just in case. - if mask is not None and mask.shape != z.shape: - raise ValueError("If mask is set it must be a 2D array with the same shape as z") - - # Check arguments: name. - if name not in _class_lookup: - raise ValueError(f"Unrecognised contour generator name: {name}") - - # Check arguments: chunk_size, chunk_count and total_chunk_count. - y_chunk_size, x_chunk_size = calc_chunk_sizes( - chunk_size, chunk_count, total_chunk_count, ny, nx) - - cls = _class_lookup[name] - - # Check arguments: corner_mask. - if corner_mask is None: - # Set it to default, which is True if the algorithm supports it. 
- corner_mask = cls.supports_corner_mask() - elif corner_mask and not cls.supports_corner_mask(): - raise ValueError(f"{name} contour generator does not support corner_mask=True") - - # Check arguments: line_type. - if line_type is None: - line_type = cls.default_line_type - else: - line_type = as_line_type(line_type) - - if not cls.supports_line_type(line_type): - raise ValueError(f"{name} contour generator does not support line_type {line_type}") - - # Check arguments: fill_type. - if fill_type is None: - fill_type = cls.default_fill_type - else: - fill_type = as_fill_type(fill_type) - - if not cls.supports_fill_type(fill_type): - raise ValueError(f"{name} contour generator does not support fill_type {fill_type}") - - # Check arguments: quad_as_tri. - if quad_as_tri and not cls.supports_quad_as_tri(): - raise ValueError(f"{name} contour generator does not support quad_as_tri=True") - - # Check arguments: z_interp. - if z_interp is None: - z_interp = ZInterp.Linear - else: - z_interp = as_z_interp(z_interp) - - if z_interp != ZInterp.Linear and not cls.supports_z_interp(): - raise ValueError(f"{name} contour generator does not support z_interp {z_interp}") - - # Check arguments: thread_count. - if thread_count not in (0, 1) and not cls.supports_threads(): - raise ValueError(f"{name} contour generator does not support thread_count {thread_count}") - - # Prepare args and kwargs for contour generator constructor. - args = [x, y, z, mask] - kwargs: dict[str, int | bool | LineType | FillType | ZInterp] = { - "x_chunk_size": x_chunk_size, - "y_chunk_size": y_chunk_size, - } - - if name not in ("mpl2005", "mpl2014"): - kwargs["line_type"] = line_type - kwargs["fill_type"] = fill_type - - if cls.supports_corner_mask(): - kwargs["corner_mask"] = corner_mask - - if cls.supports_quad_as_tri(): - kwargs["quad_as_tri"] = quad_as_tri - - if cls.supports_z_interp(): - kwargs["z_interp"] = z_interp - - if cls.supports_threads(): - kwargs["thread_count"] = thread_count - - # Create contour generator. 
- cont_gen = cls(*args, **kwargs) - - return cont_gen diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-7f39cecc.js b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-7f39cecc.js deleted file mode 100644 index 44bdeaca695571ecda2b48901ed0151ef7d4fbdd..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/templates/cdn/assets/index-7f39cecc.js +++ /dev/null @@ -1,2 +0,0 @@ -import{E as u,L as v}from"./index-f8ff95a1.js";import{s as k,t,h as S,L as w,i as z,w as x,f as R,a as U,b as _,I as T,x as V}from"./index-3ba00a4a.js";import"./index-1d65707a.js";import"./Blocks-c9e1499d.js";import"./Button-f155035a.js";import"./BlockLabel-66866176.js";import"./Empty-eec13822.js";import"./Copy-9f1657c4.js";import"./Download-daff1959.js";const Y=94,g=1,C=95,Z=96,f=2,$=[9,10,11,12,13,32,133,160,5760,8192,8193,8194,8195,8196,8197,8198,8199,8200,8201,8202,8232,8233,8239,8287,12288],G=58,N=40,X=95,q=91,c=45,E=46,j=35,D=37;function p(e){return e>=65&&e<=90||e>=97&&e<=122||e>=161}function I(e){return e>=48&&e<=57}const B=new u((e,o)=>{for(let r=!1,a=0,O=0;;O++){let{next:l}=e;if(p(l)||l==c||l==X||r&&I(l))!r&&(l!=c||O>0)&&(r=!0),a===O&&l==c&&a++,e.advance();else{r&&e.acceptToken(l==N?C:a==2&&o.canShift(f)?f:Z);break}}}),A=new u(e=>{if($.includes(e.peek(-1))){let{next:o}=e;(p(o)||o==X||o==j||o==E||o==q||o==G||o==c)&&e.acceptToken(Y)}}),F=new u(e=>{if(!$.includes(e.peek(-1))){let{next:o}=e;if(o==D&&(e.advance(),e.acceptToken(g)),p(o)){do e.advance();while(p(e.next));e.acceptToken(g)}}}),L=k({"AtKeyword import charset namespace keyframes media supports":t.definitionKeyword,"from to selector":t.keyword,NamespaceName:t.namespace,KeyframeName:t.labelName,TagName:t.tagName,ClassName:t.className,PseudoClassName:t.constant(t.className),IdName:t.labelName,"FeatureName PropertyName":t.propertyName,AttributeName:t.attributeName,NumberLiteral:t.number,KeywordQuery:t.keyword,UnaryQueryOp:t.operatorKeyword,"CallTag ValueName":t.atom,VariableName:t.variableName,Callee:t.operatorKeyword,Unit:t.unit,"UniversalSelector NestingSelector":t.definitionOperator,MatchOp:t.compareOperator,"ChildOp SiblingOp, LogicOp":t.logicOperator,BinOp:t.arithmeticOperator,Important:t.modifier,Comment:t.blockComment,ParenthesizedContent:t.special(t.name),ColorLiteral:t.color,StringLiteral:t.string,":":t.punctuation,"PseudoOp #":t.derefOperator,"; ,":t.separator,"( )":t.paren,"[ ]":t.squareBracket,"{ 
}":t.brace}),K={__proto__:null,lang:32,"nth-child":32,"nth-last-child":32,"nth-of-type":32,"nth-last-of-type":32,dir:32,"host-context":32,url:60,"url-prefix":60,domain:60,regexp:60,selector:134},J={__proto__:null,"@import":114,"@media":138,"@charset":142,"@namespace":146,"@keyframes":152,"@supports":164},H={__proto__:null,not:128,only:128,from:158,to:160},M=v.deserialize({version:14,states:"7WQYQ[OOO#_Q[OOOOQP'#Cd'#CdOOQP'#Cc'#CcO#fQ[O'#CfO$YQXO'#CaO$aQ[O'#ChO$lQ[O'#DPO$qQ[O'#DTOOQP'#Ed'#EdO$vQdO'#DeO%bQ[O'#DrO$vQdO'#DtO%sQ[O'#DvO&OQ[O'#DyO&TQ[O'#EPO&cQ[O'#EROOQS'#Ec'#EcOOQS'#ET'#ETQYQ[OOO&jQXO'#CdO'_QWO'#DaO'dQWO'#EjO'oQ[O'#EjQOQWOOOOQP'#Cg'#CgOOQP,59Q,59QO#fQ[O,59QO'yQ[O'#EWO(eQWO,58{O(mQ[O,59SO$lQ[O,59kO$qQ[O,59oO'yQ[O,59sO'yQ[O,59uO'yQ[O,59vO(xQ[O'#D`OOQS,58{,58{OOQP'#Ck'#CkOOQO'#C}'#C}OOQP,59S,59SO)PQWO,59SO)UQWO,59SOOQP'#DR'#DROOQP,59k,59kOOQO'#DV'#DVO)ZQ`O,59oOOQS'#Cp'#CpO$vQdO'#CqO)cQvO'#CsO*pQtO,5:POOQO'#Cx'#CxO)UQWO'#CwO+UQWO'#CyOOQS'#Eg'#EgOOQO'#Dh'#DhO+ZQ[O'#DoO+iQWO'#EkO&TQ[O'#DmO+wQWO'#DpOOQO'#El'#ElO(hQWO,5:^O+|QpO,5:`OOQS'#Dx'#DxO,UQWO,5:bO,ZQ[O,5:bOOQO'#D{'#D{O,cQWO,5:eO,hQWO,5:kO,pQWO,5:mOOQS-E8R-E8RO$vQdO,59{O,xQ[O'#EYO-VQWO,5;UO-VQWO,5;UOOQP1G.l1G.lO-|QXO,5:rOOQO-E8U-E8UOOQS1G.g1G.gOOQP1G.n1G.nO)PQWO1G.nO)UQWO1G.nOOQP1G/V1G/VO.ZQ`O1G/ZO.tQXO1G/_O/[QXO1G/aO/rQXO1G/bO0YQWO,59zO0_Q[O'#DOO0fQdO'#CoOOQP1G/Z1G/ZO$vQdO1G/ZO0mQpO,59]OOQS,59_,59_O$vQdO,59aO0uQWO1G/kOOQS,59c,59cO0zQ!bO,59eO1SQWO'#DhO1_QWO,5:TO1dQWO,5:ZO&TQ[O,5:VO&TQ[O'#EZO1lQWO,5;VO1wQWO,5:XO'yQ[O,5:[OOQS1G/x1G/xOOQS1G/z1G/zOOQS1G/|1G/|O2YQWO1G/|O2_QdO'#D|OOQS1G0P1G0POOQS1G0V1G0VOOQS1G0X1G0XO2mQtO1G/gOOQO,5:t,5:tO3TQ[O,5:tOOQO-E8W-E8WO3bQWO1G0pOOQP7+$Y7+$YOOQP7+$u7+$uO$vQdO7+$uOOQS1G/f1G/fO3mQXO'#EiO3tQWO,59jO3yQtO'#EUO4nQdO'#EfO4xQWO,59ZO4}QpO7+$uOOQS1G.w1G.wOOQS1G.{1G.{OOQS7+%V7+%VO5VQWO1G/PO$vQdO1G/oOOQO1G/u1G/uOOQO1G/q1G/qO5[QWO,5:uOOQO-E8X-E8XO5jQXO1G/vOOQS7+%h7+%hO5qQYO'#CsO(hQWO'#E[O5yQdO,5:hOOQS,5:h,5:hO6XQtO'#EXO$vQdO'#EXO7VQdO7+%ROOQO7+%R7+%ROOQO1G0`1G0`O7jQpO<T![;'S%^;'S;=`%o<%lO%^^;TUoWOy%^z!Q%^!Q![;g![;'S%^;'S;=`%o<%lO%^^;nYoW#[UOy%^z!Q%^!Q![;g![!g%^!g!h<^!h#X%^#X#Y<^#Y;'S%^;'S;=`%o<%lO%^^[[oW#[UOy%^z!O%^!O!P;g!P!Q%^!Q![>T![!g%^!g!h<^!h#X%^#X#Y<^#Y;'S%^;'S;=`%o<%lO%^_?VSpVOy%^z;'S%^;'S;=`%o<%lO%^^?hWjSOy%^z!O%^!O!P;O!P!Q%^!Q![>T![;'S%^;'S;=`%o<%lO%^_@VU#XPOy%^z!Q%^!Q![;g![;'S%^;'S;=`%o<%lO%^~@nTjSOy%^z{@}{;'S%^;'S;=`%o<%lO%^~ASUoWOy@}yzAfz{Bm{;'S@};'S;=`Co<%lO@}~AiTOzAfz{Ax{;'SAf;'S;=`Bg<%lOAf~A{VOzAfz{Ax{!PAf!P!QBb!Q;'SAf;'S;=`Bg<%lOAf~BgOR~~BjP;=`<%lAf~BrWoWOy@}yzAfz{Bm{!P@}!P!QC[!Q;'S@};'S;=`Co<%lO@}~CcSoWR~Oy%^z;'S%^;'S;=`%o<%lO%^~CrP;=`<%l@}^Cz[#[UOy%^z!O%^!O!P;g!P!Q%^!Q![>T![!g%^!g!h<^!h#X%^#X#Y<^#Y;'S%^;'S;=`%o<%lO%^XDuU]POy%^z![%^![!]EX!];'S%^;'S;=`%o<%lO%^XE`S^PoWOy%^z;'S%^;'S;=`%o<%lO%^_EqS!WVOy%^z;'S%^;'S;=`%o<%lO%^YFSSzQOy%^z;'S%^;'S;=`%o<%lO%^XFeU|POy%^z!`%^!`!aFw!a;'S%^;'S;=`%o<%lO%^XGOS|PoWOy%^z;'S%^;'S;=`%o<%lO%^XG_WOy%^z!c%^!c!}Gw!}#T%^#T#oGw#o;'S%^;'S;=`%o<%lO%^XHO[!YPoWOy%^z}%^}!OGw!O!Q%^!Q![Gw![!c%^!c!}Gw!}#T%^#T#oGw#o;'S%^;'S;=`%o<%lO%^XHySxPOy%^z;'S%^;'S;=`%o<%lO%^^I[SvUOy%^z;'S%^;'S;=`%o<%lO%^XIkUOy%^z#b%^#b#cI}#c;'S%^;'S;=`%o<%lO%^XJSUoWOy%^z#W%^#W#XJf#X;'S%^;'S;=`%o<%lO%^XJmS!`PoWOy%^z;'S%^;'S;=`%o<%lO%^XJ|UOy%^z#f%^#f#gJf#g;'S%^;'S;=`%o<%lO%^XKeS!RPOy%^z;'S%^;'S;=`%o<%lO%^_KvS!QVOy%^z;'S%^;'S;=`%o<%lO%^ZLXU!PPOy%^z!_%^!_!`6y!`;'S%^;'S;=`%o<%lO%^WLnP;=`<%l$}",tokenizers:[A,F,B,0,1,2,3],topRules:{StyleSheet:[0,4],Styles:[1,84]},specialized:[{term:95,get:e=>K[e]||-1},{term:56,get:e=>J[e]||-1},{term:96,get:e=>H[e]||-1}],tokenPrec:1123});let Q=null;function m(){if(!Q&&typeof 
document=="object"&&document.body){let{style:e}=document.body,o=[],r=new Set;for(let a in e)a!="cssText"&&a!="cssFloat"&&typeof e[a]=="string"&&(/[A-Z]/.test(a)&&(a=a.replace(/[A-Z]/g,O=>"-"+O.toLowerCase())),r.has(a)||(o.push(a),r.add(a)));Q=o.sort().map(a=>({type:"property",label:a}))}return Q||[]}const h=["active","after","any-link","autofill","backdrop","before","checked","cue","default","defined","disabled","empty","enabled","file-selector-button","first","first-child","first-letter","first-line","first-of-type","focus","focus-visible","focus-within","fullscreen","has","host","host-context","hover","in-range","indeterminate","invalid","is","lang","last-child","last-of-type","left","link","marker","modal","not","nth-child","nth-last-child","nth-last-of-type","nth-of-type","only-child","only-of-type","optional","out-of-range","part","placeholder","placeholder-shown","read-only","read-write","required","right","root","scope","selection","slotted","target","target-text","valid","visited","where"].map(e=>({type:"class",label:e})),b=["above","absolute","activeborder","additive","activecaption","after-white-space","ahead","alias","all","all-scroll","alphabetic","alternate","always","antialiased","appworkspace","asterisks","attr","auto","auto-flow","avoid","avoid-column","avoid-page","avoid-region","axis-pan","background","backwards","baseline","below","bidi-override","blink","block","block-axis","bold","bolder","border","border-box","both","bottom","break","break-all","break-word","bullets","button","button-bevel","buttonface","buttonhighlight","buttonshadow","buttontext","calc","capitalize","caps-lock-indicator","caption","captiontext","caret","cell","center","checkbox","circle","cjk-decimal","clear","clip","close-quote","col-resize","collapse","color","color-burn","color-dodge","column","column-reverse","compact","condensed","contain","content","contents","content-box","context-menu","continuous","copy","counter","counters","cover","crop","cross","crosshair","currentcolor","cursive","cyclic","darken","dashed","decimal","decimal-leading-zero","default","default-button","dense","destination-atop","destination-in","destination-out","destination-over","difference","disc","discard","disclosure-closed","disclosure-open","document","dot-dash","dot-dot-dash","dotted","double","down","e-resize","ease","ease-in","ease-in-out","ease-out","element","ellipse","ellipsis","embed","end","ethiopic-abegede-gez","ethiopic-halehame-aa-er","ethiopic-halehame-gez","ew-resize","exclusion","expanded","extends","extra-condensed","extra-expanded","fantasy","fast","fill","fill-box","fixed","flat","flex","flex-end","flex-start","footnotes","forwards","from","geometricPrecision","graytext","grid","groove","hand","hard-light","help","hidden","hide","higher","highlight","highlighttext","horizontal","hsl","hsla","hue","icon","ignore","inactiveborder","inactivecaption","inactivecaptiontext","infinite","infobackground","infotext","inherit","initial","inline","inline-axis","inline-block","inline-flex","inline-grid","inline-table","inset","inside","intrinsic","invert","italic","justify","keep-all","landscape","large","larger","left","level","lighter","lighten","line-through","linear","linear-gradient","lines","list-item","listbox","listitem","local","logical","loud","lower","lower-hexadecimal","lower-latin","lower-norwegian","lowercase","ltr","luminosity","manipulation","match","matrix","matrix3d","medium","menu","menutext","message-box","middle","min-intrinsic","mix","monospace","move","multiple","multiple_mask_images","mult
iply","n-resize","narrower","ne-resize","nesw-resize","no-close-quote","no-drop","no-open-quote","no-repeat","none","normal","not-allowed","nowrap","ns-resize","numbers","numeric","nw-resize","nwse-resize","oblique","opacity","open-quote","optimizeLegibility","optimizeSpeed","outset","outside","outside-shape","overlay","overline","padding","padding-box","painted","page","paused","perspective","pinch-zoom","plus-darker","plus-lighter","pointer","polygon","portrait","pre","pre-line","pre-wrap","preserve-3d","progress","push-button","radial-gradient","radio","read-only","read-write","read-write-plaintext-only","rectangle","region","relative","repeat","repeating-linear-gradient","repeating-radial-gradient","repeat-x","repeat-y","reset","reverse","rgb","rgba","ridge","right","rotate","rotate3d","rotateX","rotateY","rotateZ","round","row","row-resize","row-reverse","rtl","run-in","running","s-resize","sans-serif","saturation","scale","scale3d","scaleX","scaleY","scaleZ","screen","scroll","scrollbar","scroll-position","se-resize","self-start","self-end","semi-condensed","semi-expanded","separate","serif","show","single","skew","skewX","skewY","skip-white-space","slide","slider-horizontal","slider-vertical","sliderthumb-horizontal","sliderthumb-vertical","slow","small","small-caps","small-caption","smaller","soft-light","solid","source-atop","source-in","source-out","source-over","space","space-around","space-between","space-evenly","spell-out","square","start","static","status-bar","stretch","stroke","stroke-box","sub","subpixel-antialiased","svg_masks","super","sw-resize","symbolic","symbols","system-ui","table","table-caption","table-cell","table-column","table-column-group","table-footer-group","table-header-group","table-row","table-row-group","text","text-bottom","text-top","textarea","textfield","thick","thin","threeddarkshadow","threedface","threedhighlight","threedlightshadow","threedshadow","to","top","transform","translate","translate3d","translateX","translateY","translateZ","transparent","ultra-condensed","ultra-expanded","underline","unidirectional-pan","unset","up","upper-latin","uppercase","url","var","vertical","vertical-text","view-box","visible","visibleFill","visiblePainted","visibleStroke","visual","w-resize","wait","wave","wider","window","windowframe","windowtext","words","wrap","wrap-reverse","x-large","x-small","xor","xx-large","xx-small"].map(e=>({type:"keyword",label:e})).concat(["aliceblue","antiquewhite","aqua","aquamarine","azure","beige","bisque","black","blanchedalmond","blue","blueviolet","brown","burlywood","cadetblue","chartreuse","chocolate","coral","cornflowerblue","cornsilk","crimson","cyan","darkblue","darkcyan","darkgoldenrod","darkgray","darkgreen","darkkhaki","darkmagenta","darkolivegreen","darkorange","darkorchid","darkred","darksalmon","darkseagreen","darkslateblue","darkslategray","darkturquoise","darkviolet","deeppink","deepskyblue","dimgray","dodgerblue","firebrick","floralwhite","forestgreen","fuchsia","gainsboro","ghostwhite","gold","goldenrod","gray","grey","green","greenyellow","honeydew","hotpink","indianred","indigo","ivory","khaki","lavender","lavenderblush","lawngreen","lemonchiffon","lightblue","lightcoral","lightcyan","lightgoldenrodyellow","lightgray","lightgreen","lightpink","lightsalmon","lightseagreen","lightskyblue","lightslategray","lightsteelblue","lightyellow","lime","limegreen","linen","magenta","maroon","mediumaquamarine","mediumblue","mediumorchid","mediumpurple","mediumseagreen","mediumslateblue","mediumspringgreen","mediumturquoi
se","mediumvioletred","midnightblue","mintcream","mistyrose","moccasin","navajowhite","navy","oldlace","olive","olivedrab","orange","orangered","orchid","palegoldenrod","palegreen","paleturquoise","palevioletred","papayawhip","peachpuff","peru","pink","plum","powderblue","purple","rebeccapurple","red","rosybrown","royalblue","saddlebrown","salmon","sandybrown","seagreen","seashell","sienna","silver","skyblue","slateblue","slategray","snow","springgreen","steelblue","tan","teal","thistle","tomato","turquoise","violet","wheat","white","whitesmoke","yellow","yellowgreen"].map(e=>({type:"constant",label:e}))),ee=["a","abbr","address","article","aside","b","bdi","bdo","blockquote","body","br","button","canvas","caption","cite","code","col","colgroup","dd","del","details","dfn","dialog","div","dl","dt","em","figcaption","figure","footer","form","header","hgroup","h1","h2","h3","h4","h5","h6","hr","html","i","iframe","img","input","ins","kbd","label","legend","li","main","meter","nav","ol","output","p","pre","ruby","section","select","small","source","span","strong","sub","summary","sup","table","tbody","td","template","textarea","tfoot","th","thead","tr","u","ul"].map(e=>({type:"type",label:e})),n=/^(\w[\w-]*|-\w[\w-]*|)$/,ae=/^-(-[\w-]*)?$/;function Oe(e,o){var r;if((e.name=="("||e.type.isError)&&(e=e.parent||e),e.name!="ArgList")return!1;let a=(r=e.parent)===null||r===void 0?void 0:r.firstChild;return a?.name!="Callee"?!1:o.sliceString(a.from,a.to)=="var"}const y=new V,te=["Declaration"];function W(e,o){if(o.to-o.from>4096){let r=y.get(o);if(r)return r;let a=[],O=new Set,l=o.cursor(T.IncludeAnonymous);if(l.firstChild())do for(let i of W(e,l.node))O.has(i.label)||(O.add(i.label),a.push(i));while(l.nextSibling());return y.set(o,a),a}else{let r=[],a=new Set;return o.cursor().iterate(O=>{var l;if(O.name=="VariableName"&&O.matchContext(te)&&((l=O.node.nextSibling)===null||l===void 0?void 0:l.name)==":"){let i=e.sliceString(O.from,O.to);a.has(i)||(a.add(i),r.push({label:i,type:"variable"}))}}),r}}const oe=e=>{var o;let{state:r,pos:a}=e,O=S(r).resolveInner(a,-1),l=O.type.isError&&O.from==O.to-1&&r.doc.sliceString(O.from,O.to)=="-";if(O.name=="PropertyName"||l&&((o=O.parent)===null||o===void 0?void 0:o.name)=="Block")return{from:O.from,options:m(),validFor:n};if(O.name=="ValueName")return{from:O.from,options:b,validFor:n};if(O.name=="PseudoClassName")return{from:O.from,options:h,validFor:n};if(O.name=="VariableName"||(e.explicit||l)&&Oe(O,r.doc))return{from:O.name=="VariableName"?O.from:a,options:W(r.doc,S(r).topNode),validFor:ae};if(O.name=="TagName"){for(let{parent:d}=O;d;d=d.parent)if(d.name=="Block")return{from:O.from,options:m(),validFor:n};return{from:O.from,options:ee,validFor:n}}if(!e.explicit)return null;let i=O.resolve(a),s=i.childBefore(a);return s&&s.name==":"&&i.name=="PseudoClassSelector"?{from:a,options:h,validFor:n}:s&&s.name==":"&&i.name=="Declaration"||i.name=="ArgList"?{from:a,options:b,validFor:n}:i.name=="Block"?{from:a,options:m(),validFor:n}:null},P=w.define({name:"css",parser:M.configure({props:[z.add({Declaration:x()}),R.add({Block:U})]}),languageData:{commentTokens:{block:{open:"/*",close:"*/"}},indentOnInput:/^\s*\}$/,wordChars:"-"}});function me(){return new _(P,P.data.of({autocomplete:oe}))}export{me as css,oe as cssCompletionSource,P as cssLanguage}; -//# sourceMappingURL=index-7f39cecc.js.map diff --git a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/builder_app.py 
b/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/builder_app.py deleted file mode 100644 index 54defcf0d5c6620d282480693791c69dde0833da..0000000000000000000000000000000000000000 --- a/spaces/DQChoi/gpt-demo/venv/lib/python3.11/site-packages/gradio/themes/builder_app.py +++ /dev/null @@ -1,1003 +0,0 @@ -import inspect -import time -from typing import Iterable - -from gradio_client.documentation import document_fn - -import gradio as gr - -themes = [ - gr.themes.Base, - gr.themes.Default, - gr.themes.Soft, - gr.themes.Monochrome, - gr.themes.Glass, -] -colors = gr.themes.Color.all -sizes = gr.themes.Size.all - -palette_range = [50, 100, 200, 300, 400, 500, 600, 700, 800, 900, 950] -size_range = ["xxs", "xs", "sm", "md", "lg", "xl", "xxl"] -docs_theme_core = document_fn(gr.themes.Base.__init__, gr.themes.Base)[1] -docs_theme_vars = document_fn(gr.themes.Base.set, gr.themes.Base)[1] - - -def get_docstr(var): - for parameters in docs_theme_core + docs_theme_vars: - if parameters["name"] == var: - return parameters["doc"] - raise ValueError(f"Variable {var} not found in theme documentation.") - - -def get_doc_theme_var_groups(): - source = inspect.getsource(gr.themes.Base.set) - groups = [] - group, desc, variables, flat_variables = None, None, [], [] - for line in source.splitlines(): - line = line.strip() - if line.startswith(")"): - break - elif line.startswith("# "): - if group is not None: - groups.append((group, desc, variables)) - group, desc = line[2:].split(": ") - variables = [] - elif "=" in line: - var = line.split("=")[0] - variables.append(var) - flat_variables.append(var) - groups.append((group, desc, variables)) - return groups, flat_variables - - -variable_groups, flat_variables = get_doc_theme_var_groups() - -css = """ -.gradio-container { - overflow: visible !important; - max-width: none !important; -} -#controls { - max-height: 100vh; - flex-wrap: unset; - overflow-y: scroll; - position: sticky; - top: 0; -} -#controls::-webkit-scrollbar { - -webkit-appearance: none; - width: 7px; -} - -#controls::-webkit-scrollbar-thumb { - border-radius: 4px; - background-color: rgba(0, 0, 0, .5); - box-shadow: 0 0 1px rgba(255, 255, 255, .5); -} -""" - -with gr.Blocks( # noqa: SIM117 - theme=gr.themes.Base(), - css=css, - title="Gradio Theme Builder", -) as demo: - with gr.Row(): - with gr.Column(scale=1, elem_id="controls", min_width=400): - with gr.Row(): - undo_btn = gr.Button("Undo", size="sm") - dark_mode_btn = gr.Button("Dark Mode", variant="primary", size="sm") - with gr.Tabs(): - with gr.TabItem("Source Theme"): - gr.Markdown( - """ - Select a base theme below you would like to build off of. Note: when you click 'Load Theme', all variable values in other tabs will be overwritten! - """ - ) - base_theme_dropdown = gr.Dropdown( - [theme.__name__ for theme in themes], - value="Base", - show_label=False, - ) - load_theme_btn = gr.Button("Load Theme", elem_id="load_theme") - with gr.TabItem("Core Colors"): - gr.Markdown( - """Set the three hues of the theme: `primary_hue`, `secondary_hue`, and `neutral_hue`. - Each of these is a palette ranging from 50 to 950 in brightness. Pick a preset palette - optionally, open the accordion to overwrite specific values. 
- Note that these variables do not affect elements directly, but are referenced by other variables with asterisks, such as `*primary_200` or `*neutral_950`.""" - ) - primary_hue = gr.Dropdown( - [color.name for color in colors], label="Primary Hue" - ) - with gr.Accordion(label="Primary Hue Palette", open=False): - primary_hues = [] - for i in palette_range: - primary_hues.append( - gr.ColorPicker( - label=f"primary_{i}", - ) - ) - - secondary_hue = gr.Dropdown( - [color.name for color in colors], label="Secondary Hue" - ) - with gr.Accordion(label="Secondary Hue Palette", open=False): - secondary_hues = [] - for i in palette_range: - secondary_hues.append( - gr.ColorPicker( - label=f"secondary_{i}", - ) - ) - - neutral_hue = gr.Dropdown( - [color.name for color in colors], label="Neutral hue" - ) - with gr.Accordion(label="Neutral Hue Palette", open=False): - neutral_hues = [] - for i in palette_range: - neutral_hues.append( - gr.ColorPicker( - label=f"neutral_{i}", - ) - ) - - with gr.TabItem("Core Sizing"): - gr.Markdown( - """Set the sizing of the theme via: `text_size`, `spacing_size`, and `radius_size`. - Each of these is set to a collection of sizes ranging from `xxs` to `xxl`. Pick a preset size collection - optionally, open the accordion to overwrite specific values. - Note that these variables do not affect elements directly, but are referenced by other variables with asterisks, such as `*spacing_xl` or `*text_sm`. - """ - ) - text_size = gr.Dropdown( - [size.name for size in sizes if size.name.startswith("text_")], - label="Text Size", - ) - with gr.Accordion(label="Text Size Range", open=False): - text_sizes = [] - for i in size_range: - text_sizes.append( - gr.Textbox( - label=f"text_{i}", - ) - ) - - spacing_size = gr.Dropdown( - [ - size.name - for size in sizes - if size.name.startswith("spacing_") - ], - label="Spacing Size", - ) - with gr.Accordion(label="Spacing Size Range", open=False): - spacing_sizes = [] - for i in size_range: - spacing_sizes.append( - gr.Textbox( - label=f"spacing_{i}", - ) - ) - - radius_size = gr.Dropdown( - [ - size.name - for size in sizes - if size.name.startswith("radius_") - ], - label="Radius Size", - ) - with gr.Accordion(label="Radius Size Range", open=False): - radius_sizes = [] - for i in size_range: - radius_sizes.append( - gr.Textbox( - label=f"radius_{i}", - ) - ) - - with gr.TabItem("Core Fonts"): - gr.Markdown( - """Set the main `font` and the monospace `font_mono` here. - Set up to 4 values for each (fallbacks in case a font is not available). - Check "Google Font" if font should be loaded from Google Fonts. 
- """ - ) - gr.Markdown("### Main Font") - main_fonts, main_is_google = [], [] - for i in range(4): - with gr.Row(): - font = gr.Textbox(label=f"Font {i + 1}") - font_is_google = gr.Checkbox(label="Google Font") - main_fonts.append(font) - main_is_google.append(font_is_google) - - mono_fonts, mono_is_google = [], [] - gr.Markdown("### Monospace Font") - for i in range(4): - with gr.Row(): - font = gr.Textbox(label=f"Font {i + 1}") - font_is_google = gr.Checkbox(label="Google Font") - mono_fonts.append(font) - mono_is_google.append(font_is_google) - - theme_var_input = [] - - core_color_suggestions = ( - [f"*primary_{i}" for i in palette_range] - + [f"*secondary_{i}" for i in palette_range] - + [f"*neutral_{i}" for i in palette_range] - ) - - variable_suggestions = { - "fill": core_color_suggestions[:], - "color": core_color_suggestions[:], - "text_size": [f"*text_{i}" for i in size_range], - "radius": [f"*radius_{i}" for i in size_range], - "padding": [f"*spacing_{i}" for i in size_range], - "gap": [f"*spacing_{i}" for i in size_range], - "weight": [ - "100", - "200", - "300", - "400", - "500", - "600", - "700", - "800", - ], - "shadow": ["none"], - "border_width": [], - } - for variable in flat_variables: - if variable.endswith("_dark"): - continue - for style_type in variable_suggestions: - if style_type in variable: - variable_suggestions[style_type].append("*" + variable) - break - - variable_suggestions["fill"], variable_suggestions["color"] = ( - variable_suggestions["fill"] - + variable_suggestions["color"][len(core_color_suggestions) :], - variable_suggestions["color"] - + variable_suggestions["fill"][len(core_color_suggestions) :], - ) - - for group, desc, variables in variable_groups: - with gr.TabItem(group): - gr.Markdown( - desc - + "\nYou can set these to one of the dropdown values, or clear the dropdown to set a custom value." - ) - for variable in variables: - suggestions = [] - for style_type in variable_suggestions: - if style_type in variable: - suggestions = variable_suggestions[style_type][:] - if "*" + variable in suggestions: - suggestions.remove("*" + variable) - break - dropdown = gr.Dropdown( - label=variable, - info=get_docstr(variable), - choices=suggestions, - allow_custom_value=True, - ) - theme_var_input.append(dropdown) - - # App - - with gr.Column(scale=6, elem_id="app"): - with gr.Column(variant="panel"): - gr.Markdown( - """ - # Theme Builder - Welcome to the theme builder. The left panel is where you create the theme. The different aspects of the theme are broken down into different tabs. Here's how to navigate them: - 1. First, set the "Source Theme". This will set the default values that you can override. - 2. Set the "Core Colors", "Core Sizing" and "Core Fonts". These are the core variables that are used to build the rest of the theme. - 3. The rest of the tabs set specific CSS theme variables. These control finer aspects of the UI. Within these theme variables, you can reference the core variables and other theme variables using the variable name preceded by an asterisk, e.g. `*primary_50` or `*body_text_color`. Clear the dropdown to set a custom value. - 4. Once you have finished your theme, click on "View Code" below to see how you can integrate the theme into your app. You can also click on "Upload to Hub" to upload your theme to the Hugging Face Hub, where others can download and use your theme. 
- """ - ) - with gr.Accordion("View Code", open=False): - output_code = gr.Code(language="python") - with gr.Accordion("Upload to Hub", open=False): - gr.Markdown( - "You can save your theme on the Hugging Face Hub. HF API write token can be found [here](https://huggingface.co/settings/tokens)." - ) - with gr.Row(): - theme_name = gr.Textbox(label="Theme Name") - theme_hf_token = gr.Textbox(label="Hugging Face Write Token") - theme_version = gr.Textbox( - label="Version", - placeholder="Leave blank to automatically update version.", - ) - upload_to_hub_btn = gr.Button("Upload to Hub") - theme_upload_status = gr.Markdown(visible=False) - - gr.Markdown("Below this panel is a dummy app to demo your theme.") - - name = gr.Textbox( - label="Name", - info="Full name, including middle name. No special characters.", - placeholder="John Doe", - value="John Doe", - interactive=True, - ) - - with gr.Row(): - slider1 = gr.Slider(label="Slider 1") - slider2 = gr.Slider(label="Slider 2") - gr.CheckboxGroup(["A", "B", "C"], label="Checkbox Group") - - with gr.Row(): - with gr.Column(variant="panel", scale=1): - gr.Markdown("## Panel 1") - radio = gr.Radio( - ["A", "B", "C"], - label="Radio", - info="Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.", - ) - drop = gr.Dropdown( - ["Option 1", "Option 2", "Option 3"], show_label=False - ) - drop_2 = gr.Dropdown( - ["Option A", "Option B", "Option C"], - multiselect=True, - value=["Option A"], - label="Dropdown", - interactive=True, - ) - check = gr.Checkbox(label="Go") - with gr.Column(variant="panel", scale=2): - img = gr.Image( - "https://raw.githubusercontent.com/gradio-app/gradio/main/js/_website/src/assets/img/header-image.jpg", - label="Image", - height=320, - ) - with gr.Row(): - go_btn = gr.Button( - "Go", label="Primary Button", variant="primary" - ) - clear_btn = gr.Button( - "Clear", label="Secondary Button", variant="secondary" - ) - - def go(*args): - time.sleep(3) - return "https://raw.githubusercontent.com/gradio-app/gradio/main/js/_website/src/assets/img/header-image.jpg" - - go_btn.click( - go, - [radio, drop, drop_2, check, name], - img, - api_name=False, - ) - - def clear(): - time.sleep(0.2) - return None - - clear_btn.click(clear, None, img) - - with gr.Row(): - btn1 = gr.Button("Button 1", size="sm") - btn2 = gr.UploadButton(size="sm") - stop_btn = gr.Button( - "Stop", label="Stop Button", variant="stop", size="sm" - ) - - gr.Examples( - examples=[ - [ - "A", - "Option 1", - ["Option B"], - True, - ], - [ - "B", - "Option 2", - ["Option B", "Option C"], - False, - ], - ], - inputs=[radio, drop, drop_2, check], - label="Examples", - ) - - with gr.Row(): - gr.Dataframe(value=[[1, 2, 3], [4, 5, 6], [7, 8, 9]], label="Dataframe") - gr.JSON( - value={"a": 1, "b": 2, "c": {"test": "a", "test2": [1, 2, 3]}}, - label="JSON", - ) - gr.Label(value={"cat": 0.7, "dog": 0.2, "fish": 0.1}) - gr.File() - with gr.Row(): - gr.ColorPicker() - gr.Video( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/world.mp4" - ) - gr.Gallery( - [ - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/lion.jpg", - "lion", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/logo.png", - "logo", - ), - ( - "https://gradio-static-files.s3.us-west-2.amazonaws.com/tower.jpg", - "tower", - ), - ], - height="200px", - columns=2, - ) - - with gr.Row(): - with 
gr.Column(scale=2): - chatbot = gr.Chatbot([("Hello", "Hi")], label="Chatbot") - chat_btn = gr.Button("Add messages") - - def chat(history): - time.sleep(2) - yield [["How are you?", "I am good."]] - - chat_btn.click( - lambda history: history - + [["How are you?", "I am good."]] - + (time.sleep(2) or []), - chatbot, - chatbot, - api_name=False, - ) - with gr.Column(scale=1): - with gr.Accordion("Advanced Settings"): - gr.Markdown("Hello") - gr.Number(label="Chatbot control 1") - gr.Number(label="Chatbot control 2") - gr.Number(label="Chatbot control 3") - - # Event Listeners - - secret_css = gr.Textbox(visible=False) - secret_font = gr.JSON(visible=False) - - demo.load( # doing this via python was not working for some reason, so using this hacky method for now - None, - None, - None, - _js="""() => { - document.head.innerHTML += ""; - let evt_listener = window.setTimeout( - () => { - load_theme_btn = document.querySelector('#load_theme'); - if (load_theme_btn) { - load_theme_btn.click(); - window.clearTimeout(evt_listener); - } - }, - 100 - ); - }""", - api_name=False, - ) - - theme_inputs = ( - [primary_hue, secondary_hue, neutral_hue] - + primary_hues - + secondary_hues - + neutral_hues - + [text_size, spacing_size, radius_size] - + text_sizes - + spacing_sizes - + radius_sizes - + main_fonts - + main_is_google - + mono_fonts - + mono_is_google - + theme_var_input - ) - - def load_theme(theme_name): - theme = [theme for theme in themes if theme.__name__ == theme_name][0] - - parameters = inspect.signature(theme.__init__).parameters - primary_hue = parameters["primary_hue"].default - secondary_hue = parameters["secondary_hue"].default - neutral_hue = parameters["neutral_hue"].default - text_size = parameters["text_size"].default - spacing_size = parameters["spacing_size"].default - radius_size = parameters["radius_size"].default - - theme = theme() - - font = theme._font[:4] - font_mono = theme._font_mono[:4] - font_is_google = [isinstance(f, gr.themes.GoogleFont) for f in font] - font_mono_is_google = [ - isinstance(f, gr.themes.GoogleFont) for f in font_mono - ] - - def pad_to_4(x): - return x + [None] * (4 - len(x)) - - var_output = [] - for variable in flat_variables: - theme_val = getattr(theme, variable) - if theme_val is None and variable.endswith("_dark"): - theme_val = getattr(theme, variable[:-5]) - var_output.append(theme_val) - - return ( - [primary_hue.name, secondary_hue.name, neutral_hue.name] - + primary_hue.expand() - + secondary_hue.expand() - + neutral_hue.expand() - + [text_size.name, spacing_size.name, radius_size.name] - + text_size.expand() - + spacing_size.expand() - + radius_size.expand() - + pad_to_4([f.name for f in font]) - + pad_to_4(font_is_google) - + pad_to_4([f.name for f in font_mono]) - + pad_to_4(font_mono_is_google) - + var_output - ) - - def generate_theme_code( - base_theme, final_theme, core_variables, final_main_fonts, final_mono_fonts - ): - base_theme_name = base_theme - base_theme = [theme for theme in themes if theme.__name__ == base_theme][ - 0 - ]() - - parameters = inspect.signature(base_theme.__init__).parameters - primary_hue = parameters["primary_hue"].default - secondary_hue = parameters["secondary_hue"].default - neutral_hue = parameters["neutral_hue"].default - text_size = parameters["text_size"].default - spacing_size = parameters["spacing_size"].default - radius_size = parameters["radius_size"].default - font = parameters["font"].default - font = [font] if not isinstance(font, Iterable) else font - font = [ - gr.themes.Font(f) if 
not isinstance(f, gr.themes.Font) else f - for f in font - ] - font_mono = parameters["font_mono"].default - font_mono = ( - [font_mono] if not isinstance(font_mono, Iterable) else font_mono - ) - font_mono = [ - gr.themes.Font(f) if not isinstance(f, gr.themes.Font) else f - for f in font_mono - ] - - core_diffs = {} - specific_core_diffs = {} - core_var_names = [ - "primary_hue", - "secondary_hue", - "neutral_hue", - "text_size", - "spacing_size", - "radius_size", - ] - for value_name, base_value, source_class, final_value in zip( - core_var_names, - [ - primary_hue, - secondary_hue, - neutral_hue, - text_size, - spacing_size, - radius_size, - ], - [ - gr.themes.Color, - gr.themes.Color, - gr.themes.Color, - gr.themes.Size, - gr.themes.Size, - gr.themes.Size, - ], - core_variables, - ): - if base_value.name != final_value: - core_diffs[value_name] = final_value - source_obj = [ - obj for obj in source_class.all if obj.name == final_value - ][0] - final_attr_values = {} - diff = False - for attr in dir(source_obj): - if attr in ["all", "name", "expand"] or attr.startswith("_"): - continue - final_theme_attr = ( - value_name.split("_")[0] - + "_" - + (attr[1:] if source_class == gr.themes.Color else attr) - ) - final_attr_values[final_theme_attr] = getattr( - final_theme, final_theme_attr - ) - if getattr(source_obj, attr) != final_attr_values[final_theme_attr]: - diff = True - if diff: - specific_core_diffs[value_name] = (source_class, final_attr_values) - - font_diffs = {} - - final_main_fonts = [font for font in final_main_fonts if font[0]] - final_mono_fonts = [font for font in final_mono_fonts if font[0]] - font = font[:4] - font_mono = font_mono[:4] - for base_font_set, theme_font_set, font_set_name in [ - (font, final_main_fonts, "font"), - (font_mono, final_mono_fonts, "font_mono"), - ]: - if len(base_font_set) != len(theme_font_set) or any( - base_font.name != theme_font[0] - or isinstance(base_font, gr.themes.GoogleFont) != theme_font[1] - for base_font, theme_font in zip(base_font_set, theme_font_set) - ): - font_diffs[font_set_name] = [ - f"gr.themes.GoogleFont('{font_name}')" - if is_google_font - else f"'{font_name}'" - for font_name, is_google_font in theme_font_set - ] - - newline = "\n" - - core_diffs_code = "" - if len(core_diffs) + len(specific_core_diffs) > 0: - for var_name in core_var_names: - if var_name in specific_core_diffs: - cls, vals = specific_core_diffs[var_name] - core_diffs_code += f""" {var_name}=gr.themes.{cls.__name__}({', '.join(f'''{k}="{v}"''' for k, v in vals.items())}),\n""" - elif var_name in core_diffs: - core_diffs_code += ( - f""" {var_name}="{core_diffs[var_name]}",\n""" - ) - - font_diffs_code = "" - - if len(font_diffs) > 0: - font_diffs_code = "".join( - [ - f""" {font_set_name}=[{", ".join(fonts)}],\n""" - for font_set_name, fonts in font_diffs.items() - ] - ) - var_diffs = {} - for variable in flat_variables: - base_theme_val = getattr(base_theme, variable) - final_theme_val = getattr(final_theme, variable) - if base_theme_val is None and variable.endswith("_dark"): - base_theme_val = getattr(base_theme, variable[:-5]) - if base_theme_val != final_theme_val: - var_diffs[variable] = getattr(final_theme, variable) - - newline = "\n" - - vars_diff_code = "" - if len(var_diffs) > 0: - vars_diff_code = f""".set( - {(',' + newline + " ").join([f"{k}='{v}'" for k, v in var_diffs.items()])} -)""" - - output = f""" -import gradio as gr - -theme = gr.themes.{base_theme_name}({newline if core_diffs_code or font_diffs_code else 
""}{core_diffs_code}{font_diffs_code}){vars_diff_code} - -with gr.Blocks(theme=theme) as demo: - ...""" - return output - - history = gr.State([]) - current_theme = gr.State(None) - - def render_variables(history, base_theme, *args): - primary_hue, secondary_hue, neutral_hue = args[0:3] - primary_hues = args[3 : 3 + len(palette_range)] - secondary_hues = args[3 + len(palette_range) : 3 + 2 * len(palette_range)] - neutral_hues = args[3 + 2 * len(palette_range) : 3 + 3 * len(palette_range)] - text_size, spacing_size, radius_size = args[ - 3 + 3 * len(palette_range) : 6 + 3 * len(palette_range) - ] - text_sizes = args[ - 6 - + 3 * len(palette_range) : 6 - + 3 * len(palette_range) - + len(size_range) - ] - spacing_sizes = args[ - 6 - + 3 * len(palette_range) - + len(size_range) : 6 - + 3 * len(palette_range) - + 2 * len(size_range) - ] - radius_sizes = args[ - 6 - + 3 * len(palette_range) - + 2 * len(size_range) : 6 - + 3 * len(palette_range) - + 3 * len(size_range) - ] - main_fonts = args[ - 6 - + 3 * len(palette_range) - + 3 * len(size_range) : 6 - + 3 * len(palette_range) - + 3 * len(size_range) - + 4 - ] - main_is_google = args[ - 6 - + 3 * len(palette_range) - + 3 * len(size_range) - + 4 : 6 - + 3 * len(palette_range) - + 3 * len(size_range) - + 8 - ] - mono_fonts = args[ - 6 - + 3 * len(palette_range) - + 3 * len(size_range) - + 8 : 6 - + 3 * len(palette_range) - + 3 * len(size_range) - + 12 - ] - mono_is_google = args[ - 6 - + 3 * len(palette_range) - + 3 * len(size_range) - + 12 : 6 - + 3 * len(palette_range) - + 3 * len(size_range) - + 16 - ] - remaining_args = args[ - 6 + 3 * len(palette_range) + 3 * len(size_range) + 16 : - ] - - final_primary_color = gr.themes.Color(*primary_hues) - final_secondary_color = gr.themes.Color(*secondary_hues) - final_neutral_color = gr.themes.Color(*neutral_hues) - final_text_size = gr.themes.Size(*text_sizes) - final_spacing_size = gr.themes.Size(*spacing_sizes) - final_radius_size = gr.themes.Size(*radius_sizes) - - final_main_fonts = [] - font_weights = set() - for attr, val in zip(flat_variables, remaining_args): - if "weight" in attr: - font_weights.add(val) - font_weights = sorted(font_weights) - - for main_font, is_google in zip(main_fonts, main_is_google): - if not main_font: - continue - if is_google: - main_font = gr.themes.GoogleFont(main_font, weights=font_weights) - final_main_fonts.append(main_font) - final_mono_fonts = [] - for mono_font, is_google in zip(mono_fonts, mono_is_google): - if not mono_font: - continue - if is_google: - mono_font = gr.themes.GoogleFont(mono_font, weights=font_weights) - final_mono_fonts.append(mono_font) - - theme = gr.themes.Base( - primary_hue=final_primary_color, - secondary_hue=final_secondary_color, - neutral_hue=final_neutral_color, - text_size=final_text_size, - spacing_size=final_spacing_size, - radius_size=final_radius_size, - font=final_main_fonts, - font_mono=final_mono_fonts, - ) - - theme.set(**dict(zip(flat_variables, remaining_args))) - new_step = (base_theme, args) - if len(history) == 0 or str(history[-1]) != str(new_step): - history.append(new_step) - - return ( - history, - theme._get_theme_css(), - theme._stylesheets, - generate_theme_code( - base_theme, - theme, - ( - primary_hue, - secondary_hue, - neutral_hue, - text_size, - spacing_size, - radius_size, - ), - list(zip(main_fonts, main_is_google)), - list(zip(mono_fonts, mono_is_google)), - ), - theme, - ) - - def attach_rerender(evt_listener): - return evt_listener( - render_variables, - [history, base_theme_dropdown] + theme_inputs, - 
[history, secret_css, secret_font, output_code, current_theme], - api_name=False, - ).then( - None, - [secret_css, secret_font], - None, - _js="""(css, fonts) => { - document.getElementById('theme_css').innerHTML = css; - let existing_font_links = document.querySelectorAll('link[rel="stylesheet"][href^="https://fonts.googleapis.com/css"]'); - existing_font_links.forEach(link => { - if (fonts.includes(link.href)) { - fonts = fonts.filter(font => font != link.href); - } else { - link.remove(); - } - }); - fonts.forEach(font => { - let link = document.createElement('link'); - link.rel = 'stylesheet'; - link.href = font; - document.head.appendChild(link); - }); - }""", - api_name=False, - ) - - def load_color(color_name): - color = [color for color in colors if color.name == color_name][0] - return [getattr(color, f"c{i}") for i in palette_range] - - attach_rerender( - primary_hue.select( - load_color, primary_hue, primary_hues, api_name=False - ).then - ) - attach_rerender( - secondary_hue.select( - load_color, secondary_hue, secondary_hue, api_name=False - ).then - ) - attach_rerender( - neutral_hue.select( - load_color, neutral_hue, neutral_hues, api_name=False - ).then - ) - for hue_set in (primary_hues, secondary_hues, neutral_hues): - for hue in hue_set: - attach_rerender(hue.blur) - - def load_size(size_name): - size = [size for size in sizes if size.name == size_name][0] - return [getattr(size, i) for i in size_range] - - attach_rerender( - text_size.change(load_size, text_size, text_sizes, api_name=False).then - ) - attach_rerender( - spacing_size.change( - load_size, spacing_size, spacing_sizes, api_name=False - ).then - ) - attach_rerender( - radius_size.change( - load_size, radius_size, radius_sizes, api_name=False - ).then - ) - - attach_rerender( - load_theme_btn.click( - load_theme, base_theme_dropdown, theme_inputs, api_name=False - ).then - ) - - for theme_box in ( - text_sizes + spacing_sizes + radius_sizes + main_fonts + mono_fonts - ): - attach_rerender(theme_box.blur) - attach_rerender(theme_box.submit) - for theme_box in theme_var_input: - attach_rerender(theme_box.blur) - attach_rerender(theme_box.select) - for checkbox in main_is_google + mono_is_google: - attach_rerender(checkbox.select) - - dark_mode_btn.click( - None, - None, - None, - _js="""() => { - if (document.querySelectorAll('.dark').length) { - document.querySelectorAll('.dark').forEach(el => el.classList.remove('dark')); - } else { - document.querySelector('body').classList.add('dark'); - } - }""", - api_name=False, - ) - - def undo(history_var): - if len(history_var) <= 1: - return {history: gr.skip()} - else: - history_var.pop() - old = history_var.pop() - return [history_var, old[0]] + list(old[1]) - - attach_rerender( - undo_btn.click( - undo, - [history], - [history, base_theme_dropdown] + theme_inputs, - api_name=False, - ).then - ) - - def upload_to_hub(data): - try: - theme_url = data[current_theme].push_to_hub( - repo_name=data[theme_name], - version=data[theme_version] or None, - hf_token=data[theme_hf_token], - theme_name=data[theme_name], - ) - space_name = "/".join(theme_url.split("/")[-2:]) - return ( - gr.Markdown.update( - value=f"Theme uploaded [here!]({theme_url})! 
Load it as `gr.Blocks(theme='{space_name}')`", - visible=True, - ), - "Upload to Hub", - ) - except Exception as e: - return ( - gr.Markdown.update( - value=f"Error: {e}", - visible=True, - ), - "Upload to Hub", - ) - - upload_to_hub_btn.click( - lambda: "Uploading...", - None, - upload_to_hub_btn, - api_name=False, - ).then( - upload_to_hub, - { - current_theme, - theme_name, - theme_hf_token, - theme_version, - }, - [theme_upload_status, upload_to_hub_btn], - api_name=False, - ) - - -if __name__ == "__main__": - demo.launch() diff --git a/spaces/Dagfinn1962/stablediffusion-members/app.py b/spaces/Dagfinn1962/stablediffusion-members/app.py deleted file mode 100644 index 4b9422724ffe3fb829839f712d994354da77a0c1..0000000000000000000000000000000000000000 --- a/spaces/Dagfinn1962/stablediffusion-members/app.py +++ /dev/null @@ -1,87 +0,0 @@ -import gradio as gr -import os -import sys -from pathlib import Path -import numpy as np -from PIL import Image - - - - -models = [ - {"name": "SD ComVis 1.2","url": "CompVis/stable-diffusion-v1-2"}, - {"name": "SD Comvis 1.4","url": "CompVis/stable-diffusion-v1-4"}, - {"name": "SD runawayml 1.5","url": "runwayml/stable-diffusion-v1-5"}, - {"name": "SD Stability 2.1 Unclip","url": "stabilityai/stable-diffusion-2-1-unclip"}, - {"name": "SD Dreamshaper-Anime","url": "Lykon/DreamShaper"}, - ] - -current_model = models[0] - -text_gen = gr.Interface.load("spaces/Avenuenw/prompt-extend") - -models2 = [] -for model in models: - model_url = f"models/{model['url']}" - loaded_model = gr.Interface.load(model_url, live=True, preprocess=True) - models2.append(loaded_model) - - -def text_it(inputs, text_gen=text_gen): - return text_gen(inputs) - - -def set_model(current_model_index): - global current_model - current_model = models[current_model_index] - return gr.update(value=f"{current_model['name']}") - - -def send_it(inputs, model_choice): - proc = models2[model_choice] - return proc(inputs) - - -with gr.Blocks (css ='main.css') as myface: - - gr.HTML("
Your Prompt Here
Choose model here
" ) - with gr.Row(): - input_text = gr.Textbox(label=" ",placeholder="1.PROMPT IDEA HERE ! ",lines=4) - # Model selection dropdown - model_name1 = gr.Dropdown( - label="2 Choose model here", - choices=[m["name"] for m in models], - type="index", - value=current_model["name"], - interactive=True, - - - ) - with gr.Row(): - see_prompts = gr.Button("3. GENERATE YOUR PROMT IDEA HERE!") - run = gr.Button("4. GENERATE THE IMAGE HERE!", varant="primery") - - # - with gr.Row(): - output1 = gr.Image(label="") - output2 = gr.Image(label="") - output3 = gr.Image(label="") - with gr.Row(): - magic1 = gr.Textbox(label="Generated Prompt", lines=2) - magic2 = gr.Textbox(label="Generated Prompt", lines=2) - magic3 = gr.Textbox(label="Generated Prompt", lines=2) - - model_name1.change(set_model, inputs=model_name1, outputs=[output1, output2, output3,]) - - run.click(send_it, inputs=[magic1, model_name1], outputs=[output1]) - run.click(send_it, inputs=[magic2, model_name1], outputs=[output2]) - run.click(send_it, inputs=[magic3, model_name1], outputs=[output3]) - - - see_prompts.click(text_it, inputs=[input_text], outputs=[magic1]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic2]) - see_prompts.click(text_it, inputs=[input_text], outputs=[magic3]) - -title="Daylight (SD) ", -myface.queue(concurrency_count=200) -myface.launch(inline=True, max_threads=400) \ No newline at end of file diff --git a/spaces/Docfile/open_llm_leaderboard/src/load_from_hub.py b/spaces/Docfile/open_llm_leaderboard/src/load_from_hub.py deleted file mode 100644 index 9062b77a0e8e3828df71cd8486b2e5a6c4cd7d59..0000000000000000000000000000000000000000 --- a/spaces/Docfile/open_llm_leaderboard/src/load_from_hub.py +++ /dev/null @@ -1,151 +0,0 @@ -import json -import os - -import pandas as pd -from huggingface_hub import Repository -from transformers import AutoConfig -from collections import defaultdict - -from src.assets.hardcoded_evals import baseline, gpt4_values, gpt35_values -from src.display_models.get_model_metadata import apply_metadata -from src.display_models.read_results import get_eval_results_dicts, make_clickable_model -from src.display_models.utils import AutoEvalColumn, EvalQueueColumn, has_no_nan_values - -IS_PUBLIC = bool(os.environ.get("IS_PUBLIC", True)) - - -def get_all_requested_models(requested_models_dir: str) -> set[str]: - depth = 1 - file_names = [] - users_to_submission_dates = defaultdict(list) - - for root, _, files in os.walk(requested_models_dir): - current_depth = root.count(os.sep) - requested_models_dir.count(os.sep) - if current_depth == depth: - for file in files: - if not file.endswith(".json"): continue - with open(os.path.join(root, file), "r") as f: - info = json.load(f) - file_names.append(f"{info['model']}_{info['revision']}_{info['precision']}") - - # Select organisation - if info["model"].count("/") == 0 or "submitted_time" not in info: - continue - organisation, _ = info["model"].split("/") - users_to_submission_dates[organisation].append(info["submitted_time"]) - - return set(file_names), users_to_submission_dates - - -def load_all_info_from_hub(QUEUE_REPO: str, RESULTS_REPO: str, QUEUE_PATH: str, RESULTS_PATH: str) -> list[Repository]: - eval_queue_repo = None - eval_results_repo = None - requested_models = None - - print("Pulling evaluation requests and results.") - - eval_queue_repo = Repository( - local_dir=QUEUE_PATH, - clone_from=QUEUE_REPO, - repo_type="dataset", - ) - eval_queue_repo.git_pull() - - eval_results_repo = Repository( - local_dir=RESULTS_PATH, - 
clone_from=RESULTS_REPO, - repo_type="dataset", - ) - eval_results_repo.git_pull() - - requested_models, users_to_submission_dates = get_all_requested_models("eval-queue") - - return eval_queue_repo, requested_models, eval_results_repo, users_to_submission_dates - - -def get_leaderboard_df( - eval_results: Repository, eval_results_private: Repository, cols: list, benchmark_cols: list -) -> pd.DataFrame: - if eval_results: - print("Pulling evaluation results for the leaderboard.") - eval_results.git_pull() - if eval_results_private: - print("Pulling evaluation results for the leaderboard.") - eval_results_private.git_pull() - - all_data = get_eval_results_dicts() - - if not IS_PUBLIC: - all_data.append(gpt4_values) - all_data.append(gpt35_values) - - all_data.append(baseline) - apply_metadata(all_data) # Populate model type based on known hardcoded values in `metadata.py` - - df = pd.DataFrame.from_records(all_data) - df = df.sort_values(by=[AutoEvalColumn.average.name], ascending=False) - df = df[cols].round(decimals=2) - - # filter out if any of the benchmarks have not been produced - df = df[has_no_nan_values(df, benchmark_cols)] - return df - - -def get_evaluation_queue_df( - eval_queue: Repository, eval_queue_private: Repository, save_path: str, cols: list -) -> list[pd.DataFrame]: - if eval_queue: - print("Pulling changes for the evaluation queue.") - eval_queue.git_pull() - if eval_queue_private: - print("Pulling changes for the evaluation queue.") - eval_queue_private.git_pull() - - entries = [entry for entry in os.listdir(save_path) if not entry.startswith(".")] - all_evals = [] - - for entry in entries: - if ".json" in entry: - file_path = os.path.join(save_path, entry) - with open(file_path) as fp: - data = json.load(fp) - - data[EvalQueueColumn.model.name] = make_clickable_model(data["model"]) - data[EvalQueueColumn.revision.name] = data.get("revision", "main") - - all_evals.append(data) - elif ".md" not in entry: - # this is a folder - sub_entries = [e for e in os.listdir(f"{save_path}/{entry}") if not e.startswith(".")] - for sub_entry in sub_entries: - file_path = os.path.join(save_path, entry, sub_entry) - with open(file_path) as fp: - data = json.load(fp) - - data[EvalQueueColumn.model.name] = make_clickable_model(data["model"]) - data[EvalQueueColumn.revision.name] = data.get("revision", "main") - all_evals.append(data) - - pending_list = [e for e in all_evals if e["status"] in ["PENDING", "RERUN"]] - running_list = [e for e in all_evals if e["status"] == "RUNNING"] - finished_list = [e for e in all_evals if e["status"].startswith("FINISHED") or e["status"] == "PENDING_NEW_EVAL"] - df_pending = pd.DataFrame.from_records(pending_list, columns=cols) - df_running = pd.DataFrame.from_records(running_list, columns=cols) - df_finished = pd.DataFrame.from_records(finished_list, columns=cols) - return df_finished[cols], df_running[cols], df_pending[cols] - - -def is_model_on_hub(model_name: str, revision: str) -> bool: - try: - AutoConfig.from_pretrained(model_name, revision=revision, trust_remote_code=False) - return True, None - - except ValueError: - return ( - False, - "needs to be launched with `trust_remote_code=True`. For safety reason, we do not allow these models to be automatically submitted to the leaderboard.", - ) - - except Exception as e: - print(f"Could not get the model config from the hub.: {e}") - return False, "was not found on hub!" 
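The eval-queue helpers above are meant to be called by the leaderboard app before a model is queued; the following is a minimal, hedged sketch of that flow, assuming the module is importable as `src.load_from_hub` within the same space and using an illustrative request-file layout that is not taken from the diff.

# Hedged usage sketch, not part of the deleted module: one way a submission
# flow might call is_model_on_hub() before writing a request file into the
# eval queue. The precision value, file layout and model-id handling here
# are illustrative assumptions only.
import json
import os

from src.load_from_hub import is_model_on_hub


def queue_submission(model_id: str, revision: str, save_path: str) -> str:
    # is_model_on_hub returns (True, None) on success, (False, reason) otherwise.
    on_hub, error = is_model_on_hub(model_id, revision)
    if not on_hub:
        return f"Rejected: {model_id} {error}"
    request = {
        "model": model_id,
        "revision": revision,
        "precision": "float16",
        "status": "PENDING",
    }
    organisation = model_id.split("/")[0] if "/" in model_id else "unknown"
    os.makedirs(os.path.join(save_path, organisation), exist_ok=True)
    out_path = os.path.join(
        save_path, organisation, f"{model_id.replace('/', '_')}_eval_request.json"
    )
    with open(out_path, "w") as fp:
        json.dump(request, fp)
    return f"Queued: {out_path}"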
diff --git a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/conv2d_resample.py b/spaces/DragGan/DragGan-Inversion/torch_utils/ops/conv2d_resample.py deleted file mode 100644 index e89e1253094036046e326f3a6e57527c541fae8b..0000000000000000000000000000000000000000 --- a/spaces/DragGan/DragGan-Inversion/torch_utils/ops/conv2d_resample.py +++ /dev/null @@ -1,158 +0,0 @@ -# Copyright (c) 2021, NVIDIA CORPORATION & AFFILIATES. All rights reserved. -# -# NVIDIA CORPORATION and its licensors retain all intellectual property -# and proprietary rights in and to this software, related documentation -# and any modifications thereto. Any use, reproduction, disclosure or -# distribution of this software and related documentation without an express -# license agreement from NVIDIA CORPORATION is strictly prohibited. - -"""2D convolution with optional up/downsampling.""" - -import torch - -from .. import misc -from . import conv2d_gradfix -from . import upfirdn2d -from .upfirdn2d import _parse_padding -from .upfirdn2d import _get_filter_size - -# ---------------------------------------------------------------------------- - - -def _get_weight_shape(w): - with misc.suppress_tracer_warnings(): # this value will be treated as a constant - shape = [int(sz) for sz in w.shape] - misc.assert_shape(w, shape) - return shape - -# ---------------------------------------------------------------------------- - - -def _conv2d_wrapper(x, w, stride=1, padding=0, groups=1, transpose=False, flip_weight=True): - """Wrapper for the underlying `conv2d()` and `conv_transpose2d()` implementations. - """ - _out_channels, _in_channels_per_group, kh, kw = _get_weight_shape(w) - - # Flip weight if requested. - # Note: conv2d() actually performs correlation (flip_weight=True) not convolution (flip_weight=False). - if not flip_weight and (kw > 1 or kh > 1): - w = w.flip([2, 3]) - - # Execute using conv2d_gradfix. - op = conv2d_gradfix.conv_transpose2d if transpose else conv2d_gradfix.conv2d - return op(x, w, stride=stride, padding=padding, groups=groups) - -# ---------------------------------------------------------------------------- - - -@misc.profiled_function -def conv2d_resample(x, w, f=None, up=1, down=1, padding=0, groups=1, flip_weight=True, flip_filter=False): - r"""2D convolution with optional up/downsampling. - - Padding is performed only once at the beginning, not between the operations. - - Args: - x: Input tensor of shape - `[batch_size, in_channels, in_height, in_width]`. - w: Weight tensor of shape - `[out_channels, in_channels//groups, kernel_height, kernel_width]`. - f: Low-pass filter for up/downsampling. Must be prepared beforehand by - calling upfirdn2d.setup_filter(). None = identity (default). - up: Integer upsampling factor (default: 1). - down: Integer downsampling factor (default: 1). - padding: Padding with respect to the upsampled image. Can be a single number - or a list/tuple `[x, y]` or `[x_before, x_after, y_before, y_after]` - (default: 0). - groups: Split input channels into N groups (default: 1). - flip_weight: False = convolution, True = correlation (default: True). - flip_filter: False = convolution, True = correlation (default: False). - - Returns: - Tensor of the shape `[batch_size, num_channels, out_height, out_width]`. - """ - # Validate arguments. 
- assert isinstance(x, torch.Tensor) and (x.ndim == 4) - assert isinstance(w, torch.Tensor) and ( - w.ndim == 4) and (w.dtype == x.dtype) - assert f is None or (isinstance(f, torch.Tensor) and f.ndim in [ - 1, 2] and f.dtype == torch.float32) - assert isinstance(up, int) and (up >= 1) - assert isinstance(down, int) and (down >= 1) - assert isinstance(groups, int) and (groups >= 1) - out_channels, in_channels_per_group, kh, kw = _get_weight_shape(w) - fw, fh = _get_filter_size(f) - px0, px1, py0, py1 = _parse_padding(padding) - - # Adjust padding to account for up/downsampling. - if up > 1: - px0 += (fw + up - 1) // 2 - px1 += (fw - up) // 2 - py0 += (fh + up - 1) // 2 - py1 += (fh - up) // 2 - if down > 1: - px0 += (fw - down + 1) // 2 - px1 += (fw - down) // 2 - py0 += (fh - down + 1) // 2 - py1 += (fh - down) // 2 - - # Fast path: 1x1 convolution with downsampling only => downsample first, then convolve. - if kw == 1 and kh == 1 and (down > 1 and up == 1): - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, padding=[ - px0, px1, py0, py1], flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - return x - - # Fast path: 1x1 convolution with upsampling only => convolve first, then upsample. - if kw == 1 and kh == 1 and (up > 1 and down == 1): - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - x = upfirdn2d.upfirdn2d(x=x, f=f, up=up, padding=[ - px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter) - return x - - # Fast path: downsampling only => use strided convolution. - if down > 1 and up == 1: - x = upfirdn2d.upfirdn2d( - x=x, f=f, padding=[px0, px1, py0, py1], flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, stride=down, - groups=groups, flip_weight=flip_weight) - return x - - # Fast path: upsampling with optional downsampling => use transpose strided convolution. - if up > 1: - if groups == 1: - w = w.transpose(0, 1) - else: - w = w.reshape(groups, out_channels // groups, - in_channels_per_group, kh, kw) - w = w.transpose(1, 2) - w = w.reshape(groups * in_channels_per_group, - out_channels // groups, kh, kw) - px0 -= kw - 1 - px1 -= kw - up - py0 -= kh - 1 - py1 -= kh - up - pxt = max(min(-px0, -px1), 0) - pyt = max(min(-py0, -py1), 0) - x = _conv2d_wrapper(x=x, w=w, stride=up, padding=[ - pyt, pxt], groups=groups, transpose=True, flip_weight=(not flip_weight)) - x = upfirdn2d.upfirdn2d(x=x, f=f, padding=[ - px0+pxt, px1+pxt, py0+pyt, py1+pyt], gain=up**2, flip_filter=flip_filter) - if down > 1: - x = upfirdn2d.upfirdn2d( - x=x, f=f, down=down, flip_filter=flip_filter) - return x - - # Fast path: no up/downsampling, padding supported by the underlying implementation => use plain conv2d. - if up == 1 and down == 1: - if px0 == px1 and py0 == py1 and px0 >= 0 and py0 >= 0: - return _conv2d_wrapper(x=x, w=w, padding=[py0, px0], groups=groups, flip_weight=flip_weight) - - # Fallback: Generic reference implementation. 
- x = upfirdn2d.upfirdn2d(x=x, f=(f if up > 1 else None), up=up, padding=[ - px0, px1, py0, py1], gain=up**2, flip_filter=flip_filter) - x = _conv2d_wrapper(x=x, w=w, groups=groups, flip_weight=flip_weight) - if down > 1: - x = upfirdn2d.upfirdn2d(x=x, f=f, down=down, flip_filter=flip_filter) - return x - -# ---------------------------------------------------------------------------- diff --git a/spaces/Duskfallcrew/textual-inversion-training/textual_inversion.py b/spaces/Duskfallcrew/textual-inversion-training/textual_inversion.py deleted file mode 100644 index 1eb6f5df52bcff384012fc93484d1a0435dbdde1..0000000000000000000000000000000000000000 --- a/spaces/Duskfallcrew/textual-inversion-training/textual_inversion.py +++ /dev/null @@ -1,612 +0,0 @@ -import argparse -import itertools -import math -import os -import random -from pathlib import Path -from typing import Optional - -import numpy as np -import torch -import torch.nn.functional as F -import torch.utils.checkpoint -from torch.utils.data import Dataset - -import PIL -from accelerate import Accelerator -from accelerate.logging import get_logger -from accelerate.utils import set_seed -from diffusers import AutoencoderKL, DDPMScheduler, PNDMScheduler, StableDiffusionPipeline, UNet2DConditionModel -from diffusers.optimization import get_scheduler -from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker -from huggingface_hub import HfFolder, Repository, whoami -from PIL import Image -from torchvision import transforms -from tqdm.auto import tqdm -from transformers import CLIPFeatureExtractor, CLIPTextModel, CLIPTokenizer -import gc - -logger = get_logger(__name__) - - -def save_progress(text_encoder, placeholder_token_id, accelerator, args): - logger.info("Saving embeddings") - learned_embeds = accelerator.unwrap_model(text_encoder).get_input_embeddings().weight[placeholder_token_id] - learned_embeds_dict = {args.placeholder_token: learned_embeds.detach().cpu()} - torch.save(learned_embeds_dict, os.path.join(args.output_dir, "learned_embeds.bin")) - - -def parse_args(): - parser = argparse.ArgumentParser(description="Simple example of a training script.") - parser.add_argument( - "--save_steps", - type=int, - default=500, - help="Save learned_embeds.bin every X updates steps.", - ) - parser.add_argument( - "--pretrained_model_name_or_path", - type=str, - default=None, - help="Path to pretrained model or model identifier from huggingface.co/models.", - ) - parser.add_argument( - "--tokenizer_name", - type=str, - default=None, - help="Pretrained tokenizer name or path if not the same as model_name", - ) - parser.add_argument( - "--train_data_dir", type=str, default=None, help="A folder containing the training data." - ) - parser.add_argument( - "--placeholder_token", - type=str, - default=None, - help="A token to use as a placeholder for the concept.", - ) - parser.add_argument( - "--initializer_token", type=str, default=None, help="A token to use as initializer word." 
- ) - parser.add_argument("--learnable_property", type=str, default="object", help="Choose between 'object' and 'style'") - parser.add_argument("--repeats", type=int, default=100, help="How many times to repeat the training data.") - parser.add_argument( - "--output_dir", - type=str, - default="text-inversion-model", - help="The output directory where the model predictions and checkpoints will be written.", - ) - parser.add_argument("--seed", type=int, default=None, help="A seed for reproducible training.") - parser.add_argument( - "--resolution", - type=int, - default=512, - help=( - "The resolution for input images, all the images in the train/validation dataset will be resized to this" - " resolution" - ), - ) - parser.add_argument( - "--center_crop", action="store_true", help="Whether to center crop images before resizing to resolution" - ) - parser.add_argument( - "--train_batch_size", type=int, default=16, help="Batch size (per device) for the training dataloader." - ) - parser.add_argument("--num_train_epochs", type=int, default=100) - parser.add_argument( - "--max_train_steps", - type=int, - default=5000, - help="Total number of training steps to perform. If provided, overrides num_train_epochs.", - ) - parser.add_argument( - "--gradient_accumulation_steps", - type=int, - default=1, - help="Number of updates steps to accumulate before performing a backward/update pass.", - ) - parser.add_argument( - "--learning_rate", - type=float, - default=1e-4, - help="Initial learning rate (after the potential warmup period) to use.", - ) - parser.add_argument( - "--scale_lr", - action="store_true", - default=True, - help="Scale the learning rate by the number of GPUs, gradient accumulation steps, and batch size.", - ) - parser.add_argument( - "--lr_scheduler", - type=str, - default="constant", - help=( - 'The scheduler type to use. Choose between ["linear", "cosine", "cosine_with_restarts", "polynomial",' - ' "constant", "constant_with_warmup"]' - ), - ) - parser.add_argument( - "--lr_warmup_steps", type=int, default=500, help="Number of steps for the warmup in the lr scheduler." - ) - parser.add_argument("--adam_beta1", type=float, default=0.9, help="The beta1 parameter for the Adam optimizer.") - parser.add_argument("--adam_beta2", type=float, default=0.999, help="The beta2 parameter for the Adam optimizer.") - parser.add_argument("--adam_weight_decay", type=float, default=1e-2, help="Weight decay to use.") - parser.add_argument("--adam_epsilon", type=float, default=1e-08, help="Epsilon value for the Adam optimizer") - parser.add_argument("--push_to_hub", action="store_true", help="Whether or not to push the model to the Hub.") - parser.add_argument("--hub_token", type=str, default=None, help="The token to use to push to the Model Hub.") - parser.add_argument( - "--hub_model_id", - type=str, - default=None, - help="The name of the repository to keep in sync with the local `output_dir`.", - ) - parser.add_argument( - "--logging_dir", - type=str, - default="logs", - help=( - "[TensorBoard](https://www.tensorflow.org/tensorboard) log directory. Will default to" - " *output_dir/runs/**CURRENT_DATETIME_HOSTNAME***." - ), - ) - parser.add_argument( - "--mixed_precision", - type=str, - default="no", - choices=["no", "fp16", "bf16"], - help=( - "Whether to use mixed precision. Choose" - "between fp16 and bf16 (bfloat16). Bf16 requires PyTorch >= 1.10." - "and an Nvidia Ampere GPU." 
- ), - ) - parser.add_argument("--local_rank", type=int, default=-1, help="For distributed training: local_rank") - - args = parser.parse_args() - env_local_rank = int(os.environ.get("LOCAL_RANK", -1)) - if env_local_rank != -1 and env_local_rank != args.local_rank: - args.local_rank = env_local_rank - - ''' - if args.train_data_dir is None: - raise ValueError("You must specify a train data directory.") - ''' - - return args - - -imagenet_templates_small = [ - "a photo of a {}", - "a rendering of a {}", - "a cropped photo of the {}", - "the photo of a {}", - "a photo of a clean {}", - "a photo of a dirty {}", - "a dark photo of the {}", - "a photo of my {}", - "a photo of the cool {}", - "a close-up photo of a {}", - "a bright photo of the {}", - "a cropped photo of a {}", - "a photo of the {}", - "a good photo of the {}", - "a photo of one {}", - "a close-up photo of the {}", - "a rendition of the {}", - "a photo of the clean {}", - "a rendition of a {}", - "a photo of a nice {}", - "a good photo of a {}", - "a photo of the nice {}", - "a photo of the small {}", - "a photo of the weird {}", - "a photo of the large {}", - "a photo of a cool {}", - "a photo of a small {}", -] - -imagenet_style_templates_small = [ - "a painting in the style of {}", - "a rendering in the style of {}", - "a cropped painting in the style of {}", - "the painting in the style of {}", - "a clean painting in the style of {}", - "a dirty painting in the style of {}", - "a dark painting in the style of {}", - "a picture in the style of {}", - "a cool painting in the style of {}", - "a close-up painting in the style of {}", - "a bright painting in the style of {}", - "a cropped painting in the style of {}", - "a good painting in the style of {}", - "a close-up painting in the style of {}", - "a rendition in the style of {}", - "a nice painting in the style of {}", - "a small painting in the style of {}", - "a weird painting in the style of {}", - "a large painting in the style of {}", -] - - -class TextualInversionDataset(Dataset): - def __init__( - self, - data_root, - tokenizer, - learnable_property="object", # [object, style] - size=512, - repeats=100, - interpolation="bicubic", - flip_p=0.5, - set="train", - placeholder_token="*", - center_crop=False, - ): - self.data_root = data_root - self.tokenizer = tokenizer - self.learnable_property = learnable_property - self.size = size - self.placeholder_token = placeholder_token - self.center_crop = center_crop - self.flip_p = flip_p - - self.image_paths = [os.path.join(self.data_root, file_path) for file_path in os.listdir(self.data_root)] - - self.num_images = len(self.image_paths) - self._length = self.num_images - - if set == "train": - self._length = self.num_images * repeats - - self.interpolation = { - "linear": PIL.Image.LINEAR, - "bilinear": PIL.Image.BILINEAR, - "bicubic": PIL.Image.BICUBIC, - "lanczos": PIL.Image.LANCZOS, - }[interpolation] - - self.templates = imagenet_style_templates_small if learnable_property == "style" else imagenet_templates_small - self.flip_transform = transforms.RandomHorizontalFlip(p=self.flip_p) - - def __len__(self): - return self._length - - def __getitem__(self, i): - example = {} - image = Image.open(self.image_paths[i % self.num_images]) - - if not image.mode == "RGB": - image = image.convert("RGB") - - placeholder_string = self.placeholder_token - text = random.choice(self.templates).format(placeholder_string) - - example["input_ids"] = self.tokenizer( - text, - padding="max_length", - truncation=True, - 
max_length=self.tokenizer.model_max_length, - return_tensors="pt", - ).input_ids[0] - - # default to score-sde preprocessing - img = np.array(image).astype(np.uint8) - - if self.center_crop: - crop = min(img.shape[0], img.shape[1]) - h, w, = ( - img.shape[0], - img.shape[1], - ) - img = img[(h - crop) // 2 : (h + crop) // 2, (w - crop) // 2 : (w + crop) // 2] - - image = Image.fromarray(img) - image = image.resize((self.size, self.size), resample=self.interpolation) - - image = self.flip_transform(image) - image = np.array(image).astype(np.uint8) - image = (image / 127.5 - 1.0).astype(np.float32) - - example["pixel_values"] = torch.from_numpy(image).permute(2, 0, 1) - return example - - -def get_full_repo_name(model_id: str, organization: Optional[str] = None, token: Optional[str] = None): - if token is None: - token = HfFolder.get_token() - if organization is None: - username = whoami(token)["name"] - return f"{username}/{model_id}" - else: - return f"{organization}/{model_id}" - - -def freeze_params(params): - for param in params: - param.requires_grad = False - - -def merge_two_dicts(starting_dict: dict, updater_dict: dict) -> dict: - """ - Starts from base starting dict and then adds the remaining key values from updater replacing the values from - the first starting/base dict with the second updater dict. - - For later: how does d = {**d1, **d2} replace collision? - - :param starting_dict: - :param updater_dict: - :return: - """ - new_dict: dict = starting_dict.copy() # start with keys and values of starting_dict - new_dict.update(updater_dict) # modifies starting_dict with keys and values of updater_dict - return new_dict - -def merge_args(args1: argparse.Namespace, args2: argparse.Namespace) -> argparse.Namespace: - """ - - ref: https://stackoverflow.com/questions/56136549/how-can-i-merge-two-argparse-namespaces-in-python-2-x - :param args1: - :param args2: - :return: - """ - # - the merged args - # The vars() function returns the __dict__ attribute to values of the given object e.g {field:value}. - merged_key_values_for_namespace: dict = merge_two_dicts(vars(args1), vars(args2)) - args = argparse.Namespace(**merged_key_values_for_namespace) - return args - -def run_training(args_imported): - args_default = parse_args() - args = merge_args(args_default, args_imported) - - print(args) - logging_dir = os.path.join(args.output_dir, args.logging_dir) - - accelerator = Accelerator( - gradient_accumulation_steps=args.gradient_accumulation_steps, - mixed_precision=args.mixed_precision, - log_with="tensorboard", - logging_dir=logging_dir, - ) - - # If passed along, set the training seed now. 
- if args.seed is not None: - set_seed(args.seed) - - # Handle the repository creation - if accelerator.is_main_process: - if args.push_to_hub: - if args.hub_model_id is None: - repo_name = get_full_repo_name(Path(args.output_dir).name, token=args.hub_token) - else: - repo_name = args.hub_model_id - repo = Repository(args.output_dir, clone_from=repo_name) - - with open(os.path.join(args.output_dir, ".gitignore"), "w+") as gitignore: - if "step_*" not in gitignore: - gitignore.write("step_*\n") - if "epoch_*" not in gitignore: - gitignore.write("epoch_*\n") - elif args.output_dir is not None: - os.makedirs(args.output_dir, exist_ok=True) - - # Load the tokenizer and add the placeholder token as a additional special token - if args.tokenizer_name: - tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name) - elif args.pretrained_model_name_or_path: - tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer") - - # Add the placeholder token in tokenizer - num_added_tokens = tokenizer.add_tokens(args.placeholder_token) - if num_added_tokens == 0: - raise ValueError( - f"The tokenizer already contains the token {args.placeholder_token}. Please pass a different" - " `placeholder_token` that is not already in the tokenizer." - ) - - # Convert the initializer_token, placeholder_token to ids - token_ids = tokenizer.encode(args.initializer_token, add_special_tokens=False) - # Check if initializer_token is a single token or a sequence of tokens - if len(token_ids) > 1: - raise ValueError("The initializer token must be a single token.") - - initializer_token_id = token_ids[0] - placeholder_token_id = tokenizer.convert_tokens_to_ids(args.placeholder_token) - - # Load models and create wrapper for stable diffusion - text_encoder = CLIPTextModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="text_encoder") - vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae") - unet = UNet2DConditionModel.from_pretrained(args.pretrained_model_name_or_path, subfolder="unet") - - # Resize the token embeddings as we are adding new special tokens to the tokenizer - text_encoder.resize_token_embeddings(len(tokenizer)) - - # Initialise the newly added placeholder token with the embeddings of the initializer token - token_embeds = text_encoder.get_input_embeddings().weight.data - token_embeds[placeholder_token_id] = token_embeds[initializer_token_id] - - # Freeze vae and unet - freeze_params(vae.parameters()) - freeze_params(unet.parameters()) - # Freeze all parameters except for the token embeddings in text encoder - params_to_freeze = itertools.chain( - text_encoder.text_model.encoder.parameters(), - text_encoder.text_model.final_layer_norm.parameters(), - text_encoder.text_model.embeddings.position_embedding.parameters(), - ) - freeze_params(params_to_freeze) - - if args.scale_lr: - args.learning_rate = ( - args.learning_rate * args.gradient_accumulation_steps * args.train_batch_size * accelerator.num_processes - ) - - # Initialize the optimizer - optimizer = torch.optim.AdamW( - text_encoder.get_input_embeddings().parameters(), # only optimize the embeddings - lr=args.learning_rate, - betas=(args.adam_beta1, args.adam_beta2), - weight_decay=args.adam_weight_decay, - eps=args.adam_epsilon, - ) - - # TODO (patil-suraj): load scheduler using args - noise_scheduler = DDPMScheduler( - beta_start=0.00085, - beta_end=0.012, - beta_schedule="scaled_linear", - num_train_timesteps=1000, - ) - - train_dataset = TextualInversionDataset( 
- data_root=args.train_data_dir, - tokenizer=tokenizer, - size=args.resolution, - placeholder_token=args.placeholder_token, - repeats=args.repeats, - learnable_property=args.learnable_property, - center_crop=args.center_crop, - set="train", - ) - train_dataloader = torch.utils.data.DataLoader(train_dataset, batch_size=args.train_batch_size, shuffle=True) - - # Scheduler and math around the number of training steps. - overrode_max_train_steps = False - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if args.max_train_steps is None: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - overrode_max_train_steps = True - - lr_scheduler = get_scheduler( - args.lr_scheduler, - optimizer=optimizer, - num_warmup_steps=args.lr_warmup_steps * args.gradient_accumulation_steps, - num_training_steps=args.max_train_steps * args.gradient_accumulation_steps, - ) - - text_encoder, optimizer, train_dataloader, lr_scheduler = accelerator.prepare( - text_encoder, optimizer, train_dataloader, lr_scheduler - ) - - # Move vae and unet to device - vae.to(accelerator.device) - unet.to(accelerator.device) - - # Keep vae and unet in eval model as we don't train these - vae.eval() - unet.eval() - - # We need to recalculate our total training steps as the size of the training dataloader may have changed. - num_update_steps_per_epoch = math.ceil(len(train_dataloader) / args.gradient_accumulation_steps) - if overrode_max_train_steps: - args.max_train_steps = args.num_train_epochs * num_update_steps_per_epoch - # Afterwards we recalculate our number of training epochs - args.num_train_epochs = math.ceil(args.max_train_steps / num_update_steps_per_epoch) - - # We need to initialize the trackers we use, and also store our configuration. - # The trackers initializes automatically on the main process. - if accelerator.is_main_process: - accelerator.init_trackers("textual_inversion", config=vars(args)) - - # Train! - total_batch_size = args.train_batch_size * accelerator.num_processes * args.gradient_accumulation_steps - - logger.info("***** Running training *****") - logger.info(f" Num examples = {len(train_dataset)}") - logger.info(f" Num Epochs = {args.num_train_epochs}") - logger.info(f" Instantaneous batch size per device = {args.train_batch_size}") - logger.info(f" Total train batch size (w. parallel, distributed & accumulation) = {total_batch_size}") - logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") - logger.info(f" Total optimization steps = {args.max_train_steps}") - # Only show the progress bar once on each machine. 
- progress_bar = tqdm(range(args.max_train_steps), disable=not accelerator.is_local_main_process) - progress_bar.set_description("Steps") - global_step = 0 - - for epoch in range(args.num_train_epochs): - text_encoder.train() - for step, batch in enumerate(train_dataloader): - with accelerator.accumulate(text_encoder): - # Convert images to latent space - latents = vae.encode(batch["pixel_values"]).latent_dist.sample().detach() - latents = latents * 0.18215 - - # Sample noise that we'll add to the latents - noise = torch.randn(latents.shape).to(latents.device) - bsz = latents.shape[0] - # Sample a random timestep for each image - timesteps = torch.randint( - 0, noise_scheduler.config.num_train_timesteps, (bsz,), device=latents.device - ).long() - - # Add noise to the latents according to the noise magnitude at each timestep - # (this is the forward diffusion process) - noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps) - - # Get the text embedding for conditioning - encoder_hidden_states = text_encoder(batch["input_ids"])[0] - - # Predict the noise residual - noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample - - loss = F.mse_loss(noise_pred, noise, reduction="none").mean([1, 2, 3]).mean() - accelerator.backward(loss) - - # Zero out the gradients for all token embeddings except the newly added - # embeddings for the concept, as we only want to optimize the concept embeddings - if accelerator.num_processes > 1: - grads = text_encoder.module.get_input_embeddings().weight.grad - else: - grads = text_encoder.get_input_embeddings().weight.grad - # Get the index for tokens that we want to zero the grads for - index_grads_to_zero = torch.arange(len(tokenizer)) != placeholder_token_id - grads.data[index_grads_to_zero, :] = grads.data[index_grads_to_zero, :].fill_(0) - - optimizer.step() - lr_scheduler.step() - optimizer.zero_grad() - - # Checks if the accelerator has performed an optimization step behind the scenes - if accelerator.sync_gradients: - progress_bar.update(1) - global_step += 1 - if global_step % args.save_steps == 0: - save_progress(text_encoder, placeholder_token_id, accelerator, args) - - logs = {"loss": loss.detach().item(), "lr": lr_scheduler.get_last_lr()[0]} - progress_bar.set_postfix(**logs) - accelerator.log(logs, step=global_step) - - if global_step >= args.max_train_steps: - break - - accelerator.wait_for_everyone() - - # Create the pipeline using using the trained modules and save it. 
- if accelerator.is_main_process: - pipeline = StableDiffusionPipeline( - text_encoder=accelerator.unwrap_model(text_encoder), - vae=vae, - unet=unet, - tokenizer=tokenizer, - scheduler=PNDMScheduler( - beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear", skip_prk_steps=True - ), - safety_checker=StableDiffusionSafetyChecker.from_pretrained("CompVis/stable-diffusion-safety-checker"), - feature_extractor=CLIPFeatureExtractor.from_pretrained("openai/clip-vit-base-patch32"), - ) - pipeline.save_pretrained(args.output_dir) - # Also save the newly trained embeddings - save_progress(text_encoder, placeholder_token_id, accelerator, args) - - if args.push_to_hub: - repo.push_to_hub(commit_message="End of training", blocking=False, auto_lfs_prune=True) - - accelerator.end_training() - torch.cuda.empty_cache() - gc.collect() - - -if __name__ == "__main__": - main() diff --git a/spaces/EMS-TU-Ilmenau/deepest-demo/helper.py b/spaces/EMS-TU-Ilmenau/deepest-demo/helper.py deleted file mode 100644 index 2fc32221ec35a6d4c939948965bd00f71070b9d7..0000000000000000000000000000000000000000 --- a/spaces/EMS-TU-Ilmenau/deepest-demo/helper.py +++ /dev/null @@ -1,59 +0,0 @@ -from torch.utils.data import DataLoader -from deepest.modules import Parameter2dNet -from deepest.datasets import InferenceDelayDataset -from deepest.metrics import match_components -import numpy as np - -class Runner: - def __init__(self, model: str, dataset: str, bs: int, num_worker: int): - self.module = Parameter2dNet.from_file(f"{model}") - self.dataset_config = self.module.get_datasetconfig() - self.dataset = InferenceDelayDataset(path=dataset, **self.dataset_config) - self.bs = bs - self.num_worker = num_worker - - def _preallocate(self, data_shape: tuple[int, ...], eta_shape: tuple[int, ...]): - data = np.empty((len(self), *data_shape), dtype=np.complex128) - truth = np.empty((len(self), *eta_shape)) - estim = np.empty((len(self), *eta_shape)) - return data, truth, estim - - def _get_batchrange_for_index(self, ii: int): - start_idx = ii*self.bs - stop_idx = (ii+1)*self.bs - if stop_idx > len(self.dataset): - stop_idx = len(self.dataset) - - return range(start_idx, stop_idx) - - def run(self, snr: int) -> tuple[np.ndarray, np.ndarray, np.ndarray]: - self.dataset.noise_var = (snr, snr) - dataloader = DataLoader( - self.dataset, - batch_size=self.bs, - num_workers=self.num_worker, - worker_init_fn=lambda worker_id: np.random.seed(worker_id), - shuffle=False, - ) - - for ii, (x, _, z) in enumerate(dataloader): - z = z[0][:, :2, :] - if ii == 0: - data, truth, estim = self._preallocate(x.shape[1:], z.shape[1:]) - - idx_range = self._get_batchrange_for_index(ii) - - data[idx_range] = x.cpu().numpy() - truth[idx_range] = z.cpu().numpy() - estim[idx_range] = self.module.fit(x)[:, :2, :] - - estim, truth = match_components(estim, truth) - - return data, truth, estim - - def fit(self, data: np.ndarray) -> np.ndarray: - x = self.module.fit(data) - return x[:, :2, :] - - def __len__(self): - return len(self.dataset) \ No newline at end of file diff --git a/spaces/EnigmaOfTheWorld/TechnoForge_Automotive/README.md b/spaces/EnigmaOfTheWorld/TechnoForge_Automotive/README.md deleted file mode 100644 index 1c63016042be0a05c7fa526b73b7c3866faafbe5..0000000000000000000000000000000000000000 --- a/spaces/EnigmaOfTheWorld/TechnoForge_Automotive/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: TechnoForge Automotive -emoji: 🐠 -colorFrom: blue -colorTo: yellow -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false ---- - -Check 
out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/EronSamez/RVC_HFmeu/demucs/separate.py b/spaces/EronSamez/RVC_HFmeu/demucs/separate.py deleted file mode 100644 index 3fc7af9e711978b3e21398aa6f1deb9ae87dd370..0000000000000000000000000000000000000000 --- a/spaces/EronSamez/RVC_HFmeu/demucs/separate.py +++ /dev/null @@ -1,185 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import sys -from pathlib import Path -import subprocess - -import julius -import torch as th -import torchaudio as ta - -from .audio import AudioFile, convert_audio_channels -from .pretrained import is_pretrained, load_pretrained -from .utils import apply_model, load_model - - -def load_track(track, device, audio_channels, samplerate): - errors = {} - wav = None - - try: - wav = AudioFile(track).read( - streams=0, - samplerate=samplerate, - channels=audio_channels).to(device) - except FileNotFoundError: - errors['ffmpeg'] = 'Ffmpeg is not installed.' - except subprocess.CalledProcessError: - errors['ffmpeg'] = 'FFmpeg could not read the file.' - - if wav is None: - try: - wav, sr = ta.load(str(track)) - except RuntimeError as err: - errors['torchaudio'] = err.args[0] - else: - wav = convert_audio_channels(wav, audio_channels) - wav = wav.to(device) - wav = julius.resample_frac(wav, sr, samplerate) - - if wav is None: - print(f"Could not load file {track}. " - "Maybe it is not a supported file format? ") - for backend, error in errors.items(): - print(f"When trying to load using {backend}, got the following error: {error}") - sys.exit(1) - return wav - - -def encode_mp3(wav, path, bitrate=320, samplerate=44100, channels=2, verbose=False): - try: - import lameenc - except ImportError: - print("Failed to call lame encoder. Maybe it is not installed? " - "On windows, run `python.exe -m pip install -U lameenc`, " - "on OSX/Linux, run `python3 -m pip install -U lameenc`, " - "then try again.", file=sys.stderr) - sys.exit(1) - encoder = lameenc.Encoder() - encoder.set_bit_rate(bitrate) - encoder.set_in_sample_rate(samplerate) - encoder.set_channels(channels) - encoder.set_quality(2) # 2-highest, 7-fastest - if not verbose: - encoder.silence() - wav = wav.transpose(0, 1).numpy() - mp3_data = encoder.encode(wav.tobytes()) - mp3_data += encoder.flush() - with open(path, "wb") as f: - f.write(mp3_data) - - -def main(): - parser = argparse.ArgumentParser("demucs.separate", - description="Separate the sources for the given tracks") - parser.add_argument("tracks", nargs='+', type=Path, default=[], help='Path to tracks') - parser.add_argument("-n", - "--name", - default="demucs_quantized", - help="Model name. See README.md for the list of pretrained models. " - "Default is demucs_quantized.") - parser.add_argument("-v", "--verbose", action="store_true") - parser.add_argument("-o", - "--out", - type=Path, - default=Path("separated"), - help="Folder where to put extracted tracks. A subfolder " - "with the model name will be created.") - parser.add_argument("--models", - type=Path, - default=Path("models"), - help="Path to trained models. 
" - "Also used to store downloaded pretrained models") - parser.add_argument("-d", - "--device", - default="cuda" if th.cuda.is_available() else "cpu", - help="Device to use, default is cuda if available else cpu") - parser.add_argument("--shifts", - default=0, - type=int, - help="Number of random shifts for equivariant stabilization." - "Increase separation time but improves quality for Demucs. 10 was used " - "in the original paper.") - parser.add_argument("--overlap", - default=0.25, - type=float, - help="Overlap between the splits.") - parser.add_argument("--no-split", - action="store_false", - dest="split", - default=True, - help="Doesn't split audio in chunks. This can use large amounts of memory.") - parser.add_argument("--float32", - action="store_true", - help="Convert the output wavefile to use pcm f32 format instead of s16. " - "This should not make a difference if you just plan on listening to the " - "audio but might be needed to compute exactly metrics like SDR etc.") - parser.add_argument("--int16", - action="store_false", - dest="float32", - help="Opposite of --float32, here for compatibility.") - parser.add_argument("--mp3", action="store_true", - help="Convert the output wavs to mp3.") - parser.add_argument("--mp3-bitrate", - default=320, - type=int, - help="Bitrate of converted mp3.") - - args = parser.parse_args() - name = args.name + ".th" - model_path = args.models / name - if model_path.is_file(): - model = load_model(model_path) - else: - if is_pretrained(args.name): - model = load_pretrained(args.name) - else: - print(f"No pre-trained model {args.name}", file=sys.stderr) - sys.exit(1) - model.to(args.device) - - out = args.out / args.name - out.mkdir(parents=True, exist_ok=True) - print(f"Separated tracks will be stored in {out.resolve()}") - for track in args.tracks: - if not track.exists(): - print( - f"File {track} does not exist. 
If the path contains spaces, " - "please try again after surrounding the entire path with quotes \"\".", - file=sys.stderr) - continue - print(f"Separating track {track}") - wav = load_track(track, args.device, model.audio_channels, model.samplerate) - - ref = wav.mean(0) - wav = (wav - ref.mean()) / ref.std() - sources = apply_model(model, wav, shifts=args.shifts, split=args.split, - overlap=args.overlap, progress=True) - sources = sources * ref.std() + ref.mean() - - track_folder = out / track.name.rsplit(".", 1)[0] - track_folder.mkdir(exist_ok=True) - for source, name in zip(sources, model.sources): - source = source / max(1.01 * source.abs().max(), 1) - if args.mp3 or not args.float32: - source = (source * 2**15).clamp_(-2**15, 2**15 - 1).short() - source = source.cpu() - stem = str(track_folder / name) - if args.mp3: - encode_mp3(source, stem + ".mp3", - bitrate=args.mp3_bitrate, - samplerate=model.samplerate, - channels=model.audio_channels, - verbose=args.verbose) - else: - wavname = str(track_folder / f"{name}.wav") - ta.save(wavname, source, sample_rate=model.samplerate) - - -if __name__ == "__main__": - main() diff --git a/spaces/EvgenyK/Text-To-Image/README.md b/spaces/EvgenyK/Text-To-Image/README.md deleted file mode 100644 index e8183fb2c71cb9843a3dd3d4dcdb0d1c65508490..0000000000000000000000000000000000000000 --- a/spaces/EvgenyK/Text-To-Image/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Text To Image -emoji: 🌍 -colorFrom: indigo -colorTo: pink -sdk: gradio -sdk_version: 3.12.0 -app_file: app.py -pinned: false -license: openrail ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/GMFTBY/PandaGPT/config/__init__.py b/spaces/GMFTBY/PandaGPT/config/__init__.py deleted file mode 100644 index 826b6ef41067725c02ac33210e773bb1a8123896..0000000000000000000000000000000000000000 --- a/spaces/GMFTBY/PandaGPT/config/__init__.py +++ /dev/null @@ -1,37 +0,0 @@ -import yaml - -def load_model_config(model, mode): - # load special config for each model - config_path = f'config/{model}.yaml' - print(f'[!] load configuration from {config_path}') - with open(config_path) as f: - configuration = yaml.load(f, Loader=yaml.FullLoader) - new_config = {} - for key, value in configuration.items(): - if key in ['train', 'test', 'validation']: - if mode == key: - new_config.update(value) - else: - new_config[key] = value - configuration = new_config - return configuration - -def load_config(args): - '''the configuration of each model can rewrite the base configuration''' - # base config - base_configuration = load_base_config() - - # load one model config - configuration = load_model_config(args['model'], args['mode']) - - # update and append the special config for base config - base_configuration.update(configuration) - configuration = base_configuration - return configuration - -def load_base_config(): - config_path = f'config/base.yaml' - with open(config_path) as f: - configuration = yaml.load(f, Loader=yaml.FullLoader) - print(f'[!] 
load base configuration: {config_path}') - return configuration diff --git a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/align_box_corner.py b/spaces/Gen-Sim/Gen-Sim/cliport/tasks/align_box_corner.py deleted file mode 100644 index 6890e1cd755013e83beff8c1265367da4cd3cda8..0000000000000000000000000000000000000000 --- a/spaces/Gen-Sim/Gen-Sim/cliport/tasks/align_box_corner.py +++ /dev/null @@ -1,59 +0,0 @@ -import os -import numpy as np -from cliport.tasks.task import Task -from cliport.utils import utils - - -class AlignBoxCorner(Task): - """Pick up the randomly sized box and align one of its corners to the L-shaped marker on the tabletop.""" - - def __init__(self): - super().__init__() - self.max_steps = 3 - self.lang_template = "align the brown box with the green corner" - self.task_completed_desc = "done with alignment" - self.additional_reset() - - def reset(self, env): - super().reset(env) - - # Generate randomly shaped box. - box_size = self.get_random_size(0.05, 0.15, 0.05, 0.15, 0.01, 0.06) - - # Add corner. - dimx = (box_size[0] / 2 - 0.025 + 0.0025, box_size[0] / 2 + 0.0025) - dimy = (box_size[1] / 2 + 0.0025, box_size[1] / 2 - 0.025 + 0.0025) - corner_template = 'corner/corner-template.urdf' - replace = {'DIMX': dimx, 'DIMY': dimy} - - # IMPORTANT: REPLACE THE TEMPLATE URDF - corner_urdf = self.fill_template(corner_template, replace) - corner_size = (box_size[0], box_size[1], 0) - corner_pose = self.get_random_pose(env, corner_size) - env.add_object(corner_urdf, corner_pose, 'fixed') - - # Add possible placing poses. - theta = utils.quatXYZW_to_eulerXYZ(corner_pose[1])[2] - fip_rot = utils.eulerXYZ_to_quatXYZW((0, 0, theta + np.pi)) - pose1 = (corner_pose[0], fip_rot) - alt_x = (box_size[0] / 2) - (box_size[1] / 2) - alt_y = (box_size[1] / 2) - (box_size[0] / 2) - alt_pos = (alt_x, alt_y, 0) - alt_rot0 = utils.eulerXYZ_to_quatXYZW((0, 0, np.pi / 2)) - alt_rot1 = utils.eulerXYZ_to_quatXYZW((0, 0, 3 * np.pi / 2)) - pose2 = utils.multiply(corner_pose, (alt_pos, alt_rot0)) - pose3 = utils.multiply(corner_pose, (alt_pos, alt_rot1)) - - # Add box. - box_template = 'box/box-template.urdf' - - # IMPORTANT: REPLACE THE TEMPLATE URDF - box_urdf = self.fill_template(box_template, {'DIM': np.float32(box_size)}) - box_pose = self.get_random_pose(env, box_size) - box_id = env.add_object(box_urdf, box_pose) - self.color_random_brown(box_id) - - # Goal: box is aligned with corner (1 of 4 possible poses). 
- self.add_goal(objs=[box_id], matches=np.int32([[1, 1, 1, 1]]), targ_poses=[corner_pose, pose1, pose2, pose3], replace=False, - rotations=True, metric='pose', params=None, step_max_reward=1, symmetries=[2 * np.pi], - language_goal=self.lang_template) diff --git a/spaces/GilbertClaus/VideoCutter/rule34.py b/spaces/GilbertClaus/VideoCutter/rule34.py deleted file mode 100644 index 051835a86dc6d5067dc0727e364ec95ff3e2cf41..0000000000000000000000000000000000000000 --- a/spaces/GilbertClaus/VideoCutter/rule34.py +++ /dev/null @@ -1,103 +0,0 @@ -from bs4 import BeautifulSoup -import json, os -from others import * -import cloudscraper -scraper = cloudscraper.create_scraper() - -def get_info_rule34(link): - - response = scraper.get(link) - soup = BeautifulSoup(response.text, 'html.parser') - - # Mencari judul video di elemen dengan class title_video - title = soup.find(class_="title_video") - if title: - video_title = title.text.strip().replace('/', ' -') - idx = video_title.find(']') - if idx != -1 and idx + 1 < len(video_title) and video_title[idx + 1].isalpha(): - video_title = video_title[:idx + 1] + ' ' + video_title[idx + 1:] - - video_title = video_title.title() - print(f"Judul Video: {video_title}") - else: - print("Judul Video tidak ditemukan") - - # Mencari nama artist di elemen dengan class col - cols = soup.find_all(class_="col") # Menggunakan find_all untuk mendapatkan semua elemen dengan class col - if cols: - for col in cols: # Melakukan iterasi untuk setiap elemen col - # Mencari elemen dengan class label yang memiliki teks yang cocok dengan regex "Artist.*" - label = col.find(class_="label", string="Artist:") - if label: - # Mencari elemen dengan class item yang merupakan saudara dari label - item = label.find_next_sibling(class_="item") - if item: - # Mencari elemen dengan class name yang merupakan anak dari item - name = item.find(class_="name") - if name: - artist = name.text.strip() - print(f"Nama Artist: {artist}") - break # Keluar dari loop jika sudah menemukan nama artist - else: # Menambahkan else di akhir loop - print("Nama Artist tidak ditemukan") # Mencetak pesan jika tidak ada nama artist yang ditemukan - else: - print("Elemen col tidak ditemukan") - - # Mencari thumbnailUrl di script type="application/ld+json" - script = soup.find("script", type="application/ld+json") - if script: - data = json.loads(script.string) - if "thumbnailUrl" in data: - thumbnail_url = data['thumbnailUrl'] - print(f"Thumbnail URL: {thumbnail_url}") - else: - print("Tidak ditemukan thumbnail URL") - else: - print("Tidak ditemukan elemen script dengan type application/ld+json") - - # Mencari resolusi yang tersedia - resolutions = [] - for a in soup.find_all('a'): - if 'MP4' in a.text and 'p' in a.text: - resolutions.append(a.text.split()[1]) - if resolutions: - print("Resolusi yang tersedia: " + ", ".join(resolutions)) - else: - print("Tidak ditemukan resolusi yang tersedia") - - # Mencari kualitas video 720p atau 480p - video_quality_elements = soup.find_all("a", class_="tag_item") - video_quality_720p = None - video_quality_480p = None - for element in video_quality_elements: - if "720p" in element.text: - video_quality_720p = element['href'] - elif "480p" in element.text: - video_quality_480p = element['href'] - - if video_quality_720p: - print(f"Video kualitas 720p: {video_quality_720p}") - video_url = video_quality_720p - elif video_quality_480p: - print(f"Video kualitas 480p: {video_quality_480p}") - video_url = video_quality_480p - else: - print("Tidak ditemukan video kualitas 720p atau 
480p") - video_url = None - - return video_title, artist, video_url, thumbnail_url - -def rule34(link): - video_info = "" - video_title, artist, video_url, thumbnail_url = get_info_rule34(link) - directory = f"/home/user/app/Hasil Download/Rule34/{artist}" - if not os.path.exists(directory): - os.makedirs(directory) - # Menentukan nama file thumbnail - thumbnail_file = download_file(thumbnail_url, video_title, directory) - video_file = download_file(video_url, video_title, directory) - - video_info = f"Nama Channel: {artist}\n" - video_info += f"Judul Video: {video_title}\n" - - return video_file, video_title, video_info, thumbnail_file \ No newline at end of file diff --git a/spaces/GoAPI/Midjourney-zoom-video-generator-GoAPI/README.md b/spaces/GoAPI/Midjourney-zoom-video-generator-GoAPI/README.md deleted file mode 100644 index b6261dd301981cdda74471aae6a46e0f1a353aaa..0000000000000000000000000000000000000000 --- a/spaces/GoAPI/Midjourney-zoom-video-generator-GoAPI/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Goapi Zoom Out Video -emoji: 👀 -colorFrom: yellow -colorTo: green -sdk: gradio -sdk_version: 3.42.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/resnest/cascade_rcnn_s50_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/resnest/cascade_rcnn_s50_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py deleted file mode 100644 index 78a154bba2e12e1daec0efaa6a1cb67016084671..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/resnest/cascade_rcnn_s50_fpn_syncbn-backbone+head_mstrain-range_1x_coco.py +++ /dev/null @@ -1,116 +0,0 @@ -_base_ = '../cascade_rcnn/cascade_rcnn_r50_fpn_1x_coco.py' -norm_cfg = dict(type='SyncBN', requires_grad=True) -model = dict( - pretrained='open-mmlab://resnest50', - backbone=dict( - type='ResNeSt', - stem_channels=64, - depth=50, - radix=2, - reduction_factor=4, - avg_down_stride=True, - num_stages=4, - out_indices=(0, 1, 2, 3), - frozen_stages=1, - norm_cfg=norm_cfg, - norm_eval=False, - style='pytorch'), - roi_head=dict( - bbox_head=[ - dict( - type='Shared4Conv1FCBBoxHead', - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - norm_cfg=norm_cfg, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.1, 0.1, 0.2, 0.2]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared4Conv1FCBBoxHead', - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - norm_cfg=norm_cfg, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.05, 0.05, 0.1, 0.1]), - reg_class_agnostic=True, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, - loss_weight=1.0)), - dict( - type='Shared4Conv1FCBBoxHead', - in_channels=256, - conv_out_channels=256, - fc_out_channels=1024, - norm_cfg=norm_cfg, - roi_feat_size=7, - num_classes=80, - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - target_means=[0., 0., 0., 0.], - target_stds=[0.033, 0.033, 0.067, 0.067]), - reg_class_agnostic=True, - loss_cls=dict( - 
type='CrossEntropyLoss', - use_sigmoid=False, - loss_weight=1.0), - loss_bbox=dict(type='SmoothL1Loss', beta=1.0, loss_weight=1.0)) - ], )) -# # use ResNeSt img_norm -img_norm_cfg = dict( - mean=[123.68, 116.779, 103.939], std=[58.393, 57.12, 57.375], to_rgb=True) -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='LoadAnnotations', - with_bbox=True, - with_mask=False, - poly2mask=False), - dict( - type='Resize', - img_scale=[(1333, 640), (1333, 800)], - multiscale_mode='range', - keep_ratio=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels']), -] -test_pipeline = [ - dict(type='LoadImageFromFile'), - dict( - type='MultiScaleFlipAug', - img_scale=(1333, 800), - flip=False, - transforms=[ - dict(type='Resize', keep_ratio=True), - dict(type='RandomFlip'), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='ImageToTensor', keys=['img']), - dict(type='Collect', keys=['img']), - ]) -] -data = dict( - train=dict(pipeline=train_pipeline), - val=dict(pipeline=test_pipeline), - test=dict(pipeline=test_pipeline)) diff --git a/spaces/Gradio-Blocks/uniformer_image_detection/configs/swin/mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_3x_coco.py b/spaces/Gradio-Blocks/uniformer_image_detection/configs/swin/mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_3x_coco.py deleted file mode 100644 index c58057747d7d922293b6838e6eb1e13aa520aa3a..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_detection/configs/swin/mask_rcnn_swin_tiny_patch4_window7_mstrain_480-800_adamw_3x_coco.py +++ /dev/null @@ -1,80 +0,0 @@ -_base_ = [ - '../_base_/models/mask_rcnn_swin_fpn.py', - '../_base_/datasets/coco_instance.py', - '../_base_/schedules/schedule_1x.py', '../_base_/default_runtime.py' -] - -model = dict( - backbone=dict( - embed_dim=96, - depths=[2, 2, 6, 2], - num_heads=[3, 6, 12, 24], - window_size=7, - ape=False, - drop_path_rate=0.2, - patch_norm=True, - use_checkpoint=False - ), - neck=dict(in_channels=[96, 192, 384, 768])) - -img_norm_cfg = dict( - mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True) - -# augmentation strategy originates from DETR / Sparse RCNN -train_pipeline = [ - dict(type='LoadImageFromFile'), - dict(type='LoadAnnotations', with_bbox=True, with_mask=True), - dict(type='RandomFlip', flip_ratio=0.5), - dict(type='AutoAugment', - policies=[ - [ - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), (576, 1333), - (608, 1333), (640, 1333), (672, 1333), (704, 1333), - (736, 1333), (768, 1333), (800, 1333)], - multiscale_mode='value', - keep_ratio=True) - ], - [ - dict(type='Resize', - img_scale=[(400, 1333), (500, 1333), (600, 1333)], - multiscale_mode='value', - keep_ratio=True), - dict(type='RandomCrop', - crop_type='absolute_range', - crop_size=(384, 600), - allow_negative_crop=True), - dict(type='Resize', - img_scale=[(480, 1333), (512, 1333), (544, 1333), - (576, 1333), (608, 1333), (640, 1333), - (672, 1333), (704, 1333), (736, 1333), - (768, 1333), (800, 1333)], - multiscale_mode='value', - override=True, - keep_ratio=True) - ] - ]), - dict(type='Normalize', **img_norm_cfg), - dict(type='Pad', size_divisor=32), - dict(type='DefaultFormatBundle'), - dict(type='Collect', keys=['img', 'gt_bboxes', 'gt_labels', 'gt_masks']), -] -data = 
dict(train=dict(pipeline=train_pipeline)) - -optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, betas=(0.9, 0.999), weight_decay=0.05, - paramwise_cfg=dict(custom_keys={'absolute_pos_embed': dict(decay_mult=0.), - 'relative_position_bias_table': dict(decay_mult=0.), - 'norm': dict(decay_mult=0.)})) -lr_config = dict(step=[27, 33]) -runner = dict(type='EpochBasedRunnerAmp', max_epochs=36) - -# do not use mmdet version fp16 -fp16 = None -optimizer_config = dict( - type="DistOptimizerHook", - update_interval=1, - grad_clip=None, - coalesce=True, - bucket_size_mb=-1, - use_fp16=True, -) diff --git a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_512x1024_40k_cityscapes.py b/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_512x1024_40k_cityscapes.py deleted file mode 100644 index fd6897691d3f8f200783fae7bfe231735f25a11b..0000000000000000000000000000000000000000 --- a/spaces/Gradio-Blocks/uniformer_image_segmentation/configs/dmnet/dmnet_r101-d8_512x1024_40k_cityscapes.py +++ /dev/null @@ -1,2 +0,0 @@ -_base_ = './dmnet_r50-d8_512x1024_40k_cityscapes.py' -model = dict(pretrained='open-mmlab://resnet101_v1c', backbone=dict(depth=101)) diff --git a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/common_utils/temp_utils.py b/spaces/GrandaddyShmax/AudioCraft_Plus/tests/common_utils/temp_utils.py deleted file mode 100644 index b45d896836799edcf1fee271409b390b3b6e4127..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/AudioCraft_Plus/tests/common_utils/temp_utils.py +++ /dev/null @@ -1,56 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import os -import tempfile - - -class TempDirMixin: - """Mixin to provide easy access to temp dir. - """ - - temp_dir_ = None - - @classmethod - def get_base_temp_dir(cls): - # If AUDIOCRAFT_TEST_DIR is set, use it instead of temporary directory. - # this is handy for debugging. - key = "AUDIOCRAFT_TEST_DIR" - if key in os.environ: - return os.environ[key] - if cls.temp_dir_ is None: - cls.temp_dir_ = tempfile.TemporaryDirectory() - return cls.temp_dir_.name - - @classmethod - def tearDownClass(cls): - if cls.temp_dir_ is not None: - try: - cls.temp_dir_.cleanup() - cls.temp_dir_ = None - except PermissionError: - # On Windows there is a know issue with `shutil.rmtree`, - # which fails intermittently. - # https://github.com/python/cpython/issues/74168 - # Following the above thread, we ignore it. - pass - super().tearDownClass() - - @property - def id(self): - return self.__class__.__name__ - - def get_temp_path(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(os.path.dirname(path), exist_ok=True) - return path - - def get_temp_dir(self, *paths): - temp_dir = os.path.join(self.get_base_temp_dir(), self.id) - path = os.path.join(temp_dir, *paths) - os.makedirs(path, exist_ok=True) - return path diff --git a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/tests/data/test_audio.py b/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/tests/data/test_audio.py deleted file mode 100644 index 40c0d5ed69eff92a766dc6d176e532f0df6c2b5e..0000000000000000000000000000000000000000 --- a/spaces/GrandaddyShmax/MusicGen_Plus_hfv2/tests/data/test_audio.py +++ /dev/null @@ -1,239 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. 
-# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -from itertools import product -import random - -import numpy as np -import torch -import torchaudio - -from audiocraft.data.audio import audio_info, audio_read, audio_write, _av_read - -from ..common_utils import TempDirMixin, get_white_noise, save_wav - - -class TestInfo(TempDirMixin): - - def test_info_mp3(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - wav = get_white_noise(ch, int(sample_rate * duration)) - path = self.get_temp_path('sample_wav.mp3') - save_wav(path, wav, sample_rate) - info = audio_info(path) - assert info.sample_rate == sample_rate - assert info.channels == ch - # we cannot trust torchaudio for num_frames, so we don't check - - def _test_info_format(self, ext: str): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path(f'sample_wav{ext}') - save_wav(path, wav, sample_rate) - info = audio_info(path) - assert info.sample_rate == sample_rate - assert info.channels == ch - assert np.isclose(info.duration, duration, atol=1e-5) - - def test_info_wav(self): - self._test_info_format('.wav') - - def test_info_flac(self): - self._test_info_format('.flac') - - def test_info_ogg(self): - self._test_info_format('.ogg') - - def test_info_m4a(self): - # TODO: generate m4a file programmatically - # self._test_info_format('.m4a') - pass - - -class TestRead(TempDirMixin): - - def test_read_full_wav(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99) - path = self.get_temp_path('sample_wav.wav') - save_wav(path, wav, sample_rate) - read_wav, read_sr = audio_read(path) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[1] == wav.shape[1] - assert torch.allclose(read_wav, wav, rtol=1e-03, atol=1e-04) - - def test_read_partial_wav(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - read_duration = torch.rand(1).item() - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - read_frames = int(sample_rate * read_duration) - wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99) - path = self.get_temp_path('sample_wav.wav') - save_wav(path, wav, sample_rate) - read_wav, read_sr = audio_read(path, 0, read_duration) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[1] == read_frames - assert torch.allclose(read_wav[..., 0:read_frames], wav[..., 0:read_frames], rtol=1e-03, atol=1e-04) - - def test_read_seek_time_wav(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - read_duration = 1. 
- for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99) - path = self.get_temp_path('sample_wav.wav') - save_wav(path, wav, sample_rate) - seek_time = torch.rand(1).item() - read_wav, read_sr = audio_read(path, seek_time, read_duration) - seek_frames = int(sample_rate * seek_time) - expected_frames = n_frames - seek_frames - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[1] == expected_frames - assert torch.allclose(read_wav, wav[..., seek_frames:], rtol=1e-03, atol=1e-04) - - def test_read_seek_time_wav_padded(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - read_duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - read_frames = int(sample_rate * read_duration) - wav = get_white_noise(ch, n_frames).clamp(-0.99, 0.99) - path = self.get_temp_path('sample_wav.wav') - save_wav(path, wav, sample_rate) - seek_time = torch.rand(1).item() - seek_frames = int(sample_rate * seek_time) - expected_frames = n_frames - seek_frames - read_wav, read_sr = audio_read(path, seek_time, read_duration, pad=True) - expected_pad_wav = torch.zeros(wav.shape[0], read_frames - expected_frames) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[1] == read_frames - assert torch.allclose(read_wav[..., :expected_frames], wav[..., seek_frames:], rtol=1e-03, atol=1e-04) - assert torch.allclose(read_wav[..., expected_frames:], expected_pad_wav) - - -class TestAvRead(TempDirMixin): - - def test_avread_seek_base(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 2. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path(f'reference_a_{sample_rate}_{ch}.wav') - save_wav(path, wav, sample_rate) - for _ in range(100): - # seek will always load a full duration segment in the file - seek_time = random.uniform(0.0, 1.0) - seek_duration = random.uniform(0.001, 1.0) - read_wav, read_sr = _av_read(path, seek_time, seek_duration) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[-1] == int(seek_duration * sample_rate) - - def test_avread_seek_partial(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path(f'reference_b_{sample_rate}_{ch}.wav') - save_wav(path, wav, sample_rate) - for _ in range(100): - # seek will always load a partial segment - seek_time = random.uniform(0.5, 1.) - seek_duration = 1. - expected_num_frames = n_frames - int(seek_time * sample_rate) - read_wav, read_sr = _av_read(path, seek_time, seek_duration) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[-1] == expected_num_frames - - def test_avread_seek_outofbound(self): - sample_rates = [8000, 16_000] - channels = [1, 2] - duration = 1. - for sample_rate, ch in product(sample_rates, channels): - n_frames = int(sample_rate * duration) - wav = get_white_noise(ch, n_frames) - path = self.get_temp_path(f'reference_c_{sample_rate}_{ch}.wav') - save_wav(path, wav, sample_rate) - seek_time = 1.5 - read_wav, read_sr = _av_read(path, seek_time, 1.) 
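        # seek_time = 1.5 s lies beyond the 1 s file, so the expected result is an empty
        # waveform: the sample rate and channel count are preserved, but zero frames are read.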
- assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[-1] == 0 - - def test_avread_seek_edge(self): - sample_rates = [8000, 16_000] - # some of these values will have - # int(((frames - 1) / sample_rate) * sample_rate) != (frames - 1) - n_frames = [1000, 1001, 1002] - channels = [1, 2] - for sample_rate, ch, frames in product(sample_rates, channels, n_frames): - duration = frames / sample_rate - wav = get_white_noise(ch, frames) - path = self.get_temp_path(f'reference_d_{sample_rate}_{ch}.wav') - save_wav(path, wav, sample_rate) - seek_time = (frames - 1) / sample_rate - seek_frames = int(seek_time * sample_rate) - read_wav, read_sr = _av_read(path, seek_time, duration) - assert read_sr == sample_rate - assert read_wav.shape[0] == wav.shape[0] - assert read_wav.shape[-1] == (frames - seek_frames) - - -class TestAudioWrite(TempDirMixin): - - def test_audio_write_wav(self): - torch.manual_seed(1234) - sample_rates = [8000, 16_000] - n_frames = [1000, 1001, 1002] - channels = [1, 2] - strategies = ["peak", "clip", "rms"] - formats = ["wav", "mp3"] - for sample_rate, ch, frames in product(sample_rates, channels, n_frames): - for format_, strategy in product(formats, strategies): - wav = get_white_noise(ch, frames) - path = self.get_temp_path(f'pred_{sample_rate}_{ch}') - audio_write(path, wav, sample_rate, format_, strategy=strategy) - read_wav, read_sr = torchaudio.load(f'{path}.{format_}') - if format_ == "wav": - assert read_wav.shape == wav.shape - - if format_ == "wav" and strategy in ["peak", "rms"]: - rescaled_read_wav = read_wav / read_wav.abs().max() * wav.abs().max() - # for a Gaussian, the typical max scale will be less than ~5x the std. - # The error when writing to disk will ~ 1/2**15, and when rescaling, 5x that. 
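                    # Numerically, that is 5 / 2**15 ≈ 1.5e-4 of absolute tolerance for the
                    # peak strategy.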
- # For RMS target, rescaling leaves more headroom by default, leading - # to a 20x rescaling typically - atol = (5 if strategy == "peak" else 20) / 2**15 - delta = (rescaled_read_wav - wav).abs().max() - assert torch.allclose(wav, rescaled_read_wav, rtol=0, atol=atol), (delta, atol) - formats = ["wav"] # faster unit tests diff --git a/spaces/GroveStreet/GTA_SOVITS/modules/enhancer.py b/spaces/GroveStreet/GTA_SOVITS/modules/enhancer.py deleted file mode 100644 index 37676311f7d8dc4ddc2a5244dedc27b2437e04f5..0000000000000000000000000000000000000000 --- a/spaces/GroveStreet/GTA_SOVITS/modules/enhancer.py +++ /dev/null @@ -1,105 +0,0 @@ -import numpy as np -import torch -import torch.nn.functional as F -from vdecoder.nsf_hifigan.nvSTFT import STFT -from vdecoder.nsf_hifigan.models import load_model -from torchaudio.transforms import Resample - -class Enhancer: - def __init__(self, enhancer_type, enhancer_ckpt, device=None): - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.device = device - - if enhancer_type == 'nsf-hifigan': - self.enhancer = NsfHifiGAN(enhancer_ckpt, device=self.device) - else: - raise ValueError(f" [x] Unknown enhancer: {enhancer_type}") - - self.resample_kernel = {} - self.enhancer_sample_rate = self.enhancer.sample_rate() - self.enhancer_hop_size = self.enhancer.hop_size() - - def enhance(self, - audio, # 1, T - sample_rate, - f0, # 1, n_frames, 1 - hop_size, - adaptive_key = 0, - silence_front = 0 - ): - # enhancer start time - start_frame = int(silence_front * sample_rate / hop_size) - real_silence_front = start_frame * hop_size / sample_rate - audio = audio[:, int(np.round(real_silence_front * sample_rate)) : ] - f0 = f0[: , start_frame :, :] - - # adaptive parameters - adaptive_factor = 2 ** ( -adaptive_key / 12) - adaptive_sample_rate = 100 * int(np.round(self.enhancer_sample_rate / adaptive_factor / 100)) - real_factor = self.enhancer_sample_rate / adaptive_sample_rate - - # resample the ddsp output - if sample_rate == adaptive_sample_rate: - audio_res = audio - else: - key_str = str(sample_rate) + str(adaptive_sample_rate) - if key_str not in self.resample_kernel: - self.resample_kernel[key_str] = Resample(sample_rate, adaptive_sample_rate, lowpass_filter_width = 128).to(self.device) - audio_res = self.resample_kernel[key_str](audio) - - n_frames = int(audio_res.size(-1) // self.enhancer_hop_size + 1) - - # resample f0 - f0_np = f0.squeeze(0).squeeze(-1).cpu().numpy() - f0_np *= real_factor - time_org = (hop_size / sample_rate) * np.arange(len(f0_np)) / real_factor - time_frame = (self.enhancer_hop_size / self.enhancer_sample_rate) * np.arange(n_frames) - f0_res = np.interp(time_frame, time_org, f0_np, left=f0_np[0], right=f0_np[-1]) - f0_res = torch.from_numpy(f0_res).unsqueeze(0).float().to(self.device) # 1, n_frames - - # enhance - enhanced_audio, enhancer_sample_rate = self.enhancer(audio_res, f0_res) - - # resample the enhanced output - if adaptive_factor != 0: - key_str = str(adaptive_sample_rate) + str(enhancer_sample_rate) - if key_str not in self.resample_kernel: - self.resample_kernel[key_str] = Resample(adaptive_sample_rate, enhancer_sample_rate, lowpass_filter_width = 128).to(self.device) - enhanced_audio = self.resample_kernel[key_str](enhanced_audio) - - # pad the silence frames - if start_frame > 0: - enhanced_audio = F.pad(enhanced_audio, (int(np.round(enhancer_sample_rate * real_silence_front)), 0)) - - return enhanced_audio, enhancer_sample_rate - - -class NsfHifiGAN(torch.nn.Module): - def __init__(self, 
model_path, device=None): - super().__init__() - if device is None: - device = 'cuda' if torch.cuda.is_available() else 'cpu' - self.device = device - print('| Load HifiGAN: ', model_path) - self.model, self.h = load_model(model_path, device=self.device) - - def sample_rate(self): - return self.h.sampling_rate - - def hop_size(self): - return self.h.hop_size - - def forward(self, audio, f0): - stft = STFT( - self.h.sampling_rate, - self.h.num_mels, - self.h.n_fft, - self.h.win_size, - self.h.hop_size, - self.h.fmin, - self.h.fmax) - with torch.no_grad(): - mel = stft.get_mel(audio) - enhanced_audio = self.model(mel, f0[:,:mel.size(-1)]).view(-1) - return enhanced_audio, self.h.sampling_rate \ No newline at end of file diff --git a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/model_parallel/megatron_trainer.py b/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/model_parallel/megatron_trainer.py deleted file mode 100644 index 8ab4657f73c6cda91e95637921edb84ccb76b3d0..0000000000000000000000000000000000000000 --- a/spaces/HarryLee/eCommerceImageCaptioning/fairseq/fairseq/model_parallel/megatron_trainer.py +++ /dev/null @@ -1,71 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -""" -Train a network across multiple GPUs. -""" - -from fairseq.dataclass.configs import FairseqConfig -from fairseq.distributed import utils as distributed_utils -from fairseq.trainer import Trainer - -try: - from fairseq.model_parallel.megatron.mpu import ( - get_data_parallel_rank, - get_data_parallel_world_size, - get_model_parallel_src_rank, - get_cuda_rng_tracker, - ) - - has_megatron_submodule = True -except (ImportError, ModuleNotFoundError): - has_megatron_submodule = False - - -class MegatronTrainer(Trainer): - """Main class for model parallel with data parallel training.""" - - def __init__(self, cfg: FairseqConfig, task, model, criterion, **kwargs): - if not has_megatron_submodule: - raise ImportError( - "\n\nPlease install the megatron submodule:" - "\n\n git submodule update --init " - "fairseq/model_parallel/megatron" - ) - super().__init__(cfg, task, model, criterion, **kwargs) - - def clip_grad_norm(self, clip_norm): - def _aggregate_model_parallel_grad_norm(total_norm): - total_norm = total_norm ** 2 - distributed_utils.all_reduce( - total_norm, group=distributed_utils.get_model_parallel_group() - ) - total_norm = total_norm ** 0.5 - return total_norm - - return self.optimizer.clip_grad_norm( - clip_norm, - aggregate_norm_fn=_aggregate_model_parallel_grad_norm, - ) - - def save_checkpoint(self, filename, extra_state): - """Save all training state in a checkpoint file.""" - extra_state['rng_tracker_states'] \ - = get_cuda_rng_tracker().get_states() - super().save_checkpoint(filename, extra_state) - - def load_checkpoint( - self, - filename, - reset_optimizer=False, - reset_lr_scheduler=False, - optimizer_overrides=None, - reset_meters=False, - ): - extra_state = super().load_checkpoint(filename, reset_optimizer=reset_optimizer, reset_lr_scheduler=reset_lr_scheduler, optimizer_overrides=optimizer_overrides, reset_meters=reset_meters) - if extra_state is not None and 'rng_tracker_states' in extra_state: - get_cuda_rng_tracker().set_states( - extra_state['rng_tracker_states']) - return extra_state diff --git a/spaces/Hexequin/dreamlike-photoreal-2.0/README.md b/spaces/Hexequin/dreamlike-photoreal-2.0/README.md deleted file mode 100644 index 
0b4a00455e0eafdfab30671a0aa4ad5aa3c6276b..0000000000000000000000000000000000000000 --- a/spaces/Hexequin/dreamlike-photoreal-2.0/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Dreamlike Photoreal 2.0 -emoji: 🏢 -colorFrom: red -colorTo: yellow -sdk: gradio -sdk_version: 3.20.1 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HuggingFaceH4/reward-modeling-chat-ui/README.md b/spaces/HuggingFaceH4/reward-modeling-chat-ui/README.md deleted file mode 100644 index 5dd2193b8a7a59d213ab3754ec9f76876fcdd4fd..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceH4/reward-modeling-chat-ui/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Reward Modeling UI -emoji: 🎁 -colorFrom: orange -colorTo: indigo -sdk: gradio -python_version: 3.9.13 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HuggingFaceH4/starchat-playground/README.md b/spaces/HuggingFaceH4/starchat-playground/README.md deleted file mode 100644 index a220058af2796c94f985a2bf4dd46e4cfd48986a..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceH4/starchat-playground/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: StarChat Playground -emoji: ⭐️💬 -colorFrom: pink -colorTo: indigo -sdk: gradio -sdk_version: 3.33.1 -app_file: app.py -pinned: true -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/npmi/app.py b/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/npmi/app.py deleted file mode 100644 index 018d7a9579ceba243be2f0f25e0d0ff4228b6e4f..0000000000000000000000000000000000000000 --- a/spaces/HuggingFaceM4/IDEFICS_Data_Measurement_Tool/npmi/app.py +++ /dev/null @@ -1,6 +0,0 @@ -import evaluate -from evaluate.utils import launch_gradio_widget - - -module = evaluate.load("npmi", module_type= "measurement") -launch_gradio_widget(module) \ No newline at end of file diff --git a/spaces/ICML2022/OFA/fairseq/fairseq/data/backtranslation_dataset.py b/spaces/ICML2022/OFA/fairseq/fairseq/data/backtranslation_dataset.py deleted file mode 100644 index 8f70c90df3d237077537993e125d366c95292f1a..0000000000000000000000000000000000000000 --- a/spaces/ICML2022/OFA/fairseq/fairseq/data/backtranslation_dataset.py +++ /dev/null @@ -1,165 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import torch -from fairseq import utils - -from . import FairseqDataset - - -def backtranslate_samples(samples, collate_fn, generate_fn, cuda=True): - """Backtranslate a list of samples. - - Given an input (*samples*) of the form: - - [{'id': 1, 'source': 'hallo welt'}] - - this will return: - - [{'id': 1, 'source': 'hello world', 'target': 'hallo welt'}] - - Args: - samples (List[dict]): samples to backtranslate. Individual samples are - expected to have a 'source' key, which will become the 'target' - after backtranslation. 
- collate_fn (callable): function to collate samples into a mini-batch - generate_fn (callable): function to generate backtranslations - cuda (bool): use GPU for generation (default: ``True``) - - Returns: - List[dict]: an updated list of samples with a backtranslated source - """ - collated_samples = collate_fn(samples) - s = utils.move_to_cuda(collated_samples) if cuda else collated_samples - generated_sources = generate_fn(s) - - id_to_src = {sample["id"]: sample["source"] for sample in samples} - - # Go through each tgt sentence in batch and its corresponding best - # generated hypothesis and create a backtranslation data pair - # {id: id, source: generated backtranslation, target: original tgt} - return [ - { - "id": id.item(), - "target": id_to_src[id.item()], - "source": hypos[0]["tokens"].cpu(), - } - for id, hypos in zip(collated_samples["id"], generated_sources) - ] - - -class BacktranslationDataset(FairseqDataset): - """ - Sets up a backtranslation dataset which takes a tgt batch, generates - a src using a tgt-src backtranslation function (*backtranslation_fn*), - and returns the corresponding `{generated src, input tgt}` batch. - - Args: - tgt_dataset (~fairseq.data.FairseqDataset): the dataset to be - backtranslated. Only the source side of this dataset will be used. - After backtranslation, the source sentences in this dataset will be - returned as the targets. - src_dict (~fairseq.data.Dictionary): the dictionary of backtranslated - sentences. - tgt_dict (~fairseq.data.Dictionary, optional): the dictionary of - sentences to be backtranslated. - backtranslation_fn (callable, optional): function to call to generate - backtranslations. This is typically the `generate` method of a - :class:`~fairseq.sequence_generator.SequenceGenerator` object. - Pass in None when it is not available at initialization time, and - use set_backtranslation_fn function to set it when available. - output_collater (callable, optional): function to call on the - backtranslated samples to create the final batch - (default: ``tgt_dataset.collater``). - cuda: use GPU for generation - """ - - def __init__( - self, - tgt_dataset, - src_dict, - tgt_dict=None, - backtranslation_fn=None, - output_collater=None, - cuda=True, - **kwargs - ): - self.tgt_dataset = tgt_dataset - self.backtranslation_fn = backtranslation_fn - self.output_collater = ( - output_collater if output_collater is not None else tgt_dataset.collater - ) - self.cuda = cuda if torch.cuda.is_available() else False - self.src_dict = src_dict - self.tgt_dict = tgt_dict - - def __getitem__(self, index): - """ - Returns a single sample from *tgt_dataset*. Note that backtranslation is - not applied in this step; use :func:`collater` instead to backtranslate - a batch of samples. - """ - return self.tgt_dataset[index] - - def __len__(self): - return len(self.tgt_dataset) - - def set_backtranslation_fn(self, backtranslation_fn): - self.backtranslation_fn = backtranslation_fn - - def collater(self, samples): - """Merge and backtranslate a list of samples to form a mini-batch. - - Using the samples from *tgt_dataset*, load a collated target sample to - feed to the backtranslation model. Then take the backtranslation with - the best score as the source and the original input as the target. - - Note: we expect *tgt_dataset* to provide a function `collater()` that - will collate samples into the format expected by *backtranslation_fn*. 
- After backtranslation, we will feed the new list of samples (i.e., the - `(backtranslated source, original source)` pairs) to *output_collater* - and return the result. - - Args: - samples (List[dict]): samples to backtranslate and collate - - Returns: - dict: a mini-batch with keys coming from *output_collater* - """ - if samples[0].get("is_dummy", False): - return samples - samples = backtranslate_samples( - samples=samples, - collate_fn=self.tgt_dataset.collater, - generate_fn=(lambda net_input: self.backtranslation_fn(net_input)), - cuda=self.cuda, - ) - return self.output_collater(samples) - - def num_tokens(self, index): - """Just use the tgt dataset num_tokens""" - return self.tgt_dataset.num_tokens(index) - - def ordered_indices(self): - """Just use the tgt dataset ordered_indices""" - return self.tgt_dataset.ordered_indices() - - def size(self, index): - """Return an example's size as a float or tuple. This value is used - when filtering a dataset with ``--max-positions``. - - Note: we use *tgt_dataset* to approximate the length of the source - sentence, since we do not know the actual length until after - backtranslation. - """ - tgt_size = self.tgt_dataset.size(index)[0] - return (tgt_size, tgt_size) - - @property - def supports_prefetch(self): - return getattr(self.tgt_dataset, "supports_prefetch", False) - - def prefetch(self, indices): - return self.tgt_dataset.prefetch(indices) diff --git a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/cluster/__init__.py b/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/cluster/__init__.py deleted file mode 100644 index f1b9bde04e73e9218a5d534227caa4c25332f424..0000000000000000000000000000000000000000 --- a/spaces/Ikaros521/so-vits-svc-4.0-ikaros2/cluster/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -import numpy as np -import torch -from sklearn.cluster import KMeans - -def get_cluster_model(ckpt_path): - checkpoint = torch.load(ckpt_path) - kmeans_dict = {} - for spk, ckpt in checkpoint.items(): - km = KMeans(ckpt["n_features_in_"]) - km.__dict__["n_features_in_"] = ckpt["n_features_in_"] - km.__dict__["_n_threads"] = ckpt["_n_threads"] - km.__dict__["cluster_centers_"] = ckpt["cluster_centers_"] - kmeans_dict[spk] = km - return kmeans_dict - -def get_cluster_result(model, x, speaker): - """ - x: np.array [t, 256] - return cluster class result - """ - return model[speaker].predict(x) - -def get_cluster_center_result(model, x,speaker): - """x: np.array [t, 256]""" - predict = model[speaker].predict(x) - return model[speaker].cluster_centers_[predict] - -def get_center(model, x,speaker): - return model[speaker].cluster_centers_[x] diff --git a/spaces/Jack003/PixelDayAvatoon/DESCRIPTION.md b/spaces/Jack003/PixelDayAvatoon/DESCRIPTION.md deleted file mode 100644 index e66cbc0582bd61f2bd0bef76e81fd060c2f9526c..0000000000000000000000000000000000000000 --- a/spaces/Jack003/PixelDayAvatoon/DESCRIPTION.md +++ /dev/null @@ -1 +0,0 @@ -Recreate the viral AnimeGAN image transformation demo. 
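The k-means helpers in `cluster/__init__.py` above can be driven with a few lines; a minimal sketch, assuming the import is done from the repository root, a checkpoint in the format `get_cluster_model` expects, a speaker key present in it, and 256-dimensional content features (the path, speaker name and feature array are illustrative placeholders):

```python
import numpy as np

from cluster import get_cluster_model, get_cluster_result, get_cluster_center_result

# Placeholder checkpoint path and speaker key for illustration.
kmeans_dict = get_cluster_model("logs/44k/kmeans_10000.pt")
features = np.random.randn(200, 256).astype(np.float32)  # [t, 256] content features

labels = get_cluster_result(kmeans_dict, features, "ikaros")           # cluster id per frame
centers = get_cluster_center_result(kmeans_dict, features, "ikaros")   # nearest centre per frame
print(labels.shape, centers.shape)  # e.g. (200,) and (200, 256)
```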
\ No newline at end of file diff --git a/spaces/JavaFXpert/GPT-3.5-Express-inator/README.md b/spaces/JavaFXpert/GPT-3.5-Express-inator/README.md deleted file mode 100644 index a54d1f283e6d54e337977c73846d551c9ad5bdf4..0000000000000000000000000000000000000000 --- a/spaces/JavaFXpert/GPT-3.5-Express-inator/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: GPT 3.5 Express Inator -emoji: 💻 -colorFrom: pink -colorTo: yellow -sdk: gradio -sdk_version: 3.15.0 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/javascript/message-button.js b/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/javascript/message-button.js deleted file mode 100644 index e16b065c8c0ea84b927ebbb46b7ff336d085b8d9..0000000000000000000000000000000000000000 --- a/spaces/JohnSmith9982/ChuanhuChatGPT/web_assets/javascript/message-button.js +++ /dev/null @@ -1,92 +0,0 @@ - -// 为 bot 消息添加复制与切换显示按钮 - -function addChuanhuButton(botElement) { - var rawMessage = botElement.querySelector('.raw-message'); - var mdMessage = botElement.querySelector('.md-message'); - - if (!rawMessage) { // 如果没有 raw message,说明是早期历史记录,去除按钮 - var buttons = botElement.querySelectorAll('button.chuanhu-btn'); - for (var i = 0; i < buttons.length; i++) { - buttons[i].parentNode.removeChild(buttons[i]); - } - return; - } - botElement.querySelectorAll('button.copy-bot-btn, button.toggle-md-btn').forEach(btn => btn.remove()); // 就算原先有了,也必须重新添加,而不是跳过 - - // Copy bot button - var copyButton = document.createElement('button'); - copyButton.classList.add('chuanhu-btn'); - copyButton.classList.add('copy-bot-btn'); - copyButton.setAttribute('aria-label', 'Copy'); - copyButton.innerHTML = copyIcon; - - copyButton.addEventListener('click', async () => { - const textToCopy = rawMessage.innerText; - try { - if ("clipboard" in navigator) { - await navigator.clipboard.writeText(textToCopy); - copyButton.innerHTML = copiedIcon; - setTimeout(() => { - copyButton.innerHTML = copyIcon; - }, 1500); - } else { - const textArea = document.createElement("textarea"); - textArea.value = textToCopy; - document.body.appendChild(textArea); - textArea.select(); - try { - document.execCommand('copy'); - copyButton.innerHTML = copiedIcon; - setTimeout(() => { - copyButton.innerHTML = copyIcon; - }, 1500); - } catch (error) { - console.error("Copy failed: ", error); - } - document.body.removeChild(textArea); - } - } catch (error) { - console.error("Copy failed: ", error); - } - }); - botElement.appendChild(copyButton); - - // Toggle button - var toggleButton = document.createElement('button'); - toggleButton.classList.add('chuanhu-btn'); - toggleButton.classList.add('toggle-md-btn'); - toggleButton.setAttribute('aria-label', 'Toggle'); - var renderMarkdown = mdMessage.classList.contains('hideM'); - toggleButton.innerHTML = renderMarkdown ? 
mdIcon : rawIcon; - toggleButton.addEventListener('click', () => { - renderMarkdown = mdMessage.classList.contains('hideM'); - if (renderMarkdown) { - renderMarkdownText(botElement); - toggleButton.innerHTML=rawIcon; - } else { - removeMarkdownText(botElement); - toggleButton.innerHTML=mdIcon; - } - chatbotContentChanged(1); // to set md or raw in read-only history html - }); - botElement.insertBefore(toggleButton, copyButton); - - function renderMarkdownText(message) { - var mdDiv = message.querySelector('.md-message'); - if (mdDiv) mdDiv.classList.remove('hideM'); - var rawDiv = message.querySelector('.raw-message'); - if (rawDiv) rawDiv.classList.add('hideM'); - } - function removeMarkdownText(message) { - var rawDiv = message.querySelector('.raw-message'); - if (rawDiv) { - rawDiv.innerHTML = rawDiv.querySelector('pre')?.innerHTML || rawDiv.innerHTML; - rawDiv.classList.remove('hideM'); - } - var mdDiv = message.querySelector('.md-message'); - if (mdDiv) mdDiv.classList.add('hideM'); - } -} - - diff --git a/spaces/Kangarroar/ApplioRVC-Inference/demucs/__main__.py b/spaces/Kangarroar/ApplioRVC-Inference/demucs/__main__.py deleted file mode 100644 index 5148f20623bdaa827777558844796ded1876d7d0..0000000000000000000000000000000000000000 --- a/spaces/Kangarroar/ApplioRVC-Inference/demucs/__main__.py +++ /dev/null @@ -1,317 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -import json -import math -import os -import sys -import time -from dataclasses import dataclass, field - -import torch as th -from torch import distributed, nn -from torch.nn.parallel.distributed import DistributedDataParallel - -from .augment import FlipChannels, FlipSign, Remix, Scale, Shift -from .compressed import get_compressed_datasets -from .model import Demucs -from .parser import get_name, get_parser -from .raw import Rawset -from .repitch import RepitchedWrapper -from .pretrained import load_pretrained, SOURCES -from .tasnet import ConvTasNet -from .test import evaluate -from .train import train_model, validate_model -from .utils import (human_seconds, load_model, save_model, get_state, - save_state, sizeof_fmt, get_quantizer) -from .wav import get_wav_datasets, get_musdb_wav_datasets - - -@dataclass -class SavedState: - metrics: list = field(default_factory=list) - last_state: dict = None - best_state: dict = None - optimizer: dict = None - - -def main(): - parser = get_parser() - args = parser.parse_args() - name = get_name(parser, args) - print(f"Experiment {name}") - - if args.musdb is None and args.rank == 0: - print( - "You must provide the path to the MusDB dataset with the --musdb flag. " - "To download the MusDB dataset, see https://sigsep.github.io/datasets/musdb.html.", - file=sys.stderr) - sys.exit(1) - - eval_folder = args.evals / name - eval_folder.mkdir(exist_ok=True, parents=True) - args.logs.mkdir(exist_ok=True) - metrics_path = args.logs / f"{name}.json" - eval_folder.mkdir(exist_ok=True, parents=True) - args.checkpoints.mkdir(exist_ok=True, parents=True) - args.models.mkdir(exist_ok=True, parents=True) - - if args.device is None: - device = "cpu" - if th.cuda.is_available(): - device = "cuda" - else: - device = args.device - - th.manual_seed(args.seed) - # Prevents too many threads to be started when running `museval` as it can be quite - # inefficient on NUMA architectures. 
- os.environ["OMP_NUM_THREADS"] = "1" - os.environ["MKL_NUM_THREADS"] = "1" - - if args.world_size > 1: - if device != "cuda" and args.rank == 0: - print("Error: distributed training is only available with cuda device", file=sys.stderr) - sys.exit(1) - th.cuda.set_device(args.rank % th.cuda.device_count()) - distributed.init_process_group(backend="nccl", - init_method="tcp://" + args.master, - rank=args.rank, - world_size=args.world_size) - - checkpoint = args.checkpoints / f"{name}.th" - checkpoint_tmp = args.checkpoints / f"{name}.th.tmp" - if args.restart and checkpoint.exists() and args.rank == 0: - checkpoint.unlink() - - if args.test or args.test_pretrained: - args.epochs = 1 - args.repeat = 0 - if args.test: - model = load_model(args.models / args.test) - else: - model = load_pretrained(args.test_pretrained) - elif args.tasnet: - model = ConvTasNet(audio_channels=args.audio_channels, - samplerate=args.samplerate, X=args.X, - segment_length=4 * args.samples, - sources=SOURCES) - else: - model = Demucs( - audio_channels=args.audio_channels, - channels=args.channels, - context=args.context, - depth=args.depth, - glu=args.glu, - growth=args.growth, - kernel_size=args.kernel_size, - lstm_layers=args.lstm_layers, - rescale=args.rescale, - rewrite=args.rewrite, - stride=args.conv_stride, - resample=args.resample, - normalize=args.normalize, - samplerate=args.samplerate, - segment_length=4 * args.samples, - sources=SOURCES, - ) - model.to(device) - if args.init: - model.load_state_dict(load_pretrained(args.init).state_dict()) - - if args.show: - print(model) - size = sizeof_fmt(4 * sum(p.numel() for p in model.parameters())) - print(f"Model size {size}") - return - - try: - saved = th.load(checkpoint, map_location='cpu') - except IOError: - saved = SavedState() - - optimizer = th.optim.Adam(model.parameters(), lr=args.lr) - - quantizer = None - quantizer = get_quantizer(model, args, optimizer) - - if saved.last_state is not None: - model.load_state_dict(saved.last_state, strict=False) - if saved.optimizer is not None: - optimizer.load_state_dict(saved.optimizer) - - model_name = f"{name}.th" - if args.save_model: - if args.rank == 0: - model.to("cpu") - model.load_state_dict(saved.best_state) - save_model(model, quantizer, args, args.models / model_name) - return - elif args.save_state: - model_name = f"{args.save_state}.th" - if args.rank == 0: - model.to("cpu") - model.load_state_dict(saved.best_state) - state = get_state(model, quantizer) - save_state(state, args.models / model_name) - return - - if args.rank == 0: - done = args.logs / f"{name}.done" - if done.exists(): - done.unlink() - - augment = [Shift(args.data_stride)] - if args.augment: - augment += [FlipSign(), FlipChannels(), Scale(), - Remix(group_size=args.remix_group_size)] - augment = nn.Sequential(*augment).to(device) - print("Agumentation pipeline:", augment) - - if args.mse: - criterion = nn.MSELoss() - else: - criterion = nn.L1Loss() - - # Setting number of samples so that all convolution windows are full. - # Prevents hard to debug mistake with the prediction being shifted compared - # to the input mixture. - samples = model.valid_length(args.samples) - print(f"Number of training samples adjusted to {samples}") - samples = samples + args.data_stride - if args.repitch: - # We need a bit more audio samples, to account for potential - # tempo change. 
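    # For example, with args.max_tempo = 12 the divisor below is 1 - 0.12 = 0.88, so a
    # request for 80_000 samples becomes math.ceil(80_000 / 0.88) = 90_910 samples of headroom.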
- samples = math.ceil(samples / (1 - 0.01 * args.max_tempo)) - - args.metadata.mkdir(exist_ok=True, parents=True) - if args.raw: - train_set = Rawset(args.raw / "train", - samples=samples, - channels=args.audio_channels, - streams=range(1, len(model.sources) + 1), - stride=args.data_stride) - - valid_set = Rawset(args.raw / "valid", channels=args.audio_channels) - elif args.wav: - train_set, valid_set = get_wav_datasets(args, samples, model.sources) - elif args.is_wav: - train_set, valid_set = get_musdb_wav_datasets(args, samples, model.sources) - else: - train_set, valid_set = get_compressed_datasets(args, samples) - - if args.repitch: - train_set = RepitchedWrapper( - train_set, - proba=args.repitch, - max_tempo=args.max_tempo) - - best_loss = float("inf") - for epoch, metrics in enumerate(saved.metrics): - print(f"Epoch {epoch:03d}: " - f"train={metrics['train']:.8f} " - f"valid={metrics['valid']:.8f} " - f"best={metrics['best']:.4f} " - f"ms={metrics.get('true_model_size', 0):.2f}MB " - f"cms={metrics.get('compressed_model_size', 0):.2f}MB " - f"duration={human_seconds(metrics['duration'])}") - best_loss = metrics['best'] - - if args.world_size > 1: - dmodel = DistributedDataParallel(model, - device_ids=[th.cuda.current_device()], - output_device=th.cuda.current_device()) - else: - dmodel = model - - for epoch in range(len(saved.metrics), args.epochs): - begin = time.time() - model.train() - train_loss, model_size = train_model( - epoch, train_set, dmodel, criterion, optimizer, augment, - quantizer=quantizer, - batch_size=args.batch_size, - device=device, - repeat=args.repeat, - seed=args.seed, - diffq=args.diffq, - workers=args.workers, - world_size=args.world_size) - model.eval() - valid_loss = validate_model( - epoch, valid_set, model, criterion, - device=device, - rank=args.rank, - split=args.split_valid, - overlap=args.overlap, - world_size=args.world_size) - - ms = 0 - cms = 0 - if quantizer and args.rank == 0: - ms = quantizer.true_model_size() - cms = quantizer.compressed_model_size(num_workers=min(40, args.world_size * 10)) - - duration = time.time() - begin - if valid_loss < best_loss and ms <= args.ms_target: - best_loss = valid_loss - saved.best_state = { - key: value.to("cpu").clone() - for key, value in model.state_dict().items() - } - - saved.metrics.append({ - "train": train_loss, - "valid": valid_loss, - "best": best_loss, - "duration": duration, - "model_size": model_size, - "true_model_size": ms, - "compressed_model_size": cms, - }) - if args.rank == 0: - json.dump(saved.metrics, open(metrics_path, "w")) - - saved.last_state = model.state_dict() - saved.optimizer = optimizer.state_dict() - if args.rank == 0 and not args.test: - th.save(saved, checkpoint_tmp) - checkpoint_tmp.rename(checkpoint) - - print(f"Epoch {epoch:03d}: " - f"train={train_loss:.8f} valid={valid_loss:.8f} best={best_loss:.4f} ms={ms:.2f}MB " - f"cms={cms:.2f}MB " - f"duration={human_seconds(duration)}") - - if args.world_size > 1: - distributed.barrier() - - del dmodel - model.load_state_dict(saved.best_state) - if args.eval_cpu: - device = "cpu" - model.to(device) - model.eval() - evaluate(model, args.musdb, eval_folder, - is_wav=args.is_wav, - rank=args.rank, - world_size=args.world_size, - device=device, - save=args.save, - split=args.split_valid, - shifts=args.shifts, - overlap=args.overlap, - workers=args.eval_workers) - model.to("cpu") - if args.rank == 0: - if not (args.test or args.test_pretrained): - save_model(model, quantizer, args, args.models / model_name) - print("done") - 
done.write_text("done") - - -if __name__ == "__main__": - main() diff --git a/spaces/KyanChen/FunSR/tools/data_tools/get_all_data_list.py b/spaces/KyanChen/FunSR/tools/data_tools/get_all_data_list.py deleted file mode 100644 index 6d3fab7d8525b555a362d7a43bff267c1ce6889a..0000000000000000000000000000000000000000 --- a/spaces/KyanChen/FunSR/tools/data_tools/get_all_data_list.py +++ /dev/null @@ -1,25 +0,0 @@ -import glob -import os - -import numpy as np -import pickle -import sys -import tqdm -import shutil -from skimage import io - -pre_path = r'H:\DataSet\SceneCls\UCMerced_LandUse\UCMerced_LandUse\Images' -sub_folder_list = glob.glob(pre_path +'/*') -all_data_list = [] -for sub_folder in sub_folder_list: - img_list = glob.glob(sub_folder+'/*') - all_data_list += img_list - -with open(pre_path+f'/../all_img_list.txt', 'w') as f: - for file in tqdm.tqdm(all_data_list): - img = io.imread(file, as_gray=True) - if 0 < img.shape[0]: - file_name = os.path.basename(os.path.dirname(file)) + '/' + os.path.basename(file) - gt_label = os.path.basename(os.path.dirname(file)) - f.write(file_name+' '+gt_label+'\n') - diff --git a/spaces/LanguageBind/LanguageBind/TRAIN_AND_VALIDATE.md b/spaces/LanguageBind/LanguageBind/TRAIN_AND_VALIDATE.md deleted file mode 100644 index 01ed15fe8f4ed2687597ad97c62d755e62dfdca9..0000000000000000000000000000000000000000 --- a/spaces/LanguageBind/LanguageBind/TRAIN_AND_VALIDATE.md +++ /dev/null @@ -1,207 +0,0 @@ -We provide the **off-the-shelf** scripts in the [scripts folder](scripts). - -## Training LanguageBind - -For example, to **train** LanguageBind on **Depth-Language** with 16 GPUs (2 nodes x 8 GPUs). -* First download the [cache of pretrained weight](https://github.com/PKU-YuanGroup/LanguageBind#-model-zoo) and specify ```CACHE_DIR```. -* The second step is to develop a path to ```TRAIN_DATA``` according to the [dataset preparation](https://github.com/PKU-YuanGroup/LanguageBind#-vidal-10m). -* Then you can run - -```bash -CACHE_DIR="path/to/pretrained/weight" -TRAIN_DATA="path/to/data" -cd /path/to/LanguageBind -TORCH_DISTRIBUTED_DEBUG=DETAIL HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 torchrun --nnodes=1 --nproc_per_node 8 \ - -m main \ - --train-data ${TRAIN_DATA} \ - --train-num-samples 3020000 \ - --clip-type "dl" --max-depth 10 \ - --do_train \ - --lock-text --lock-image --text-type "polish_mplug" \ - --init-temp 0.07 --learn-temp \ - --model "ViT-L-14" --cache-dir ${CACHE_DIR} \ - --convert_to_lora --lora_r 2 \ - --lr 5e-4 --coef-lr 1e-3 \ - --beta1 0.9 --beta2 0.98 --wd 0.2 --eps 1e-6 \ - --num-frames 1 --force-patch-dropout 0.5 \ - --epochs 1 --batch-size 128 --accum-freq 1 --warmup 200 \ - --precision "amp" --workers 10 --video-decode-backend "imgs" \ - --save-frequency 1 --log-every-n-steps 20 --report-to "tensorboard" --resume "latest" \ - --do_eval \ - --val_d_cls_data "NYUV2" -``` - - -## Validating LanguageBind - -For example, to **validate** LanguageBind on **Depth-Language** with 1 GPUs. -* First specify ```RESUME```. -* The second step is to prepare the [downstream dataset](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/TRAIN_AND_VALIDATE.md#downstream-datasets). 
-* Then you can run - -```bash -CACHE_DIR="path/to/pretrained/weight" -RESUME="thermal_language.pt" -TRAIN_DATA="path/to/data" -cd /path/to/LanguageBind -TORCH_DISTRIBUTED_DEBUG=DETAIL HF_DATASETS_OFFLINE=1 TRANSFORMERS_OFFLINE=1 torchrun --nproc_per_node 1 \ - -m main \ - --train-data ${TRAIN_DATA} \ - --train-num-samples 3020000 \ - --clip-type "dl" --max-depth 10 \ - --lock-text --lock-image --text-type "polish_mplug" \ - --init-temp 0.07 --learn-temp \ - --model "ViT-L-14" --cache-dir ${CACHE_DIR} \ - --convert_to_lora --lora_r 2 \ - --lr 5e-4 --coef-lr 1e-3 \ - --beta1 0.9 --beta2 0.98 --wd 0.2 --eps 1e-6 \ - --num-frames 1 --force-patch-dropout 0.5 \ - --epochs 1 --batch-size 128 --accum-freq 1 --warmup 200 \ - --precision "amp" --workers 10 --video-decode-backend "imgs" \ - --save-frequency 1 --log-every-n-steps 20 --report-to "tensorboard" --resume ${RESUME} \ - --do_eval \ - --val_d_cls_data "NYUV2" -``` - -## Downstream datasets - -### Depth -NYU V2 dataset is downloaded from [this repo](https://github.com/TUI-NICR/nicr-scene-analysis-datasets/tree/main/nicr_scene_analysis_datasets/datasets/nyuv2) and we reformat them to conform to the standard ImageNet format. Change the ```data_root``` [here](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/data/build_datasets.py#L148). - -### Video -Video datasets are downloaded from [this repo](https://github.com/jpthu17/HBI) and we show the folder structure. Change the ```data_root``` [here](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/data/build_datasets.py#L74). - -### Audio -Audio datasets are downloaded from [this repo](https://github.com/OFA-Sys/ONE-PEACE/blob/main/datasets.md#audio) and we reformat them to conform to the standard ImageNet format. Change the ```data_root``` [here](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/data/build_datasets.py#L127). - -### Infrared (Thermal) -We download LLVIP from [official website](https://bupt-ai-cz.github.io/LLVIP/), and FLIR from [here](https://www.flir.com/oem/adas/adas-dataset-form/). We reformat them to conform to the standard ImageNet format. Change the ```data_root``` [here](https://github.com/PKU-YuanGroup/LanguageBind/blob/main/data/build_datasets.py#L160). We also provide the processed data as follows. - -
- - - - - - - - - - - - -
DatasetsBaidu YunGoogle CloudPeking University Yun
LLVIPLinkLinkLink
FLIR V1LinkLinkLink
FLIR V2LinkLinkLink
- - -### Folder structure -```bash -downstream_datasets -├── Audio -│   ├── esc50 -│   │   └── test -│   │   ├── airplane -│   │   ├── breathing -│   │   ├── brushing_teeth -│   │   ├── can_opening -│   │   ├── car_horn -│   │   ├── cat -│   │   ├── chainsaw -│   │   ├── chirping_birds -│   │   ├── church_bells -│   │   ├── clapping -│   │   ├── clock_alarm -│   │   ├── clock_tick -│   │   ├── coughing -│   │   ├── cow -│   │   ├── crackling_fire -│   │   ├── crickets -│   │   ├── crow -│   │   ├── crying_baby -│   │   ├── dog -│   │   ├── door_wood_creaks -│   │   ├── door_wood_knock -│   │   ├── drinking_sipping -│   │   ├── engine -│   │   ├── fireworks -│   │   ├── footsteps -│   │   ├── frog -│   │   ├── glass_breaking -│   │   ├── hand_saw -│   │   ├── helicopter -│   │   ├── hen -│   │   ├── insects -│   │   ├── keyboard_typing -│   │   ├── laughing -│   │   ├── mouse_click -│   │   ├── pig -│   │   ├── pouring_water -│   │   ├── rain -│   │   ├── rooster -│   │   ├── sea_waves -│   │   ├── sheep -│   │   ├── siren -│   │   ├── sneezing -│   │   ├── snoring -│   │   ├── thunderstorm -│   │   ├── toilet_flush -│   │   ├── train -│   │   ├── vacuum_cleaner -│   │   ├── washing_machine -│   │   ├── water_drops -│   │   └── wind -├── Depth -│   ├── nyuv2 -│   │   ├── data -│   │   │   └── val -│   │   │   ├── bathroom -│   │   │   ├── bedroom -│   │   │   ├── bookstore -│   │   │   ├── classroom -│   │   │   ├── dining_room -│   │   │   ├── home_office -│   │   │   ├── kitchen -│   │   │   ├── living_room -│   │   │   ├── office -│   │   │   └── others -├── Thermal -│   ├── flirv1 -│   │   └── val -│   │   ├── bicycle -│   │   ├── car -│   │   ├── dog -│   │   └── person -│   ├── flirv2 -│   │   └── val -│   │   ├── bike -│   │   ├── bus -│   │   ├── car -│   │   ├── hydrant -│   │   ├── light -│   │   ├── motor -│   │   ├── other\ vehicle -│   │   ├── person -│   │   ├── sign -│   │   ├── skateboard -│   │   ├── stroller -│   │   └── truck -│   ├── llvip -│   │   ├── train -│   │   │   ├── background -│   │   │   └── person -│   │   └── val -│   │   ├── background -│   │   └── person -└── VideoTextRetrieval - ├── vtRetdata - │   ├── ActivityNet - │   │   └── Videos - │   │   └── Activity_Videos - │   ├── Didemo - │   │   └── videos - │   ├── MSRVTT - │   │   └── MSRVTT_Videos - │   └── MSVD - │   └── MSVD_Videos -``` - diff --git a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/layers_537238KB.py b/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/layers_537238KB.py deleted file mode 100644 index a38b7bb3ae3136b07eadfc2db445fef4c2de186b..0000000000000000000000000000000000000000 --- a/spaces/LaynzKunz/Aesthetic_RVC_Inference_HF/lib/uvr5_pack/lib_v5/layers_537238KB.py +++ /dev/null @@ -1,126 +0,0 @@ -import torch -from torch import nn -import torch.nn.functional as F - -from . 
import spec_utils - - -class Conv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(Conv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nout, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - bias=False, - ), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class SeperableConv2DBNActiv(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, dilation=1, activ=nn.ReLU): - super(SeperableConv2DBNActiv, self).__init__() - self.conv = nn.Sequential( - nn.Conv2d( - nin, - nin, - kernel_size=ksize, - stride=stride, - padding=pad, - dilation=dilation, - groups=nin, - bias=False, - ), - nn.Conv2d(nin, nout, kernel_size=1, bias=False), - nn.BatchNorm2d(nout), - activ(), - ) - - def __call__(self, x): - return self.conv(x) - - -class Encoder(nn.Module): - def __init__(self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.LeakyReLU): - super(Encoder, self).__init__() - self.conv1 = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.conv2 = Conv2DBNActiv(nout, nout, ksize, stride, pad, activ=activ) - - def __call__(self, x): - skip = self.conv1(x) - h = self.conv2(skip) - - return h, skip - - -class Decoder(nn.Module): - def __init__( - self, nin, nout, ksize=3, stride=1, pad=1, activ=nn.ReLU, dropout=False - ): - super(Decoder, self).__init__() - self.conv = Conv2DBNActiv(nin, nout, ksize, 1, pad, activ=activ) - self.dropout = nn.Dropout2d(0.1) if dropout else None - - def __call__(self, x, skip=None): - x = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True) - if skip is not None: - skip = spec_utils.crop_center(skip, x) - x = torch.cat([x, skip], dim=1) - h = self.conv(x) - - if self.dropout is not None: - h = self.dropout(h) - - return h - - -class ASPPModule(nn.Module): - def __init__(self, nin, nout, dilations=(4, 8, 16, 32, 64), activ=nn.ReLU): - super(ASPPModule, self).__init__() - self.conv1 = nn.Sequential( - nn.AdaptiveAvgPool2d((1, None)), - Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ), - ) - self.conv2 = Conv2DBNActiv(nin, nin, 1, 1, 0, activ=activ) - self.conv3 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[0], dilations[0], activ=activ - ) - self.conv4 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[1], dilations[1], activ=activ - ) - self.conv5 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv6 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.conv7 = SeperableConv2DBNActiv( - nin, nin, 3, 1, dilations[2], dilations[2], activ=activ - ) - self.bottleneck = nn.Sequential( - Conv2DBNActiv(nin * 7, nout, 1, 1, 0, activ=activ), nn.Dropout2d(0.1) - ) - - def forward(self, x): - _, _, h, w = x.size() - feat1 = F.interpolate( - self.conv1(x), size=(h, w), mode="bilinear", align_corners=True - ) - feat2 = self.conv2(x) - feat3 = self.conv3(x) - feat4 = self.conv4(x) - feat5 = self.conv5(x) - feat6 = self.conv6(x) - feat7 = self.conv7(x) - out = torch.cat((feat1, feat2, feat3, feat4, feat5, feat6, feat7), dim=1) - bottle = self.bottleneck(out) - return bottle diff --git a/spaces/Lbin123/Lbingo/postcss.config.js b/spaces/Lbin123/Lbingo/postcss.config.js deleted file mode 100644 index 33ad091d26d8a9dc95ebdf616e217d985ec215b8..0000000000000000000000000000000000000000 --- a/spaces/Lbin123/Lbingo/postcss.config.js +++ /dev/null @@ -1,6 +0,0 @@ -module.exports = { - plugins: { - tailwindcss: 
{}, - autoprefixer: {}, - }, -} diff --git a/spaces/Loreleihunny/total_capy-love/Dockerfile b/spaces/Loreleihunny/total_capy-love/Dockerfile deleted file mode 100644 index e6158e4b2d67eeea6e30ad3c1bb6043ec09b7b9b..0000000000000000000000000000000000000000 --- a/spaces/Loreleihunny/total_capy-love/Dockerfile +++ /dev/null @@ -1,11 +0,0 @@ -FROM node:18-bullseye-slim -RUN apt-get update && \ -apt-get install -y git -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app -WORKDIR /app -RUN npm install -COPY Dockerfile greeting.md* .env* ./ -RUN npm run build -EXPOSE 7860 -ENV NODE_ENV=production -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/LudvigDoeser/TSLA_stock_predictions/README.md b/spaces/LudvigDoeser/TSLA_stock_predictions/README.md deleted file mode 100644 index 92b8a7f267a83be5091fa8098d69a199d7f6045e..0000000000000000000000000000000000000000 --- a/spaces/LudvigDoeser/TSLA_stock_predictions/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: TSLA Stock Predictions -emoji: 🌍 -colorFrom: red -colorTo: indigo -sdk: gradio -sdk_version: 3.16.1 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Lycorisdeve/DeepDanbooru_string/app.py b/spaces/Lycorisdeve/DeepDanbooru_string/app.py deleted file mode 100644 index 49019837c9207cc68cb37be0342f3bc44fd0decb..0000000000000000000000000000000000000000 --- a/spaces/Lycorisdeve/DeepDanbooru_string/app.py +++ /dev/null @@ -1,185 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import argparse -import functools -import os -import html -import pathlib -import tarfile - -import deepdanbooru as dd -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image -import tensorflow as tf -import piexif -import piexif.helper - -TITLE = 'DeepDanbooru String' - -TOKEN = os.environ['TOKEN'] -MODEL_REPO = 'CikeyQI/DeepDanbooru_string' -MODEL_FILENAME = 'model-resnet_custom_v3.h5' -LABEL_FILENAME = 'tags.txt' - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--score-slider-step', type=float, default=0.05) - parser.add_argument('--score-threshold', type=float, default=0.5) - parser.add_argument('--theme', type=str, default='dark-grass') - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - return parser.parse_args() - - -def load_sample_image_paths() -> list[pathlib.Path]: - image_dir = pathlib.Path('images') - if not image_dir.exists(): - dataset_repo = 'hysts/sample-images-TADNE' - path = huggingface_hub.hf_hub_download(dataset_repo, - 'images.tar.gz', - repo_type='dataset', - use_auth_token=TOKEN) - with tarfile.open(path) as f: - f.extractall() - return sorted(image_dir.glob('*')) - - -def load_model() -> tf.keras.Model: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - MODEL_FILENAME, - use_auth_token=TOKEN) - model = tf.keras.models.load_model(path) - return model - - -def load_labels() -> list[str]: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - LABEL_FILENAME, - use_auth_token=TOKEN) - with open(path) as f: - labels = [line.strip() for line in f.readlines()] - return labels - -def plaintext_to_html(text): - text = "

" + "
\n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "

" - return text - -def predict(image: PIL.Image.Image, score_threshold: float, - model: tf.keras.Model, labels: list[str]) -> dict[str, float]: - rawimage = image - _, height, width, _ = model.input_shape - image = np.asarray(image) - image = tf.image.resize(image, - size=(height, width), - method=tf.image.ResizeMethod.AREA, - preserve_aspect_ratio=True) - image = image.numpy() - image = dd.image.transform_and_pad_image(image, width, height) - image = image / 255. - probs = model.predict(image[None, ...])[0] - probs = probs.astype(float) - res = dict() - for prob, label in zip(probs.tolist(), labels): - if prob < score_threshold: - continue - res[label] = prob - b = dict(sorted(res.items(),key=lambda item:item[1], reverse=True)) - a = ', '.join(list(b.keys())).replace('_',' ').replace('(','\(').replace(')','\)') - c = ', '.join(list(b.keys())) - - items = rawimage.info - geninfo = '' - - if "exif" in rawimage.info: - exif = piexif.load(rawimage.info["exif"]) - exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'') - try: - exif_comment = piexif.helper.UserComment.load(exif_comment) - except ValueError: - exif_comment = exif_comment.decode('utf8', errors="ignore") - - items['exif comment'] = exif_comment - geninfo = exif_comment - - for field in ['jfif', 'jfif_version', 'jfif_unit', 'jfif_density', 'dpi', 'exif', - 'loop', 'background', 'timestamp', 'duration']: - items.pop(field, None) - - geninfo = items.get('parameters', geninfo) - - info = f""" -

PNG Info

-""" - for key, text in items.items(): - info += f""" -
-

{plaintext_to_html(str(key))}

-

{plaintext_to_html(str(text))}

-
-""".strip()+"\n" - - if len(info) == 0: - message = "Nothing found in the image." - info = f"

{message}

" - - return (a,c,res,info) - - -def main(): - args = parse_args() - model = load_model() - labels = load_labels() - - func = functools.partial(predict, model=model, labels=labels) - func = functools.update_wrapper(func, predict) - - gr.Interface( - func, - [ - gr.inputs.Image(type='pil', label='Input'), - gr.inputs.Slider(0, - 1, - step=args.score_slider_step, - default=args.score_threshold, - label='Score Threshold'), - ], - [ - gr.outputs.Textbox(label='Output (string)'), - gr.outputs.Textbox(label='Output (raw string)'), - gr.outputs.Label(label='Output (label)'), - gr.outputs.HTML() - ], - examples=[ - ['miku.jpg',0.5], - ['miku2.jpg',0.5] - ], - title=TITLE, - description=''' -Demo for [KichangKim/DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) with "ready to copy" prompt and a prompt analyzer. - -Modified from [hysts/DeepDanbooru](https://huggingface.co/spaces/hysts/DeepDanbooru) - -PNG Info code forked from [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - ''', - theme=args.theme, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/MariaK/Audio-Course-Certification/app.py b/spaces/MariaK/Audio-Course-Certification/app.py deleted file mode 100644 index e1fa5e92df0c49085ad08c8019fad25fb6b65a80..0000000000000000000000000000000000000000 --- a/spaces/MariaK/Audio-Course-Certification/app.py +++ /dev/null @@ -1,197 +0,0 @@ -import gradio as gr -from huggingface_hub import HfApi, hf_hub_download, Repository -from huggingface_hub.repocard import metadata_load -from gradio_client import Client -from PIL import Image, ImageDraw, ImageFont - -from datetime import date -import time - -import os -import pandas as pd -import json - -api = HfApi() -HF_TOKEN = os.environ.get("HF_TOKEN") - -# Private dataset repo containing the list of already certified users -DATASET_REPO_URL = "https://huggingface.co/datasets/MariaK/audio-course" -CERTIFIED_USERS_FILENAME = "usernames.csv" - -# Private space to check if a user has passed. 
-SPACE_ID = "MariaK/Check-Audio-Course-Progress" - - -def check_if_passed(username): - """ - Check if given user passed enough assignments - :param username: User HF username - """ - - passed = False - certificate_type = "" - - client = Client(SPACE_ID, hf_token=HF_TOKEN) - result = client.predict(username, fn_index=0) - with open(result) as json_data: - data = json.load(json_data) - - df = pd.DataFrame(data['data']) - if len(df[df.iloc[:,0] == '✅']) == 4: - passed = True - certificate_type = "excellence" - elif len(df[df.iloc[:,0] == '✅']) == 3: - passed = True - certificate_type = "completion" - - return passed, certificate_type - - -def generate_certificate(certificate_template, first_name, last_name): - """ - Generates certificate from the template - :param certificate_template: type of the certificate to generate - :param first_name: first name entered by user - :param last_name: last name entered by user - """ - - im = Image.open(certificate_template) - d = ImageDraw.Draw(im) - - name_font = ImageFont.truetype("Quattrocento-Regular.ttf", 100) - date_font = ImageFont.truetype("Quattrocento-Regular.ttf", 48) - - name = str(first_name) + " " + str(last_name) - print("NAME", name) - - # Debug line name - #d.line(((200, 740), (1800, 740)), "gray") - #d.line(((1000, 0), (1000, 1400)), "gray") - - # Name - d.text((1000, 740), name, fill="black", anchor="mm", font=name_font) - - # Debug line date - #d.line(((1500, 0), (1500, 1400)), "gray") - - # Date of certification - d.text((1480, 1170), str(date.today()), fill="black", anchor="mm", font=date_font) - - - pdf = im.convert('RGB') - pdf.save('certificate.pdf') - - return im, "./certificate.pdf" - - -def add_certified_user(hf_username, first_name, last_name, certificate_type): - """ - Add the certified user to the database - """ - - print("ADD CERTIFIED USER") - repo = Repository(local_dir="usernames", clone_from=DATASET_REPO_URL, use_auth_token=HF_TOKEN) - repo.git_pull() - - history = pd.read_csv(os.path.join("usernames", CERTIFIED_USERS_FILENAME)) - - # Check if this hf_username is already in our dataset: - check = history.loc[history['hf_username'] == hf_username] - if not check.empty: - history = history.drop(labels=check.index[0], axis=0) - - new_row = pd.DataFrame({'hf_username': hf_username, 'first_name': first_name, 'last_name': last_name, 'certificate_type': certificate_type, 'datetime': time.time()}, index=[0]) - history = pd.concat([new_row, history[:]]).reset_index(drop=True) - - history.to_csv(os.path.join("usernames", CERTIFIED_USERS_FILENAME), index=False) - repo.push_to_hub(commit_message="Update certified users list") - - -def create_certificate(passed, certificate_type, hf_username, first_name, last_name): - """ - Generates certificate, adds message, saves username of the certified user - :param passed: boolean whether the user passed enough assignments - :param certificate_type: type of the certificate - completion or excellence - :param first_name: first name entered by user - :param last_name: last name entered by user - """ - - if passed and certificate_type == "excellence": - # Generate a certificate of - certificate, pdf = generate_certificate("./certificate-excellence.png", first_name, last_name) - # Add this user to our database - add_certified_user(hf_username, first_name, last_name, certificate_type) - # Add a message - message = """ - Congratulations, you successfully completed the Hugging Face Audio Course 🎉! \n - Since you pass 100% of the hands-on you get a Certificate of Excellence 🎓. 
\n - You can download your certificate below ⬇️ \n - Don't hesitate to share your certificate image below on Twitter and Linkedin (you can tag me @mariakhalusova and @huggingface) 🤗 - """ - elif passed and certificate_type == "completion": - # Generate a certificate of completion - certificate, pdf = generate_certificate("./certificate-completion.png", first_name, last_name) - # Add this user to our database - add_certified_user(hf_username, first_name, last_name, certificate_type) - # Add a message - message = """ - Congratulations, you successfully completed the Hugging Face Audio Course 🎉! \n - Since you pass 3 out of 4 of the hands-on you get a Certificate of Completion 🎓. \n - You can download your certificate below ⬇️ \n - Don't hesitate to share your certificate image below on Twitter and Linkedin (you can tag me @mariakhalusova and @huggingface) 🤗 \n - You can try to get a Certificate of Excellence if you pass 100% of the hands-on, don't hesitate to check which unit you didn't pass and update these models. - """ - else: - # Not passed yet - certificate = Image.new("RGB", (100, 100), (255, 255, 255)) - pdf = "./fail.pdf" - # Add a message - message = """ - You didn't pass the minimum of 3 out of 4 of the hands-on to get a certificate of completion. - For more information about the certification process, refer to Unit 8 of the course. To see what hands-on you still need to complete, use the self-evaluation space linked in the description above. - If the results here differ from your results in the self-evaluation space, make sure that your model's metrics automatically uploaded by Trainer have not been manually altered. - """ - return certificate, message, pdf - - -def certification(hf_username, first_name, last_name): - passed, certificate_type = check_if_passed(hf_username) - certificate, message, pdf = create_certificate(passed, certificate_type, hf_username, first_name, last_name) - print("MESSAGE", message) - - if passed: - visible = True - else: - visible = False - - return message, pdf, certificate, output_row.update(visible=visible) - -with gr.Blocks() as demo: - gr.Markdown(f""" - # Get your Hugging Face Audio Course Certificate 🎓 - The certification process is completely free: - - To get a *certificate of completion*: you need to **pass 3 out of 4 hands-on assignments**. - - To get a *certificate of excellence*: you need to **pass 4 out of 4 hands-on assignments**. - - For more information about the certification process [check the course page on certification](https://huggingface.co/learn/audio-course/chapter8/certification). - - To check which assignments you still need to complete, use the [self-evaluation space](https://huggingface.co/spaces/MariaK/Check-my-progress-Audio-Course). - - Don't hesitate to share your certificate on Twitter (tag me [@mariakhalusova](https://twitter.com/mariaKhalusova) and [@huggingface](https://twitter.com/huggingface)) and on LinkedIn. 
- """) - - hf_username = gr.Textbox(placeholder="MariaK", label="Your Hugging Face Username (case sensitive)") - first_name = gr.Textbox(placeholder="Maria", label="Your First Name") - last_name = gr.Textbox(placeholder="Khalusova", label="Your Last Name") - - check_progress_button = gr.Button(value="Check if I pass and get the certificate") - output_text = gr.components.Textbox() - - with gr.Row(visible=True) as output_row: - output_pdf = gr.File() - output_img = gr.components.Image(type="pil") - - check_progress_button.click(fn=certification, inputs=[hf_username, first_name, last_name], outputs=[output_text, output_pdf, output_img, output_row]) - - -demo.launch(debug=True) \ No newline at end of file diff --git a/spaces/MetaWabbit/Auto-GPT/autogpt/config/__init__.py b/spaces/MetaWabbit/Auto-GPT/autogpt/config/__init__.py deleted file mode 100644 index 726b6dcf3da95968b948c4d897e97a9cdd0928ff..0000000000000000000000000000000000000000 --- a/spaces/MetaWabbit/Auto-GPT/autogpt/config/__init__.py +++ /dev/null @@ -1,14 +0,0 @@ -""" -This module contains the configuration classes for AutoGPT. -""" -from autogpt.config.ai_config import AIConfig -from autogpt.config.config import Config, check_openai_api_key -from autogpt.config.singleton import AbstractSingleton, Singleton - -__all__ = [ - "check_openai_api_key", - "AbstractSingleton", - "AIConfig", - "Config", - "Singleton", -] diff --git a/spaces/MirageML/sjc/guided_diffusion/fp16_util.py b/spaces/MirageML/sjc/guided_diffusion/fp16_util.py deleted file mode 100644 index d599568f3197bcc236e9ae617829fa060640795f..0000000000000000000000000000000000000000 --- a/spaces/MirageML/sjc/guided_diffusion/fp16_util.py +++ /dev/null @@ -1,237 +0,0 @@ -""" -Helpers to train with 16-bit precision. -""" - -import numpy as np -import torch as th -import torch.nn as nn -from torch._utils import _flatten_dense_tensors, _unflatten_dense_tensors - -# from . import logger - -INITIAL_LOG_LOSS_SCALE = 20.0 - - -def convert_module_to_f16(l): - """ - Convert primitive modules to float16. - """ - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)): - l.weight.data = l.weight.data.half() - if l.bias is not None: - l.bias.data = l.bias.data.half() - - -def convert_module_to_f32(l): - """ - Convert primitive modules to float32, undoing convert_module_to_f16(). - """ - if isinstance(l, (nn.Conv1d, nn.Conv2d, nn.Conv3d)): - l.weight.data = l.weight.data.float() - if l.bias is not None: - l.bias.data = l.bias.data.float() - - -def make_master_params(param_groups_and_shapes): - """ - Copy model parameters into a (differently-shaped) list of full-precision - parameters. - """ - master_params = [] - for param_group, shape in param_groups_and_shapes: - master_param = nn.Parameter( - _flatten_dense_tensors( - [param.detach().float() for (_, param) in param_group] - ).view(shape) - ) - master_param.requires_grad = True - master_params.append(master_param) - return master_params - - -def model_grads_to_master_grads(param_groups_and_shapes, master_params): - """ - Copy the gradients from the model parameters into the master parameters - from make_master_params(). - """ - for master_param, (param_group, shape) in zip( - master_params, param_groups_and_shapes - ): - master_param.grad = _flatten_dense_tensors( - [param_grad_or_zeros(param) for (_, param) in param_group] - ).view(shape) - - -def master_params_to_model_params(param_groups_and_shapes, master_params): - """ - Copy the master parameter data back into the model parameters. 
- """ - # Without copying to a list, if a generator is passed, this will - # silently not copy any parameters. - for master_param, (param_group, _) in zip(master_params, param_groups_and_shapes): - for (_, param), unflat_master_param in zip( - param_group, unflatten_master_params(param_group, master_param.view(-1)) - ): - param.detach().copy_(unflat_master_param) - - -def unflatten_master_params(param_group, master_param): - return _unflatten_dense_tensors(master_param, [param for (_, param) in param_group]) - - -def get_param_groups_and_shapes(named_model_params): - named_model_params = list(named_model_params) - scalar_vector_named_params = ( - [(n, p) for (n, p) in named_model_params if p.ndim <= 1], - (-1), - ) - matrix_named_params = ( - [(n, p) for (n, p) in named_model_params if p.ndim > 1], - (1, -1), - ) - return [scalar_vector_named_params, matrix_named_params] - - -def master_params_to_state_dict( - model, param_groups_and_shapes, master_params, use_fp16 -): - if use_fp16: - state_dict = model.state_dict() - for master_param, (param_group, _) in zip( - master_params, param_groups_and_shapes - ): - for (name, _), unflat_master_param in zip( - param_group, unflatten_master_params(param_group, master_param.view(-1)) - ): - assert name in state_dict - state_dict[name] = unflat_master_param - else: - state_dict = model.state_dict() - for i, (name, _value) in enumerate(model.named_parameters()): - assert name in state_dict - state_dict[name] = master_params[i] - return state_dict - - -def state_dict_to_master_params(model, state_dict, use_fp16): - if use_fp16: - named_model_params = [ - (name, state_dict[name]) for name, _ in model.named_parameters() - ] - param_groups_and_shapes = get_param_groups_and_shapes(named_model_params) - master_params = make_master_params(param_groups_and_shapes) - else: - master_params = [state_dict[name] for name, _ in model.named_parameters()] - return master_params - - -def zero_master_grads(master_params): - for param in master_params: - param.grad = None - - -def zero_grad(model_params): - for param in model_params: - # Taken from https://pytorch.org/docs/stable/_modules/torch/optim/optimizer.html#Optimizer.add_param_group - if param.grad is not None: - param.grad.detach_() - param.grad.zero_() - - -def param_grad_or_zeros(param): - if param.grad is not None: - return param.grad.data.detach() - else: - return th.zeros_like(param) - - -class MixedPrecisionTrainer: - def __init__( - self, - *, - model, - use_fp16=False, - fp16_scale_growth=1e-3, - initial_lg_loss_scale=INITIAL_LOG_LOSS_SCALE, - ): - self.model = model - self.use_fp16 = use_fp16 - self.fp16_scale_growth = fp16_scale_growth - - self.model_params = list(self.model.parameters()) - self.master_params = self.model_params - self.param_groups_and_shapes = None - self.lg_loss_scale = initial_lg_loss_scale - - if self.use_fp16: - self.param_groups_and_shapes = get_param_groups_and_shapes( - self.model.named_parameters() - ) - self.master_params = make_master_params(self.param_groups_and_shapes) - self.model.convert_to_fp16() - - def zero_grad(self): - zero_grad(self.model_params) - - def backward(self, loss: th.Tensor): - if self.use_fp16: - loss_scale = 2 ** self.lg_loss_scale - (loss * loss_scale).backward() - else: - loss.backward() - - def optimize(self, opt: th.optim.Optimizer): - if self.use_fp16: - return self._optimize_fp16(opt) - else: - return self._optimize_normal(opt) - - def _optimize_fp16(self, opt: th.optim.Optimizer): - logger.logkv_mean("lg_loss_scale", self.lg_loss_scale) - 
model_grads_to_master_grads(self.param_groups_and_shapes, self.master_params) - grad_norm, param_norm = self._compute_norms(grad_scale=2 ** self.lg_loss_scale) - if check_overflow(grad_norm): - self.lg_loss_scale -= 1 - logger.log(f"Found NaN, decreased lg_loss_scale to {self.lg_loss_scale}") - zero_master_grads(self.master_params) - return False - - logger.logkv_mean("grad_norm", grad_norm) - logger.logkv_mean("param_norm", param_norm) - - for p in self.master_params: - p.grad.mul_(1.0 / (2 ** self.lg_loss_scale)) - opt.step() - zero_master_grads(self.master_params) - master_params_to_model_params(self.param_groups_and_shapes, self.master_params) - self.lg_loss_scale += self.fp16_scale_growth - return True - - def _optimize_normal(self, opt: th.optim.Optimizer): - grad_norm, param_norm = self._compute_norms() - logger.logkv_mean("grad_norm", grad_norm) - logger.logkv_mean("param_norm", param_norm) - opt.step() - return True - - def _compute_norms(self, grad_scale=1.0): - grad_norm = 0.0 - param_norm = 0.0 - for p in self.master_params: - with th.no_grad(): - param_norm += th.norm(p, p=2, dtype=th.float32).item() ** 2 - if p.grad is not None: - grad_norm += th.norm(p.grad, p=2, dtype=th.float32).item() ** 2 - return np.sqrt(grad_norm) / grad_scale, np.sqrt(param_norm) - - def master_params_to_state_dict(self, master_params): - return master_params_to_state_dict( - self.model, self.param_groups_and_shapes, master_params, self.use_fp16 - ) - - def state_dict_to_master_params(self, state_dict): - return state_dict_to_master_params(self.model, state_dict, self.use_fp16) - - -def check_overflow(value): - return (value == float("inf")) or (value == -float("inf")) or (value != value) diff --git a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/AttModel.py b/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/AttModel.py deleted file mode 100644 index 3dc4e5b7a78c4affbfba4044ca8c96c30b26e36a..0000000000000000000000000000000000000000 --- a/spaces/NAACL2022/CLIP-Caption-Reward/captioning/models/AttModel.py +++ /dev/null @@ -1,969 +0,0 @@ -# This file contains Att2in2, AdaAtt, AdaAttMO, UpDown model - -# AdaAtt is from Knowing When to Look: Adaptive Attention via A Visual Sentinel for Image Captioning -# https://arxiv.org/abs/1612.01887 -# AdaAttMO is a modified version with maxout lstm - -# Att2in is from Self-critical Sequence Training for Image Captioning -# https://arxiv.org/abs/1612.00563 -# In this file we only have Att2in2, which is a slightly different version of att2in, -# in which the img feature embedding and word embedding is the same as what in adaatt. - -# UpDown is from Bottom-Up and Top-Down Attention for Image Captioning and VQA -# https://arxiv.org/abs/1707.07998 -# However, it may not be identical to the author's architecture. - -from __future__ import absolute_import -from __future__ import division -from __future__ import print_function - -import numpy as np -import torch -import torch.nn as nn -import torch.nn.functional as F -from . 
import utils -from torch.nn.utils.rnn import PackedSequence, pack_padded_sequence, pad_packed_sequence - -from .CaptionModel import CaptionModel - -bad_endings = ['a','an','the','in','for','at','of','with','before','after','on','upon','near','to','is','are','am'] -bad_endings += ['the'] - -def sort_pack_padded_sequence(input, lengths): - sorted_lengths, indices = torch.sort(lengths, descending=True) - # tmp = pack_padded_sequence(input[indices], sorted_lengths, batch_first=True) - tmp = pack_padded_sequence(input[indices], sorted_lengths.cpu(), batch_first=True) - inv_ix = indices.clone() - inv_ix[indices] = torch.arange(0,len(indices)).type_as(inv_ix) - return tmp, inv_ix - -def pad_unsort_packed_sequence(input, inv_ix): - tmp, _ = pad_packed_sequence(input, batch_first=True) - tmp = tmp[inv_ix] - return tmp - -def pack_wrapper(module, att_feats, att_masks): - if att_masks is not None: - packed, inv_ix = sort_pack_padded_sequence(att_feats, att_masks.data.long().sum(1)) - return pad_unsort_packed_sequence(PackedSequence(module(packed[0]), packed[1]), inv_ix) - else: - return module(att_feats) - -class AttModel(CaptionModel): - def __init__(self, opt): - super(AttModel, self).__init__() - self.vocab_size = opt.vocab_size - self.input_encoding_size = opt.input_encoding_size - #self.rnn_type = opt.rnn_type - self.rnn_size = opt.rnn_size - self.num_layers = opt.num_layers - self.drop_prob_lm = opt.drop_prob_lm - self.seq_length = getattr(opt, 'max_length', 20) or opt.seq_length # maximum sample length - self.fc_feat_size = opt.fc_feat_size - self.att_feat_size = opt.att_feat_size - self.att_hid_size = opt.att_hid_size - - self.bos_idx = getattr(opt, 'bos_idx', 0) - self.eos_idx = getattr(opt, 'eos_idx', 0) - self.pad_idx = getattr(opt, 'pad_idx', 0) - - self.use_bn = getattr(opt, 'use_bn', 0) - - self.ss_prob = 0.0 # Schedule sampling probability - - self.embed = nn.Sequential(nn.Embedding(self.vocab_size + 1, self.input_encoding_size), - nn.ReLU(), - nn.Dropout(self.drop_prob_lm)) - self.fc_embed = nn.Sequential(nn.Linear(self.fc_feat_size, self.rnn_size), - nn.ReLU(), - nn.Dropout(self.drop_prob_lm)) - self.att_embed = nn.Sequential(*( - ((nn.BatchNorm1d(self.att_feat_size),) if self.use_bn else ())+ - (nn.Linear(self.att_feat_size, self.rnn_size), - nn.ReLU(), - nn.Dropout(self.drop_prob_lm))+ - ((nn.BatchNorm1d(self.rnn_size),) if self.use_bn==2 else ()))) - - self.logit_layers = getattr(opt, 'logit_layers', 1) - if self.logit_layers == 1: - self.logit = nn.Linear(self.rnn_size, self.vocab_size + 1) - else: - self.logit = [[nn.Linear(self.rnn_size, self.rnn_size), nn.ReLU(), nn.Dropout(0.5)] for _ in range(opt.logit_layers - 1)] - self.logit = nn.Sequential(*(reduce(lambda x,y:x+y, self.logit) + [nn.Linear(self.rnn_size, self.vocab_size + 1)])) - self.ctx2att = nn.Linear(self.rnn_size, self.att_hid_size) - - # For remove bad endding - self.vocab = opt.vocab - self.bad_endings_ix = [int(k) for k,v in self.vocab.items() if v in bad_endings] - - def init_hidden(self, bsz): - weight = self.logit.weight \ - if hasattr(self.logit, "weight") \ - else self.logit[0].weight - return (weight.new_zeros(self.num_layers, bsz, self.rnn_size), - weight.new_zeros(self.num_layers, bsz, self.rnn_size)) - - def clip_att(self, att_feats, att_masks): - # Clip the length of att_masks and att_feats to the maximum length - if att_masks is not None: - max_len = att_masks.data.long().sum(1).max() - att_feats = att_feats[:, :max_len].contiguous() - att_masks = att_masks[:, :max_len].contiguous() - return att_feats, 
att_masks - - def _prepare_feature(self, fc_feats, att_feats, att_masks): - att_feats, att_masks = self.clip_att(att_feats, att_masks) - - # embed fc and att feats - fc_feats = self.fc_embed(fc_feats) - att_feats = pack_wrapper(self.att_embed, att_feats, att_masks) - - # Project the attention feats first to reduce memory and computation comsumptions. - p_att_feats = self.ctx2att(att_feats) - - return fc_feats, att_feats, p_att_feats, att_masks - - def _forward(self, fc_feats, att_feats, seq, att_masks=None): - batch_size = fc_feats.size(0) - if seq.ndim == 3: # B * seq_per_img * seq_len - seq = seq.reshape(-1, seq.shape[2]) - seq_per_img = seq.shape[0] // batch_size - state = self.init_hidden(batch_size*seq_per_img) - - outputs = fc_feats.new_zeros(batch_size*seq_per_img, seq.size(1), self.vocab_size+1) - - # Prepare the features - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = self._prepare_feature(fc_feats, att_feats, att_masks) - # pp_att_feats is used for attention, we cache it in advance to reduce computation cost - - if seq_per_img > 1: - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = utils.repeat_tensors(seq_per_img, - [p_fc_feats, p_att_feats, pp_att_feats, p_att_masks] - ) - - for i in range(seq.size(1)): - if self.training and i >= 1 and self.ss_prob > 0.0: # otherwiste no need to sample - sample_prob = fc_feats.new(batch_size*seq_per_img).uniform_(0, 1) - sample_mask = sample_prob < self.ss_prob - if sample_mask.sum() == 0: - it = seq[:, i].clone() - else: - sample_ind = sample_mask.nonzero().view(-1) - it = seq[:, i].data.clone() - prob_prev = torch.exp(outputs[:, i-1].detach()) # fetch prev distribution: shape Nx(M+1) - it.index_copy_(0, sample_ind, torch.multinomial(prob_prev, 1).view(-1).index_select(0, sample_ind)) - else: - it = seq[:, i].clone() - # break if all the sequences end - if i >= 1 and seq[:, i].sum() == 0: - break - - output, state = self.get_logprobs_state(it, p_fc_feats, p_att_feats, pp_att_feats, p_att_masks, state) - outputs[:, i] = output - - return outputs - - def get_logprobs_state(self, it, fc_feats, att_feats, p_att_feats, att_masks, state, output_logsoftmax=1): - # 'it' contains a word index - xt = self.embed(it) - - output, state = self.core(xt, fc_feats, att_feats, p_att_feats, state, att_masks) - if output_logsoftmax: - logprobs = F.log_softmax(self.logit(output), dim=1) - else: - logprobs = self.logit(output) - - return logprobs, state - - def _old_sample_beam(self, fc_feats, att_feats, att_masks=None, opt={}): - beam_size = opt.get('beam_size', 10) - group_size = opt.get('group_size', 1) - sample_n = opt.get('sample_n', 10) - # when sample_n == beam_size then each beam is a sample. - assert sample_n == 1 or sample_n == beam_size // group_size, 'when beam search, sample_n == 1 or beam search' - batch_size = fc_feats.size(0) - - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = self._prepare_feature(fc_feats, att_feats, att_masks) - - assert beam_size <= self.vocab_size + 1, 'lets assume this for now, otherwise this corner case causes a few headaches down the road. 
can be dealt with in future if needed' - seq = fc_feats.new_full((batch_size*sample_n, self.seq_length), self.pad_idx, dtype=torch.long) - seqLogprobs = fc_feats.new_zeros(batch_size*sample_n, self.seq_length, self.vocab_size + 1) - # lets process every image independently for now, for simplicity - - self.done_beams = [[] for _ in range(batch_size)] - for k in range(batch_size): - state = self.init_hidden(beam_size) - tmp_fc_feats, tmp_att_feats, tmp_p_att_feats, tmp_att_masks = utils.repeat_tensors(beam_size, - [p_fc_feats[k:k+1], p_att_feats[k:k+1], pp_att_feats[k:k+1], p_att_masks[k:k+1] if att_masks is not None else None] - ) - - for t in range(1): - if t == 0: # input - it = fc_feats.new_full([beam_size], self.bos_idx, dtype=torch.long) - - logprobs, state = self.get_logprobs_state(it, tmp_fc_feats, tmp_att_feats, tmp_p_att_feats, tmp_att_masks, state) - - self.done_beams[k] = self.old_beam_search(state, logprobs, tmp_fc_feats, tmp_att_feats, tmp_p_att_feats, tmp_att_masks, opt=opt) - if sample_n == beam_size: - for _n in range(sample_n): - seq[k*sample_n+_n, :] = self.done_beams[k][_n]['seq'] - seqLogprobs[k*sample_n+_n, :] = self.done_beams[k][_n]['logps'] - else: - seq[k, :] = self.done_beams[k][0]['seq'] # the first beam has highest cumulative score - seqLogprobs[k, :] = self.done_beams[k][0]['logps'] - # return the samples and their log likelihoods - return seq, seqLogprobs - - - def _sample_beam(self, fc_feats, att_feats, att_masks=None, opt={}): - beam_size = opt.get('beam_size', 10) - group_size = opt.get('group_size', 1) - sample_n = opt.get('sample_n', 10) - # when sample_n == beam_size then each beam is a sample. - assert sample_n == 1 or sample_n == beam_size // group_size, 'when beam search, sample_n == 1 or beam search' - batch_size = fc_feats.size(0) - - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = self._prepare_feature(fc_feats, att_feats, att_masks) - - assert beam_size <= self.vocab_size + 1, 'lets assume this for now, otherwise this corner case causes a few headaches down the road. 
can be dealt with in future if needed' - seq = fc_feats.new_full((batch_size*sample_n, self.seq_length), self.pad_idx, dtype=torch.long) - seqLogprobs = fc_feats.new_zeros(batch_size*sample_n, self.seq_length, self.vocab_size + 1) - # lets process every image independently for now, for simplicity - - self.done_beams = [[] for _ in range(batch_size)] - - state = self.init_hidden(batch_size) - - # first step, feed bos - it = fc_feats.new_full([batch_size], self.bos_idx, dtype=torch.long) - logprobs, state = self.get_logprobs_state(it, p_fc_feats, p_att_feats, pp_att_feats, p_att_masks, state) - - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = utils.repeat_tensors(beam_size, - [p_fc_feats, p_att_feats, pp_att_feats, p_att_masks] - ) - self.done_beams = self.beam_search(state, logprobs, p_fc_feats, p_att_feats, pp_att_feats, p_att_masks, opt=opt) - for k in range(batch_size): - if sample_n == beam_size: - for _n in range(sample_n): - seq_len = self.done_beams[k][_n]['seq'].shape[0] - seq[k*sample_n+_n, :seq_len] = self.done_beams[k][_n]['seq'] - seqLogprobs[k*sample_n+_n, :seq_len] = self.done_beams[k][_n]['logps'] - else: - seq_len = self.done_beams[k][0]['seq'].shape[0] - seq[k, :seq_len] = self.done_beams[k][0]['seq'] # the first beam has highest cumulative score - seqLogprobs[k, :seq_len] = self.done_beams[k][0]['logps'] - # return the samples and their log likelihoods - return seq, seqLogprobs - - def _sample(self, fc_feats, att_feats, att_masks=None, opt={}): - - sample_method = opt.get('sample_method', 'greedy') - beam_size = opt.get('beam_size', 1) - temperature = opt.get('temperature', 1.0) - sample_n = int(opt.get('sample_n', 1)) - group_size = opt.get('group_size', 1) - output_logsoftmax = opt.get('output_logsoftmax', 1) - decoding_constraint = opt.get('decoding_constraint', 0) - block_trigrams = opt.get('block_trigrams', 0) - remove_bad_endings = opt.get('remove_bad_endings', 0) - if beam_size > 1 and sample_method in ['greedy', 'beam_search']: - return self._sample_beam(fc_feats, att_feats, att_masks, opt) - if group_size > 1: - return self._diverse_sample(fc_feats, att_feats, att_masks, opt) - - batch_size = fc_feats.size(0) - state = self.init_hidden(batch_size*sample_n) - - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = self._prepare_feature(fc_feats, att_feats, att_masks) - - if sample_n > 1: - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = utils.repeat_tensors(sample_n, - [p_fc_feats, p_att_feats, pp_att_feats, p_att_masks] - ) - - trigrams = [] # will be a list of batch_size dictionaries - - seq = fc_feats.new_full((batch_size*sample_n, self.seq_length), self.pad_idx, dtype=torch.long) - seqLogprobs = fc_feats.new_zeros(batch_size*sample_n, self.seq_length, self.vocab_size + 1) - for t in range(self.seq_length + 1): - if t == 0: # input - it = fc_feats.new_full([batch_size*sample_n], self.bos_idx, dtype=torch.long) - - logprobs, state = self.get_logprobs_state(it, p_fc_feats, p_att_feats, pp_att_feats, p_att_masks, state, output_logsoftmax=output_logsoftmax) - - if decoding_constraint and t > 0: - tmp = logprobs.new_zeros(logprobs.size()) - tmp.scatter_(1, seq[:,t-1].data.unsqueeze(1), float('-inf')) - logprobs = logprobs + tmp - - if remove_bad_endings and t > 0: - tmp = logprobs.new_zeros(logprobs.size()) - prev_bad = np.isin(seq[:,t-1].data.cpu().numpy(), self.bad_endings_ix) - # Make it impossible to generate bad_endings - tmp[torch.from_numpy(prev_bad.astype('uint8')), 0] = float('-inf') - logprobs = logprobs + tmp - - # Mess with trigrams - # Copy 
from https://github.com/lukemelas/image-paragraph-captioning - if block_trigrams and t >= 3: - # Store trigram generated at last step - prev_two_batch = seq[:,t-3:t-1] - for i in range(batch_size): # = seq.size(0) - prev_two = (prev_two_batch[i][0].item(), prev_two_batch[i][1].item()) - current = seq[i][t-1] - if t == 3: # initialize - trigrams.append({prev_two: [current]}) # {LongTensor: list containing 1 int} - elif t > 3: - if prev_two in trigrams[i]: # add to list - trigrams[i][prev_two].append(current) - else: # create list - trigrams[i][prev_two] = [current] - # Block used trigrams at next step - prev_two_batch = seq[:,t-2:t] - mask = torch.zeros(logprobs.size(), requires_grad=False).to(logprobs.device) # batch_size x vocab_size - for i in range(batch_size): - prev_two = (prev_two_batch[i][0].item(), prev_two_batch[i][1].item()) - if prev_two in trigrams[i]: - for j in trigrams[i][prev_two]: - mask[i,j] += 1 - # Apply mask to log probs - #logprobs = logprobs - (mask * 1e9) - alpha = 2.0 # = 4 - logprobs = logprobs + (mask * -0.693 * alpha) # ln(1/2) * alpha (alpha -> infty works best) - - # sample the next word - if t == self.seq_length: # skip if we achieve maximum length - break - it, sampleLogprobs = self.sample_next_word(logprobs, sample_method, temperature) - - # stop when all finished - if t == 0: - unfinished = it != self.eos_idx - else: - it[~unfinished] = self.pad_idx # This allows eos_idx not being overwritten to 0 - logprobs = logprobs * unfinished.unsqueeze(1).to(logprobs) - unfinished = unfinished & (it != self.eos_idx) - seq[:,t] = it - seqLogprobs[:,t] = logprobs - # quit loop if all sequences have finished - if unfinished.sum() == 0: - break - - return seq, seqLogprobs - - def _diverse_sample(self, fc_feats, att_feats, att_masks=None, opt={}): - - sample_method = opt.get('sample_method', 'greedy') - beam_size = opt.get('beam_size', 1) - temperature = opt.get('temperature', 1.0) - group_size = opt.get('group_size', 1) - diversity_lambda = opt.get('diversity_lambda', 0.5) - decoding_constraint = opt.get('decoding_constraint', 0) - block_trigrams = opt.get('block_trigrams', 0) - remove_bad_endings = opt.get('remove_bad_endings', 0) - - batch_size = fc_feats.size(0) - state = self.init_hidden(batch_size) - - p_fc_feats, p_att_feats, pp_att_feats, p_att_masks = self._prepare_feature(fc_feats, att_feats, att_masks) - - trigrams_table = [[] for _ in range(group_size)] # will be a list of batch_size dictionaries - - seq_table = [fc_feats.new_full((batch_size, self.seq_length), self.pad_idx, dtype=torch.long) for _ in range(group_size)] - seqLogprobs_table = [fc_feats.new_zeros(batch_size, self.seq_length) for _ in range(group_size)] - state_table = [self.init_hidden(batch_size) for _ in range(group_size)] - - for tt in range(self.seq_length + group_size): - for divm in range(group_size): - t = tt - divm - seq = seq_table[divm] - seqLogprobs = seqLogprobs_table[divm] - trigrams = trigrams_table[divm] - if t >= 0 and t <= self.seq_length-1: - if t == 0: # input - it = fc_feats.new_full([batch_size], self.bos_idx, dtype=torch.long) - else: - it = seq[:, t-1] # changed - - logprobs, state_table[divm] = self.get_logprobs_state(it, p_fc_feats, p_att_feats, pp_att_feats, p_att_masks, state_table[divm]) # changed - logprobs = F.log_softmax(logprobs / temperature, dim=-1) - - # Add diversity - if divm > 0: - unaug_logprobs = logprobs.clone() - for prev_choice in range(divm): - prev_decisions = seq_table[prev_choice][:, t] - logprobs[:, prev_decisions] = logprobs[:, prev_decisions] - 
diversity_lambda - - if decoding_constraint and t > 0: - tmp = logprobs.new_zeros(logprobs.size()) - tmp.scatter_(1, seq[:,t-1].data.unsqueeze(1), float('-inf')) - logprobs = logprobs + tmp - - if remove_bad_endings and t > 0: - tmp = logprobs.new_zeros(logprobs.size()) - prev_bad = np.isin(seq[:,t-1].data.cpu().numpy(), self.bad_endings_ix) - # Impossible to generate remove_bad_endings - tmp[torch.from_numpy(prev_bad.astype('uint8')), 0] = float('-inf') - logprobs = logprobs + tmp - - # Mess with trigrams - if block_trigrams and t >= 3: - # Store trigram generated at last step - prev_two_batch = seq[:,t-3:t-1] - for i in range(batch_size): # = seq.size(0) - prev_two = (prev_two_batch[i][0].item(), prev_two_batch[i][1].item()) - current = seq[i][t-1] - if t == 3: # initialize - trigrams.append({prev_two: [current]}) # {LongTensor: list containing 1 int} - elif t > 3: - if prev_two in trigrams[i]: # add to list - trigrams[i][prev_two].append(current) - else: # create list - trigrams[i][prev_two] = [current] - # Block used trigrams at next step - prev_two_batch = seq[:,t-2:t] - mask = torch.zeros(logprobs.size(), requires_grad=False).cuda() # batch_size x vocab_size - for i in range(batch_size): - prev_two = (prev_two_batch[i][0].item(), prev_two_batch[i][1].item()) - if prev_two in trigrams[i]: - for j in trigrams[i][prev_two]: - mask[i,j] += 1 - # Apply mask to log probs - #logprobs = logprobs - (mask * 1e9) - alpha = 2.0 # = 4 - logprobs = logprobs + (mask * -0.693 * alpha) # ln(1/2) * alpha (alpha -> infty works best) - - it, sampleLogprobs = self.sample_next_word(logprobs, sample_method, 1) - - # stop when all finished - if t == 0: - unfinished = it != self.eos_idx - else: - unfinished = (seq[:,t-1] != self.pad_idx) & (seq[:,t-1] != self.eos_idx) - it[~unfinished] = self.pad_idx - unfinished = unfinished & (it != self.eos_idx) # changed - seq[:,t] = it - seqLogprobs[:,t] = sampleLogprobs.view(-1) - - return torch.stack(seq_table, 1).reshape(batch_size * group_size, -1), torch.stack(seqLogprobs_table, 1).reshape(batch_size * group_size, -1) - -class AdaAtt_lstm(nn.Module): - def __init__(self, opt, use_maxout=True): - super(AdaAtt_lstm, self).__init__() - self.input_encoding_size = opt.input_encoding_size - #self.rnn_type = opt.rnn_type - self.rnn_size = opt.rnn_size - self.num_layers = opt.num_layers - self.drop_prob_lm = opt.drop_prob_lm - self.fc_feat_size = opt.fc_feat_size - self.att_feat_size = opt.att_feat_size - self.att_hid_size = opt.att_hid_size - - self.use_maxout = use_maxout - - # Build a LSTM - self.w2h = nn.Linear(self.input_encoding_size, (4+(use_maxout==True)) * self.rnn_size) - self.v2h = nn.Linear(self.rnn_size, (4+(use_maxout==True)) * self.rnn_size) - - self.i2h = nn.ModuleList([nn.Linear(self.rnn_size, (4+(use_maxout==True)) * self.rnn_size) for _ in range(self.num_layers - 1)]) - self.h2h = nn.ModuleList([nn.Linear(self.rnn_size, (4+(use_maxout==True)) * self.rnn_size) for _ in range(self.num_layers)]) - - # Layers for getting the fake region - if self.num_layers == 1: - self.r_w2h = nn.Linear(self.input_encoding_size, self.rnn_size) - self.r_v2h = nn.Linear(self.rnn_size, self.rnn_size) - else: - self.r_i2h = nn.Linear(self.rnn_size, self.rnn_size) - self.r_h2h = nn.Linear(self.rnn_size, self.rnn_size) - - - def forward(self, xt, img_fc, state): - - hs = [] - cs = [] - for L in range(self.num_layers): - # c,h from previous timesteps - prev_h = state[0][L] - prev_c = state[1][L] - # the input to this layer - if L == 0: - x = xt - i2h = self.w2h(x) + 
self.v2h(img_fc) - else: - x = hs[-1] - x = F.dropout(x, self.drop_prob_lm, self.training) - i2h = self.i2h[L-1](x) - - all_input_sums = i2h+self.h2h[L](prev_h) - - sigmoid_chunk = all_input_sums.narrow(1, 0, 3 * self.rnn_size) - sigmoid_chunk = torch.sigmoid(sigmoid_chunk) - # decode the gates - in_gate = sigmoid_chunk.narrow(1, 0, self.rnn_size) - forget_gate = sigmoid_chunk.narrow(1, self.rnn_size, self.rnn_size) - out_gate = sigmoid_chunk.narrow(1, self.rnn_size * 2, self.rnn_size) - # decode the write inputs - if not self.use_maxout: - in_transform = torch.tanh(all_input_sums.narrow(1, 3 * self.rnn_size, self.rnn_size)) - else: - in_transform = all_input_sums.narrow(1, 3 * self.rnn_size, 2 * self.rnn_size) - in_transform = torch.max(\ - in_transform.narrow(1, 0, self.rnn_size), - in_transform.narrow(1, self.rnn_size, self.rnn_size)) - # perform the LSTM update - next_c = forget_gate * prev_c + in_gate * in_transform - # gated cells form the output - tanh_nex_c = torch.tanh(next_c) - next_h = out_gate * tanh_nex_c - if L == self.num_layers-1: - if L == 0: - i2h = self.r_w2h(x) + self.r_v2h(img_fc) - else: - i2h = self.r_i2h(x) - n5 = i2h+self.r_h2h(prev_h) - fake_region = torch.sigmoid(n5) * tanh_nex_c - - cs.append(next_c) - hs.append(next_h) - - # set up the decoder - top_h = hs[-1] - top_h = F.dropout(top_h, self.drop_prob_lm, self.training) - fake_region = F.dropout(fake_region, self.drop_prob_lm, self.training) - - state = (torch.cat([_.unsqueeze(0) for _ in hs], 0), - torch.cat([_.unsqueeze(0) for _ in cs], 0)) - return top_h, fake_region, state - -class AdaAtt_attention(nn.Module): - def __init__(self, opt): - super(AdaAtt_attention, self).__init__() - self.input_encoding_size = opt.input_encoding_size - #self.rnn_type = opt.rnn_type - self.rnn_size = opt.rnn_size - self.drop_prob_lm = opt.drop_prob_lm - self.att_hid_size = opt.att_hid_size - - # fake region embed - self.fr_linear = nn.Sequential( - nn.Linear(self.rnn_size, self.input_encoding_size), - nn.ReLU(), - nn.Dropout(self.drop_prob_lm)) - self.fr_embed = nn.Linear(self.input_encoding_size, self.att_hid_size) - - # h out embed - self.ho_linear = nn.Sequential( - nn.Linear(self.rnn_size, self.input_encoding_size), - nn.Tanh(), - nn.Dropout(self.drop_prob_lm)) - self.ho_embed = nn.Linear(self.input_encoding_size, self.att_hid_size) - - self.alpha_net = nn.Linear(self.att_hid_size, 1) - self.att2h = nn.Linear(self.rnn_size, self.rnn_size) - - def forward(self, h_out, fake_region, conv_feat, conv_feat_embed, att_masks=None): - - # View into three dimensions - att_size = conv_feat.numel() // conv_feat.size(0) // self.rnn_size - conv_feat = conv_feat.view(-1, att_size, self.rnn_size) - conv_feat_embed = conv_feat_embed.view(-1, att_size, self.att_hid_size) - - # view neighbor from bach_size * neighbor_num x rnn_size to bach_size x rnn_size * neighbor_num - fake_region = self.fr_linear(fake_region) - fake_region_embed = self.fr_embed(fake_region) - - h_out_linear = self.ho_linear(h_out) - h_out_embed = self.ho_embed(h_out_linear) - - txt_replicate = h_out_embed.unsqueeze(1).expand(h_out_embed.size(0), att_size + 1, h_out_embed.size(1)) - - img_all = torch.cat([fake_region.view(-1,1,self.input_encoding_size), conv_feat], 1) - img_all_embed = torch.cat([fake_region_embed.view(-1,1,self.input_encoding_size), conv_feat_embed], 1) - - hA = torch.tanh(img_all_embed + txt_replicate) - hA = F.dropout(hA,self.drop_prob_lm, self.training) - - hAflat = self.alpha_net(hA.view(-1, self.att_hid_size)) - PI = F.softmax(hAflat.view(-1, 
att_size + 1), dim=1) - - if att_masks is not None: - att_masks = att_masks.view(-1, att_size) - PI = PI * torch.cat([att_masks[:,:1], att_masks], 1) # assume one one at the first time step. - PI = PI / PI.sum(1, keepdim=True) - - visAtt = torch.bmm(PI.unsqueeze(1), img_all) - visAttdim = visAtt.squeeze(1) - - atten_out = visAttdim + h_out_linear - - h = torch.tanh(self.att2h(atten_out)) - h = F.dropout(h, self.drop_prob_lm, self.training) - return h - -class AdaAttCore(nn.Module): - def __init__(self, opt, use_maxout=False): - super(AdaAttCore, self).__init__() - self.lstm = AdaAtt_lstm(opt, use_maxout) - self.attention = AdaAtt_attention(opt) - - def forward(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks=None): - h_out, p_out, state = self.lstm(xt, fc_feats, state) - atten_out = self.attention(h_out, p_out, att_feats, p_att_feats, att_masks) - return atten_out, state - -class UpDownCore(nn.Module): - def __init__(self, opt, use_maxout=False): - super(UpDownCore, self).__init__() - self.drop_prob_lm = opt.drop_prob_lm - - self.att_lstm = nn.LSTMCell(opt.input_encoding_size + opt.rnn_size * 2, opt.rnn_size) # we, fc, h^2_t-1 - self.lang_lstm = nn.LSTMCell(opt.rnn_size * 2, opt.rnn_size) # h^1_t, \hat v - self.attention = Attention(opt) - - def forward(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks=None): - prev_h = state[0][-1] - att_lstm_input = torch.cat([prev_h, fc_feats, xt], 1) - - h_att, c_att = self.att_lstm(att_lstm_input, (state[0][0], state[1][0])) - - att = self.attention(h_att, att_feats, p_att_feats, att_masks) - - lang_lstm_input = torch.cat([att, h_att], 1) - # lang_lstm_input = torch.cat([att, F.dropout(h_att, self.drop_prob_lm, self.training)], 1) ????? - - h_lang, c_lang = self.lang_lstm(lang_lstm_input, (state[0][1], state[1][1])) - - output = F.dropout(h_lang, self.drop_prob_lm, self.training) - state = (torch.stack([h_att, h_lang]), torch.stack([c_att, c_lang])) - - return output, state - - -############################################################################ -# Notice: -# StackAtt and DenseAtt are models that I randomly designed. -# They are not related to any paper. 
-############################################################################ - -from .FCModel import LSTMCore -class StackAttCore(nn.Module): - def __init__(self, opt, use_maxout=False): - super(StackAttCore, self).__init__() - self.drop_prob_lm = opt.drop_prob_lm - - # self.att0 = Attention(opt) - self.att1 = Attention(opt) - self.att2 = Attention(opt) - - opt_input_encoding_size = opt.input_encoding_size - opt.input_encoding_size = opt.input_encoding_size + opt.rnn_size - self.lstm0 = LSTMCore(opt) # att_feat + word_embedding - opt.input_encoding_size = opt.rnn_size * 2 - self.lstm1 = LSTMCore(opt) - self.lstm2 = LSTMCore(opt) - opt.input_encoding_size = opt_input_encoding_size - - # self.emb1 = nn.Linear(opt.rnn_size, opt.rnn_size) - self.emb2 = nn.Linear(opt.rnn_size, opt.rnn_size) - - def forward(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks=None): - # att_res_0 = self.att0(state[0][-1], att_feats, p_att_feats, att_masks) - h_0, state_0 = self.lstm0(torch.cat([xt,fc_feats],1), [state[0][0:1], state[1][0:1]]) - att_res_1 = self.att1(h_0, att_feats, p_att_feats, att_masks) - h_1, state_1 = self.lstm1(torch.cat([h_0,att_res_1],1), [state[0][1:2], state[1][1:2]]) - att_res_2 = self.att2(h_1 + self.emb2(att_res_1), att_feats, p_att_feats, att_masks) - h_2, state_2 = self.lstm2(torch.cat([h_1,att_res_2],1), [state[0][2:3], state[1][2:3]]) - - return h_2, [torch.cat(_, 0) for _ in zip(state_0, state_1, state_2)] - -class DenseAttCore(nn.Module): - def __init__(self, opt, use_maxout=False): - super(DenseAttCore, self).__init__() - self.drop_prob_lm = opt.drop_prob_lm - - # self.att0 = Attention(opt) - self.att1 = Attention(opt) - self.att2 = Attention(opt) - - opt_input_encoding_size = opt.input_encoding_size - opt.input_encoding_size = opt.input_encoding_size + opt.rnn_size - self.lstm0 = LSTMCore(opt) # att_feat + word_embedding - opt.input_encoding_size = opt.rnn_size * 2 - self.lstm1 = LSTMCore(opt) - self.lstm2 = LSTMCore(opt) - opt.input_encoding_size = opt_input_encoding_size - - # self.emb1 = nn.Linear(opt.rnn_size, opt.rnn_size) - self.emb2 = nn.Linear(opt.rnn_size, opt.rnn_size) - - # fuse h_0 and h_1 - self.fusion1 = nn.Sequential(nn.Linear(opt.rnn_size*2, opt.rnn_size), - nn.ReLU(), - nn.Dropout(opt.drop_prob_lm)) - # fuse h_0, h_1 and h_2 - self.fusion2 = nn.Sequential(nn.Linear(opt.rnn_size*3, opt.rnn_size), - nn.ReLU(), - nn.Dropout(opt.drop_prob_lm)) - - def forward(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks=None): - # att_res_0 = self.att0(state[0][-1], att_feats, p_att_feats, att_masks) - h_0, state_0 = self.lstm0(torch.cat([xt,fc_feats],1), [state[0][0:1], state[1][0:1]]) - att_res_1 = self.att1(h_0, att_feats, p_att_feats, att_masks) - h_1, state_1 = self.lstm1(torch.cat([h_0,att_res_1],1), [state[0][1:2], state[1][1:2]]) - att_res_2 = self.att2(h_1 + self.emb2(att_res_1), att_feats, p_att_feats, att_masks) - h_2, state_2 = self.lstm2(torch.cat([self.fusion1(torch.cat([h_0, h_1], 1)),att_res_2],1), [state[0][2:3], state[1][2:3]]) - - return self.fusion2(torch.cat([h_0, h_1, h_2], 1)), [torch.cat(_, 0) for _ in zip(state_0, state_1, state_2)] - -class Attention(nn.Module): - def __init__(self, opt): - super(Attention, self).__init__() - self.rnn_size = opt.rnn_size - self.att_hid_size = opt.att_hid_size - - self.h2att = nn.Linear(self.rnn_size, self.att_hid_size) - self.alpha_net = nn.Linear(self.att_hid_size, 1) - - def forward(self, h, att_feats, p_att_feats, att_masks=None): - # The p_att_feats here is already projected - att_size = 
att_feats.numel() // att_feats.size(0) // att_feats.size(-1) - att = p_att_feats.view(-1, att_size, self.att_hid_size) - - att_h = self.h2att(h) # batch * att_hid_size - att_h = att_h.unsqueeze(1).expand_as(att) # batch * att_size * att_hid_size - dot = att + att_h # batch * att_size * att_hid_size - dot = torch.tanh(dot) # batch * att_size * att_hid_size - dot = dot.view(-1, self.att_hid_size) # (batch * att_size) * att_hid_size - dot = self.alpha_net(dot) # (batch * att_size) * 1 - dot = dot.view(-1, att_size) # batch * att_size - - weight = F.softmax(dot, dim=1) # batch * att_size - if att_masks is not None: - weight = weight * att_masks.view(-1, att_size).to(weight) - weight = weight / weight.sum(1, keepdim=True) # normalize to 1 - att_feats_ = att_feats.view(-1, att_size, att_feats.size(-1)) # batch * att_size * att_feat_size - att_res = torch.bmm(weight.unsqueeze(1), att_feats_).squeeze(1) # batch * att_feat_size - - return att_res - -class Att2in2Core(nn.Module): - def __init__(self, opt): - super(Att2in2Core, self).__init__() - self.input_encoding_size = opt.input_encoding_size - #self.rnn_type = opt.rnn_type - self.rnn_size = opt.rnn_size - #self.num_layers = opt.num_layers - self.drop_prob_lm = opt.drop_prob_lm - self.fc_feat_size = opt.fc_feat_size - self.att_feat_size = opt.att_feat_size - self.att_hid_size = opt.att_hid_size - - # Build a LSTM - self.a2c = nn.Linear(self.rnn_size, 2 * self.rnn_size) - self.i2h = nn.Linear(self.input_encoding_size, 5 * self.rnn_size) - self.h2h = nn.Linear(self.rnn_size, 5 * self.rnn_size) - self.dropout = nn.Dropout(self.drop_prob_lm) - - self.attention = Attention(opt) - - def forward(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks=None): - att_res = self.attention(state[0][-1], att_feats, p_att_feats, att_masks) - - all_input_sums = self.i2h(xt) + self.h2h(state[0][-1]) - sigmoid_chunk = all_input_sums.narrow(1, 0, 3 * self.rnn_size) - sigmoid_chunk = torch.sigmoid(sigmoid_chunk) - in_gate = sigmoid_chunk.narrow(1, 0, self.rnn_size) - forget_gate = sigmoid_chunk.narrow(1, self.rnn_size, self.rnn_size) - out_gate = sigmoid_chunk.narrow(1, self.rnn_size * 2, self.rnn_size) - - in_transform = all_input_sums.narrow(1, 3 * self.rnn_size, 2 * self.rnn_size) + \ - self.a2c(att_res) - in_transform = torch.max(\ - in_transform.narrow(1, 0, self.rnn_size), - in_transform.narrow(1, self.rnn_size, self.rnn_size)) - next_c = forget_gate * state[1][-1] + in_gate * in_transform - next_h = out_gate * torch.tanh(next_c) - - output = self.dropout(next_h) - state = (next_h.unsqueeze(0), next_c.unsqueeze(0)) - return output, state - -class Att2inCore(Att2in2Core): - def __init__(self, opt): - super(Att2inCore, self).__init__(opt) - del self.a2c - self.a2c = nn.Linear(self.att_feat_size, 2 * self.rnn_size) - -""" -Note this is my attempt to replicate att2all model in self-critical paper. -However, this is not a correct replication actually. Will fix it. 
-""" -class Att2all2Core(nn.Module): - def __init__(self, opt): - super(Att2all2Core, self).__init__() - self.input_encoding_size = opt.input_encoding_size - #self.rnn_type = opt.rnn_type - self.rnn_size = opt.rnn_size - #self.num_layers = opt.num_layers - self.drop_prob_lm = opt.drop_prob_lm - self.fc_feat_size = opt.fc_feat_size - self.att_feat_size = opt.att_feat_size - self.att_hid_size = opt.att_hid_size - - # Build a LSTM - self.a2h = nn.Linear(self.rnn_size, 5 * self.rnn_size) - self.i2h = nn.Linear(self.input_encoding_size, 5 * self.rnn_size) - self.h2h = nn.Linear(self.rnn_size, 5 * self.rnn_size) - self.dropout = nn.Dropout(self.drop_prob_lm) - - self.attention = Attention(opt) - - def forward(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks=None): - att_res = self.attention(state[0][-1], att_feats, p_att_feats, att_masks) - - all_input_sums = self.i2h(xt) + self.h2h(state[0][-1]) + self.a2h(att_res) - sigmoid_chunk = all_input_sums.narrow(1, 0, 3 * self.rnn_size) - sigmoid_chunk = torch.sigmoid(sigmoid_chunk) - in_gate = sigmoid_chunk.narrow(1, 0, self.rnn_size) - forget_gate = sigmoid_chunk.narrow(1, self.rnn_size, self.rnn_size) - out_gate = sigmoid_chunk.narrow(1, self.rnn_size * 2, self.rnn_size) - - in_transform = all_input_sums.narrow(1, 3 * self.rnn_size, 2 * self.rnn_size) - in_transform = torch.max(\ - in_transform.narrow(1, 0, self.rnn_size), - in_transform.narrow(1, self.rnn_size, self.rnn_size)) - next_c = forget_gate * state[1][-1] + in_gate * in_transform - next_h = out_gate * torch.tanh(next_c) - - output = self.dropout(next_h) - state = (next_h.unsqueeze(0), next_c.unsqueeze(0)) - return output, state - -class AdaAttModel(AttModel): - def __init__(self, opt): - super(AdaAttModel, self).__init__(opt) - self.core = AdaAttCore(opt) - -# AdaAtt with maxout lstm -class AdaAttMOModel(AttModel): - def __init__(self, opt): - super(AdaAttMOModel, self).__init__(opt) - self.core = AdaAttCore(opt, True) - -class Att2in2Model(AttModel): - def __init__(self, opt): - super(Att2in2Model, self).__init__(opt) - self.core = Att2in2Core(opt) - delattr(self, 'fc_embed') - self.fc_embed = lambda x : x - -class Att2all2Model(AttModel): - def __init__(self, opt): - super(Att2all2Model, self).__init__(opt) - self.core = Att2all2Core(opt) - delattr(self, 'fc_embed') - self.fc_embed = lambda x : x - -class UpDownModel(AttModel): - def __init__(self, opt): - super(UpDownModel, self).__init__(opt) - self.num_layers = 2 - self.core = UpDownCore(opt) - -class StackAttModel(AttModel): - def __init__(self, opt): - super(StackAttModel, self).__init__(opt) - self.num_layers = 3 - self.core = StackAttCore(opt) - -class DenseAttModel(AttModel): - def __init__(self, opt): - super(DenseAttModel, self).__init__(opt) - self.num_layers = 3 - self.core = DenseAttCore(opt) - -class Att2inModel(AttModel): - def __init__(self, opt): - super(Att2inModel, self).__init__(opt) - del self.embed, self.fc_embed, self.att_embed - self.embed = nn.Embedding(self.vocab_size + 1, self.input_encoding_size) - self.fc_embed = self.att_embed = lambda x: x - del self.ctx2att - self.ctx2att = nn.Linear(self.att_feat_size, self.att_hid_size) - self.core = Att2inCore(opt) - self.init_weights() - - def init_weights(self): - initrange = 0.1 - self.embed.weight.data.uniform_(-initrange, initrange) - self.logit.bias.data.fill_(0) - self.logit.weight.data.uniform_(-initrange, initrange) - - -class NewFCModel(AttModel): - def __init__(self, opt): - super(NewFCModel, self).__init__(opt) - self.fc_embed = 
nn.Linear(self.fc_feat_size, self.input_encoding_size) - self.embed = nn.Embedding(self.vocab_size + 1, self.input_encoding_size) - self._core = LSTMCore(opt) - delattr(self, 'att_embed') - self.att_embed = lambda x : x - delattr(self, 'ctx2att') - self.ctx2att = lambda x: x - - def core(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks): - # Step 0, feed the input image - # if (self.training and state[0].is_leaf) or \ - # (not self.training and state[0].sum() == 0): - # _, state = self._core(fc_feats, state) - # three cases - # normal mle training - # Sample - # beam search (diverse beam search) - # fixed captioning module. - is_first_step = (state[0]==0).all(2).all(0) # size: B - if is_first_step.all(): - _, state = self._core(fc_feats, state) - elif is_first_step.any(): - # This is mostly for diverse beam search I think - new_state = [torch.zeros_like(_) for _ in state] - new_state[0][:, ~is_first_step] = state[0][:, ~is_first_step] - new_state[1][:, ~is_first_step] = state[1][:, ~is_first_step] - _, state = self._core(fc_feats, state) - new_state[0][:, is_first_step] = state[0][:, is_first_step] - new_state[1][:, is_first_step] = state[1][:, is_first_step] - state = new_state - # if (state[0]==0).all(): - # # Let's forget about diverse beam search first - # _, state = self._core(fc_feats, state) - return self._core(xt, state) - - def _prepare_feature(self, fc_feats, att_feats, att_masks): - fc_feats = self.fc_embed(fc_feats) - - return fc_feats, att_feats, att_feats, att_masks - - -class LMModel(AttModel): - def __init__(self, opt): - super(LMModel, self).__init__(opt) - delattr(self, 'fc_embed') - self.fc_embed = lambda x: x.new_zeros(x.shape[0], self.input_encoding_size) - self.embed = nn.Embedding(self.vocab_size + 1, self.input_encoding_size) - self._core = LSTMCore(opt) - delattr(self, 'att_embed') - self.att_embed = lambda x : x - delattr(self, 'ctx2att') - self.ctx2att = lambda x: x - - def core(self, xt, fc_feats, att_feats, p_att_feats, state, att_masks): - if (state[0]==0).all(): - # Let's forget about diverse beam search first - _, state = self._core(fc_feats, state) - return self._core(xt, state) - - def _prepare_feature(self, fc_feats, att_feats, att_masks): - fc_feats = self.fc_embed(fc_feats) - - return fc_feats, None, None, None \ No newline at end of file diff --git a/spaces/Nalla/PDF_tables_to_CSV_output/README.md b/spaces/Nalla/PDF_tables_to_CSV_output/README.md deleted file mode 100644 index 31e688d36117a66f2e73af2caa76d9d5a2e7bbcf..0000000000000000000000000000000000000000 --- a/spaces/Nalla/PDF_tables_to_CSV_output/README.md +++ /dev/null @@ -1,11 +0,0 @@ ---- -title: PDF_tables_to_CSV_output -emoji: ⚡ -colorFrom: blue -colorTo: red -sdk: streamlit -app_file: App_For_PDF_To_Dataframe.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/Nixic/rvc-models/infer_pack/modules.py b/spaces/Nixic/rvc-models/infer_pack/modules.py deleted file mode 100644 index 960481cedad9a6106f2bf0b9e86e82b120f7b33f..0000000000000000000000000000000000000000 --- a/spaces/Nixic/rvc-models/infer_pack/modules.py +++ /dev/null @@ -1,522 +0,0 @@ -import copy -import math -import numpy as np -import scipy -import torch -from torch import nn -from torch.nn import functional as F - -from torch.nn import Conv1d, ConvTranspose1d, AvgPool1d, Conv2d -from torch.nn.utils import weight_norm, remove_weight_norm - -from infer_pack import commons -from infer_pack.commons import init_weights, get_padding -from 
infer_pack.transforms import piecewise_rational_quadratic_transform - - -LRELU_SLOPE = 0.1 - - -class LayerNorm(nn.Module): - def __init__(self, channels, eps=1e-5): - super().__init__() - self.channels = channels - self.eps = eps - - self.gamma = nn.Parameter(torch.ones(channels)) - self.beta = nn.Parameter(torch.zeros(channels)) - - def forward(self, x): - x = x.transpose(1, -1) - x = F.layer_norm(x, (self.channels,), self.gamma, self.beta, self.eps) - return x.transpose(1, -1) - - -class ConvReluNorm(nn.Module): - def __init__( - self, - in_channels, - hidden_channels, - out_channels, - kernel_size, - n_layers, - p_dropout, - ): - super().__init__() - self.in_channels = in_channels - self.hidden_channels = hidden_channels - self.out_channels = out_channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - assert n_layers > 1, "Number of layers should be larger than 0." - - self.conv_layers = nn.ModuleList() - self.norm_layers = nn.ModuleList() - self.conv_layers.append( - nn.Conv1d( - in_channels, hidden_channels, kernel_size, padding=kernel_size // 2 - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.relu_drop = nn.Sequential(nn.ReLU(), nn.Dropout(p_dropout)) - for _ in range(n_layers - 1): - self.conv_layers.append( - nn.Conv1d( - hidden_channels, - hidden_channels, - kernel_size, - padding=kernel_size // 2, - ) - ) - self.norm_layers.append(LayerNorm(hidden_channels)) - self.proj = nn.Conv1d(hidden_channels, out_channels, 1) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask): - x_org = x - for i in range(self.n_layers): - x = self.conv_layers[i](x * x_mask) - x = self.norm_layers[i](x) - x = self.relu_drop(x) - x = x_org + self.proj(x) - return x * x_mask - - -class DDSConv(nn.Module): - """ - Dialted and Depth-Separable Convolution - """ - - def __init__(self, channels, kernel_size, n_layers, p_dropout=0.0): - super().__init__() - self.channels = channels - self.kernel_size = kernel_size - self.n_layers = n_layers - self.p_dropout = p_dropout - - self.drop = nn.Dropout(p_dropout) - self.convs_sep = nn.ModuleList() - self.convs_1x1 = nn.ModuleList() - self.norms_1 = nn.ModuleList() - self.norms_2 = nn.ModuleList() - for i in range(n_layers): - dilation = kernel_size**i - padding = (kernel_size * dilation - dilation) // 2 - self.convs_sep.append( - nn.Conv1d( - channels, - channels, - kernel_size, - groups=channels, - dilation=dilation, - padding=padding, - ) - ) - self.convs_1x1.append(nn.Conv1d(channels, channels, 1)) - self.norms_1.append(LayerNorm(channels)) - self.norms_2.append(LayerNorm(channels)) - - def forward(self, x, x_mask, g=None): - if g is not None: - x = x + g - for i in range(self.n_layers): - y = self.convs_sep[i](x * x_mask) - y = self.norms_1[i](y) - y = F.gelu(y) - y = self.convs_1x1[i](y) - y = self.norms_2[i](y) - y = F.gelu(y) - y = self.drop(y) - x = x + y - return x * x_mask - - -class WN(torch.nn.Module): - def __init__( - self, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - gin_channels=0, - p_dropout=0, - ): - super(WN, self).__init__() - assert kernel_size % 2 == 1 - self.hidden_channels = hidden_channels - self.kernel_size = (kernel_size,) - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.gin_channels = gin_channels - self.p_dropout = p_dropout - - self.in_layers = torch.nn.ModuleList() - self.res_skip_layers = torch.nn.ModuleList() - self.drop = nn.Dropout(p_dropout) - - if gin_channels != 0: - cond_layer = 
torch.nn.Conv1d( - gin_channels, 2 * hidden_channels * n_layers, 1 - ) - self.cond_layer = torch.nn.utils.weight_norm(cond_layer, name="weight") - - for i in range(n_layers): - dilation = dilation_rate**i - padding = int((kernel_size * dilation - dilation) / 2) - in_layer = torch.nn.Conv1d( - hidden_channels, - 2 * hidden_channels, - kernel_size, - dilation=dilation, - padding=padding, - ) - in_layer = torch.nn.utils.weight_norm(in_layer, name="weight") - self.in_layers.append(in_layer) - - # last one is not necessary - if i < n_layers - 1: - res_skip_channels = 2 * hidden_channels - else: - res_skip_channels = hidden_channels - - res_skip_layer = torch.nn.Conv1d(hidden_channels, res_skip_channels, 1) - res_skip_layer = torch.nn.utils.weight_norm(res_skip_layer, name="weight") - self.res_skip_layers.append(res_skip_layer) - - def forward(self, x, x_mask, g=None, **kwargs): - output = torch.zeros_like(x) - n_channels_tensor = torch.IntTensor([self.hidden_channels]) - - if g is not None: - g = self.cond_layer(g) - - for i in range(self.n_layers): - x_in = self.in_layers[i](x) - if g is not None: - cond_offset = i * 2 * self.hidden_channels - g_l = g[:, cond_offset : cond_offset + 2 * self.hidden_channels, :] - else: - g_l = torch.zeros_like(x_in) - - acts = commons.fused_add_tanh_sigmoid_multiply(x_in, g_l, n_channels_tensor) - acts = self.drop(acts) - - res_skip_acts = self.res_skip_layers[i](acts) - if i < self.n_layers - 1: - res_acts = res_skip_acts[:, : self.hidden_channels, :] - x = (x + res_acts) * x_mask - output = output + res_skip_acts[:, self.hidden_channels :, :] - else: - output = output + res_skip_acts - return output * x_mask - - def remove_weight_norm(self): - if self.gin_channels != 0: - torch.nn.utils.remove_weight_norm(self.cond_layer) - for l in self.in_layers: - torch.nn.utils.remove_weight_norm(l) - for l in self.res_skip_layers: - torch.nn.utils.remove_weight_norm(l) - - -class ResBlock1(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3, 5)): - super(ResBlock1, self).__init__() - self.convs1 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[2], - padding=get_padding(kernel_size, dilation[2]), - ) - ), - ] - ) - self.convs1.apply(init_weights) - - self.convs2 = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=1, - padding=get_padding(kernel_size, 1), - ) - ), - ] - ) - self.convs2.apply(init_weights) - - def forward(self, x, x_mask=None): - for c1, c2 in zip(self.convs1, self.convs2): - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c1(xt) - xt = F.leaky_relu(xt, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c2(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs1: - remove_weight_norm(l) - for l in self.convs2: - remove_weight_norm(l) - - -class 
ResBlock2(torch.nn.Module): - def __init__(self, channels, kernel_size=3, dilation=(1, 3)): - super(ResBlock2, self).__init__() - self.convs = nn.ModuleList( - [ - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[0], - padding=get_padding(kernel_size, dilation[0]), - ) - ), - weight_norm( - Conv1d( - channels, - channels, - kernel_size, - 1, - dilation=dilation[1], - padding=get_padding(kernel_size, dilation[1]), - ) - ), - ] - ) - self.convs.apply(init_weights) - - def forward(self, x, x_mask=None): - for c in self.convs: - xt = F.leaky_relu(x, LRELU_SLOPE) - if x_mask is not None: - xt = xt * x_mask - xt = c(xt) - x = xt + x - if x_mask is not None: - x = x * x_mask - return x - - def remove_weight_norm(self): - for l in self.convs: - remove_weight_norm(l) - - -class Log(nn.Module): - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = torch.log(torch.clamp_min(x, 1e-5)) * x_mask - logdet = torch.sum(-y, [1, 2]) - return y, logdet - else: - x = torch.exp(x) * x_mask - return x - - -class Flip(nn.Module): - def forward(self, x, *args, reverse=False, **kwargs): - x = torch.flip(x, [1]) - if not reverse: - logdet = torch.zeros(x.size(0)).to(dtype=x.dtype, device=x.device) - return x, logdet - else: - return x - - -class ElementwiseAffine(nn.Module): - def __init__(self, channels): - super().__init__() - self.channels = channels - self.m = nn.Parameter(torch.zeros(channels, 1)) - self.logs = nn.Parameter(torch.zeros(channels, 1)) - - def forward(self, x, x_mask, reverse=False, **kwargs): - if not reverse: - y = self.m + torch.exp(self.logs) * x - y = y * x_mask - logdet = torch.sum(self.logs * x_mask, [1, 2]) - return y, logdet - else: - x = (x - self.m) * torch.exp(-self.logs) * x_mask - return x - - -class ResidualCouplingLayer(nn.Module): - def __init__( - self, - channels, - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=0, - gin_channels=0, - mean_only=False, - ): - assert channels % 2 == 0, "channels should be divisible by 2" - super().__init__() - self.channels = channels - self.hidden_channels = hidden_channels - self.kernel_size = kernel_size - self.dilation_rate = dilation_rate - self.n_layers = n_layers - self.half_channels = channels // 2 - self.mean_only = mean_only - - self.pre = nn.Conv1d(self.half_channels, hidden_channels, 1) - self.enc = WN( - hidden_channels, - kernel_size, - dilation_rate, - n_layers, - p_dropout=p_dropout, - gin_channels=gin_channels, - ) - self.post = nn.Conv1d(hidden_channels, self.half_channels * (2 - mean_only), 1) - self.post.weight.data.zero_() - self.post.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) * x_mask - h = self.enc(h, x_mask, g=g) - stats = self.post(h) * x_mask - if not self.mean_only: - m, logs = torch.split(stats, [self.half_channels] * 2, 1) - else: - m = stats - logs = torch.zeros_like(m) - - if not reverse: - x1 = m + x1 * torch.exp(logs) * x_mask - x = torch.cat([x0, x1], 1) - logdet = torch.sum(logs, [1, 2]) - return x, logdet - else: - x1 = (x1 - m) * torch.exp(-logs) * x_mask - x = torch.cat([x0, x1], 1) - return x - - def remove_weight_norm(self): - self.enc.remove_weight_norm() - - -class ConvFlow(nn.Module): - def __init__( - self, - in_channels, - filter_channels, - kernel_size, - n_layers, - num_bins=10, - tail_bound=5.0, - ): - super().__init__() - self.in_channels = in_channels - self.filter_channels = filter_channels - 
self.kernel_size = kernel_size - self.n_layers = n_layers - self.num_bins = num_bins - self.tail_bound = tail_bound - self.half_channels = in_channels // 2 - - self.pre = nn.Conv1d(self.half_channels, filter_channels, 1) - self.convs = DDSConv(filter_channels, kernel_size, n_layers, p_dropout=0.0) - self.proj = nn.Conv1d( - filter_channels, self.half_channels * (num_bins * 3 - 1), 1 - ) - self.proj.weight.data.zero_() - self.proj.bias.data.zero_() - - def forward(self, x, x_mask, g=None, reverse=False): - x0, x1 = torch.split(x, [self.half_channels] * 2, 1) - h = self.pre(x0) - h = self.convs(h, x_mask, g=g) - h = self.proj(h) * x_mask - - b, c, t = x0.shape - h = h.reshape(b, c, -1, t).permute(0, 1, 3, 2) # [b, cx?, t] -> [b, c, t, ?] - - unnormalized_widths = h[..., : self.num_bins] / math.sqrt(self.filter_channels) - unnormalized_heights = h[..., self.num_bins : 2 * self.num_bins] / math.sqrt( - self.filter_channels - ) - unnormalized_derivatives = h[..., 2 * self.num_bins :] - - x1, logabsdet = piecewise_rational_quadratic_transform( - x1, - unnormalized_widths, - unnormalized_heights, - unnormalized_derivatives, - inverse=reverse, - tails="linear", - tail_bound=self.tail_bound, - ) - - x = torch.cat([x0, x1], 1) * x_mask - logdet = torch.sum(logabsdet * x_mask, [1, 2]) - if not reverse: - return x, logdet - else: - return x diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/README.md b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/README.md deleted file mode 100644 index 0b213fd202d04bce2149936ec149c23c6d483745..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/examples/wav2vec/unsupervised/README.md +++ /dev/null @@ -1,103 +0,0 @@ -# wav2vec Unsupervised (wav2vec-U) - -Wav2vec Unsupervised (wav2vec-U) is a framework for building speech recognition systems without any labeled training data as described in [Unsupervised Speech Recognition (Baevski et al., 2021)](https://ai.facebook.com/research/publications/unsupervised-speech-recognition). The model takes as input wav2vec 2.0 or XLSR representations (see [pretrained models](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec)) as well as unlabeled speech and text data. - - The wav2vec-U training procedure consists of three consecutive main steps: -* Preparation of speech representations and text data -* Generative adversarial training (GAN) -* Iterative self-training + Kaldi LM-decoding - -## Preparation of speech and text data -Similar to [wav2vec 2.0](https://github.com/pytorch/fairseq/blob/main/examples/wav2vec/README.md), data folders contain {train,valid,test}.{tsv,wrd,phn} files, where audio paths are stored in tsv files, and word, letter or phoneme transcriptions are stored in .{wrd,ltr,phn}. - -In **/path/to/data/with_silence** you need a *train.tsv* file as well as (optionally) *{valid,test}.{tsv,wrd,phn}*. It is nice to have *10h.{tsv,phn}* files there too for reproducing the ablation study on layer selection. In **/path/to/data/without_silence** you have the same files, except *.tsv* files contain audios with silences removed using rVAD. 
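For orientation, these files follow the usual wav2vec manifest layout: a *.tsv* manifest starts with the audio root directory and then lists one tab-separated relative path / sample-count pair per line, while each *.phn* line holds the space-separated phoneme sequence of the corresponding utterance. The listing below is only an illustrative sketch; the paths, counts, and phoneme strings are made up:

```
# train.tsv (first line: audio root; then: relative path <TAB> number of samples)
/hypothetical/audio/root
speaker1/utt0001.wav	241760
speaker1/utt0002.wav	198400

# train.phn (one space-separated phoneme sequence per utterance, same order as train.tsv)
dh ah k ae t s ae t aa n dh ah m ae t
```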
- -Pre-requisites: -* set FAIRSEQ_ROOT environmental variable to your fairseq installation -* set RVAD_ROOT environmental variable to a checkout of [rVADfast](https://github.com/zhenghuatan/rVADfast) -* set KENLM_ROOT environmental variable to the location of [KenLM](https://github.com/kpu/kenlm) binaries -* install [PyKaldi](https://github.com/pykaldi/pykaldi) and set KALDI_ROOT environmental variable to the location of your kaldi installation. To use the version bundled with PyKaldi, you can use /path/to/pykaldi/tools/kaldi - -Create new audio files without silences: -```shell -# create a manifest file for the set original of audio files -python $FAIRSEQ_ROOT/examples/wav2vec/wav2vec_manifest.py /dir/to/save/audio/files --ext wav --dest /path/to/new/train.tsv --valid-percent 0 - -python scripts/vads.py -r $RVAD_ROOT < /path/to/train.tsv > train.vads - -python scripts/remove_silence.py --tsv /path/to/train.tsv --vads train.vads --out /dir/to/save/audio/files - -python $FAIRSEQ_ROOT/examples/wav2vec/wav2vec_manifest.py /dir/to/save/audio/files --ext wav --dest /path/to/new/train.tsv --valid-percent 0.01 -``` - -Next, we need to preprocess the audio data to better match phonemized text data: - -```shell -zsh scripts/prepare_audio.sh /dir/with/{train,test,valid}.tsv /output/dir /path/to/wav2vec2/model.pt 512 14 -``` -Note that if you have splits different than train/valid/test, you will need to modify this script. The last two arguments are the PCA dimensionality and the 0-based index of the layer from which to extract representations. - -Now we need to prepare text data: -```shell -zsh scripts/prepare_text.sh language /path/to/text/file /output/dir 1000 espeak /path/to/fasttext/lid/model -``` - -The fourth argument is minimum number observations of phones to keep. If your text corpus is small, you might want to reduce this number. - -The fifth argument is which phonemizer to use. Supported values are [espeak](http://espeak.sourceforge.net/), [espeak-ng](https://github.com/espeak-ng/espeak-ng), and [G2P](https://github.com/Kyubyong/g2p) (english only). - -Pre-trained fasttext LID models can be downloaded [here](https://fasttext.cc/docs/en/language-identification.html). - -### Prepare TIMIT data -TIMIT transcripts include silence. Therefore VAD is not used for audio preprocessing, and we do not wrap transcripts with silences or insert random silence in between words. - -To prepare TIMIT data for both the matched an unmatched setup: -```shell -bash scripts/prepare_timit.sh /dir/to/timit/raw/data /output/dir /path/to/wav2vec2/model.pt -``` - -Note that we assume the TIMIT distribution with capitalized directories and filenames are used (e.g., `TRAIN/DR1/FCJF0/SA1.PHN`). - -## Generative adversarial training (GAN) - -We then use a GAN model to build a first unsupervised ASR model. The data preparation above of both speech features and text data is a necessary procedure that enables the generator to match speech to text in an unsupervised way. 
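The fairseq-hydra-train command below drives this step. Purely as a conceptual sketch of the adversarial objective it optimizes — the real recipe also adds the code-penalty, gradient-penalty, and smoothness terms set in the command, and the module and function names here are hypothetical — the core generator/discriminator update looks roughly like:

```python
import torch
import torch.nn.functional as F


def adversarial_step(generator, discriminator, speech_feats, real_phoneme_onehots):
    """Illustrative GAN update (not the fairseq implementation): the generator maps
    speech segment features to phoneme distributions, and the discriminator tries to
    tell them apart from phonemized real text."""
    fake_phonemes = torch.softmax(generator(speech_feats), dim=-1)

    # Discriminator loss: score real phonemized text as 1, generator output as 0.
    d_real = discriminator(real_phoneme_onehots)
    d_fake = discriminator(fake_phonemes.detach())
    d_loss = (
        F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
        + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake))
    )

    # Generator loss: push the discriminator to score generated phonemes as real text.
    g_out = discriminator(fake_phonemes)
    g_loss = F.binary_cross_entropy_with_logits(g_out, torch.ones_like(g_out))
    return d_loss, g_loss
```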
- -Launching GAN training on top of preprocessed features, with default hyperparameters can be done with: - -``` -PREFIX=w2v_unsup_gan_xp -TASK_DATA=/path/to/features/precompute_unfiltered_pca512_cls128_mean_pooled -TEXT_DATA=/path/to/data/phones # path to fairseq-preprocessed GAN data (phones dir) -KENLM_PATH=/path/to/data/phones/kenlm.phn.o4.bin # KenLM 4-gram phoneme language model (LM data = GAN data here) - -PYTHONPATH=$FAIRSEQ_ROOT PREFIX=$PREFIX fairseq-hydra-train \ - -m --config-dir config/gan \ - --config-name w2vu \ - task.data=${TASK_DATA} \ - task.text_data=${TEXT_DATA} \ - task.kenlm_path=${KENLM_PATH} \ - common.user_dir=${FAIRSEQ_ROOT}/examples/wav2vec/unsupervised \ - model.code_penalty=2,4 model.gradient_penalty=1.5,2.0 \ - model.smoothness_weight=0.5,0.75,1.0 'common.seed=range(0,5)' -``` - - -Once we find the best checkpoint (chosen using unsupervised metric that combined language model perplexity and vocabulary usage), we can use it to generate phone labels (or word labels with an appropriate kaldi WFST): - -```shell -python w2vu_generate.py --config-dir config/generate --config-name viterbi \ -fairseq.common.user_dir=${FAIRSEQ_ROOT}/examples/wav2vec/unsupervised \ -fairseq.task.data=/path/to/dir/with/features \ -fairseq.common_eval.path=/path/to/gan/checkpoint \ -fairseq.dataset.gen_subset=valid results_path=/where/to/save/transcriptions -``` - -The decoding without LM works best on the same adjacent-mean-pooled features that the gan was trained on, while decoding with LM works better on features before the adjacent timestep mean-pooling step (without the "_pooled" suffix). - -## Iterative self-training + Kaldi LM-decoding -After the GAN training provides a first unsupervised model, we can then progressively refine the quality of transcriptions using several iterations of semi-supervised learning. We perform two iterations: first, pseudo-label the training data with the unsupervised GAN model and train an HMM on the pseudo-labels. Second, we relabel the training data with the HMM and then fine-tune the original wav2vec 2.0 model using the HMM pseudo-labels with a CTC loss. Note that HMM models use phonemes as output, while wav2vec 2.0 use letter. Both are decoded using WFST decoders into words. - - -Please see [this README](kaldi_self_train/README.md) for more instructions on how to do iterative self-training + Kaldi LM-decoding. - -*** Note: these instructions are a work in progress and will be updated over the next few days diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/audio/speech_to_text_dataset.py b/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/audio/speech_to_text_dataset.py deleted file mode 100644 index 164bf413e4fd41b895348c9ef0bb57421843eb17..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/fairseq/fairseq/data/audio/speech_to_text_dataset.py +++ /dev/null @@ -1,525 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- -import csv -import io -import logging -import re -from collections import defaultdict -from pathlib import Path -from typing import Dict, List, Optional -from dataclasses import dataclass - -import numpy as np -import torch -from fairseq.data import ( - ConcatDataset, - Dictionary, - FairseqDataset, - ResamplingDataset, - data_utils as fairseq_data_utils, -) -from fairseq.data.audio.audio_utils import ( - get_fbank, - get_waveform, - read_from_stored_zip, - is_npy_data, - is_sf_audio_data, - parse_path, - FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS, -) -from fairseq.data.audio.feature_transforms import CompositeAudioFeatureTransform -from fairseq.data.audio.data_cfg import S2TDataConfig - - -logger = logging.getLogger(__name__) - - -def get_features_from_npy_or_audio(path): - ext = Path(path).suffix - if ext not in FEATURE_OR_SF_AUDIO_FILE_EXTENSIONS: - raise ValueError(f'Unsupported file format for "{path}"') - return np.load(path) if ext == ".npy" else get_fbank(path) - - -def get_features_or_waveform_from_stored_zip( - path, byte_offset, byte_size, need_waveform=False, use_sample_rate=None, -): - assert path.endswith(".zip") - data = read_from_stored_zip(path, byte_offset, byte_size) - f = io.BytesIO(data) - if is_npy_data(data): - features_or_waveform = np.load(f) - elif is_sf_audio_data(data): - features_or_waveform = \ - get_waveform( - f, always_2d=False, output_sample_rate=use_sample_rate - )[0] if need_waveform else get_fbank(f) - else: - raise ValueError(f'Unknown file format for "{path}"') - return features_or_waveform - - -def get_features_or_waveform( - path: str, need_waveform=False, use_sample_rate=None -): - """Get speech features from .npy file or waveform from .wav/.flac file. - The file may be inside an uncompressed ZIP file and is accessed via byte - offset and length. - - Args: - path (str): File path in the format of "<.npy/.wav/.flac path>" or - "::". - need_waveform (bool): return waveform instead of features. - use_sample_rate (int): change sample rate for the input wave file - - Returns: - features_or_waveform (numpy.ndarray): speech features or waveform. - """ - _path, slice_ptr = parse_path(path) - if len(slice_ptr) == 0: - if need_waveform: - return get_waveform( - _path, always_2d=False, output_sample_rate=use_sample_rate - )[0] - return get_features_from_npy_or_audio(_path) - elif len(slice_ptr) == 2: - features_or_waveform = get_features_or_waveform_from_stored_zip( - _path, slice_ptr[0], slice_ptr[1], need_waveform=need_waveform, - use_sample_rate=use_sample_rate - ) - else: - raise ValueError(f"Invalid path: {path}") - - return features_or_waveform - - -def _collate_frames( - frames: List[torch.Tensor], is_audio_input: bool = False -) -> torch.Tensor: - """ - Convert a list of 2D frames into a padded 3D tensor - Args: - frames (list): list of 2D frames of size L[i]*f_dim. 
Where L[i] is - length of i-th frame and f_dim is static dimension of features - Returns: - 3D tensor of size len(frames)*len_max*f_dim where len_max is max of L[i] - """ - max_len = max(frame.size(0) for frame in frames) - if is_audio_input: - out = frames[0].new_zeros((len(frames), max_len)) - else: - out = frames[0].new_zeros((len(frames), max_len, frames[0].size(1))) - for i, v in enumerate(frames): - out[i, : v.size(0)] = v - return out - - -@dataclass -class SpeechToTextDatasetItem(object): - index: int - source: torch.Tensor - target: Optional[torch.Tensor] = None - speaker_id: Optional[int] = None - - -class SpeechToTextDataset(FairseqDataset): - LANG_TAG_TEMPLATE = "" - - def __init__( - self, - split: str, - is_train_split: bool, - cfg: S2TDataConfig, - audio_paths: List[str], - n_frames: List[int], - src_texts: Optional[List[str]] = None, - tgt_texts: Optional[List[str]] = None, - speakers: Optional[List[str]] = None, - src_langs: Optional[List[str]] = None, - tgt_langs: Optional[List[str]] = None, - ids: Optional[List[str]] = None, - tgt_dict: Optional[Dictionary] = None, - pre_tokenizer=None, - bpe_tokenizer=None, - n_frames_per_step=1, - speaker_to_id=None - ): - self.split, self.is_train_split = split, is_train_split - self.cfg = cfg - self.audio_paths, self.n_frames = audio_paths, n_frames - self.n_samples = len(audio_paths) - assert len(n_frames) == self.n_samples > 0 - assert src_texts is None or len(src_texts) == self.n_samples - assert tgt_texts is None or len(tgt_texts) == self.n_samples - assert speakers is None or len(speakers) == self.n_samples - assert src_langs is None or len(src_langs) == self.n_samples - assert tgt_langs is None or len(tgt_langs) == self.n_samples - assert ids is None or len(ids) == self.n_samples - assert (tgt_dict is None and tgt_texts is None) or ( - tgt_dict is not None and tgt_texts is not None - ) - self.src_texts, self.tgt_texts = src_texts, tgt_texts - self.src_langs, self.tgt_langs = src_langs, tgt_langs - self.speakers = speakers - self.tgt_dict = tgt_dict - self.check_tgt_lang_tag() - self.ids = ids - self.shuffle = cfg.shuffle if is_train_split else False - - self.feature_transforms = CompositeAudioFeatureTransform.from_config_dict( - self.cfg.get_feature_transforms(split, is_train_split) - ) - - self.pre_tokenizer = pre_tokenizer - self.bpe_tokenizer = bpe_tokenizer - self.n_frames_per_step = n_frames_per_step - self.speaker_to_id = speaker_to_id - - self.tgt_lens = self.get_tgt_lens_and_check_oov() - - logger.info(self.__repr__()) - - def get_tgt_lens_and_check_oov(self): - if self.tgt_texts is None: - return [0 for _ in range(self.n_samples)] - tgt_lens = [] - n_tokens, n_oov_tokens = 0, 0 - for i in range(self.n_samples): - tokenized = self.get_tokenized_tgt_text(i).split(" ") - oov_tokens = [ - t - for t in tokenized - if self.tgt_dict.index(t) == self.tgt_dict.unk_index - ] - n_tokens += len(tokenized) - n_oov_tokens += len(oov_tokens) - tgt_lens.append(len(tokenized)) - logger.info(f"'{self.split}' has {n_oov_tokens / n_tokens * 100:.2f}% OOV") - return tgt_lens - - def __repr__(self): - return ( - self.__class__.__name__ - + f'(split="{self.split}", n_samples={self.n_samples:_}, ' - f"prepend_tgt_lang_tag={self.cfg.prepend_tgt_lang_tag}, " - f"shuffle={self.shuffle}, transforms={self.feature_transforms}, " - f"n_frames_per_step={self.n_frames_per_step}" - ) - - @classmethod - def is_lang_tag(cls, token): - pattern = cls.LANG_TAG_TEMPLATE.replace("{}", "(.*)") - return re.match(pattern, token) - - def check_tgt_lang_tag(self): 
- if self.cfg.prepend_tgt_lang_tag: - assert self.tgt_langs is not None and self.tgt_dict is not None - tgt_lang_tags = [ - self.LANG_TAG_TEMPLATE.format(t) for t in set(self.tgt_langs) - ] - assert all(t in self.tgt_dict for t in tgt_lang_tags) - - @classmethod - def tokenize(cls, tokenizer, text: str): - return text if tokenizer is None else tokenizer.encode(text) - - def get_tokenized_tgt_text(self, index: int): - text = self.tokenize(self.pre_tokenizer, self.tgt_texts[index]) - text = self.tokenize(self.bpe_tokenizer, text) - return text - - def pack_frames(self, feature: torch.Tensor): - if self.n_frames_per_step == 1: - return feature - n_packed_frames = feature.shape[0] // self.n_frames_per_step - feature = feature[:self.n_frames_per_step * n_packed_frames] - return feature.reshape(n_packed_frames, -1) - - @classmethod - def get_lang_tag_idx(cls, lang: str, dictionary: Dictionary): - lang_tag_idx = dictionary.index(cls.LANG_TAG_TEMPLATE.format(lang)) - assert lang_tag_idx != dictionary.unk() - return lang_tag_idx - - def __getitem__(self, index: int) -> SpeechToTextDatasetItem: - source = get_features_or_waveform( - self.audio_paths[index], - need_waveform=self.cfg.use_audio_input, - use_sample_rate=self.cfg.use_sample_rate, - ) - if self.feature_transforms is not None: - assert not self.cfg.use_audio_input - source = self.feature_transforms(source) - source = torch.from_numpy(source).float() - source = self.pack_frames(source) - - target = None - if self.tgt_texts is not None: - tokenized = self.get_tokenized_tgt_text(index) - target = self.tgt_dict.encode_line( - tokenized, add_if_not_exist=False, append_eos=True - ).long() - if self.cfg.prepend_tgt_lang_tag: - lang_tag_idx = self.get_lang_tag_idx( - self.tgt_langs[index], self.tgt_dict - ) - target = torch.cat((torch.LongTensor([lang_tag_idx]), target), 0) - - speaker_id = None - if self.speaker_to_id is not None: - speaker_id = self.speaker_to_id[self.speakers[index]] - return SpeechToTextDatasetItem( - index=index, source=source, target=target, speaker_id=speaker_id - ) - - def __len__(self): - return self.n_samples - - def collater( - self, samples: List[SpeechToTextDatasetItem], return_order: bool = False - ) -> Dict: - if len(samples) == 0: - return {} - indices = torch.tensor([x.index for x in samples], dtype=torch.long) - frames = _collate_frames([x.source for x in samples], self.cfg.use_audio_input) - # sort samples by descending number of frames - n_frames = torch.tensor([x.source.size(0) for x in samples], dtype=torch.long) - n_frames, order = n_frames.sort(descending=True) - indices = indices.index_select(0, order) - frames = frames.index_select(0, order) - - target, target_lengths = None, None - prev_output_tokens = None - ntokens = None - if self.tgt_texts is not None: - target = fairseq_data_utils.collate_tokens( - [x.target for x in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=False, - ) - target = target.index_select(0, order) - target_lengths = torch.tensor( - [x.target.size(0) for x in samples], dtype=torch.long - ).index_select(0, order) - prev_output_tokens = fairseq_data_utils.collate_tokens( - [x.target for x in samples], - self.tgt_dict.pad(), - self.tgt_dict.eos(), - left_pad=False, - move_eos_to_beginning=True, - ) - prev_output_tokens = prev_output_tokens.index_select(0, order) - ntokens = sum(x.target.size(0) for x in samples) - - speaker = None - if self.speaker_to_id is not None: - speaker = torch.tensor( - [s.speaker_id for s in samples], 
dtype=torch.long - ).index_select(0, order).view(-1, 1) - - net_input = { - "src_tokens": frames, - "src_lengths": n_frames, - "prev_output_tokens": prev_output_tokens, - } - out = { - "id": indices, - "net_input": net_input, - "speaker": speaker, - "target": target, - "target_lengths": target_lengths, - "ntokens": ntokens, - "nsentences": len(samples), - } - if return_order: - out["order"] = order - return out - - def num_tokens(self, index): - return self.n_frames[index] - - def size(self, index): - return self.n_frames[index], self.tgt_lens[index] - - @property - def sizes(self): - return np.array(self.n_frames) - - @property - def can_reuse_epoch_itr_across_epochs(self): - return True - - def ordered_indices(self): - if self.shuffle: - order = [np.random.permutation(len(self))] - else: - order = [np.arange(len(self))] - # first by descending order of # of frames then by original/random order - order.append([-n for n in self.n_frames]) - return np.lexsort(order) - - def prefetch(self, indices): - raise False - - -class SpeechToTextDatasetCreator(object): - # mandatory columns - KEY_ID, KEY_AUDIO, KEY_N_FRAMES = "id", "audio", "n_frames" - KEY_TGT_TEXT = "tgt_text" - # optional columns - KEY_SPEAKER, KEY_SRC_TEXT = "speaker", "src_text" - KEY_SRC_LANG, KEY_TGT_LANG = "src_lang", "tgt_lang" - # default values - DEFAULT_SPEAKER = DEFAULT_SRC_TEXT = DEFAULT_LANG = "" - - @classmethod - def _from_list( - cls, - split_name: str, - is_train_split, - samples: List[Dict], - cfg: S2TDataConfig, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id - ) -> SpeechToTextDataset: - audio_root = Path(cfg.audio_root) - ids = [s[cls.KEY_ID] for s in samples] - audio_paths = [(audio_root / s[cls.KEY_AUDIO]).as_posix() for s in samples] - n_frames = [int(s[cls.KEY_N_FRAMES]) for s in samples] - tgt_texts = [s[cls.KEY_TGT_TEXT] for s in samples] - src_texts = [s.get(cls.KEY_SRC_TEXT, cls.DEFAULT_SRC_TEXT) for s in samples] - speakers = [s.get(cls.KEY_SPEAKER, cls.DEFAULT_SPEAKER) for s in samples] - src_langs = [s.get(cls.KEY_SRC_LANG, cls.DEFAULT_LANG) for s in samples] - tgt_langs = [s.get(cls.KEY_TGT_LANG, cls.DEFAULT_LANG) for s in samples] - return SpeechToTextDataset( - split_name, - is_train_split, - cfg, - audio_paths, - n_frames, - src_texts=src_texts, - tgt_texts=tgt_texts, - speakers=speakers, - src_langs=src_langs, - tgt_langs=tgt_langs, - ids=ids, - tgt_dict=tgt_dict, - pre_tokenizer=pre_tokenizer, - bpe_tokenizer=bpe_tokenizer, - n_frames_per_step=n_frames_per_step, - speaker_to_id=speaker_to_id - ) - - @classmethod - def get_size_ratios( - cls, datasets: List[SpeechToTextDataset], alpha: float = 1.0 - ) -> List[float]: - """Size ratios for temperature-based sampling - (https://arxiv.org/abs/1907.05019)""" - - id_to_lp, lp_to_sz = {}, defaultdict(int) - for ds in datasets: - lang_pairs = {f"{s}->{t}" for s, t in zip(ds.src_langs, ds.tgt_langs)} - assert len(lang_pairs) == 1 - lang_pair = list(lang_pairs)[0] - id_to_lp[ds.split] = lang_pair - lp_to_sz[lang_pair] += sum(ds.n_frames) - - sz_sum = sum(v for v in lp_to_sz.values()) - lp_to_prob = {k: v / sz_sum for k, v in lp_to_sz.items()} - lp_to_tgt_prob = {k: v ** alpha for k, v in lp_to_prob.items()} - prob_sum = sum(v for v in lp_to_tgt_prob.values()) - lp_to_tgt_prob = {k: v / prob_sum for k, v in lp_to_tgt_prob.items()} - lp_to_sz_ratio = { - k: (lp_to_tgt_prob[k] * sz_sum) / v for k, v in lp_to_sz.items() - } - size_ratio = [lp_to_sz_ratio[id_to_lp[ds.split]] for ds in datasets] - - p_formatted = { - k: 
f"{lp_to_prob[k]:.3f}->{lp_to_tgt_prob[k]:.3f}" for k in lp_to_sz - } - logger.info(f"sampling probability balancing: {p_formatted}") - sr_formatted = {ds.split: f"{r:.3f}" for ds, r in zip(datasets, size_ratio)} - logger.info(f"balanced sampling size ratio: {sr_formatted}") - return size_ratio - - @classmethod - def _load_samples_from_tsv(cls, root: str, split: str): - tsv_path = Path(root) / f"{split}.tsv" - if not tsv_path.is_file(): - raise FileNotFoundError(f"Dataset not found: {tsv_path}") - with open(tsv_path) as f: - reader = csv.DictReader( - f, - delimiter="\t", - quotechar=None, - doublequote=False, - lineterminator="\n", - quoting=csv.QUOTE_NONE, - ) - samples = [dict(e) for e in reader] - if len(samples) == 0: - raise ValueError(f"Empty manifest: {tsv_path}") - return samples - - @classmethod - def _from_tsv( - cls, - root: str, - cfg: S2TDataConfig, - split: str, - tgt_dict, - is_train_split: bool, - pre_tokenizer, - bpe_tokenizer, - n_frames_per_step, - speaker_to_id - ) -> SpeechToTextDataset: - samples = cls._load_samples_from_tsv(root, split) - return cls._from_list( - split, is_train_split, samples, cfg, tgt_dict, pre_tokenizer, - bpe_tokenizer, n_frames_per_step, speaker_to_id - ) - - @classmethod - def from_tsv( - cls, - root: str, - cfg: S2TDataConfig, - splits: str, - tgt_dict, - pre_tokenizer, - bpe_tokenizer, - is_train_split: bool, - epoch: int, - seed: int, - n_frames_per_step: int = 1, - speaker_to_id=None - ) -> SpeechToTextDataset: - datasets = [ - cls._from_tsv( - root, cfg, split, tgt_dict, is_train_split, pre_tokenizer, - bpe_tokenizer, n_frames_per_step, speaker_to_id - ) - for split in splits.split(",") - ] - - if is_train_split and len(datasets) > 1 and cfg.sampling_alpha != 1.0: - # temperature-based sampling - size_ratios = cls.get_size_ratios(datasets, alpha=cfg.sampling_alpha) - datasets = [ - ResamplingDataset( - d, size_ratio=r, seed=seed, epoch=epoch, replace=(r >= 1.0) - ) - for r, d in zip(size_ratios, datasets) - ] - - return ConcatDataset(datasets) if len(datasets) > 1 else datasets[0] diff --git a/spaces/OFA-Sys/OFA-Generic_Interface/ofa_module/__init__.py b/spaces/OFA-Sys/OFA-Generic_Interface/ofa_module/__init__.py deleted file mode 100644 index 30b147a95464b55f55a0dd1dc8555ca69ebec358..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Generic_Interface/ofa_module/__init__.py +++ /dev/null @@ -1,5 +0,0 @@ -import data -import models -import tasks -import criterions -import utils \ No newline at end of file diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/m2m_100/tokenizers/tokenize_zh.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/m2m_100/tokenizers/tokenize_zh.py deleted file mode 100644 index 674b5849cba829cf4f07a69369e9cc6eed376d4c..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/examples/m2m_100/tokenizers/tokenize_zh.py +++ /dev/null @@ -1,14 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. 
- - -import fileinput - -import sacrebleu - - -for line in fileinput.input(): - print(sacrebleu.tokenize_zh(line)) diff --git a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/roberta/__init__.py b/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/roberta/__init__.py deleted file mode 100644 index 4cd723ae96aec8e3182773483f123109d23b620e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-Image_Caption/fairseq/fairseq/models/roberta/__init__.py +++ /dev/null @@ -1,11 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -from .hub_interface import * # noqa -from .model import * # noqa -from .enc_dec import * # noqa -from .model_camembert import * # noqa -from .model_gottbert import * # noqa -from .model_xlmr import * # noqa diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/token_generation_constraints.py b/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/token_generation_constraints.py deleted file mode 100644 index e708dc51bcb0ffb7b411496239c74d5e6f3c2448..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/fairseq/token_generation_constraints.py +++ /dev/null @@ -1,506 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -"""Implements tracking of constraints for a beam item. - -A list of constraints is given as a list of one or more token -sequences, each of length at least one token. For example, for an input sentence - -> Die maschinelle Übersetzung ist schwer zu kontrollieren. - -We could have the constraints: -* to influence -* hard - -There are two implementations: -* OrderedConstraintState: Tracks progress through an ordered list of multitoken constraints. -* UnorderedConstraintState: Tracks progress through an unordered list of multitoken constraints. - -The difference is that in the first, the constraints are assumed to be -in order; the algorithm will permit zero or more tokens between them. -In the second, the constraints are not ordered, so many orderings will -be explored. - -The same sequence can be present any number of times, and will appear -that many times in the output. -""" - -from collections import Counter -from typing import List, Optional, Set, Tuple - -import torch - - -class ConstraintState: - def __init__(self): - pass - - -def pack_constraints(batch_constraints: List[List[torch.Tensor]]) -> torch.Tensor: - """Takes a list of list of constraints in tensor form (a list of - tensor constraints for each sentence) and transforms it into a - packed Tensor. For example, here is a batch of size 3 with 3, 0, - and 1 constraints: - - [ [ [3 1 2], [3], [4 5 6 7], ] - [], - [ [1 8 9 10 1 4 11 12], ] - ] - - Its corresponding packed structure is: - - [ [ 3 3 1 2 0 3 0 4 5 6 7 0], - [ 0 0 0 0 0 0 0 0 0 0 0 0], - [ 1 1 8 9 10 1 4 11 12 0 0 0] ] - - The packed tensor has shape (batch size, maxlen), where - maxlen is defined below. Each row contains concatenated - constraint tokens for that sentence, with 0 appended after - each constraint. The first item in each row is the number - of constraints for that sentence. So maxlen is the maximum - of - - (number of constraints) + (sum length of constraints) + 1. - - across all sentences in the batch. 
- """ - # The maximum word length of concatenated constraints for any sentence - max_constraints_len = 1 - for sentence_constraints in batch_constraints: - if len(sentence_constraints): - # number of constraints, plus sum of constrain lens, plus a zero after each - constraints_len = ( - 1 - + sum([c.size(0) for c in sentence_constraints]) - + len(sentence_constraints) - ) - max_constraints_len = max(max_constraints_len, constraints_len) - - batch_size = len(batch_constraints) - constraints_tensor = torch.zeros((batch_size, max_constraints_len)).long() - for i, sentence_constraints in enumerate(batch_constraints): - constraints_tensor[i, 0] = len(sentence_constraints) - offset = 1 - for j, constraint in enumerate(sentence_constraints): - this_len = constraint.size(0) - constraints_tensor[i, offset : offset + this_len] = constraint - offset += this_len + 1 - - return constraints_tensor.long() - - -def unpack_constraints(constraint_tensor: torch.Tensor) -> List[torch.Tensor]: - """ - Transforms *one row* of a packed constraint tensor (e.g., for one - sentence in the batch) into a list of constraint tensors. - """ - constraint_list = [] - num_constraints = constraint_tensor[0] - constraints = constraint_tensor.tolist() - offset = 1 - for i in range(num_constraints): - where = constraints.index(0, offset) - constraint_list.append(constraint_tensor[offset:where]) - offset = where + 1 - - return constraint_list - - -class ConstraintNode: - """ - Represents a node in a trie managing unordered constraints. - """ - - def __init__(self, token: int = None, parent=None): - # The token associate with this node (None for the root) - self.token = int(token) if token is not None else None - # The parent (None at the root) - self.parent = parent - # Whether this node is a completed constraint - self.terminal = 0 - # List of child nodes - self.children = {} - - # The cumulative number of constraints from this point in the - # trie forward - self.num_constraints = 0 - - @property - def id(self): - return self.token - - def __str__(self): - term = self.terminal != 0 - return f"[{self.token}].{term}#{self.num_constraints}" - - def __getitem__(self, key: int): - return self.children.get(key, None) - - def next_tokens(self) -> Set[int]: - """The set of child labels.""" - return set(self.children.keys()) - - @staticmethod - def create(constraints: List[List[int]]): - root = ConstraintNode() - for sequence in constraints: - root.add_sequence(sequence) - - return root - - @staticmethod - def print_graph(node: "ConstraintNode"): - if len(node.children) == 0: - return str(node) - else: - s = f"({node}" - for child in node.children.values(): - s += " " + ConstraintNode.print_graph(child) - s += ")" - return s - - def token_counts(self) -> Counter: - """Returns a counter of the number of times each token is used - in a constraint. 
- """ - token_counts = Counter() - kids = list(self.children.values()) - while len(kids) > 0: - kid = kids.pop() - token_counts[kid.id] += kid.num_constraints - kids += list(kid.children.values()) - - return token_counts - - def tokens(self) -> Set[int]: - """Returns the set of tokens in constraints.""" - return set(self.token_counts().keys()) - - def add_sequence(self, sequence: List[int]): - """Adds a constraint, represented as a list of integers, to - the trie.""" - assert len(sequence) > 0 - - token = int(sequence[0]) - if token not in self.children: - self.children[token] = ConstraintNode(token, parent=self) - - node = self.children[token] - if len(sequence) == 1: - node.terminal += 1 - node.num_constraints += 1 - parent = node.parent - while parent is not None: - parent.num_constraints += 1 - parent = parent.parent - else: - node.add_sequence(sequence[1:]) - - -class UnorderedConstraintState(ConstraintState): - """ - Records progress through the set of constraints for each item in the beam - using a trie. - """ - - def __init__(self, node: ConstraintNode, copy_from: "ConstraintState" = None): - self.node = node - - if copy_from is None: - # The root node - self.root = node - # The set of states in the graph that have been completed - self.completed = Counter() - # The... - self.generated = Counter() - # The list of tokens we need to generate - self.needed_tokens = self.root.tokens() - else: - self.completed = Counter(copy_from.completed) - self.generated = Counter(copy_from.generated) - self.root = copy_from.root - - # Mark the node as generated - if self.node != self.root: - self.generated[node] += 1 - - @staticmethod - def create(constraint_tensor: torch.Tensor): - constraint_list = unpack_constraints(constraint_tensor) - constraint_trie_root = ConstraintNode.create(constraint_list) - return UnorderedConstraintState(constraint_trie_root) - - def __str__(self): - gen_str = ",".join([str(node) for node in self.generated]) - return f"{self.name}/{self.bank}({gen_str})x{self.num_completed}" - - def __copy__(self): - copied_state = UnorderedConstraintState(self.node, copy_from=self) - return copied_state - - def copy(self): - return self.__copy__() - - @property - def name(self): - if self.node.id is None: - return "ROOT" - else: - return str(self.node.id) - - @property - def is_root(self): - return self.node == self.root - - @property - def bank(self): - return sum(self.generated.values()) - - @property - def num_completed(self): - """The number of constraints (not constraint tokens) that are completed. - In addition to the already-completed states, we need to account for the - current state, which might get marked as completed when another token - is generated. - """ - in_final = self.node.terminal and self.completed[self.node] < self.node.terminal - return sum(self.completed.values()) + in_final - - @property - def finished(self): - return self.root.num_constraints - self.num_completed == 0 - - @property - def token_counts(self): - return self.root.token_counts() - - @property - def tokens(self): - return self.root.tokens() - - @property - def num_constraint_tokens(self): - return sum(self.token_counts.values()) - - def next_tokens(self) -> Set[int]: - """Returns the list of tokens that could come next. 
- These are (a) all tokens extending the root state and, for - non-root states, additionally all tokens extending the current - state.""" - - if self.node != self.root: - return self.root.next_tokens().union(self.node.next_tokens()) - else: - return self.root.next_tokens() - - def advance(self, token: int): - """Reads in a token and advances the state. Here's how it works. - - We can advance to the next state if: - - there is a matching child - - its path isn't blocked - - A path is blocked when all constraints that are descendants of - that node have already been generated, in the current state. - - If we are not able to advance from the current state, we "fall - off the graph" and return to the root state. There, we again - try to advance, checking the same criteria. - - In any case, when falling off the graph, we need to do some - bookkeeping. We: - - check whether any constraints were met (all prefixes of - current state) - - if one is found, mark it as completed - - adjust visited nodes accordingly - """ - token = int(token) - - next_state = None - child = self.node[token] - if child is not None and self.generated[child] < child.num_constraints: - next_state = UnorderedConstraintState(child, copy_from=self) - - def rewind(): - """If we're mid-trie and an "illegal" token is chosen next, we need - to reset our state to the root state. However, along the way, we need - to check whether a prefix of the current trie state represents a state - we could mark as completed. - """ - node = self.node - while node != self.root: - if node.terminal and self.completed[node] < node.terminal: - next_state.completed[node] += 1 - return - - next_state.generated[node] -= 1 - node = node.parent - - # Fall off the graph, check the root - if next_state is None and token in self.root.next_tokens(): - child = self.root[token] - # We can only traverse this edge if it's not saturated - if self.generated[child] < child.num_constraints: - next_state = UnorderedConstraintState(child, copy_from=self) - else: - next_state = UnorderedConstraintState(self.root, copy_from=self) - - # Rewind - rewind() - - elif next_state is None: - next_state = UnorderedConstraintState(self.root, copy_from=self) - # Rewind - rewind() - - return next_state - - -class ConstraintSequence: - def __init__(self, sequences: List[List[int]]): - """Represents a set of possibly multitoken constraints by - concatenating them and internally recording the end points. - """ - self.sequences = [] - self.endpoints = [] - self.num_tokens = 0 - self.tokens = set() - for sequence in sequences: - for token in sequence: - self.tokens.add(token) - self.num_tokens += len(sequence) - self.endpoints += [False for x in range(len(sequence) - 1)] + [True] - self.sequences += sequence - - def __getitem__(self, key: int): - return self.sequences[key] - - def __len__(self): - return len(self.sequences) - - def __str__(self): - return str(self.sequences) - - -class OrderedConstraintState(ConstraintState): - """ - Records progress through the set of linear nonbranching constraints with gaps. 
- """ - - def __init__(self, sequence: ConstraintSequence, state: int = -1): - self.sequence = sequence - self.state = state - - @staticmethod - def create(constraint_tensor: torch.Tensor): - constraint_list = unpack_constraints(constraint_tensor) - return OrderedConstraintState(ConstraintSequence(constraint_list), -1) - - def __str__(self): - return f"{self.state}/{self.bank}x{self.num_completed}" - - def __copy__(self): - return OrderedConstraintState(self.sequence, self.state) - - def copy(self): - return self.__copy__() - - @property - def num_completed(self): - if self.state == -1: - return 0 - count = len( - list(filter(lambda x: x, self.sequence.endpoints[0 : self.state + 1])) - ) - return count - - @property - def is_root(self): - return self.state == -1 - - @property - def name(self): - if self.state == -1: - return "ROOT" - else: - return str(self.sequence[self.state]) - - @property - def bank(self) -> int: - return self.state + 1 - - @property - def finished(self): - return self.state + 1 == len(self.sequence) - - @property - def token_counts(self): - return self.sequence.token_counts() - - @property - def tokens(self): - return self.sequence.tokens - - @property - def num_constraint_tokens(self): - return sum(self.token_counts.values()) - - def next_tokens(self) -> Set[int]: - """Returns the list of tokens that could come next. - These are (a) all tokens extending the root state and, for - non-root states, additionally all tokens extending the current - state.""" - - tokens = set() - if self.state > 0: - tokens.add(self.sequence[0]) - if not self.finished: - tokens.add(self.sequence[self.state + 1]) - return tokens - - def advance(self, token: int): - """Reads in a token and advances the state. Here's how it works. - - We can advance to the next state if: - - there is a matching child - - its path isn't blocked - - A path is blocked when all constraints that are descendants of - that node have already been generated, in the current state. - - If we are not able to advance from the current state, we "fall - off the graph" and return to the root state. There, we again - try to advance, checking the same criteria. - - In any case, when falling off the graph, we need to do some - bookkeeping. We: - - check whether any constraints were met (all prefixes of - current state) - - if one is found, mark it as completed - - adjust visited nodes accordingly - """ - token = int(token) - # print(f"{self} ADVANCE({token}) {self.sequence} -> ", end="") - - if self.finished: - # Accept anything - next_state = self.copy() - - elif self.sequence[self.state + 1] == token: - # Advance to the next token - next_state = OrderedConstraintState(self.sequence, self.state + 1) - - elif self.sequence.endpoints[self.state]: - # Accept anything between constraints (*) - next_state = self.copy() - - elif token == self.sequence[0]: - # Start over having generated the first token - next_state = OrderedConstraintState(self.sequence, 0) - else: - # Start over from the root - next_state = OrderedConstraintState(self.sequence, -1) - - return next_state diff --git a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/distributed/test_bmuf.py b/spaces/OFA-Sys/OFA-vqa/fairseq/tests/distributed/test_bmuf.py deleted file mode 100644 index 8b7cadb094d49587b6b82432248459fdcf42457e..0000000000000000000000000000000000000000 --- a/spaces/OFA-Sys/OFA-vqa/fairseq/tests/distributed/test_bmuf.py +++ /dev/null @@ -1,207 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-# -# This source code is licensed under the MIT license found in the -# LICENSE file in the root directory of this source tree. - -import argparse -import functools -import random -import unittest -from multiprocessing import Manager - -import torch -import torch.nn as nn -from fairseq import optim -from fairseq.distributed import utils as distributed_utils -from omegaconf import OmegaConf - - -class Model(nn.Module): - def __init__(self, input_size, output_size): - super(Model, self).__init__() - self.fc = nn.Linear(input_size, output_size) - - def forward(self, input): - output = self.fc(input) - return output - - -def setup_model_loss_criterion(cfg, args, rank, is_cuda): - """ - setup model, criterion and optimizer based on input args - """ - args.distributed_rank = rank - cfg.distributed_training.distributed_rank = args.distributed_rank - if cfg.distributed_training.distributed_world_size > 1: - distributed_utils.distributed_init(cfg) - torch.manual_seed(1) - model = Model(args.input_size, args.nb_classes) - loss_fn = nn.CrossEntropyLoss() - if is_cuda: - model = model.cuda() - loss_fn = loss_fn.cuda() - - optimizer = optim.sgd.SGD(args, model.parameters()) - optimizer = optim.FairseqBMUF( - cfg=cfg.bmuf, - optimizer=optimizer - ) - - return model, loss_fn, optimizer - - -def train_step(input, target, model, loss_fn, optimizer, **unused): - """Do forward, backward and parameter update.""" - model.train() - output = model(input) - loss = loss_fn(output, target) - optimizer.backward(loss) - optimizer.step() - - -def single_gpu_training(cfg, args, rank, iterations, shared_results): - - is_cuda = torch.cuda.is_available() - if is_cuda: - torch.cuda.set_device(rank) - - model, loss_fn, optimizer = setup_model_loss_criterion(cfg, args, rank, is_cuda) - - for _ in range(iterations): - input = torch.randn(1, args.input_size) - target = torch.empty(args.batch_size, dtype=torch.long).random_(args.nb_classes) - - if is_cuda: - input = input.cuda() - target = target.cuda() - train_step(input, target, model, loss_fn, optimizer) - - results = [] - for param in model.parameters(): - if len(results) == 0: - results = param.flatten().cpu().data - else: - results = torch.cat((results, param.flatten().cpu().data), 0) - - shared_results[rank] = results - - -def setup_args(): - args = argparse.Namespace() - args.global_sync_iter = 20 - args.block_momentum = 0.875 - args.block_lr = 0.5 - args.input_size = 5 - args.nb_classes = 2 - args.batch_size = 1 - args.lr = [1e-3] - args.momentum = 0 - args.weight_decay = 0 - args.warmup_iterations = 0 - args.use_nbm = True - args.average_sync = True - args.global_sync_iter = 1 - args.model_parallel_size = 1 - args.distributed_backend = "gloo" - - args.distributed_world_size = 2 - port = random.randint(10000, 20000) - args.distributed_init_method = "tcp://localhost:{port}".format(port=port) - args.distributed_init_host = "localhost" - args.distributed_port = port + 1 - args.local_world_size = args.distributed_world_size - - cfg = OmegaConf.create() - cfg.optimization = OmegaConf.create() - cfg.common = OmegaConf.create() - cfg.distributed_training = OmegaConf.create() - cfg.dataset = OmegaConf.create() - cfg.bmuf = OmegaConf.create() - cfg.optimizer = OmegaConf.create() - - cfg.bmuf.global_sync_iter = args.global_sync_iter - cfg.bmuf.block_momentum = args.block_momentum - cfg.bmuf.block_lr = args.block_lr - cfg.dataset.batch_size = args.batch_size - cfg.optimization.lr = args.lr - cfg.optimizer.momentum = args.momentum - cfg.optimizer.weight_decay = args.weight_decay 
- cfg.bmuf.warmup_iterations = args.warmup_iterations - cfg.bmuf.use_nbm = args.use_nbm - cfg.bmuf.average_sync = args.average_sync - cfg.common.model_parallel_size = args.model_parallel_size - cfg.distributed_training.distributed_backend = args.distributed_backend - cfg.distributed_training.distributed_world_size = args.distributed_world_size - cfg.bmuf.distributed_world_size = args.distributed_world_size - cfg.distributed_training.distributed_init_method = args.distributed_init_method - cfg.distributed_training.distributed_port = args.distributed_port - - return cfg, args - - -@unittest.skipIf(torch.cuda.device_count() < 2, "test requires 2 GPUs") -class TestBMUF(unittest.TestCase): - def bmuf_process(self, cfg, args, iterations): - processes = [] - results = Manager().dict() - torch.multiprocessing.spawn( - fn=functools.partial(single_gpu_training, cfg, args), - args=(iterations, results), - nprocs=args.distributed_world_size, - join=True, - ) - return results - - def test_bmuf_sync(self): - # Train model for 1 iteration and do bmuf sync without doing warmup - cfg, args = setup_args() - iterations = 1 - results = self.bmuf_process(cfg, args, iterations) - # Make sure params in both machines are same - assert len(results) == 2 - self.assertAlmostEqual(results[0], results[1]) - - def test_warmup_sync(self): - # Train model for 20 iteration and do warmup sync without doing bmuf sync - cfg, args = setup_args() - args.warmup_iterations = 20 - cfg.bmuf.warmup_iterations = args.warmup_iterations - iterations = 20 - results = self.bmuf_process(cfg, args, iterations) - # Make sure params in both machines are same - assert len(results) == 2 - self.assertAlmostEqual(results[0], results[1]) - - def test_warmup_sync_bmuf_sync(self): - # Train model for 25 iteration and do warmup sync after 20 iteration - # and bmuf sync after 25 iteration - cfg, args = setup_args() - args.warmup_iterations = 20 - args.global_sync_iter = 5 - cfg.bmuf.warmup_iterations = args.warmup_iterations - cfg.bmuf.global_sync_iter = args.global_sync_iter - iterations = 25 - results = self.bmuf_process(cfg, args, iterations) - # Make sure params in both machines are same - assert len(results) == 2 - self.assertAlmostEqual(results[0], results[1]) - - def test_single_gpu_bmuf(self): - # Train model for 5 iterations and use GPU 1 - cfg, args = setup_args() - args.distributed_world_size = 1 - args.warmup_iterations = 5 - cfg.distributed_training.distributed_world_size = args.distributed_world_size - cfg.bmuf.distributed_world_size = args.distributed_world_size - cfg.bmuf.warmup_iterations = args.warmup_iterations - iterations = 20 - results = self.bmuf_process(cfg, args, iterations) - assert len(results) == 1 - - def assertAlmostEqual(self, t1, t2): - self.assertEqual(t1.size(), t2.size(), "size mismatch") - self.assertLess((t1 - t2).abs().max(), 1e-4) - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/PKUWilliamYang/StyleGANEX/utils/common.py b/spaces/PKUWilliamYang/StyleGANEX/utils/common.py deleted file mode 100644 index 4813fe311ee40720697e4862c5fbfad811d39237..0000000000000000000000000000000000000000 --- a/spaces/PKUWilliamYang/StyleGANEX/utils/common.py +++ /dev/null @@ -1,87 +0,0 @@ -import cv2 -import numpy as np -from PIL import Image -import matplotlib.pyplot as plt - - -# Log images -def log_input_image(x, opts): - if opts.label_nc == 0: - return tensor2im(x) - elif opts.label_nc == 1: - return tensor2sketch(x) - else: - return tensor2map(x) - - -def tensor2im(var): - var = 
var.cpu().detach().transpose(0, 2).transpose(0, 1).numpy() - var = ((var + 1) / 2) - var[var < 0] = 0 - var[var > 1] = 1 - var = var * 255 - return Image.fromarray(var.astype('uint8')) - - -def tensor2map(var): - mask = np.argmax(var.data.cpu().numpy(), axis=0) - colors = get_colors() - mask_image = np.ones(shape=(mask.shape[0], mask.shape[1], 3)) - for class_idx in np.unique(mask): - mask_image[mask == class_idx] = colors[class_idx] - mask_image = mask_image.astype('uint8') - return Image.fromarray(mask_image) - - -def tensor2sketch(var): - im = var[0].cpu().detach().numpy() - im = cv2.cvtColor(im, cv2.COLOR_GRAY2BGR) - im = (im * 255).astype(np.uint8) - return Image.fromarray(im) - - -# Visualization utils -def get_colors(): - # currently support up to 19 classes (for the celebs-hq-mask dataset) - colors = [[0, 0, 0], [204, 0, 0], [76, 153, 0], [204, 204, 0], [51, 51, 255], [204, 0, 204], [0, 255, 255], - [255, 204, 204], [102, 51, 0], [255, 0, 0], [102, 204, 0], [255, 255, 0], [0, 0, 153], [0, 0, 204], - [255, 51, 153], [0, 204, 204], [0, 51, 0], [255, 153, 51], [0, 204, 0]] - return colors - - -def vis_faces(log_hooks): - display_count = len(log_hooks) - fig = plt.figure(figsize=(8, 4 * display_count)) - gs = fig.add_gridspec(display_count, 3) - for i in range(display_count): - hooks_dict = log_hooks[i] - fig.add_subplot(gs[i, 0]) - if 'diff_input' in hooks_dict: - vis_faces_with_id(hooks_dict, fig, gs, i) - else: - vis_faces_no_id(hooks_dict, fig, gs, i) - plt.tight_layout() - return fig - - -def vis_faces_with_id(hooks_dict, fig, gs, i): - plt.imshow(hooks_dict['input_face']) - plt.title('Input\nOut Sim={:.2f}'.format(float(hooks_dict['diff_input']))) - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target\nIn={:.2f}, Out={:.2f}'.format(float(hooks_dict['diff_views']), - float(hooks_dict['diff_target']))) - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output\n Target Sim={:.2f}'.format(float(hooks_dict['diff_target']))) - - -def vis_faces_no_id(hooks_dict, fig, gs, i): - plt.imshow(hooks_dict['input_face'], cmap="gray") - plt.title('Input') - fig.add_subplot(gs[i, 1]) - plt.imshow(hooks_dict['target_face']) - plt.title('Target') - fig.add_subplot(gs[i, 2]) - plt.imshow(hooks_dict['output_face']) - plt.title('Output') diff --git a/spaces/PaddlePaddle/ERNIE-Zeus/app.py b/spaces/PaddlePaddle/ERNIE-Zeus/app.py deleted file mode 100644 index 3ed9f1c4f560ee715080ba1c642afbca751cb4e5..0000000000000000000000000000000000000000 --- a/spaces/PaddlePaddle/ERNIE-Zeus/app.py +++ /dev/null @@ -1,217 +0,0 @@ -import gradio as gr -import paddlehub as hub - -ernie_zeus = hub.Module(name='ernie_zeus') - - -def inference(task: str, - text: str, - min_dec_len: int = 2, - seq_len: int = 512, - topp: float = 0.9, - penalty_score: float = 1.0): - - func = getattr(ernie_zeus, task) - try: - result = func(text, min_dec_len, seq_len, topp, penalty_score) - return result - except Exception as error: - return str(error) - - -title = "ERNIE-Zeus" - -description = "ERNIE-Zeus model, which supports Chinese text generates task." 
- -css = """ - .gradio-container { - font-family: 'IBM Plex Sans', sans-serif; - } - .gr-button { - color: white; - border-color: black; - background: black; - } - input[type='range'] { - accent-color: black; - } - .dark input[type='range'] { - accent-color: #dfdfdf; - } - .container { - max-width: 730px; - margin: auto; - padding-top: 1.5rem; - } - - .details:hover { - text-decoration: underline; - } - .gr-button { - white-space: nowrap; - } - .gr-button:focus { - border-color: rgb(147 197 253 / var(--tw-border-opacity)); - outline: none; - box-shadow: var(--tw-ring-offset-shadow), var(--tw-ring-shadow), var(--tw-shadow, 0 0 #0000); - --tw-border-opacity: 1; - --tw-ring-offset-shadow: var(--tw-ring-inset) 0 0 0 var(--tw-ring-offset-width) var(--tw-ring-offset-color); - --tw-ring-shadow: var(--tw-ring-inset) 0 0 0 calc(3px var(--tw-ring-offset-width)) var(--tw-ring-color); - --tw-ring-color: rgb(191 219 254 / var(--tw-ring-opacity)); - --tw-ring-opacity: .5; - } - .footer { - margin-bottom: 45px; - margin-top: 35px; - text-align: center; - border-bottom: 1px solid #e5e5e5; - } - .footer>p { - font-size: .8rem; - display: inline-block; - padding: 0 10px; - transform: translateY(10px); - background: white; - } - .dark .footer { - border-color: #303030; - } - .dark .footer>p { - background: #0b0f19; - } - .prompt h4{ - margin: 1.25em 0 .25em 0; - font-weight: bold; - font-size: 115%; - } -""" - -block = gr.Blocks(css=css) - -examples = [ - [ - 'text_summarization', - '外媒7月18日报道,阿联酋政府当日证实该国将建设首个核电站,以应对不断上涨的用电需求。分析称阿联酋作为世界第三大石油出口国,更愿意将该能源用于出口,而非发电。首座核反应堆预计在2017年运行。cntv李婉然编译报道', - 4, 512, 0.3, 1.0 - ], - [ - 'copywriting_generation', - '芍药香氛的沐浴乳', - 8, 512, 0.9, 1.2 - ], - [ - 'novel_continuation', - '昆仑山可以说是天下龙脉的根源,所有的山脉都可以看作是昆仑的分支。这些分出来的枝枝杈杈,都可以看作是一条条独立的龙脉。', - 2, 512, 0.9, 1.2 - ], - [ - 'answer_generation', - '做生意的基本原则是什么?', - 2, 512, 0.5, 1.2 - ], - [ - 'couplet_continuation', - '天增岁月人增寿', - 2, 512, 1.0, 1.0 - ], - [ - 'composition_generation', - '拔河比赛', - 128, 512, 0.9, 1.2 - ], - [ - 'text_cloze', - '她有着一双[MASK]的眼眸。', - 1, 512, 0.3, 1.2 - ], -] - -with block: - gr.HTML( - """ -
-
- Paddlehub -
-
-

- ERNIE-Zeus Demo -

-
-

- ERNIE-Zeus is a state-of-the-art Chinese text generation model. -

-
- """ - ) - with gr.Blocks(): - text = gr.Textbox( - label="input_text", - placeholder="Please enter Chinese text.", - ) - task = gr.Dropdown(label="task", - choices=[ - 'text_summarization', - 'copywriting_generation', - 'novel_continuation', - 'answer_generation', - 'couplet_continuation', - 'composition_generation', - 'text_cloze' - ], - value='text_summarization') - - min_dec_len = gr.Slider( - minimum=1, maximum=511, value=1, label="min_dec_len", step=1, interactive=True) - seq_len = gr.Slider(minimum=2, maximum=512, value=128, - label="seq_len", step=1, interactive=True) - topp = gr.Slider(minimum=0.0, maximum=1.0, value=1.0, - label="topp", step=0.01, interactive=True) - penalty_score = gr.Slider( - minimum=1.0, maximum=2.0, value=1.0, label="penalty_score", step=0.01, interactive=True) - - text_gen = gr.Textbox(label="generated_text") - btn = gr.Button(value="Generate text") - - ex = gr.Examples(examples=examples, fn=inference, inputs=[ - task, text, min_dec_len, seq_len, topp, penalty_score], outputs=text_gen, cache_examples=False) - - text.submit(inference, inputs=[ - task, text, min_dec_len, seq_len, topp, penalty_score], outputs=text_gen) - btn.click(inference, inputs=[ - task, text, min_dec_len, seq_len, topp, penalty_score], outputs=text_gen) - gr.Markdown( - ''' -## More -* There are more interesting models in [PaddleHub](https://github.com/PaddlePaddle/PaddleHub), you can star [PaddleHub](https://github.com/PaddlePaddle/PaddleHub) to follow. -* Besides, you can use free GPU resourses in [AIStudio](https://aistudio.baidu.com/aistudio/projectdetail/4462918) to enjoy more cases, have fun. - [![](https://user-images.githubusercontent.com/22424850/187849103-074cb6d2-a9b4-49a1-b1f0-fc130049769f.png)](https://github.com/PaddlePaddle/PaddleHub/stargazers) - ''' - ) - gr.HTML( - """ - - """ - ) - -block.queue(max_size=100000, concurrency_count=100000).launch() diff --git a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/vm/coverage.go b/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/vm/coverage.go deleted file mode 100644 index 8248f104cb5b5639fec27f638d80478220671371..0000000000000000000000000000000000000000 Binary files a/spaces/Pattr/DrumClassification/lilypond-2.24.2/lib/guile/2.2/ccache/system/vm/coverage.go and /dev/null differ diff --git a/spaces/PeepDaSlan9/rvc-models/README.md b/spaces/PeepDaSlan9/rvc-models/README.md deleted file mode 100644 index 56936f1df15477c0ae2fdcfe59a77c175e1905d8..0000000000000000000000000000000000000000 --- a/spaces/PeepDaSlan9/rvc-models/README.md +++ /dev/null @@ -1,14 +0,0 @@ ---- -title: Rvc Models -emoji: 🎤 -colorFrom: red -colorTo: blue -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit -duplicated_from: zomehwh/rvc-models ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/temp_dir.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/temp_dir.py deleted file mode 100644 index 8ee8a1cb18017880cd0bebd66bc2cec5702118c6..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_internal/utils/temp_dir.py +++ /dev/null @@ -1,246 +0,0 @@ -import errno -import itertools -import logging -import os.path -import tempfile -from contextlib import ExitStack, contextmanager -from typing import Any, Dict, Generator, Optional, TypeVar, Union - 
-from pip._internal.utils.misc import enum, rmtree - -logger = logging.getLogger(__name__) - -_T = TypeVar("_T", bound="TempDirectory") - - -# Kinds of temporary directories. Only needed for ones that are -# globally-managed. -tempdir_kinds = enum( - BUILD_ENV="build-env", - EPHEM_WHEEL_CACHE="ephem-wheel-cache", - REQ_BUILD="req-build", -) - - -_tempdir_manager: Optional[ExitStack] = None - - -@contextmanager -def global_tempdir_manager() -> Generator[None, None, None]: - global _tempdir_manager - with ExitStack() as stack: - old_tempdir_manager, _tempdir_manager = _tempdir_manager, stack - try: - yield - finally: - _tempdir_manager = old_tempdir_manager - - -class TempDirectoryTypeRegistry: - """Manages temp directory behavior""" - - def __init__(self) -> None: - self._should_delete: Dict[str, bool] = {} - - def set_delete(self, kind: str, value: bool) -> None: - """Indicate whether a TempDirectory of the given kind should be - auto-deleted. - """ - self._should_delete[kind] = value - - def get_delete(self, kind: str) -> bool: - """Get configured auto-delete flag for a given TempDirectory type, - default True. - """ - return self._should_delete.get(kind, True) - - -_tempdir_registry: Optional[TempDirectoryTypeRegistry] = None - - -@contextmanager -def tempdir_registry() -> Generator[TempDirectoryTypeRegistry, None, None]: - """Provides a scoped global tempdir registry that can be used to dictate - whether directories should be deleted. - """ - global _tempdir_registry - old_tempdir_registry = _tempdir_registry - _tempdir_registry = TempDirectoryTypeRegistry() - try: - yield _tempdir_registry - finally: - _tempdir_registry = old_tempdir_registry - - -class _Default: - pass - - -_default = _Default() - - -class TempDirectory: - """Helper class that owns and cleans up a temporary directory. - - This class can be used as a context manager or as an OO representation of a - temporary directory. - - Attributes: - path - Location to the created temporary directory - delete - Whether the directory should be deleted when exiting - (when used as a contextmanager) - - Methods: - cleanup() - Deletes the temporary directory - - When used as a context manager, if the delete attribute is True, on - exiting the context the temporary directory is deleted. - """ - - def __init__( - self, - path: Optional[str] = None, - delete: Union[bool, None, _Default] = _default, - kind: str = "temp", - globally_managed: bool = False, - ): - super().__init__() - - if delete is _default: - if path is not None: - # If we were given an explicit directory, resolve delete option - # now. - delete = False - else: - # Otherwise, we wait until cleanup and see what - # tempdir_registry says. - delete = None - - # The only time we specify path is in for editables where it - # is the value of the --src option. 
- if path is None: - path = self._create(kind) - - self._path = path - self._deleted = False - self.delete = delete - self.kind = kind - - if globally_managed: - assert _tempdir_manager is not None - _tempdir_manager.enter_context(self) - - @property - def path(self) -> str: - assert not self._deleted, f"Attempted to access deleted path: {self._path}" - return self._path - - def __repr__(self) -> str: - return f"<{self.__class__.__name__} {self.path!r}>" - - def __enter__(self: _T) -> _T: - return self - - def __exit__(self, exc: Any, value: Any, tb: Any) -> None: - if self.delete is not None: - delete = self.delete - elif _tempdir_registry: - delete = _tempdir_registry.get_delete(self.kind) - else: - delete = True - - if delete: - self.cleanup() - - def _create(self, kind: str) -> str: - """Create a temporary directory and store its path in self.path""" - # We realpath here because some systems have their default tmpdir - # symlinked to another directory. This tends to confuse build - # scripts, so we canonicalize the path by traversing potential - # symlinks here. - path = os.path.realpath(tempfile.mkdtemp(prefix=f"pip-{kind}-")) - logger.debug("Created temporary directory: %s", path) - return path - - def cleanup(self) -> None: - """Remove the temporary directory created and reset state""" - self._deleted = True - if not os.path.exists(self._path): - return - rmtree(self._path) - - -class AdjacentTempDirectory(TempDirectory): - """Helper class that creates a temporary directory adjacent to a real one. - - Attributes: - original - The original directory to create a temp directory for. - path - After calling create() or entering, contains the full - path to the temporary directory. - delete - Whether the directory should be deleted when exiting - (when used as a contextmanager) - - """ - - # The characters that may be used to name the temp directory - # We always prepend a ~ and then rotate through these until - # a usable name is found. - # pkg_resources raises a different error for .dist-info folder - # with leading '-' and invalid metadata - LEADING_CHARS = "-~.=%0123456789" - - def __init__(self, original: str, delete: Optional[bool] = None) -> None: - self.original = original.rstrip("/\\") - super().__init__(delete=delete) - - @classmethod - def _generate_names(cls, name: str) -> Generator[str, None, None]: - """Generates a series of temporary names. - - The algorithm replaces the leading characters in the name - with ones that are valid filesystem characters, but are not - valid package names (for both Python and pip definitions of - package). - """ - for i in range(1, len(name)): - for candidate in itertools.combinations_with_replacement( - cls.LEADING_CHARS, i - 1 - ): - new_name = "~" + "".join(candidate) + name[i:] - if new_name != name: - yield new_name - - # If we make it this far, we will have to make a longer name - for i in range(len(cls.LEADING_CHARS)): - for candidate in itertools.combinations_with_replacement( - cls.LEADING_CHARS, i - ): - new_name = "~" + "".join(candidate) + name - if new_name != name: - yield new_name - - def _create(self, kind: str) -> str: - root, name = os.path.split(self.original) - for candidate in self._generate_names(name): - path = os.path.join(root, candidate) - try: - os.mkdir(path) - except OSError as ex: - # Continue if the name exists already - if ex.errno != errno.EEXIST: - raise - else: - path = os.path.realpath(path) - break - else: - # Final fallback on the default behavior. 
- path = os.path.realpath(tempfile.mkdtemp(prefix=f"pip-{kind}-")) - - logger.debug("Created temporary directory: %s", path) - return path diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/_collections.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/_collections.py deleted file mode 100644 index da9857e986d89acac3ba05a6735dc08c249bde1a..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/pip/_vendor/urllib3/_collections.py +++ /dev/null @@ -1,337 +0,0 @@ -from __future__ import absolute_import - -try: - from collections.abc import Mapping, MutableMapping -except ImportError: - from collections import Mapping, MutableMapping -try: - from threading import RLock -except ImportError: # Platform-specific: No threads available - - class RLock: - def __enter__(self): - pass - - def __exit__(self, exc_type, exc_value, traceback): - pass - - -from collections import OrderedDict - -from .exceptions import InvalidHeader -from .packages import six -from .packages.six import iterkeys, itervalues - -__all__ = ["RecentlyUsedContainer", "HTTPHeaderDict"] - - -_Null = object() - - -class RecentlyUsedContainer(MutableMapping): - """ - Provides a thread-safe dict-like container which maintains up to - ``maxsize`` keys while throwing away the least-recently-used keys beyond - ``maxsize``. - - :param maxsize: - Maximum number of recent elements to retain. - - :param dispose_func: - Every time an item is evicted from the container, - ``dispose_func(value)`` is called. Callback which will get called - """ - - ContainerCls = OrderedDict - - def __init__(self, maxsize=10, dispose_func=None): - self._maxsize = maxsize - self.dispose_func = dispose_func - - self._container = self.ContainerCls() - self.lock = RLock() - - def __getitem__(self, key): - # Re-insert the item, moving it to the end of the eviction line. - with self.lock: - item = self._container.pop(key) - self._container[key] = item - return item - - def __setitem__(self, key, value): - evicted_value = _Null - with self.lock: - # Possibly evict the existing value of 'key' - evicted_value = self._container.get(key, _Null) - self._container[key] = value - - # If we didn't evict an existing value, we might have to evict the - # least recently used item from the beginning of the container. - if len(self._container) > self._maxsize: - _key, evicted_value = self._container.popitem(last=False) - - if self.dispose_func and evicted_value is not _Null: - self.dispose_func(evicted_value) - - def __delitem__(self, key): - with self.lock: - value = self._container.pop(key) - - if self.dispose_func: - self.dispose_func(value) - - def __len__(self): - with self.lock: - return len(self._container) - - def __iter__(self): - raise NotImplementedError( - "Iteration over this class is unlikely to be threadsafe." - ) - - def clear(self): - with self.lock: - # Copy pointers to all values, then wipe the mapping - values = list(itervalues(self._container)) - self._container.clear() - - if self.dispose_func: - for value in values: - self.dispose_func(value) - - def keys(self): - with self.lock: - return list(iterkeys(self._container)) - - -class HTTPHeaderDict(MutableMapping): - """ - :param headers: - An iterable of field-value pairs. Must not contain multiple field names - when compared case-insensitively. - - :param kwargs: - Additional field-value pairs to pass in to ``dict.update``. - - A ``dict`` like container for storing HTTP Headers. 
- - Field names are stored and compared case-insensitively in compliance with - RFC 7230. Iteration provides the first case-sensitive key seen for each - case-insensitive pair. - - Using ``__setitem__`` syntax overwrites fields that compare equal - case-insensitively in order to maintain ``dict``'s api. For fields that - compare equal, instead create a new ``HTTPHeaderDict`` and use ``.add`` - in a loop. - - If multiple fields that are equal case-insensitively are passed to the - constructor or ``.update``, the behavior is undefined and some will be - lost. - - >>> headers = HTTPHeaderDict() - >>> headers.add('Set-Cookie', 'foo=bar') - >>> headers.add('set-cookie', 'baz=quxx') - >>> headers['content-length'] = '7' - >>> headers['SET-cookie'] - 'foo=bar, baz=quxx' - >>> headers['Content-Length'] - '7' - """ - - def __init__(self, headers=None, **kwargs): - super(HTTPHeaderDict, self).__init__() - self._container = OrderedDict() - if headers is not None: - if isinstance(headers, HTTPHeaderDict): - self._copy_from(headers) - else: - self.extend(headers) - if kwargs: - self.extend(kwargs) - - def __setitem__(self, key, val): - self._container[key.lower()] = [key, val] - return self._container[key.lower()] - - def __getitem__(self, key): - val = self._container[key.lower()] - return ", ".join(val[1:]) - - def __delitem__(self, key): - del self._container[key.lower()] - - def __contains__(self, key): - return key.lower() in self._container - - def __eq__(self, other): - if not isinstance(other, Mapping) and not hasattr(other, "keys"): - return False - if not isinstance(other, type(self)): - other = type(self)(other) - return dict((k.lower(), v) for k, v in self.itermerged()) == dict( - (k.lower(), v) for k, v in other.itermerged() - ) - - def __ne__(self, other): - return not self.__eq__(other) - - if six.PY2: # Python 2 - iterkeys = MutableMapping.iterkeys - itervalues = MutableMapping.itervalues - - __marker = object() - - def __len__(self): - return len(self._container) - - def __iter__(self): - # Only provide the originally cased names - for vals in self._container.values(): - yield vals[0] - - def pop(self, key, default=__marker): - """D.pop(k[,d]) -> v, remove specified key and return the corresponding value. - If key is not found, d is returned if given, otherwise KeyError is raised. - """ - # Using the MutableMapping function directly fails due to the private marker. - # Using ordinary dict.pop would expose the internal structures. - # So let's reinvent the wheel. - try: - value = self[key] - except KeyError: - if default is self.__marker: - raise - return default - else: - del self[key] - return value - - def discard(self, key): - try: - del self[key] - except KeyError: - pass - - def add(self, key, val): - """Adds a (name, value) pair, doesn't overwrite the value if it already - exists. - - >>> headers = HTTPHeaderDict(foo='bar') - >>> headers.add('Foo', 'baz') - >>> headers['foo'] - 'bar, baz' - """ - key_lower = key.lower() - new_vals = [key, val] - # Keep the common case aka no item present as fast as possible - vals = self._container.setdefault(key_lower, new_vals) - if new_vals is not vals: - vals.append(val) - - def extend(self, *args, **kwargs): - """Generic import function for any type of header-like object. 
- Adapted version of MutableMapping.update in order to insert items - with self.add instead of self.__setitem__ - """ - if len(args) > 1: - raise TypeError( - "extend() takes at most 1 positional " - "arguments ({0} given)".format(len(args)) - ) - other = args[0] if len(args) >= 1 else () - - if isinstance(other, HTTPHeaderDict): - for key, val in other.iteritems(): - self.add(key, val) - elif isinstance(other, Mapping): - for key in other: - self.add(key, other[key]) - elif hasattr(other, "keys"): - for key in other.keys(): - self.add(key, other[key]) - else: - for key, value in other: - self.add(key, value) - - for key, value in kwargs.items(): - self.add(key, value) - - def getlist(self, key, default=__marker): - """Returns a list of all the values for the named field. Returns an - empty list if the key doesn't exist.""" - try: - vals = self._container[key.lower()] - except KeyError: - if default is self.__marker: - return [] - return default - else: - return vals[1:] - - # Backwards compatibility for httplib - getheaders = getlist - getallmatchingheaders = getlist - iget = getlist - - # Backwards compatibility for http.cookiejar - get_all = getlist - - def __repr__(self): - return "%s(%s)" % (type(self).__name__, dict(self.itermerged())) - - def _copy_from(self, other): - for key in other: - val = other.getlist(key) - if isinstance(val, list): - # Don't need to convert tuples - val = list(val) - self._container[key.lower()] = [key] + val - - def copy(self): - clone = type(self)() - clone._copy_from(self) - return clone - - def iteritems(self): - """Iterate over all header lines, including duplicate ones.""" - for key in self: - vals = self._container[key.lower()] - for val in vals[1:]: - yield vals[0], val - - def itermerged(self): - """Iterate over all headers, merging duplicate ones together.""" - for key in self: - val = self._container[key.lower()] - yield val[0], ", ".join(val[1:]) - - def items(self): - return list(self.iteritems()) - - @classmethod - def from_httplib(cls, message): # Python 2 - """Read headers from a Python 2 httplib message object.""" - # python2.7 does not expose a proper API for exporting multiheaders - # efficiently. This function re-reads raw lines from the message - # object and extracts the multiheaders properly. - obs_fold_continued_leaders = (" ", "\t") - headers = [] - - for line in message.headers: - if line.startswith(obs_fold_continued_leaders): - if not headers: - # We received a header line that starts with OWS as described - # in RFC-7230 S3.2.4. This indicates a multiline header, but - # there exists no previous header to which we can attach it. 
- raise InvalidHeader( - "Header continuation with no previous header: %s" % line - ) - else: - key, value = headers[-1] - headers[-1] = (key, value + " " + line.strip()) - continue - - key, value = line.split(":", 1) - headers.append((key, value.strip())) - - return cls(headers) diff --git a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools_rust/build.py b/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools_rust/build.py deleted file mode 100644 index 21c19758ea78f7405613af81af358811adf0e649..0000000000000000000000000000000000000000 --- a/spaces/Raspberry-ai/main/.env/lib/python3.11/site-packages/setuptools_rust/build.py +++ /dev/null @@ -1,716 +0,0 @@ -from __future__ import annotations - -import glob -import json -import os -import platform -import shutil -import subprocess -import sys -import sysconfig -from distutils import log -from distutils.errors import ( - CompileError, - DistutilsExecError, - DistutilsFileError, - DistutilsPlatformError, -) -from distutils.sysconfig import get_config_var -from pathlib import Path -from typing import Dict, Iterable, List, NamedTuple, Optional, Set, Tuple, cast - -import pkg_resources -from setuptools.command.build import build as CommandBuild # type: ignore[import] -from setuptools.command.build_ext import build_ext as CommandBuildExt -from setuptools.command.build_ext import get_abi3_suffix -from typing_extensions import Literal - -from ._utils import format_called_process_error -from .command import RustCommand -from .extension import Binding, RustBin, RustExtension, Strip -from .rustc_info import get_rust_host, get_rust_target_list, get_rustc_cfgs - - -class build_rust(RustCommand): - """Command for building Rust crates via cargo.""" - - description = "build Rust extensions (compile/link to build directory)" - - user_options = [ - ( - "inplace", - "i", - "ignore build-lib and put compiled extensions into the source " - + "directory alongside your pure Python modules", - ), - ("debug", "d", "Force debug to true for all Rust extensions "), - ("release", "r", "Force debug to false for all Rust extensions "), - ("qbuild", None, "Force enable quiet option for all Rust extensions "), - ( - "build-temp", - "t", - "directory for temporary files (cargo 'target' directory) ", - ), - ("target=", None, "Build for the target triple"), - ] - boolean_options = ["inplace", "debug", "release", "qbuild"] - - plat_name: Optional[str] = None - - def initialize_options(self) -> None: - super().initialize_options() - self.inplace = None - self.debug = None - self.release = None - self.qbuild = None - self.build_temp = None - self.plat_name = None - self.build_number = None - self.target = os.getenv("CARGO_BUILD_TARGET") - self.cargo = os.getenv("CARGO", "cargo") - - def finalize_options(self) -> None: - super().finalize_options() - - self.data_dir = self.get_data_dir() - - if self.plat_name is None: - self.plat_name = cast( # type: ignore[no-any-unimported] - CommandBuild, self.get_finalized_command("build") - ).plat_name - assert isinstance(self.plat_name, str) - - # Inherit settings from the `build_ext` command - self.set_undefined_options( - "build_ext", - ("build_temp", "build_temp"), - ("debug", "debug"), - ("inplace", "inplace"), - ) - - if self.build_number is not None and not self.build_number[:1].isdigit(): - raise ValueError("Build tag (build-number) must start with a digit.") - - def get_data_dir(self) -> str: - components = ( - pkg_resources.safe_name(self.distribution.get_name()).replace("-", "_"), # type: 
ignore[attr-defined] - pkg_resources.safe_version(self.distribution.get_version()).replace("-", "_"), # type: ignore[attr-defined] - ) - if self.build_number: - components += (self.build_number,) - return "-".join(components) + ".data" - - def run_for_extension(self, ext: RustExtension) -> None: - assert self.plat_name is not None - - arch_flags = os.getenv("ARCHFLAGS") - universal2 = False - if self.plat_name.startswith("macosx-") and arch_flags: - universal2 = "x86_64" in arch_flags and "arm64" in arch_flags - if not universal2 and not self.target: - if "arm64" in arch_flags: - self.target = "aarch64-apple-darwin" - elif "x86_64" in arch_flags: - self.target = "x86_64-apple-darwin" - - if universal2: - arm64_dylib_paths = self.build_extension(ext, "aarch64-apple-darwin") - x86_64_dylib_paths = self.build_extension(ext, "x86_64-apple-darwin") - dylib_paths = [] - for (target_fname, arm64_dylib), (_, x86_64_dylib) in zip( - arm64_dylib_paths, x86_64_dylib_paths - ): - fat_dylib_path = arm64_dylib.replace("aarch64-apple-darwin/", "") - create_universal2_binary(fat_dylib_path, [arm64_dylib, x86_64_dylib]) - dylib_paths.append(_BuiltModule(target_fname, fat_dylib_path)) - else: - dylib_paths = self.build_extension(ext, self.target) - self.install_extension(ext, dylib_paths) - - def build_extension( - self, ext: RustExtension, forced_target_triple: Optional[str] = None - ) -> List["_BuiltModule"]: - - target_triple = self._detect_rust_target(forced_target_triple) - rustc_cfgs = get_rustc_cfgs(target_triple) - - env = _prepare_build_environment() - - if not os.path.exists(ext.path): - raise DistutilsFileError( - f"can't find Rust extension project file: {ext.path}" - ) - - quiet = self.qbuild or ext.quiet - debug = self._is_debug_build(ext) - - cargo_args = self._cargo_args( - ext=ext, target_triple=target_triple, release=not debug, quiet=quiet - ) - - rustflags = [] - - if ext._uses_exec_binding(): - command = [ - self.cargo, - "build", - "--manifest-path", - ext.path, - "--message-format=json-render-diagnostics", - *cargo_args, - ] - - else: - rustc_args = [ - "--crate-type", - "cdylib", - *ext.rustc_flags, - ] - - # OSX requires special linker arguments - if rustc_cfgs.get("target_os") == "macos": - ext_basename = os.path.basename(self.get_dylib_ext_path(ext, ext.name)) - rustc_args.extend( - [ - "-C", - f"link-args=-undefined dynamic_lookup -Wl,-install_name,@rpath/{ext_basename}", - ] - ) - - # Tell musl targets not to statically link libc. See - # https://github.com/rust-lang/rust/issues/59302 for details. 
- if rustc_cfgs.get("target_env") == "musl": - # This must go in the env otherwise rustc will refuse to build - # the cdylib, see https://github.com/rust-lang/cargo/issues/10143 - rustflags.append("-Ctarget-feature=-crt-static") - - elif (rustc_cfgs.get("target_arch"), rustc_cfgs.get("target_os")) == ( - "wasm32", - "emscripten", - ): - rustc_args.extend(["-C", f"link-args=-sSIDE_MODULE=2 -sWASM_BIGINT"]) - - command = [ - self.cargo, - "rustc", - "--lib", - "--message-format=json-render-diagnostics", - "--manifest-path", - ext.path, - *cargo_args, - "--", - *rustc_args, - ] - - if rustflags: - existing_rustflags = env.get("RUSTFLAGS") - if existing_rustflags is not None: - rustflags.append(existing_rustflags) - new_rustflags = " ".join(rustflags) - env["RUSTFLAGS"] = new_rustflags - - # print RUSTFLAGS being added before the command - if not quiet: - print(f"[RUSTFLAGS={new_rustflags}]", end=" ", file=sys.stderr) - - if not quiet: - print(" ".join(command), file=sys.stderr) - - # Execute cargo - try: - # If quiet, capture all output and only show it in the exception - # If not quiet, forward all cargo output to stderr - stderr = subprocess.PIPE if quiet else None - cargo_messages = subprocess.check_output( - command, - env=env, - stderr=stderr, - text=True, - ) - except subprocess.CalledProcessError as e: - # Don't include stdout in the formatted error as it is a huge dump - # of cargo json lines which aren't helpful for the end user. - raise CompileError(format_called_process_error(e, include_stdout=False)) - - except OSError: - raise DistutilsExecError( - "Unable to execute 'cargo' - this package " - "requires Rust to be installed and cargo to be on the PATH" - ) - - # Find the shared library that cargo hopefully produced and copy - # it into the build directory as if it were produced by build_ext. 
- - dylib_paths = [] - package_id = ext.metadata(quiet=quiet)["resolve"]["root"] - - if ext._uses_exec_binding(): - # Find artifact from cargo messages - artifacts = _find_cargo_artifacts( - cargo_messages.splitlines(), - package_id=package_id, - kinds={"bin"}, - ) - for name, dest in ext.target.items(): - if not name: - name = dest.split(".")[-1] - - try: - artifact_path = next( - artifact - for artifact in artifacts - if Path(artifact).with_suffix("").name == name - ) - except StopIteration: - raise DistutilsExecError( - f"Rust build failed; unable to locate executable '{name}'" - ) - - if os.environ.get("CARGO") == "cross": - artifact_path = _replace_cross_target_dir( - artifact_path, ext, quiet=quiet - ) - - dylib_paths.append(_BuiltModule(dest, artifact_path)) - else: - # Find artifact from cargo messages - artifacts = _find_cargo_artifacts( - cargo_messages.splitlines(), - package_id=package_id, - kinds={"cdylib", "dylib"}, - ) - if len(artifacts) == 0: - raise DistutilsExecError( - "Rust build failed; unable to find any cdylib or dylib build artifacts" - ) - elif len(artifacts) > 1: - raise DistutilsExecError( - f"Rust build failed; expected only one cdylib or dylib build artifact but found {artifacts}" - ) - - artifact_path = artifacts[0] - - if os.environ.get("CARGO") == "cross": - artifact_path = _replace_cross_target_dir( - artifact_path, ext, quiet=quiet - ) - - # guaranteed to be just one element after checks above - dylib_paths.append(_BuiltModule(ext.name, artifact_path)) - return dylib_paths - - def install_extension( - self, ext: RustExtension, dylib_paths: List["_BuiltModule"] - ) -> None: - debug_build = ext.debug if ext.debug is not None else self.inplace - debug_build = self.debug if self.debug is not None else debug_build - if self.release: - debug_build = False - - # Ask build_ext where the shared library would go if it had built it, - # then copy it there. 
- build_ext = cast(CommandBuildExt, self.get_finalized_command("build_ext")) - build_ext.inplace = self.inplace - - for module_name, dylib_path in dylib_paths: - if not module_name: - module_name = os.path.basename( - os.path.splitext(os.path.basename(dylib_path)[3:])[0] - ) - - if ext._uses_exec_binding(): - ext_path = build_ext.get_ext_fullpath(module_name) - # remove extensions - ext_path, _, _ = _split_platform_and_extension(ext_path) - - # Add expected extension - exe = sysconfig.get_config_var("EXE") - if exe is not None: - ext_path += exe - - os.makedirs(os.path.dirname(ext_path), exist_ok=True) - if isinstance(ext, RustBin): - executable_name = module_name - if exe is not None: - executable_name += exe - scripts_dir = os.path.join( - build_ext.build_lib, self.data_dir, "scripts" - ) - os.makedirs(scripts_dir, exist_ok=True) - ext_path = os.path.join(scripts_dir, executable_name) - else: - ext.install_script(module_name.split(".")[-1], ext_path) - else: - ext_path = self.get_dylib_ext_path(ext, module_name) - os.makedirs(os.path.dirname(ext_path), exist_ok=True) - - log.info("Copying rust artifact from %s to %s", dylib_path, ext_path) - shutil.copyfile(dylib_path, ext_path) - - if sys.platform != "win32" and not debug_build: - args = [] - if ext.strip == Strip.All: - args.append("-x") - elif ext.strip == Strip.Debug: - args.append("-S") - - if args: - args.insert(0, "strip") - args.append(ext_path) - try: - output = subprocess.check_output(args) - except subprocess.CalledProcessError: - pass - - # executables, win32(cygwin)-dll's, and shared libraries on - # Unix-like operating systems need X bits - mode = os.stat(ext_path).st_mode - mode |= (mode & 0o444) >> 2 # copy R bits to X - os.chmod(ext_path, mode) - - def get_dylib_ext_path(self, ext: RustExtension, target_fname: str) -> str: - assert self.plat_name is not None - build_ext = cast(CommandBuildExt, self.get_finalized_command("build_ext")) - - ext_path: str = build_ext.get_ext_fullpath(target_fname) - - if _is_py_limited_api(ext.py_limited_api, self._py_limited_api()): - abi3_suffix = get_abi3_suffix() - if abi3_suffix is not None: - so_ext = get_config_var("EXT_SUFFIX") - assert isinstance(so_ext, str) - ext_path = ext_path[: -len(so_ext)] + get_abi3_suffix() - - if ".abi3." 
in ext_path: - return ext_path - # Examples: linux_x86_64, linux_i686, manylinux2014_aarch64, manylinux_2_24_armv7l - plat_name = self.plat_name.lower().replace("-", "_").replace(".", "_") - if not plat_name.startswith(("linux", "manylinux")): - return ext_path - - arch_parts = [] - arch_found = False - for item in plat_name.split("_"): - if item.startswith(("linux", "manylinux")): - continue - if item.isdigit() and not arch_found: - # manylinux_2_24_armv7l arch should be armv7l - continue - arch_found = True - arch_parts.append(item) - target_arch = "_".join(arch_parts) - host_platform = sysconfig.get_platform() - host_arch = host_platform.rsplit("-", 1)[1] - # Remove incorrect platform tag if we are cross compiling - if target_arch and host_arch != target_arch: - ext_path, _, extension = _split_platform_and_extension(ext_path) - # rust.so, removed platform tag - ext_path += extension - return ext_path - - def _py_limited_api(self) -> _PyLimitedApi: - bdist_wheel = self.distribution.get_command_obj("bdist_wheel", create=False) - - if bdist_wheel is None: - # wheel package is not installed, not building a limited-api wheel - return False - else: - from wheel.bdist_wheel import bdist_wheel as CommandBdistWheel - - bdist_wheel_command = cast(CommandBdistWheel, bdist_wheel) # type: ignore[no-any-unimported] - bdist_wheel_command.ensure_finalized() - return cast(_PyLimitedApi, bdist_wheel_command.py_limited_api) - - def _detect_rust_target( - self, forced_target_triple: Optional[str] = None - ) -> Optional[str]: - assert self.plat_name is not None - if forced_target_triple is not None: - # Automatic target detection can be overridden via the CARGO_BUILD_TARGET - # environment variable or --target command line option - return forced_target_triple - - # Determine local rust target which needs to be "forced" if necessary - local_rust_target = _adjusted_local_rust_target(self.plat_name) - - # Match cargo's behaviour of not using an explicit target if the - # target we're compiling for is the host - if ( - local_rust_target is not None - # check for None first to avoid calling to rustc if not needed - and local_rust_target != get_rust_host() - ): - return local_rust_target - - return None - - def _is_debug_build(self, ext: RustExtension) -> bool: - if self.release: - return False - elif self.debug is not None: - return self.debug - elif ext.debug is not None: - return ext.debug - else: - return bool(self.inplace) - - def _cargo_args( - self, - ext: RustExtension, - target_triple: Optional[str], - release: bool, - quiet: bool, - ) -> List[str]: - args = [] - if target_triple is not None: - args.extend(["--target", target_triple]) - - if release: - profile = ext.get_cargo_profile() - if not profile: - args.append("--release") - - if quiet: - args.append("-q") - - elif self.verbose: - # cargo only have -vv - verbose_level = "v" * min(self.verbose, 2) - args.append(f"-{verbose_level}") - - features = { - *ext.features, - *_binding_features(ext, py_limited_api=self._py_limited_api()), - } - - if features: - args.extend(["--features", " ".join(features)]) - - if ext.args is not None: - args.extend(ext.args) - - if ext.cargo_manifest_args is not None: - args.extend(ext.cargo_manifest_args) - - return args - - -def create_universal2_binary(output_path: str, input_paths: List[str]) -> None: - # Try lipo first - command = ["lipo", "-create", "-output", output_path, *input_paths] - try: - subprocess.check_output(command, text=True) - except subprocess.CalledProcessError as e: - output = e.output - raise 
CompileError("lipo failed with code: %d\n%s" % (e.returncode, output)) - except OSError: - # lipo not found, try using the fat-macho library - try: - from fat_macho import FatWriter - except ImportError: - raise DistutilsExecError( - "failed to locate `lipo` or import `fat_macho.FatWriter`. " - "Try installing with `pip install fat-macho` " - ) - fat = FatWriter() - for input_path in input_paths: - with open(input_path, "rb") as f: - fat.add(f.read()) - fat.write_to(output_path) - - -class _BuiltModule(NamedTuple): - """ - Attributes: - - module_name: dotted python import path of the module - - path: the location the module has been installed at - """ - - module_name: str - path: str - - -def _replace_vendor_with_unknown(target: str) -> Optional[str]: - """Replaces vendor in the target triple with unknown. - - Returns None if the target is not made of 4 parts. - """ - components = target.split("-") - if len(components) != 4: - return None - components[1] = "unknown" - return "-".join(components) - - -def _prepare_build_environment() -> Dict[str, str]: - """Prepares environment variables to use when executing cargo build.""" - - # Make sure that if pythonXX-sys is used, it builds against the current - # executing python interpreter. - bindir = os.path.dirname(sys.executable) - - env = os.environ.copy() - env.update( - { - # disables rust's pkg-config seeking for specified packages, - # which causes pythonXX-sys to fall back to detecting the - # interpreter from the path. - "PATH": os.path.join(bindir, os.environ.get("PATH", "")), - "PYTHON_SYS_EXECUTABLE": os.environ.get( - "PYTHON_SYS_EXECUTABLE", sys.executable - ), - "PYO3_PYTHON": os.environ.get("PYO3_PYTHON", sys.executable), - } - ) - return env - - -def _is_py_limited_api( - ext_setting: Literal["auto", True, False], - wheel_setting: Optional[_PyLimitedApi], -) -> bool: - """Returns whether this extension is being built for the limited api. - - >>> _is_py_limited_api("auto", None) - False - - >>> _is_py_limited_api("auto", True) - True - - >>> _is_py_limited_api(True, False) - True - - >>> _is_py_limited_api(False, True) - False - """ - - # If the extension explicitly states to use py_limited_api or not, use that. - if ext_setting != "auto": - return ext_setting - - # "auto" setting - use whether the bdist_wheel option is truthy. - return bool(wheel_setting) - - -def _binding_features( - ext: RustExtension, - py_limited_api: _PyLimitedApi, -) -> Set[str]: - if ext.binding in (Binding.NoBinding, Binding.Exec): - return set() - elif ext.binding is Binding.PyO3: - features = {"pyo3/extension-module"} - if ext.py_limited_api == "auto": - if isinstance(py_limited_api, str): - python_version = py_limited_api[2:] - features.add(f"pyo3/abi3-py{python_version}") - elif py_limited_api: - features.add(f"pyo3/abi3") - return features - elif ext.binding is Binding.RustCPython: - return {"cpython/python3-sys", "cpython/extension-module"} - else: - raise DistutilsPlatformError(f"unknown Rust binding: '{ext.binding}'") - - -_PyLimitedApi = Literal["cp37", "cp38", "cp39", "cp310", "cp311", "cp312", True, False] - - -def _adjusted_local_rust_target(plat_name: str) -> Optional[str]: - """Returns the local rust target for the given `plat_name`, if it is - necessary to 'force' a specific target for correctness.""" - - # If we are on a 64-bit machine, but running a 32-bit Python, then - # we'll target a 32-bit Rust build. 
- if plat_name == "win32": - if get_rustc_cfgs(None).get("target_env") == "gnu": - return "i686-pc-windows-gnu" - else: - return "i686-pc-windows-msvc" - elif plat_name == "win-amd64": - if get_rustc_cfgs(None).get("target_env") == "gnu": - return "x86_64-pc-windows-gnu" - else: - return "x86_64-pc-windows-msvc" - elif plat_name.startswith("macosx-") and platform.machine() == "x86_64": - # x86_64 or arm64 macOS targeting x86_64 - return "x86_64-apple-darwin" - - return None - - -def _split_platform_and_extension(ext_path: str) -> Tuple[str, str, str]: - """Splits an extension path into a tuple (ext_path, plat_tag, extension). - - >>> _split_platform_and_extension("foo/bar.platform.so") - ('foo/bar', '.platform', '.so') - """ - - # rust.cpython-38-x86_64-linux-gnu.so to (rust.cpython-38-x86_64-linux-gnu, .so) - ext_path, extension = os.path.splitext(ext_path) - # rust.cpython-38-x86_64-linux-gnu to (rust, .cpython-38-x86_64-linux-gnu) - ext_path, platform_tag = os.path.splitext(ext_path) - return (ext_path, platform_tag, extension) - - -def _find_cargo_artifacts( - cargo_messages: List[str], - *, - package_id: str, - kinds: Set[str], -) -> List[str]: - """Identifies cargo artifacts built for the given `package_id` from the - provided cargo_messages. - - >>> _find_cargo_artifacts( - ... [ - ... '{"some_irrelevant_message": []}', - ... '{"reason":"compiler-artifact","package_id":"some_id","target":{"kind":["cdylib"]},"filenames":["/some/path/baz.so"]}', - ... '{"reason":"compiler-artifact","package_id":"some_id","target":{"kind":["dylib", "rlib"]},"filenames":["/file/two/baz.dylib", "/file/two/baz.rlib"]}', - ... '{"reason":"compiler-artifact","package_id":"some_other_id","target":{"kind":["cdylib"]},"filenames":["/not/this.so"]}', - ... ], - ... package_id="some_id", - ... kinds={"cdylib", "dylib"}, - ... ) - ['/some/path/baz.so', '/file/two/baz.dylib'] - >>> _find_cargo_artifacts( - ... [ - ... '{"some_irrelevant_message": []}', - ... '{"reason":"compiler-artifact","package_id":"some_id","target":{"kind":["cdylib"]},"filenames":["/some/path/baz.so"]}', - ... '{"reason":"compiler-artifact","package_id":"some_id","target":{"kind":["cdylib", "rlib"]},"filenames":["/file/two/baz.dylib", "/file/two/baz.rlib"]}', - ... '{"reason":"compiler-artifact","package_id":"some_other_id","target":{"kind":["cdylib"]},"filenames":["/not/this.so"]}', - ... ], - ... package_id="some_id", - ... kinds={"rlib"}, - ... ) - ['/file/two/baz.rlib'] - """ - artifacts = [] - for message in cargo_messages: - # only bother parsing messages that look like a match - if "compiler-artifact" in message and package_id in message: - parsed = json.loads(message) - # verify the message is correct - if ( - parsed.get("reason") == "compiler-artifact" - and parsed.get("package_id") == package_id - ): - for artifact_kind, filename in zip( - parsed["target"]["kind"], parsed["filenames"] - ): - if artifact_kind in kinds: - artifacts.append(filename) - return artifacts - - -def _replace_cross_target_dir(path: str, ext: RustExtension, *, quiet: bool) -> str: - """Replaces target director from `cross` docker build with the correct - local path. - - Cross artifact messages and metadata contain paths from inside the - dockerfile; invoking `cargo metadata` we can work out the correct local - target directory. 
- """ - cross_target_dir = ext._metadata(cargo="cross", quiet=quiet)["target_directory"] - local_target_dir = ext._metadata(cargo="cargo", quiet=quiet)["target_directory"] - return path.replace(cross_target_dir, local_target_dir) diff --git a/spaces/Realcat/image-matching-webui/third_party/SGMNet/utils/fm_utils.py b/spaces/Realcat/image-matching-webui/third_party/SGMNet/utils/fm_utils.py deleted file mode 100644 index 900b73c42723cd9c5bcbef5c758deadcd0b309df..0000000000000000000000000000000000000000 --- a/spaces/Realcat/image-matching-webui/third_party/SGMNet/utils/fm_utils.py +++ /dev/null @@ -1,100 +0,0 @@ -import numpy as np - - -def line_to_border(line, size): - # line:(a,b,c), ax+by+c=0 - # size:(W,H) - H, W = size[1], size[0] - a, b, c = line[0], line[1], line[2] - epsa = 1e-8 if a >= 0 else -1e-8 - epsb = 1e-8 if b >= 0 else -1e-8 - intersection_list = [] - - y_left = -c / (b + epsb) - y_right = (-c - a * (W - 1)) / (b + epsb) - x_top = -c / (a + epsa) - x_down = (-c - b * (H - 1)) / (a + epsa) - - if y_left >= 0 and y_left <= H - 1: - intersection_list.append([0, y_left]) - if y_right >= 0 and y_right <= H - 1: - intersection_list.append([W - 1, y_right]) - if x_top >= 0 and x_top <= W - 1: - intersection_list.append([x_top, 0]) - if x_down >= 0 and x_down <= W - 1: - intersection_list.append([x_down, H - 1]) - if len(intersection_list) != 2: - return None - intersection_list = np.asarray(intersection_list) - return intersection_list - - -def find_point_in_line(end_point): - x_span, y_span = ( - end_point[1, 0] - end_point[0, 0], - end_point[1, 1] - end_point[0, 1], - ) - mv = np.random.uniform() - point = np.asarray([end_point[0, 0] + x_span * mv, end_point[0, 1] + y_span * mv]) - return point - - -def epi_line(point, F): - homo = np.concatenate([point, np.ones([len(point), 1])], axis=-1) - epi = np.matmul(homo, F.T) - return epi - - -def dis_point_to_line(line, point): - homo = np.concatenate([point, np.ones([len(point), 1])], axis=-1) - dis = line * homo - dis = dis.sum(axis=-1) / (np.linalg.norm(line[:, :2], axis=-1) + 1e-8) - return abs(dis) - - -def SGD_oneiter(F1, F2, size1, size2): - H1, W1 = size1[1], size1[0] - factor1 = 1 / np.linalg.norm(size1) - factor2 = 1 / np.linalg.norm(size2) - p0 = np.asarray([(W1 - 1) * np.random.uniform(), (H1 - 1) * np.random.uniform()]) - epi1 = epi_line(p0[np.newaxis], F1)[0] - border_point1 = line_to_border(epi1, size2) - if border_point1 is None: - return -1 - - p1 = find_point_in_line(border_point1) - epi2 = epi_line(p0[np.newaxis], F2) - d1 = dis_point_to_line(epi2, p1[np.newaxis])[0] * factor2 - epi3 = epi_line(p1[np.newaxis], F2.T) - d2 = dis_point_to_line(epi3, p0[np.newaxis])[0] * factor1 - return (d1 + d2) / 2 - - -def compute_SGD(F1, F2, size1, size2): - np.random.seed(1234) - N = 1000 - max_iter = N * 10 - count, sgd = 0, 0 - for i in range(max_iter): - d1 = SGD_oneiter(F1, F2, size1, size2) - if d1 < 0: - continue - d2 = SGD_oneiter(F2, F1, size1, size2) - if d2 < 0: - continue - count += 1 - sgd += (d1 + d2) / 2 - if count == N: - break - if count == 0: - return 1 - else: - return sgd / count - - -def compute_inlier_rate(x1, x2, size1, size2, F_gt, th=0.003): - t1, t2 = np.linalg.norm(size1) * th, np.linalg.norm(size2) * th - epi1, epi2 = epi_line(x1, F_gt), epi_line(x2, F_gt.T) - dis1, dis2 = dis_point_to_line(epi1, x2), dis_point_to_line(epi2, x1) - mask_inlier = np.logical_and(dis1 < t2, dis2 < t1) - return mask_inlier.mean() if len(mask_inlier) != 0 else 0 diff --git a/spaces/Redgon/bingo/src/components/chat-notification.tsx 
b/spaces/Redgon/bingo/src/components/chat-notification.tsx deleted file mode 100644 index 4be24d0f1755c8058698cfa66c736d8d4792475a..0000000000000000000000000000000000000000 --- a/spaces/Redgon/bingo/src/components/chat-notification.tsx +++ /dev/null @@ -1,77 +0,0 @@ -import { useEffect } from 'react' -import Image from 'next/image' - -import IconWarning from '@/assets/images/warning.svg' -import { ChatError, ErrorCode, ChatMessageModel } from '@/lib/bots/bing/types' -import { ExternalLink } from './external-link' -import { useBing } from '@/lib/hooks/use-bing' - -export interface ChatNotificationProps extends Pick, 'bot'> { - message?: ChatMessageModel -} - -function getAction(error: ChatError, reset: () => void) { - if (error.code === ErrorCode.THROTTLE_LIMIT) { - reset() - return ( -
- 你已达到每日最大发送消息次数,请更换账号或隔一天后重试 -
- ) - } - if (error.code === ErrorCode.BING_FORBIDDEN) { - return ( - - 你的账号已在黑名单,请尝试更换账号及申请解封 - - ) - } - if (error.code === ErrorCode.CONVERSATION_LIMIT) { - return ( -
- 当前话题已中止,请点 - 重新开始 - 开启新的对话 -
- ) - } - if (error.code === ErrorCode.BING_CAPTCHA) { - return ( - - 点击通过人机验证 - - ) - } - if (error.code === ErrorCode.BING_UNAUTHORIZED) { - reset() - return ( - 没有获取到身份信息或身份信息失效,点此重新设置 - ) - } - return error.message -} - -export function ChatNotification({ message, bot }: ChatNotificationProps) { - useEffect(() => { - window.scrollBy(0, 2000) - }, [message]) - - if (!message?.error) return - - return ( -
-
-
-
-
- error - {getAction(message.error, () => bot.resetConversation())} -
-
-
-
-
- ) -} diff --git a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/anchor_head.py b/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/anchor_head.py deleted file mode 100644 index eea73520572725f547216ab639c1ebbdfb50834c..0000000000000000000000000000000000000000 --- a/spaces/Robert001/UniControl-Demo/annotator/uniformer/mmdet/models/dense_heads/anchor_head.py +++ /dev/null @@ -1,751 +0,0 @@ -import torch -import torch.nn as nn -from mmcv.cnn import normal_init -from mmcv.runner import force_fp32 - -from mmdet.core import (anchor_inside_flags, build_anchor_generator, - build_assigner, build_bbox_coder, build_sampler, - images_to_levels, multi_apply, multiclass_nms, unmap) -from ..builder import HEADS, build_loss -from .base_dense_head import BaseDenseHead -from .dense_test_mixins import BBoxTestMixin - - -@HEADS.register_module() -class AnchorHead(BaseDenseHead, BBoxTestMixin): - """Anchor-based head (RPN, RetinaNet, SSD, etc.). - - Args: - num_classes (int): Number of categories excluding the background - category. - in_channels (int): Number of channels in the input feature map. - feat_channels (int): Number of hidden channels. Used in child classes. - anchor_generator (dict): Config dict for anchor generator - bbox_coder (dict): Config of bounding box coder. - reg_decoded_bbox (bool): If true, the regression loss would be - applied directly on decoded bounding boxes, converting both - the predicted boxes and regression targets to absolute - coordinates format. Default False. It should be `True` when - using `IoULoss`, `GIoULoss`, or `DIoULoss` in the bbox head. - loss_cls (dict): Config of classification loss. - loss_bbox (dict): Config of localization loss. - train_cfg (dict): Training config of anchor head. - test_cfg (dict): Testing config of anchor head. 
- """ # noqa: W605 - - def __init__(self, - num_classes, - in_channels, - feat_channels=256, - anchor_generator=dict( - type='AnchorGenerator', - scales=[8, 16, 32], - ratios=[0.5, 1.0, 2.0], - strides=[4, 8, 16, 32, 64]), - bbox_coder=dict( - type='DeltaXYWHBBoxCoder', - clip_border=True, - target_means=(.0, .0, .0, .0), - target_stds=(1.0, 1.0, 1.0, 1.0)), - reg_decoded_bbox=False, - loss_cls=dict( - type='CrossEntropyLoss', - use_sigmoid=True, - loss_weight=1.0), - loss_bbox=dict( - type='SmoothL1Loss', beta=1.0 / 9.0, loss_weight=1.0), - train_cfg=None, - test_cfg=None): - super(AnchorHead, self).__init__() - self.in_channels = in_channels - self.num_classes = num_classes - self.feat_channels = feat_channels - self.use_sigmoid_cls = loss_cls.get('use_sigmoid', False) - # TODO better way to determine whether sample or not - self.sampling = loss_cls['type'] not in [ - 'FocalLoss', 'GHMC', 'QualityFocalLoss' - ] - if self.use_sigmoid_cls: - self.cls_out_channels = num_classes - else: - self.cls_out_channels = num_classes + 1 - - if self.cls_out_channels <= 0: - raise ValueError(f'num_classes={num_classes} is too small') - self.reg_decoded_bbox = reg_decoded_bbox - - self.bbox_coder = build_bbox_coder(bbox_coder) - self.loss_cls = build_loss(loss_cls) - self.loss_bbox = build_loss(loss_bbox) - self.train_cfg = train_cfg - self.test_cfg = test_cfg - if self.train_cfg: - self.assigner = build_assigner(self.train_cfg.assigner) - # use PseudoSampler when sampling is False - if self.sampling and hasattr(self.train_cfg, 'sampler'): - sampler_cfg = self.train_cfg.sampler - else: - sampler_cfg = dict(type='PseudoSampler') - self.sampler = build_sampler(sampler_cfg, context=self) - self.fp16_enabled = False - - self.anchor_generator = build_anchor_generator(anchor_generator) - # usually the numbers of anchors for each level are the same - # except SSD detectors - self.num_anchors = self.anchor_generator.num_base_anchors[0] - self._init_layers() - - def _init_layers(self): - """Initialize layers of the head.""" - self.conv_cls = nn.Conv2d(self.in_channels, - self.num_anchors * self.cls_out_channels, 1) - self.conv_reg = nn.Conv2d(self.in_channels, self.num_anchors * 4, 1) - - def init_weights(self): - """Initialize weights of the head.""" - normal_init(self.conv_cls, std=0.01) - normal_init(self.conv_reg, std=0.01) - - def forward_single(self, x): - """Forward feature of a single scale level. - - Args: - x (Tensor): Features of a single scale level. - - Returns: - tuple: - cls_score (Tensor): Cls scores for a single scale level \ - the channels number is num_anchors * num_classes. - bbox_pred (Tensor): Box energies / deltas for a single scale \ - level, the channels number is num_anchors * 4. - """ - cls_score = self.conv_cls(x) - bbox_pred = self.conv_reg(x) - return cls_score, bbox_pred - - def forward(self, feats): - """Forward features from the upstream network. - - Args: - feats (tuple[Tensor]): Features from the upstream network, each is - a 4D-tensor. - - Returns: - tuple: A tuple of classification scores and bbox prediction. - - - cls_scores (list[Tensor]): Classification scores for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_anchors * num_classes. - - bbox_preds (list[Tensor]): Box energies / deltas for all \ - scale levels, each is a 4D-tensor, the channels number \ - is num_anchors * 4. - """ - return multi_apply(self.forward_single, feats) - - def get_anchors(self, featmap_sizes, img_metas, device='cuda'): - """Get anchors according to feature map sizes. 
- - Args: - featmap_sizes (list[tuple]): Multi-level feature map sizes. - img_metas (list[dict]): Image meta info. - device (torch.device | str): Device for returned tensors - - Returns: - tuple: - anchor_list (list[Tensor]): Anchors of each image. - valid_flag_list (list[Tensor]): Valid flags of each image. - """ - num_imgs = len(img_metas) - - # since feature map sizes of all images are the same, we only compute - # anchors for one time - multi_level_anchors = self.anchor_generator.grid_anchors( - featmap_sizes, device) - anchor_list = [multi_level_anchors for _ in range(num_imgs)] - - # for each image, we compute valid flags of multi level anchors - valid_flag_list = [] - for img_id, img_meta in enumerate(img_metas): - multi_level_flags = self.anchor_generator.valid_flags( - featmap_sizes, img_meta['pad_shape'], device) - valid_flag_list.append(multi_level_flags) - - return anchor_list, valid_flag_list - - def _get_targets_single(self, - flat_anchors, - valid_flags, - gt_bboxes, - gt_bboxes_ignore, - gt_labels, - img_meta, - label_channels=1, - unmap_outputs=True): - """Compute regression and classification targets for anchors in a - single image. - - Args: - flat_anchors (Tensor): Multi-level anchors of the image, which are - concatenated into a single tensor of shape (num_anchors ,4) - valid_flags (Tensor): Multi level valid flags of the image, - which are concatenated into a single tensor of - shape (num_anchors,). - gt_bboxes (Tensor): Ground truth bboxes of the image, - shape (num_gts, 4). - gt_bboxes_ignore (Tensor): Ground truth bboxes to be - ignored, shape (num_ignored_gts, 4). - img_meta (dict): Meta info of the image. - gt_labels (Tensor): Ground truth labels of each box, - shape (num_gts,). - label_channels (int): Channel of label. - unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. 
- - Returns: - tuple: - labels_list (list[Tensor]): Labels of each level - label_weights_list (list[Tensor]): Label weights of each level - bbox_targets_list (list[Tensor]): BBox targets of each level - bbox_weights_list (list[Tensor]): BBox weights of each level - num_total_pos (int): Number of positive samples in all images - num_total_neg (int): Number of negative samples in all images - """ - inside_flags = anchor_inside_flags(flat_anchors, valid_flags, - img_meta['img_shape'][:2], - self.train_cfg.allowed_border) - if not inside_flags.any(): - return (None, ) * 7 - # assign gt and sample anchors - anchors = flat_anchors[inside_flags, :] - - assign_result = self.assigner.assign( - anchors, gt_bboxes, gt_bboxes_ignore, - None if self.sampling else gt_labels) - sampling_result = self.sampler.sample(assign_result, anchors, - gt_bboxes) - - num_valid_anchors = anchors.shape[0] - bbox_targets = torch.zeros_like(anchors) - bbox_weights = torch.zeros_like(anchors) - labels = anchors.new_full((num_valid_anchors, ), - self.num_classes, - dtype=torch.long) - label_weights = anchors.new_zeros(num_valid_anchors, dtype=torch.float) - - pos_inds = sampling_result.pos_inds - neg_inds = sampling_result.neg_inds - if len(pos_inds) > 0: - if not self.reg_decoded_bbox: - pos_bbox_targets = self.bbox_coder.encode( - sampling_result.pos_bboxes, sampling_result.pos_gt_bboxes) - else: - pos_bbox_targets = sampling_result.pos_gt_bboxes - bbox_targets[pos_inds, :] = pos_bbox_targets - bbox_weights[pos_inds, :] = 1.0 - if gt_labels is None: - # Only rpn gives gt_labels as None - # Foreground is the first class since v2.5.0 - labels[pos_inds] = 0 - else: - labels[pos_inds] = gt_labels[ - sampling_result.pos_assigned_gt_inds] - if self.train_cfg.pos_weight <= 0: - label_weights[pos_inds] = 1.0 - else: - label_weights[pos_inds] = self.train_cfg.pos_weight - if len(neg_inds) > 0: - label_weights[neg_inds] = 1.0 - - # map up to original set of anchors - if unmap_outputs: - num_total_anchors = flat_anchors.size(0) - labels = unmap( - labels, num_total_anchors, inside_flags, - fill=self.num_classes) # fill bg label - label_weights = unmap(label_weights, num_total_anchors, - inside_flags) - bbox_targets = unmap(bbox_targets, num_total_anchors, inside_flags) - bbox_weights = unmap(bbox_weights, num_total_anchors, inside_flags) - - return (labels, label_weights, bbox_targets, bbox_weights, pos_inds, - neg_inds, sampling_result) - - def get_targets(self, - anchor_list, - valid_flag_list, - gt_bboxes_list, - img_metas, - gt_bboxes_ignore_list=None, - gt_labels_list=None, - label_channels=1, - unmap_outputs=True, - return_sampling_results=False): - """Compute regression and classification targets for anchors in - multiple images. - - Args: - anchor_list (list[list[Tensor]]): Multi level anchors of each - image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, 4). - valid_flag_list (list[list[Tensor]]): Multi level valid flags of - each image. The outer list indicates images, and the inner list - corresponds to feature levels of the image. Each element of - the inner list is a tensor of shape (num_anchors, ) - gt_bboxes_list (list[Tensor]): Ground truth bboxes of each image. - img_metas (list[dict]): Meta info of each image. - gt_bboxes_ignore_list (list[Tensor]): Ground truth bboxes to be - ignored. - gt_labels_list (list[Tensor]): Ground truth labels of each box. - label_channels (int): Channel of label. 
- unmap_outputs (bool): Whether to map outputs back to the original - set of anchors. - - Returns: - tuple: Usually returns a tuple containing learning targets. - - - labels_list (list[Tensor]): Labels of each level. - - label_weights_list (list[Tensor]): Label weights of each \ - level. - - bbox_targets_list (list[Tensor]): BBox targets of each level. - - bbox_weights_list (list[Tensor]): BBox weights of each level. - - num_total_pos (int): Number of positive samples in all \ - images. - - num_total_neg (int): Number of negative samples in all \ - images. - additional_returns: This function enables user-defined returns from - `self._get_targets_single`. These returns are currently refined - to properties at each feature map (i.e. having HxW dimension). - The results will be concatenated after the end - """ - num_imgs = len(img_metas) - assert len(anchor_list) == len(valid_flag_list) == num_imgs - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors to a single tensor - concat_anchor_list = [] - concat_valid_flag_list = [] - for i in range(num_imgs): - assert len(anchor_list[i]) == len(valid_flag_list[i]) - concat_anchor_list.append(torch.cat(anchor_list[i])) - concat_valid_flag_list.append(torch.cat(valid_flag_list[i])) - - # compute targets for each image - if gt_bboxes_ignore_list is None: - gt_bboxes_ignore_list = [None for _ in range(num_imgs)] - if gt_labels_list is None: - gt_labels_list = [None for _ in range(num_imgs)] - results = multi_apply( - self._get_targets_single, - concat_anchor_list, - concat_valid_flag_list, - gt_bboxes_list, - gt_bboxes_ignore_list, - gt_labels_list, - img_metas, - label_channels=label_channels, - unmap_outputs=unmap_outputs) - (all_labels, all_label_weights, all_bbox_targets, all_bbox_weights, - pos_inds_list, neg_inds_list, sampling_results_list) = results[:7] - rest_results = list(results[7:]) # user-added return values - # no valid anchors - if any([labels is None for labels in all_labels]): - return None - # sampled anchors of all images - num_total_pos = sum([max(inds.numel(), 1) for inds in pos_inds_list]) - num_total_neg = sum([max(inds.numel(), 1) for inds in neg_inds_list]) - # split targets to a list w.r.t. multiple levels - labels_list = images_to_levels(all_labels, num_level_anchors) - label_weights_list = images_to_levels(all_label_weights, - num_level_anchors) - bbox_targets_list = images_to_levels(all_bbox_targets, - num_level_anchors) - bbox_weights_list = images_to_levels(all_bbox_weights, - num_level_anchors) - res = (labels_list, label_weights_list, bbox_targets_list, - bbox_weights_list, num_total_pos, num_total_neg) - if return_sampling_results: - res = res + (sampling_results_list, ) - for i, r in enumerate(rest_results): # user-added return values - rest_results[i] = images_to_levels(r, num_level_anchors) - - return res + tuple(rest_results) - - def loss_single(self, cls_score, bbox_pred, anchors, labels, label_weights, - bbox_targets, bbox_weights, num_total_samples): - """Compute loss of a single scale level. - - Args: - cls_score (Tensor): Box scores for each scale level - Has shape (N, num_anchors * num_classes, H, W). - bbox_pred (Tensor): Box energies / deltas for each scale - level with shape (N, num_anchors * 4, H, W). - anchors (Tensor): Box reference for each scale level with shape - (N, num_total_anchors, 4). - labels (Tensor): Labels of each anchors with shape - (N, num_total_anchors). 
-            label_weights (Tensor): Label weights of each anchor with shape
-                (N, num_total_anchors)
-            bbox_targets (Tensor): BBox regression targets of each anchor with
-                shape (N, num_total_anchors, 4).
-            bbox_weights (Tensor): BBox regression loss weights of each anchor
-                with shape (N, num_total_anchors, 4).
-            num_total_samples (int): If sampling, the number of total samples
-                equals the number of total anchors; otherwise, it is the
-                number of positive anchors.
-
-        Returns:
-            dict[str, Tensor]: A dictionary of loss components.
-        """
-        # classification loss
-        labels = labels.reshape(-1)
-        label_weights = label_weights.reshape(-1)
-        cls_score = cls_score.permute(0, 2, 3,
-                                      1).reshape(-1, self.cls_out_channels)
-        loss_cls = self.loss_cls(
-            cls_score, labels, label_weights, avg_factor=num_total_samples)
-        # regression loss
-        bbox_targets = bbox_targets.reshape(-1, 4)
-        bbox_weights = bbox_weights.reshape(-1, 4)
-        bbox_pred = bbox_pred.permute(0, 2, 3, 1).reshape(-1, 4)
-        if self.reg_decoded_bbox:
-            # When the regression loss (e.g. `IouLoss`, `GIouLoss`)
-            # is applied directly on the decoded bounding boxes, it
-            # decodes the already encoded coordinates to absolute format.
-            anchors = anchors.reshape(-1, 4)
-            bbox_pred = self.bbox_coder.decode(anchors, bbox_pred)
-        loss_bbox = self.loss_bbox(
-            bbox_pred,
-            bbox_targets,
-            bbox_weights,
-            avg_factor=num_total_samples)
-        return loss_cls, loss_bbox
-
-    @force_fp32(apply_to=('cls_scores', 'bbox_preds'))
-    def loss(self,
-             cls_scores,
-             bbox_preds,
-             gt_bboxes,
-             gt_labels,
-             img_metas,
-             gt_bboxes_ignore=None):
-        """Compute losses of the head.
-
-        Args:
-            cls_scores (list[Tensor]): Box scores for each scale level
-                Has shape (N, num_anchors * num_classes, H, W)
-            bbox_preds (list[Tensor]): Box energies / deltas for each scale
-                level with shape (N, num_anchors * 4, H, W)
-            gt_bboxes (list[Tensor]): Ground truth bboxes for each image with
-                shape (num_gts, 4) in [tl_x, tl_y, br_x, br_y] format.
-            gt_labels (list[Tensor]): class indices corresponding to each box
-            img_metas (list[dict]): Meta information of each image, e.g.,
-                image size, scaling factor, etc.
-            gt_bboxes_ignore (None | list[Tensor]): specify which bounding
-                boxes can be ignored when computing the loss. Default: None
-
-        Returns:
-            dict[str, Tensor]: A dictionary of loss components.
- """ - featmap_sizes = [featmap.size()[-2:] for featmap in cls_scores] - assert len(featmap_sizes) == self.anchor_generator.num_levels - - device = cls_scores[0].device - - anchor_list, valid_flag_list = self.get_anchors( - featmap_sizes, img_metas, device=device) - label_channels = self.cls_out_channels if self.use_sigmoid_cls else 1 - cls_reg_targets = self.get_targets( - anchor_list, - valid_flag_list, - gt_bboxes, - img_metas, - gt_bboxes_ignore_list=gt_bboxes_ignore, - gt_labels_list=gt_labels, - label_channels=label_channels) - if cls_reg_targets is None: - return None - (labels_list, label_weights_list, bbox_targets_list, bbox_weights_list, - num_total_pos, num_total_neg) = cls_reg_targets - num_total_samples = ( - num_total_pos + num_total_neg if self.sampling else num_total_pos) - - # anchor number of multi levels - num_level_anchors = [anchors.size(0) for anchors in anchor_list[0]] - # concat all level anchors and flags to a single tensor - concat_anchor_list = [] - for i in range(len(anchor_list)): - concat_anchor_list.append(torch.cat(anchor_list[i])) - all_anchor_list = images_to_levels(concat_anchor_list, - num_level_anchors) - - losses_cls, losses_bbox = multi_apply( - self.loss_single, - cls_scores, - bbox_preds, - all_anchor_list, - labels_list, - label_weights_list, - bbox_targets_list, - bbox_weights_list, - num_total_samples=num_total_samples) - return dict(loss_cls=losses_cls, loss_bbox=losses_bbox) - - @force_fp32(apply_to=('cls_scores', 'bbox_preds')) - def get_bboxes(self, - cls_scores, - bbox_preds, - img_metas, - cfg=None, - rescale=False, - with_nms=True): - """Transform network output for a batch into bbox predictions. - - Args: - cls_scores (list[Tensor]): Box scores for each level in the - feature pyramid, has shape - (N, num_anchors * num_classes, H, W). - bbox_preds (list[Tensor]): Box energies / deltas for each - level in the feature pyramid, has shape - (N, num_anchors * 4, H, W). - img_metas (list[dict]): Meta information of each image, e.g., - image size, scaling factor, etc. - cfg (mmcv.Config | None): Test / postprocessing configuration, - if None, test_cfg would be used - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. - The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. 
- - Example: - >>> import mmcv - >>> self = AnchorHead( - >>> num_classes=9, - >>> in_channels=1, - >>> anchor_generator=dict( - >>> type='AnchorGenerator', - >>> scales=[8], - >>> ratios=[0.5, 1.0, 2.0], - >>> strides=[4,])) - >>> img_metas = [{'img_shape': (32, 32, 3), 'scale_factor': 1}] - >>> cfg = mmcv.Config(dict( - >>> score_thr=0.00, - >>> nms=dict(type='nms', iou_thr=1.0), - >>> max_per_img=10)) - >>> feat = torch.rand(1, 1, 3, 3) - >>> cls_score, bbox_pred = self.forward_single(feat) - >>> # note the input lists are over different levels, not images - >>> cls_scores, bbox_preds = [cls_score], [bbox_pred] - >>> result_list = self.get_bboxes(cls_scores, bbox_preds, - >>> img_metas, cfg) - >>> det_bboxes, det_labels = result_list[0] - >>> assert len(result_list) == 1 - >>> assert det_bboxes.shape[1] == 5 - >>> assert len(det_bboxes) == len(det_labels) == cfg.max_per_img - """ - assert len(cls_scores) == len(bbox_preds) - num_levels = len(cls_scores) - - device = cls_scores[0].device - featmap_sizes = [cls_scores[i].shape[-2:] for i in range(num_levels)] - mlvl_anchors = self.anchor_generator.grid_anchors( - featmap_sizes, device=device) - - mlvl_cls_scores = [cls_scores[i].detach() for i in range(num_levels)] - mlvl_bbox_preds = [bbox_preds[i].detach() for i in range(num_levels)] - - if torch.onnx.is_in_onnx_export(): - assert len( - img_metas - ) == 1, 'Only support one input image while in exporting to ONNX' - img_shapes = img_metas[0]['img_shape_for_onnx'] - else: - img_shapes = [ - img_metas[i]['img_shape'] - for i in range(cls_scores[0].shape[0]) - ] - scale_factors = [ - img_metas[i]['scale_factor'] for i in range(cls_scores[0].shape[0]) - ] - - if with_nms: - # some heads don't support with_nms argument - result_list = self._get_bboxes(mlvl_cls_scores, mlvl_bbox_preds, - mlvl_anchors, img_shapes, - scale_factors, cfg, rescale) - else: - result_list = self._get_bboxes(mlvl_cls_scores, mlvl_bbox_preds, - mlvl_anchors, img_shapes, - scale_factors, cfg, rescale, - with_nms) - return result_list - - def _get_bboxes(self, - mlvl_cls_scores, - mlvl_bbox_preds, - mlvl_anchors, - img_shapes, - scale_factors, - cfg, - rescale=False, - with_nms=True): - """Transform outputs for a batch item into bbox predictions. - - Args: - mlvl_cls_scores (list[Tensor]): Each element in the list is - the scores of bboxes of single level in the feature pyramid, - has shape (N, num_anchors * num_classes, H, W). - mlvl_bbox_preds (list[Tensor]): Each element in the list is the - bboxes predictions of single level in the feature pyramid, - has shape (N, num_anchors * 4, H, W). - mlvl_anchors (list[Tensor]): Each element in the list is - the anchors of single level in feature pyramid, has shape - (num_anchors, 4). - img_shapes (list[tuple[int]]): Each tuple in the list represent - the shape(height, width, 3) of single image in the batch. - scale_factors (list[ndarray]): Scale factor of the batch - image arange as list[(w_scale, h_scale, w_scale, h_scale)]. - cfg (mmcv.Config): Test / postprocessing configuration, - if None, test_cfg would be used. - rescale (bool): If True, return boxes in original image space. - Default: False. - with_nms (bool): If True, do nms before return boxes. - Default: True. - - Returns: - list[tuple[Tensor, Tensor]]: Each item in result_list is 2-tuple. - The first item is an (n, 5) tensor, where 5 represent - (tl_x, tl_y, br_x, br_y, score) and the score between 0 and 1. 
- The shape of the second tensor in the tuple is (n,), and - each element represents the class label of the corresponding - box. - """ - cfg = self.test_cfg if cfg is None else cfg - assert len(mlvl_cls_scores) == len(mlvl_bbox_preds) == len( - mlvl_anchors) - batch_size = mlvl_cls_scores[0].shape[0] - # convert to tensor to keep tracing - nms_pre_tensor = torch.tensor( - cfg.get('nms_pre', -1), - device=mlvl_cls_scores[0].device, - dtype=torch.long) - - mlvl_bboxes = [] - mlvl_scores = [] - for cls_score, bbox_pred, anchors in zip(mlvl_cls_scores, - mlvl_bbox_preds, - mlvl_anchors): - assert cls_score.size()[-2:] == bbox_pred.size()[-2:] - cls_score = cls_score.permute(0, 2, 3, - 1).reshape(batch_size, -1, - self.cls_out_channels) - if self.use_sigmoid_cls: - scores = cls_score.sigmoid() - else: - scores = cls_score.softmax(-1) - bbox_pred = bbox_pred.permute(0, 2, 3, - 1).reshape(batch_size, -1, 4) - anchors = anchors.expand_as(bbox_pred) - # Always keep topk op for dynamic input in onnx - if nms_pre_tensor > 0 and (torch.onnx.is_in_onnx_export() - or scores.shape[-2] > nms_pre_tensor): - from torch import _shape_as_tensor - # keep shape as tensor and get k - num_anchor = _shape_as_tensor(scores)[-2].to( - nms_pre_tensor.device) - nms_pre = torch.where(nms_pre_tensor < num_anchor, - nms_pre_tensor, num_anchor) - - # Get maximum scores for foreground classes. - if self.use_sigmoid_cls: - max_scores, _ = scores.max(-1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = scores[..., :-1].max(-1) - - _, topk_inds = max_scores.topk(nms_pre) - batch_inds = torch.arange(batch_size).view( - -1, 1).expand_as(topk_inds) - anchors = anchors[batch_inds, topk_inds, :] - bbox_pred = bbox_pred[batch_inds, topk_inds, :] - scores = scores[batch_inds, topk_inds, :] - - bboxes = self.bbox_coder.decode( - anchors, bbox_pred, max_shape=img_shapes) - mlvl_bboxes.append(bboxes) - mlvl_scores.append(scores) - - batch_mlvl_bboxes = torch.cat(mlvl_bboxes, dim=1) - if rescale: - batch_mlvl_bboxes /= batch_mlvl_bboxes.new_tensor( - scale_factors).unsqueeze(1) - batch_mlvl_scores = torch.cat(mlvl_scores, dim=1) - - # Set max number of box to be feed into nms in deployment - deploy_nms_pre = cfg.get('deploy_nms_pre', -1) - if deploy_nms_pre > 0 and torch.onnx.is_in_onnx_export(): - # Get maximum scores for foreground classes. 
- if self.use_sigmoid_cls: - max_scores, _ = batch_mlvl_scores.max(-1) - else: - # remind that we set FG labels to [0, num_class-1] - # since mmdet v2.0 - # BG cat_id: num_class - max_scores, _ = batch_mlvl_scores[..., :-1].max(-1) - _, topk_inds = max_scores.topk(deploy_nms_pre) - batch_inds = torch.arange(batch_size).view(-1, - 1).expand_as(topk_inds) - batch_mlvl_scores = batch_mlvl_scores[batch_inds, topk_inds] - batch_mlvl_bboxes = batch_mlvl_bboxes[batch_inds, topk_inds] - if self.use_sigmoid_cls: - # Add a dummy background class to the backend when using sigmoid - # remind that we set FG labels to [0, num_class-1] since mmdet v2.0 - # BG cat_id: num_class - padding = batch_mlvl_scores.new_zeros(batch_size, - batch_mlvl_scores.shape[1], - 1) - batch_mlvl_scores = torch.cat([batch_mlvl_scores, padding], dim=-1) - - if with_nms: - det_results = [] - for (mlvl_bboxes, mlvl_scores) in zip(batch_mlvl_bboxes, - batch_mlvl_scores): - det_bbox, det_label = multiclass_nms(mlvl_bboxes, mlvl_scores, - cfg.score_thr, cfg.nms, - cfg.max_per_img) - det_results.append(tuple([det_bbox, det_label])) - else: - det_results = [ - tuple(mlvl_bs) - for mlvl_bs in zip(batch_mlvl_bboxes, batch_mlvl_scores) - ] - return det_results - - def aug_test(self, feats, img_metas, rescale=False): - """Test function with test time augmentation. - - Args: - feats (list[Tensor]): the outer list indicates test-time - augmentations and inner Tensor should have a shape NxCxHxW, - which contains features for all images in the batch. - img_metas (list[list[dict]]): the outer list indicates test-time - augs (multiscale, flip, etc.) and the inner list indicates - images in a batch. each dict has image information. - rescale (bool, optional): Whether to rescale the results. - Defaults to False. - - Returns: - list[ndarray]: bbox results of each class - """ - return self.aug_test_bboxes(feats, img_metas, rescale=rescale) diff --git a/spaces/Salesforce/BLIP/models/blip_pretrain.py b/spaces/Salesforce/BLIP/models/blip_pretrain.py deleted file mode 100644 index 068420247591f3e35242bff6f183c8adb8b977a2..0000000000000000000000000000000000000000 --- a/spaces/Salesforce/BLIP/models/blip_pretrain.py +++ /dev/null @@ -1,339 +0,0 @@ -''' - * Copyright (c) 2022, salesforce.com, inc. - * All rights reserved. 
- * SPDX-License-Identifier: BSD-3-Clause - * For full license text, see LICENSE.txt file in the repo root or https://opensource.org/licenses/BSD-3-Clause - * By Junnan Li -''' -from models.med import BertConfig, BertModel, BertLMHeadModel -from transformers import BertTokenizer -import transformers -transformers.logging.set_verbosity_error() - -import torch -from torch import nn -import torch.nn.functional as F - -from models.blip import create_vit, init_tokenizer, load_checkpoint - -class BLIP_Pretrain(nn.Module): - def __init__(self, - med_config = 'configs/bert_config.json', - image_size = 224, - vit = 'base', - vit_grad_ckpt = False, - vit_ckpt_layer = 0, - embed_dim = 256, - queue_size = 57600, - momentum = 0.995, - ): - """ - Args: - med_config (str): path for the mixture of encoder-decoder model's configuration file - image_size (int): input image size - vit (str): model size of vision transformer - """ - super().__init__() - - self.visual_encoder, vision_width = create_vit(vit,image_size, vit_grad_ckpt, vit_ckpt_layer, 0) - - if vit=='base': - checkpoint = torch.hub.load_state_dict_from_url( - url="https://dl.fbaipublicfiles.com/deit/deit_base_patch16_224-b5f2ef4d.pth", - map_location="cpu", check_hash=True) - state_dict = checkpoint["model"] - msg = self.visual_encoder.load_state_dict(state_dict,strict=False) - elif vit=='large': - from timm.models.helpers import load_custom_pretrained - from timm.models.vision_transformer import default_cfgs - load_custom_pretrained(self.visual_encoder,default_cfgs['vit_large_patch16_224_in21k']) - - self.tokenizer = init_tokenizer() - encoder_config = BertConfig.from_json_file(med_config) - encoder_config.encoder_width = vision_width - self.text_encoder = BertModel.from_pretrained('bert-base-uncased',config=encoder_config, add_pooling_layer=False) - self.text_encoder.resize_token_embeddings(len(self.tokenizer)) - - text_width = self.text_encoder.config.hidden_size - - self.vision_proj = nn.Linear(vision_width, embed_dim) - self.text_proj = nn.Linear(text_width, embed_dim) - - self.itm_head = nn.Linear(text_width, 2) - - # create momentum encoders - self.visual_encoder_m, vision_width = create_vit(vit,image_size) - self.vision_proj_m = nn.Linear(vision_width, embed_dim) - self.text_encoder_m = BertModel(config=encoder_config, add_pooling_layer=False) - self.text_proj_m = nn.Linear(text_width, embed_dim) - - self.model_pairs = [[self.visual_encoder,self.visual_encoder_m], - [self.vision_proj,self.vision_proj_m], - [self.text_encoder,self.text_encoder_m], - [self.text_proj,self.text_proj_m], - ] - self.copy_params() - - # create the queue - self.register_buffer("image_queue", torch.randn(embed_dim, queue_size)) - self.register_buffer("text_queue", torch.randn(embed_dim, queue_size)) - self.register_buffer("queue_ptr", torch.zeros(1, dtype=torch.long)) - - self.image_queue = nn.functional.normalize(self.image_queue, dim=0) - self.text_queue = nn.functional.normalize(self.text_queue, dim=0) - - self.queue_size = queue_size - self.momentum = momentum - self.temp = nn.Parameter(0.07*torch.ones([])) - - # create the decoder - decoder_config = BertConfig.from_json_file(med_config) - decoder_config.encoder_width = vision_width - self.text_decoder = BertLMHeadModel.from_pretrained('bert-base-uncased',config=decoder_config) - self.text_decoder.resize_token_embeddings(len(self.tokenizer)) - tie_encoder_decoder_weights(self.text_decoder.bert,self.text_encoder,'','/attention') - - - def forward(self, image, caption, alpha): - with torch.no_grad(): - 
self.temp.clamp_(0.001,0.5) - - image_embeds = self.visual_encoder(image) - image_atts = torch.ones(image_embeds.size()[:-1],dtype=torch.long).to(image.device) - image_feat = F.normalize(self.vision_proj(image_embeds[:,0,:]),dim=-1) - - text = self.tokenizer(caption, padding='max_length', truncation=True, max_length=30, - return_tensors="pt").to(image.device) - text_output = self.text_encoder(text.input_ids, attention_mask = text.attention_mask, - return_dict = True, mode = 'text') - text_feat = F.normalize(self.text_proj(text_output.last_hidden_state[:,0,:]),dim=-1) - - # get momentum features - with torch.no_grad(): - self._momentum_update() - image_embeds_m = self.visual_encoder_m(image) - image_feat_m = F.normalize(self.vision_proj_m(image_embeds_m[:,0,:]),dim=-1) - image_feat_all = torch.cat([image_feat_m.t(),self.image_queue.clone().detach()],dim=1) - - text_output_m = self.text_encoder_m(text.input_ids, attention_mask = text.attention_mask, - return_dict = True, mode = 'text') - text_feat_m = F.normalize(self.text_proj_m(text_output_m.last_hidden_state[:,0,:]),dim=-1) - text_feat_all = torch.cat([text_feat_m.t(),self.text_queue.clone().detach()],dim=1) - - sim_i2t_m = image_feat_m @ text_feat_all / self.temp - sim_t2i_m = text_feat_m @ image_feat_all / self.temp - - sim_targets = torch.zeros(sim_i2t_m.size()).to(image.device) - sim_targets.fill_diagonal_(1) - - sim_i2t_targets = alpha * F.softmax(sim_i2t_m, dim=1) + (1 - alpha) * sim_targets - sim_t2i_targets = alpha * F.softmax(sim_t2i_m, dim=1) + (1 - alpha) * sim_targets - - sim_i2t = image_feat @ text_feat_all / self.temp - sim_t2i = text_feat @ image_feat_all / self.temp - - loss_i2t = -torch.sum(F.log_softmax(sim_i2t, dim=1)*sim_i2t_targets,dim=1).mean() - loss_t2i = -torch.sum(F.log_softmax(sim_t2i, dim=1)*sim_t2i_targets,dim=1).mean() - - loss_ita = (loss_i2t+loss_t2i)/2 - - self._dequeue_and_enqueue(image_feat_m, text_feat_m) - - ###============== Image-text Matching ===================### - encoder_input_ids = text.input_ids.clone() - encoder_input_ids[:,0] = self.tokenizer.enc_token_id - - # forward the positve image-text pair - bs = image.size(0) - output_pos = self.text_encoder(encoder_input_ids, - attention_mask = text.attention_mask, - encoder_hidden_states = image_embeds, - encoder_attention_mask = image_atts, - return_dict = True, - ) - with torch.no_grad(): - weights_t2i = F.softmax(sim_t2i[:,:bs],dim=1)+1e-4 - weights_t2i.fill_diagonal_(0) - weights_i2t = F.softmax(sim_i2t[:,:bs],dim=1)+1e-4 - weights_i2t.fill_diagonal_(0) - - # select a negative image for each text - image_embeds_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_t2i[b], 1).item() - image_embeds_neg.append(image_embeds[neg_idx]) - image_embeds_neg = torch.stack(image_embeds_neg,dim=0) - - # select a negative text for each image - text_ids_neg = [] - text_atts_neg = [] - for b in range(bs): - neg_idx = torch.multinomial(weights_i2t[b], 1).item() - text_ids_neg.append(encoder_input_ids[neg_idx]) - text_atts_neg.append(text.attention_mask[neg_idx]) - - text_ids_neg = torch.stack(text_ids_neg,dim=0) - text_atts_neg = torch.stack(text_atts_neg,dim=0) - - text_ids_all = torch.cat([encoder_input_ids, text_ids_neg],dim=0) - text_atts_all = torch.cat([text.attention_mask, text_atts_neg],dim=0) - - image_embeds_all = torch.cat([image_embeds_neg,image_embeds],dim=0) - image_atts_all = torch.cat([image_atts,image_atts],dim=0) - - output_neg = self.text_encoder(text_ids_all, - attention_mask = text_atts_all, - encoder_hidden_states = 
image_embeds_all, - encoder_attention_mask = image_atts_all, - return_dict = True, - ) - - vl_embeddings = torch.cat([output_pos.last_hidden_state[:,0,:], output_neg.last_hidden_state[:,0,:]],dim=0) - vl_output = self.itm_head(vl_embeddings) - - itm_labels = torch.cat([torch.ones(bs,dtype=torch.long),torch.zeros(2*bs,dtype=torch.long)], - dim=0).to(image.device) - loss_itm = F.cross_entropy(vl_output, itm_labels) - - ##================= LM ========================## - decoder_input_ids = text.input_ids.clone() - decoder_input_ids[:,0] = self.tokenizer.bos_token_id - decoder_targets = decoder_input_ids.masked_fill(decoder_input_ids == self.tokenizer.pad_token_id, -100) - - decoder_output = self.text_decoder(decoder_input_ids, - attention_mask = text.attention_mask, - encoder_hidden_states = image_embeds, - encoder_attention_mask = image_atts, - labels = decoder_targets, - return_dict = True, - ) - - loss_lm = decoder_output.loss - return loss_ita, loss_itm, loss_lm - - - - @torch.no_grad() - def copy_params(self): - for model_pair in self.model_pairs: - for param, param_m in zip(model_pair[0].parameters(), model_pair[1].parameters()): - param_m.data.copy_(param.data) # initialize - param_m.requires_grad = False # not update by gradient - - - @torch.no_grad() - def _momentum_update(self): - for model_pair in self.model_pairs: - for param, param_m in zip(model_pair[0].parameters(), model_pair[1].parameters()): - param_m.data = param_m.data * self.momentum + param.data * (1. - self.momentum) - - - @torch.no_grad() - def _dequeue_and_enqueue(self, image_feat, text_feat): - # gather keys before updating queue - image_feats = concat_all_gather(image_feat) - text_feats = concat_all_gather(text_feat) - - batch_size = image_feats.shape[0] - - ptr = int(self.queue_ptr) - assert self.queue_size % batch_size == 0 # for simplicity - - # replace the keys at ptr (dequeue and enqueue) - self.image_queue[:, ptr:ptr + batch_size] = image_feats.T - self.text_queue[:, ptr:ptr + batch_size] = text_feats.T - ptr = (ptr + batch_size) % self.queue_size # move pointer - - self.queue_ptr[0] = ptr - - -def blip_pretrain(**kwargs): - model = BLIP_Pretrain(**kwargs) - return model - - -@torch.no_grad() -def concat_all_gather(tensor): - """ - Performs all_gather operation on the provided tensors. - *** Warning ***: torch.distributed.all_gather has no gradient. - """ - tensors_gather = [torch.ones_like(tensor) - for _ in range(torch.distributed.get_world_size())] - torch.distributed.all_gather(tensors_gather, tensor, async_op=False) - - output = torch.cat(tensors_gather, dim=0) - return output - - -from typing import List -def tie_encoder_decoder_weights(encoder: nn.Module, decoder: nn.Module, base_model_prefix: str, skip_key:str): - uninitialized_encoder_weights: List[str] = [] - if decoder.__class__ != encoder.__class__: - logger.info( - f"{decoder.__class__} and {encoder.__class__} are not equal. In this case make sure that all encoder weights are correctly initialized." 
- ) - - def tie_encoder_to_decoder_recursively( - decoder_pointer: nn.Module, - encoder_pointer: nn.Module, - module_name: str, - uninitialized_encoder_weights: List[str], - skip_key: str, - depth=0, - ): - assert isinstance(decoder_pointer, nn.Module) and isinstance( - encoder_pointer, nn.Module - ), f"{decoder_pointer} and {encoder_pointer} have to be of type torch.nn.Module" - if hasattr(decoder_pointer, "weight") and skip_key not in module_name: - assert hasattr(encoder_pointer, "weight") - encoder_pointer.weight = decoder_pointer.weight - if hasattr(decoder_pointer, "bias"): - assert hasattr(encoder_pointer, "bias") - encoder_pointer.bias = decoder_pointer.bias - print(module_name+' is tied') - return - - encoder_modules = encoder_pointer._modules - decoder_modules = decoder_pointer._modules - if len(decoder_modules) > 0: - assert ( - len(encoder_modules) > 0 - ), f"Encoder module {encoder_pointer} does not match decoder module {decoder_pointer}" - - all_encoder_weights = set([module_name + "/" + sub_name for sub_name in encoder_modules.keys()]) - encoder_layer_pos = 0 - for name, module in decoder_modules.items(): - if name.isdigit(): - encoder_name = str(int(name) + encoder_layer_pos) - decoder_name = name - if not isinstance(decoder_modules[decoder_name], type(encoder_modules[encoder_name])) and len( - encoder_modules - ) != len(decoder_modules): - # this can happen if the name corresponds to the position in a list module list of layers - # in this case the decoder has added a cross-attention that the encoder does not have - # thus skip this step and subtract one layer pos from encoder - encoder_layer_pos -= 1 - continue - elif name not in encoder_modules: - continue - elif depth > 500: - raise ValueError( - "Max depth of recursive function `tie_encoder_to_decoder` reached. It seems that there is a circular dependency between two or more `nn.Modules` of your model." 
- ) - else: - decoder_name = encoder_name = name - tie_encoder_to_decoder_recursively( - decoder_modules[decoder_name], - encoder_modules[encoder_name], - module_name + "/" + name, - uninitialized_encoder_weights, - skip_key, - depth=depth + 1, - ) - all_encoder_weights.remove(module_name + "/" + encoder_name) - - uninitialized_encoder_weights += list(all_encoder_weights) - - # tie weights recursively - tie_encoder_to_decoder_recursively(decoder, encoder, base_model_prefix, uninitialized_encoder_weights, skip_key) diff --git a/spaces/SantiagoMoreno-UdeA/NER_RC/src/scripts/Json_formats.py b/spaces/SantiagoMoreno-UdeA/NER_RC/src/scripts/Json_formats.py deleted file mode 100644 index 6b1ba664874eafced528d36651bf06a2ac7d27a3..0000000000000000000000000000000000000000 --- a/spaces/SantiagoMoreno-UdeA/NER_RC/src/scripts/Json_formats.py +++ /dev/null @@ -1,166 +0,0 @@ -# -*- coding: utf-8 -*- -""" -Created on Tue Dec 6 11:21:55 2022 - -@author: gita -""" - -import gradio as gr - -def image_classifier(): - # j={ - # "sentences":[ - # {"text":"Frase ejemplo"}, - # {"text":"Frase ejemplo"} - # ] - # } - - # j = { - # 'text':"Frase ejemplo Frase ejemplo ", - - # 'text_labeled':" \"Frase\"/Entity_Type ejemplo \"Frase\"/Entity_Type ejemplo ", - - # 'sentences':[ - # {'text':"Frase ejemplo", - # 'text_labeled':" \"Frase\"/Entity_Type ejemplo", - # 'tokens':[ - # {'text':"Frase", 'label':"Entity_Type"}, - # {'text':"ejemplo", 'label':"O"} - # ]}, - - # {'text':"Frase ejemplo", - # 'text_labeled':" \"Frase\"/Entity_Type ejemplo", - # 'tokens':[ - # {'text':"Frase", 'label':"Entity_Type"}, - # {'text':"ejemplo", 'label':"O"} - # ]} - - # ], - - - # 'entities': [ - # { - # 'entity': "Entity_Type" , - # 'index' : 0, - # 'word' : "Frase", - # 'start': 0, - # 'end' : 5 - - # }, - # { - # 'entity': "Entity_Type" , - # 'index' : 2, - # 'word' : "Frase", - # 'start': 14, - # 'end' : 19 - - # } - # ] - - # } - - - j = { - - 'text':"Frase ejemplo Frase ejemplo", - - 'sentences':[ - {'text':"Frase ejemplo", - 'id':"s0", - 'tokens':[ - {'text':"Frase", 'begin':0, 'end':5}, - {'text':"ejemplo", 'begin':6, 'end':13} - ]}, - - {'text':"Frase ejemplo", - 'id':"s1", - 'tokens':[ - {'text':"Frase", 'begin':14, 'end':19}, - {'text':"ejemplo", 'begin':20, 'end':27} - ]}, - - ], - - - 'mentions': [ - { - 'id': "s0-m0" , - 'type' : "Entity_type", - 'begin' : 0, - 'end': 5, - - }, - - { - 'id': "s1-m0" , - 'type' : "Entity_type", - 'begin' : 14, - 'end': 19, - - } - - ] - - } - - - - return j - -demo = gr.Interface(fn=image_classifier, inputs=None, outputs=gr.JSON()) -demo.launch() - -#%% -# JSON FORMAT OUTPUT - -# Document:{ text:"Texto" - -# text_labeled: "Texto \ENTITY" - -# sentences:[{ text:"Texto" - - # text_labeled: "Texto \ENTITY" - - # tokens: [ {text:"Texto", label : "ENTITY"}, - # {text:"Texto", label : "ENTITY"}, - # {text:"Texto", label : "ENTITY"} - - # ] - - # }, - - # { text:"Texto" - - # text_labeled: "Texto " - - # tokens: [ {text:"Texto", label : "ENTITY"}, - # {text:"Texto", label : "ENTITY"}, - # {text:"Texto", label : "ENTITY"} - -# ] - -# } -# ], - # entities:[ - # { - # 'entity': "ENTITY", - # 'index': num, - # 'word': "Texto", - # 'start': num, - # 'end' : num - # } - # ] -# } - -#%% - -# JSON FORMAT INPUT - -# json{... 
-# sentences:{ -# s:{ -# text: -# } -# } - -# ...} \ No newline at end of file diff --git a/spaces/SeViLA/SeViLA/lavis/models/blip_models/__init__.py b/spaces/SeViLA/SeViLA/lavis/models/blip_models/__init__.py deleted file mode 100644 index 2b88146b9eb3d60dd10ee2aed8e0a33cba924746..0000000000000000000000000000000000000000 --- a/spaces/SeViLA/SeViLA/lavis/models/blip_models/__init__.py +++ /dev/null @@ -1,90 +0,0 @@ -""" - Copyright (c) 2022, salesforce.com, inc. - All rights reserved. - SPDX-License-Identifier: BSD-3-Clause - For full license text, see the LICENSE file in the repo root or https://opensource.org/licenses/BSD-3-Clause -""" - -import logging -from typing import List - -from torch import nn - - -def tie_encoder_decoder_weights( - encoder: nn.Module, decoder: nn.Module, base_model_prefix: str, skip_key: str -): - uninitialized_encoder_weights: List[str] = [] - if decoder.__class__ != encoder.__class__: - logging.info( - f"{decoder.__class__} and {encoder.__class__} are not equal. In this case make sure that all encoder weights are correctly initialized." - ) - - def tie_encoder_to_decoder_recursively( - decoder_pointer: nn.Module, - encoder_pointer: nn.Module, - module_name: str, - uninitialized_encoder_weights: List[str], - skip_key: str, - depth=0, - ): - assert isinstance(decoder_pointer, nn.Module) and isinstance( - encoder_pointer, nn.Module - ), f"{decoder_pointer} and {encoder_pointer} have to be of type torch.nn.Module" - if hasattr(decoder_pointer, "weight") and skip_key not in module_name: - assert hasattr(encoder_pointer, "weight") - encoder_pointer.weight = decoder_pointer.weight - if hasattr(decoder_pointer, "bias"): - assert hasattr(encoder_pointer, "bias") - encoder_pointer.bias = decoder_pointer.bias - print(module_name + " is tied") - return - - encoder_modules = encoder_pointer._modules - decoder_modules = decoder_pointer._modules - if len(decoder_modules) > 0: - assert ( - len(encoder_modules) > 0 - ), f"Encoder module {encoder_pointer} does not match decoder module {decoder_pointer}" - - all_encoder_weights = set( - [module_name + "/" + sub_name for sub_name in encoder_modules.keys()] - ) - encoder_layer_pos = 0 - for name, module in decoder_modules.items(): - if name.isdigit(): - encoder_name = str(int(name) + encoder_layer_pos) - decoder_name = name - if not isinstance( - decoder_modules[decoder_name], - type(encoder_modules[encoder_name]), - ) and len(encoder_modules) != len(decoder_modules): - # this can happen if the name corresponds to the position in a list module list of layers - # in this case the decoder has added a cross-attention that the encoder does not have - # thus skip this step and subtract one layer pos from encoder - encoder_layer_pos -= 1 - continue - elif name not in encoder_modules: - continue - elif depth > 500: - raise ValueError( - "Max depth of recursive function `tie_encoder_to_decoder` reached. It seems that there is a circular dependency between two or more `nn.Modules` of your model." 
- ) - else: - decoder_name = encoder_name = name - tie_encoder_to_decoder_recursively( - decoder_modules[decoder_name], - encoder_modules[encoder_name], - module_name + "/" + name, - uninitialized_encoder_weights, - skip_key, - depth=depth + 1, - ) - all_encoder_weights.remove(module_name + "/" + encoder_name) - - uninitialized_encoder_weights += list(all_encoder_weights) - - # tie weights recursively - tie_encoder_to_decoder_recursively( - decoder, encoder, base_model_prefix, uninitialized_encoder_weights, skip_key - ) diff --git a/spaces/SeyedAli/Butterfly-image-Generation/README.md b/spaces/SeyedAli/Butterfly-image-Generation/README.md deleted file mode 100644 index 7b49b11d7731c5a79c863970b7cde758e6c54444..0000000000000000000000000000000000000000 --- a/spaces/SeyedAli/Butterfly-image-Generation/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Butterfly Image Generation -emoji: 🦋 -colorFrom: green -colorTo: indigo -sdk: gradio -sdk_version: 3.50.2 -app_file: app.py -pinned: false -license: mit ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/Sonnt/Fracture_Webapp/ui/__init__.py b/spaces/Sonnt/Fracture_Webapp/ui/__init__.py deleted file mode 100644 index 81af026baf6494c81ef0aa7ac70d7d2f3c123335..0000000000000000000000000000000000000000 --- a/spaces/Sonnt/Fracture_Webapp/ui/__init__.py +++ /dev/null @@ -1,18 +0,0 @@ -from .UIConfigs import * -from .PageComponents import * - -__all__ = ['hide_menu_button', - 'condense_layout', - 'set_page_config', - - 'subtab21', - 'subtab22', - 'subtab23', - 'subtab24', - 'subtab25', - 'subtab26', - - 'scatterPoint3D', - 'stViewCurves', - -] \ No newline at end of file diff --git "a/spaces/SouthCity/ShuruiXu/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" "b/spaces/SouthCity/ShuruiXu/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" deleted file mode 100644 index 892578283ca2f8321b8086574107b2dccdc482c7..0000000000000000000000000000000000000000 --- "a/spaces/SouthCity/ShuruiXu/crazy_functions/\344\273\243\347\240\201\351\207\215\345\206\231\344\270\272\345\205\250\350\213\261\346\226\207_\345\244\232\347\272\277\347\250\213.py" +++ /dev/null @@ -1,75 +0,0 @@ -import threading -from predict import predict_no_ui_long_connection -from toolbox import CatchException, write_results_to_file - - - -@CatchException -def 全项目切换英文(txt, top_p, api_key, temperature, chatbot, history, sys_prompt, WEB_PORT): - history = [] # 清空历史,以免输入溢出 - # 集合文件 - import time, glob, os - os.makedirs('gpt_log/generated_english_version', exist_ok=True) - os.makedirs('gpt_log/generated_english_version/crazy_functions', exist_ok=True) - file_manifest = [f for f in glob.glob('./*.py') if ('test_project' not in f) and ('gpt_log' not in f)] + \ - [f for f in glob.glob('./crazy_functions/*.py') if ('test_project' not in f) and ('gpt_log' not in f)] - i_say_show_user_buffer = [] - - # 随便显示点什么防止卡顿的感觉 - for index, fp in enumerate(file_manifest): - # if 'test_project' in fp: continue - with open(fp, 'r', encoding='utf-8') as f: - file_content = f.read() - i_say_show_user =f'[{index}/{len(file_manifest)}] 接下来请将以下代码中包含的所有中文转化为英文,只输出代码: {os.path.abspath(fp)}' - i_say_show_user_buffer.append(i_say_show_user) - chatbot.append((i_say_show_user, "[Local Message] 等待多线程操作,中间过程不予显示.")) - yield chatbot, history, '正常' - - # 
任务函数 - mutable_return = [None for _ in file_manifest] - def thread_worker(fp,index): - with open(fp, 'r', encoding='utf-8') as f: - file_content = f.read() - i_say = f'接下来请将以下代码中包含的所有中文转化为英文,只输出代码,文件名是{fp},文件代码是 ```{file_content}```' - # ** gpt request ** - gpt_say = predict_no_ui_long_connection(inputs=i_say, top_p=top_p, api_key=api_key, temperature=temperature, history=history, sys_prompt=sys_prompt) - mutable_return[index] = gpt_say - - # 所有线程同时开始执行任务函数 - handles = [threading.Thread(target=thread_worker, args=(fp,index)) for index, fp in enumerate(file_manifest)] - for h in handles: - h.daemon = True - h.start() - chatbot.append(('开始了吗?', f'多线程操作已经开始')) - yield chatbot, history, '正常' - - # 循环轮询各个线程是否执行完毕 - cnt = 0 - while True: - time.sleep(1) - th_alive = [h.is_alive() for h in handles] - if not any(th_alive): break - stat = ['执行中' if alive else '已完成' for alive in th_alive] - stat_str = '|'.join(stat) - cnt += 1 - chatbot[-1] = (chatbot[-1][0], f'多线程操作已经开始,完成情况: {stat_str}' + ''.join(['.']*(cnt%4))) - yield chatbot, history, '正常' - - # 把结果写入文件 - for index, h in enumerate(handles): - h.join() # 这里其实不需要join了,肯定已经都结束了 - fp = file_manifest[index] - gpt_say = mutable_return[index] - i_say_show_user = i_say_show_user_buffer[index] - - where_to_relocate = f'gpt_log/generated_english_version/{fp}' - with open(where_to_relocate, 'w+', encoding='utf-8') as f: f.write(gpt_say.lstrip('```').rstrip('```')) - chatbot.append((i_say_show_user, f'[Local Message] 已完成{os.path.abspath(fp)}的转化,\n\n存入{os.path.abspath(where_to_relocate)}')) - history.append(i_say_show_user); history.append(gpt_say) - yield chatbot, history, '正常' - time.sleep(1) - - # 备份一个文件 - res = write_results_to_file(history) - chatbot.append(("生成一份任务执行报告", res)) - yield chatbot, history, '正常' diff --git a/spaces/StaticalizaAI/GPT-4/app.py b/spaces/StaticalizaAI/GPT-4/app.py deleted file mode 100644 index 56214df608b5a01a97db7d98f4f0ff4731c5f79d..0000000000000000000000000000000000000000 --- a/spaces/StaticalizaAI/GPT-4/app.py +++ /dev/null @@ -1,65 +0,0 @@ -import gradio as gr -import requests -import re -import os - -API_TOKEN = os.environ.get("API_TOKEN") -API_ENDPOINT = os.environ.get("API_ENDPOINT") - -API_PROCESS = os.environ.get("API_PROCESS") - -KEY = os.environ.get("KEY") - -headers = { "Content-Type": "application/json", "X-Access-Token": API_TOKEN } - -instruction = f"Respond with the format OUTPUT: response" -create_memory = [] - -user = "" -bot = "" - -update_memory = create_memory.copy() -history = "" - -exec(API_PROCESS) - -def send_message(instruction, message, memory, options): - task_message = f"INSTRUCTION: {instruction}\n\nINPUT: {message[1]}" - response = requests.post( f"{API_ENDPOINT}/generate", json={ "task": task_message, **options }, headers=headers ) - response_text = response.json()['result'] - print(f"\n\n{response_text}\n\n") - return response_text - -def predict(get_input, access_key): - - if (access_key != KEY): - print("REQUEST FAILED: Attempted Key: " + access_key) - return ("[UNAUTHORIZED ACCESS]", get_input); - - get_input = get_input.strip() - response = api_request(get_input) - print(f"---\nUSER: {get_input}\nBOT: {response}\n---") - return [response, ""] - -def main(): - with gr.Blocks() as demo: - with gr.Row(variant = "panel"): - gr.Markdown("😈 temporary locked gpt4 till these stupid automation stops (i can read logs and they are failing cause they missing an argument 🤬) (how tf yall bypassing the blocked gradio api 💀)!!!\n\n\n⛔ do not overuse it\n\n\nrespone takes 4-20+ seconds per request but if 
u make it write a essay it could take over a minute!\n\n\ndo not use it for math it may not be 100% correct!!!\n\n\nuhh ... https://discord.gg/6JRtGawz7B") - - with gr.Row(): - with gr.Column(): - input = gr.Textbox(label = "Input", lines = 4) - access_key = gr.Textbox(label = "Access Key", lines = 1) - run = gr.Button("▶") - with gr.Row(): - with gr.Column(): - output = gr.Textbox(label = "Output", value = "", lines = 50) - - input.submit(predict, inputs = [input, access_key], outputs = [output, input]) - run.click(predict, inputs = [input, access_key], outputs = [output, input]) - - demo.queue(concurrency_count = 5, api_open = False) - demo.launch(inline = True, max_threads = 5, show_api = False) - -if __name__ == "__main__": - main() \ No newline at end of file diff --git a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/adversarial/losses.py b/spaces/SuYuanS/AudioCraft_Plus/audiocraft/adversarial/losses.py deleted file mode 100644 index be293e739bdc2d91273f30fb789befe7c8b49a43..0000000000000000000000000000000000000000 --- a/spaces/SuYuanS/AudioCraft_Plus/audiocraft/adversarial/losses.py +++ /dev/null @@ -1,228 +0,0 @@ -# Copyright (c) Meta Platforms, Inc. and affiliates. -# All rights reserved. -# -# This source code is licensed under the license found in the -# LICENSE file in the root directory of this source tree. - -""" -Utility module to handle adversarial losses without requiring to mess up the main training loop. -""" - -import typing as tp - -import flashy -import torch -import torch.nn as nn -import torch.nn.functional as F - - -ADVERSARIAL_LOSSES = ['mse', 'hinge', 'hinge2'] - - -AdvLossType = tp.Union[nn.Module, tp.Callable[[torch.Tensor], torch.Tensor]] -FeatLossType = tp.Union[nn.Module, tp.Callable[[torch.Tensor, torch.Tensor], torch.Tensor]] - - -class AdversarialLoss(nn.Module): - """Adversary training wrapper. - - Args: - adversary (nn.Module): The adversary module will be used to estimate the logits given the fake and real samples. - We assume here the adversary output is ``Tuple[List[torch.Tensor], List[List[torch.Tensor]]]`` - where the first item is a list of logits and the second item is a list of feature maps. - optimizer (torch.optim.Optimizer): Optimizer used for training the given module. - loss (AdvLossType): Loss function for generator training. - loss_real (AdvLossType): Loss function for adversarial training on logits from real samples. - loss_fake (AdvLossType): Loss function for adversarial training on logits from fake samples. - loss_feat (FeatLossType): Feature matching loss function for generator training. - normalize (bool): Whether to normalize by number of sub-discriminators. - - Example of usage: - adv_loss = AdversarialLoss(adversaries, optimizer, loss, loss_real, loss_fake) - for real in loader: - noise = torch.randn(...) - fake = model(noise) - adv_loss.train_adv(fake, real) - loss, _ = adv_loss(fake, real) - loss.backward() - """ - def __init__(self, - adversary: nn.Module, - optimizer: torch.optim.Optimizer, - loss: AdvLossType, - loss_real: AdvLossType, - loss_fake: AdvLossType, - loss_feat: tp.Optional[FeatLossType] = None, - normalize: bool = True): - super().__init__() - self.adversary: nn.Module = adversary - flashy.distrib.broadcast_model(self.adversary) - self.optimizer = optimizer - self.loss = loss - self.loss_real = loss_real - self.loss_fake = loss_fake - self.loss_feat = loss_feat - self.normalize = normalize - - def _save_to_state_dict(self, destination, prefix, keep_vars): - # Add the optimizer state dict inside our own. 
- super()._save_to_state_dict(destination, prefix, keep_vars) - destination[prefix + 'optimizer'] = self.optimizer.state_dict() - return destination - - def _load_from_state_dict(self, state_dict, prefix, *args, **kwargs): - # Load optimizer state. - self.optimizer.load_state_dict(state_dict.pop(prefix + 'optimizer')) - super()._load_from_state_dict(state_dict, prefix, *args, **kwargs) - - def get_adversary_pred(self, x): - """Run adversary model, validating expected output format.""" - logits, fmaps = self.adversary(x) - assert isinstance(logits, list) and all([isinstance(t, torch.Tensor) for t in logits]), \ - f'Expecting a list of tensors as logits but {type(logits)} found.' - assert isinstance(fmaps, list), f'Expecting a list of features maps but {type(fmaps)} found.' - for fmap in fmaps: - assert isinstance(fmap, list) and all([isinstance(f, torch.Tensor) for f in fmap]), \ - f'Expecting a list of tensors as feature maps but {type(fmap)} found.' - return logits, fmaps - - def train_adv(self, fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor: - """Train the adversary with the given fake and real example. - - We assume the adversary output is the following format: Tuple[List[torch.Tensor], List[List[torch.Tensor]]]. - The first item being the logits and second item being a list of feature maps for each sub-discriminator. - - This will automatically synchronize gradients (with `flashy.distrib.eager_sync_model`) - and call the optimizer. - """ - loss = torch.tensor(0., device=fake.device) - all_logits_fake_is_fake, _ = self.get_adversary_pred(fake.detach()) - all_logits_real_is_fake, _ = self.get_adversary_pred(real.detach()) - n_sub_adversaries = len(all_logits_fake_is_fake) - for logit_fake_is_fake, logit_real_is_fake in zip(all_logits_fake_is_fake, all_logits_real_is_fake): - loss += self.loss_fake(logit_fake_is_fake) + self.loss_real(logit_real_is_fake) - - if self.normalize: - loss /= n_sub_adversaries - - self.optimizer.zero_grad() - with flashy.distrib.eager_sync_model(self.adversary): - loss.backward() - self.optimizer.step() - - return loss - - def forward(self, fake: torch.Tensor, real: torch.Tensor) -> tp.Tuple[torch.Tensor, torch.Tensor]: - """Return the loss for the generator, i.e. trying to fool the adversary, - and feature matching loss if provided. 
- """ - adv = torch.tensor(0., device=fake.device) - feat = torch.tensor(0., device=fake.device) - with flashy.utils.readonly(self.adversary): - all_logits_fake_is_fake, all_fmap_fake = self.get_adversary_pred(fake) - all_logits_real_is_fake, all_fmap_real = self.get_adversary_pred(real) - n_sub_adversaries = len(all_logits_fake_is_fake) - for logit_fake_is_fake in all_logits_fake_is_fake: - adv += self.loss(logit_fake_is_fake) - if self.loss_feat: - for fmap_fake, fmap_real in zip(all_fmap_fake, all_fmap_real): - feat += self.loss_feat(fmap_fake, fmap_real) - - if self.normalize: - adv /= n_sub_adversaries - feat /= n_sub_adversaries - - return adv, feat - - -def get_adv_criterion(loss_type: str) -> tp.Callable: - assert loss_type in ADVERSARIAL_LOSSES - if loss_type == 'mse': - return mse_loss - elif loss_type == 'hinge': - return hinge_loss - elif loss_type == 'hinge2': - return hinge2_loss - raise ValueError('Unsupported loss') - - -def get_fake_criterion(loss_type: str) -> tp.Callable: - assert loss_type in ADVERSARIAL_LOSSES - if loss_type == 'mse': - return mse_fake_loss - elif loss_type in ['hinge', 'hinge2']: - return hinge_fake_loss - raise ValueError('Unsupported loss') - - -def get_real_criterion(loss_type: str) -> tp.Callable: - assert loss_type in ADVERSARIAL_LOSSES - if loss_type == 'mse': - return mse_real_loss - elif loss_type in ['hinge', 'hinge2']: - return hinge_real_loss - raise ValueError('Unsupported loss') - - -def mse_real_loss(x: torch.Tensor) -> torch.Tensor: - return F.mse_loss(x, torch.tensor(1., device=x.device).expand_as(x)) - - -def mse_fake_loss(x: torch.Tensor) -> torch.Tensor: - return F.mse_loss(x, torch.tensor(0., device=x.device).expand_as(x)) - - -def hinge_real_loss(x: torch.Tensor) -> torch.Tensor: - return -torch.mean(torch.min(x - 1, torch.tensor(0., device=x.device).expand_as(x))) - - -def hinge_fake_loss(x: torch.Tensor) -> torch.Tensor: - return -torch.mean(torch.min(-x - 1, torch.tensor(0., device=x.device).expand_as(x))) - - -def mse_loss(x: torch.Tensor) -> torch.Tensor: - if x.numel() == 0: - return torch.tensor([0.0], device=x.device) - return F.mse_loss(x, torch.tensor(1., device=x.device).expand_as(x)) - - -def hinge_loss(x: torch.Tensor) -> torch.Tensor: - if x.numel() == 0: - return torch.tensor([0.0], device=x.device) - return -x.mean() - - -def hinge2_loss(x: torch.Tensor) -> torch.Tensor: - if x.numel() == 0: - return torch.tensor([0.0]) - return -torch.mean(torch.min(x - 1, torch.tensor(0., device=x.device).expand_as(x))) - - -class FeatureMatchingLoss(nn.Module): - """Feature matching loss for adversarial training. - - Args: - loss (nn.Module): Loss to use for feature matching (default=torch.nn.L1). - normalize (bool): Whether to normalize the loss. - by number of feature maps. 
- """ - def __init__(self, loss: nn.Module = torch.nn.L1Loss(), normalize: bool = True): - super().__init__() - self.loss = loss - self.normalize = normalize - - def forward(self, fmap_fake: tp.List[torch.Tensor], fmap_real: tp.List[torch.Tensor]) -> torch.Tensor: - assert len(fmap_fake) == len(fmap_real) and len(fmap_fake) > 0 - feat_loss = torch.tensor(0., device=fmap_fake[0].device) - feat_scale = torch.tensor(0., device=fmap_fake[0].device) - n_fmaps = 0 - for (feat_fake, feat_real) in zip(fmap_fake, fmap_real): - assert feat_fake.shape == feat_real.shape - n_fmaps += 1 - feat_loss += self.loss(feat_fake, feat_real) - feat_scale += torch.mean(torch.abs(feat_real)) - - if self.normalize: - feat_loss /= n_fmaps - - return feat_loss diff --git a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/payload_streamer.py b/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/payload_streamer.py deleted file mode 100644 index 9f8b8bc57cc22fc693da1646bf806c2a6ca8d797..0000000000000000000000000000000000000000 --- a/spaces/SungBeom/chatwine-korean/.venv/Lib/site-packages/aiohttp/payload_streamer.py +++ /dev/null @@ -1,75 +0,0 @@ -""" -Payload implemenation for coroutines as data provider. - -As a simple case, you can upload data from file:: - - @aiohttp.streamer - async def file_sender(writer, file_name=None): - with open(file_name, 'rb') as f: - chunk = f.read(2**16) - while chunk: - await writer.write(chunk) - - chunk = f.read(2**16) - -Then you can use `file_sender` like this: - - async with session.post('http://httpbin.org/post', - data=file_sender(file_name='huge_file')) as resp: - print(await resp.text()) - -..note:: Coroutine must accept `writer` as first argument - -""" - -import types -import warnings -from typing import Any, Awaitable, Callable, Dict, Tuple - -from .abc import AbstractStreamWriter -from .payload import Payload, payload_type - -__all__ = ("streamer",) - - -class _stream_wrapper: - def __init__( - self, - coro: Callable[..., Awaitable[None]], - args: Tuple[Any, ...], - kwargs: Dict[str, Any], - ) -> None: - self.coro = types.coroutine(coro) - self.args = args - self.kwargs = kwargs - - async def __call__(self, writer: AbstractStreamWriter) -> None: - await self.coro(writer, *self.args, **self.kwargs) # type: ignore[operator] - - -class streamer: - def __init__(self, coro: Callable[..., Awaitable[None]]) -> None: - warnings.warn( - "@streamer is deprecated, use async generators instead", - DeprecationWarning, - stacklevel=2, - ) - self.coro = coro - - def __call__(self, *args: Any, **kwargs: Any) -> _stream_wrapper: - return _stream_wrapper(self.coro, args, kwargs) - - -@payload_type(_stream_wrapper) -class StreamWrapperPayload(Payload): - async def write(self, writer: AbstractStreamWriter) -> None: - await self._value(writer) - - -@payload_type(streamer) -class StreamPayload(StreamWrapperPayload): - def __init__(self, value: Any, *args: Any, **kwargs: Any) -> None: - super().__init__(value(), *args, **kwargs) - - async def write(self, writer: AbstractStreamWriter) -> None: - await self._value(writer) diff --git a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/meta_arch/build.py b/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/meta_arch/build.py deleted file mode 100644 index 52229b11acf4a8f07c173feb51c45c30e9567903..0000000000000000000000000000000000000000 --- a/spaces/Superlang/ImageProcessor/annotator/oneformer/detectron2/modeling/meta_arch/build.py +++ /dev/null @@ -1,24 +0,0 @@ -# Copyright (c) 
Facebook, Inc. and its affiliates. -import torch - -from annotator.oneformer.detectron2.utils.logger import _log_api_usage -from annotator.oneformer.detectron2.utils.registry import Registry - -META_ARCH_REGISTRY = Registry("META_ARCH") # noqa F401 isort:skip -META_ARCH_REGISTRY.__doc__ = """ -Registry for meta-architectures, i.e. the whole model. - -The registered object will be called with `obj(cfg)` -and expected to return a `nn.Module` object. -""" - - -def build_model(cfg): - """ - Build the whole model architecture, defined by ``cfg.MODEL.META_ARCHITECTURE``. - Note that it does not load any weights from ``cfg``. - """ - meta_arch = cfg.MODEL.META_ARCHITECTURE - model = META_ARCH_REGISTRY.get(meta_arch)(cfg) - _log_api_usage("modeling.meta_arch." + meta_arch) - return model diff --git a/spaces/TH5314/newbing/src/components/ui/dropdown-menu.tsx b/spaces/TH5314/newbing/src/components/ui/dropdown-menu.tsx deleted file mode 100644 index 184d4e6007ef85187446362f69532ab077897fea..0000000000000000000000000000000000000000 --- a/spaces/TH5314/newbing/src/components/ui/dropdown-menu.tsx +++ /dev/null @@ -1,128 +0,0 @@ -'use client' - -import * as React from 'react' -import * as DropdownMenuPrimitive from '@radix-ui/react-dropdown-menu' - -import { cn } from '@/lib/utils' - -const DropdownMenu = DropdownMenuPrimitive.Root - -const DropdownMenuTrigger = DropdownMenuPrimitive.Trigger - -const DropdownMenuGroup = DropdownMenuPrimitive.Group - -const DropdownMenuPortal = DropdownMenuPrimitive.Portal - -const DropdownMenuSub = DropdownMenuPrimitive.Sub - -const DropdownMenuRadioGroup = DropdownMenuPrimitive.RadioGroup - -const DropdownMenuSubContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSubContent.displayName = - DropdownMenuPrimitive.SubContent.displayName - -const DropdownMenuContent = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, sideOffset = 4, ...props }, ref) => ( - - - -)) -DropdownMenuContent.displayName = DropdownMenuPrimitive.Content.displayName - -const DropdownMenuItem = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuItem.displayName = DropdownMenuPrimitive.Item.displayName - -const DropdownMenuLabel = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef & { - inset?: boolean - } ->(({ className, inset, ...props }, ref) => ( - -)) -DropdownMenuLabel.displayName = DropdownMenuPrimitive.Label.displayName - -const DropdownMenuSeparator = React.forwardRef< - React.ElementRef, - React.ComponentPropsWithoutRef ->(({ className, ...props }, ref) => ( - -)) -DropdownMenuSeparator.displayName = DropdownMenuPrimitive.Separator.displayName - -const DropdownMenuShortcut = ({ - className, - ...props -}: React.HTMLAttributes) => { - return ( - - ) -} -DropdownMenuShortcut.displayName = 'DropdownMenuShortcut' - -export { - DropdownMenu, - DropdownMenuTrigger, - DropdownMenuContent, - DropdownMenuItem, - DropdownMenuLabel, - DropdownMenuSeparator, - DropdownMenuShortcut, - DropdownMenuGroup, - DropdownMenuPortal, - DropdownMenuSub, - DropdownMenuSubContent, - DropdownMenuRadioGroup -} diff --git a/spaces/THEGAMECHANGER/LandscapeColorizer/app.py b/spaces/THEGAMECHANGER/LandscapeColorizer/app.py deleted file mode 100644 index 20a2663fc2bf65e48b476bb4db6354889212bdd8..0000000000000000000000000000000000000000 --- 
a/spaces/THEGAMECHANGER/LandscapeColorizer/app.py +++ /dev/null @@ -1,22 +0,0 @@ -import gradio as gr -import tensorflow as tf -import numpy as np - -model = tf.keras.models.load_model("landColorGenV1.keras") - -def generate_image(input_img): - input_img = tf.convert_to_tensor(input_img) - input_img = tf.cast(input_img,tf.float32) - init_shape = input_img.shape - input_img = tf.image.resize(input_img, [256, 256], - method=tf.image.ResizeMethod.NEAREST_NEIGHBOR) - input_img = (input_img / 127.5) -1 - input_img = tf.reshape(input_img,(1,256,256,3)) - output = model(input_img,training=True) - # out_img = output[0].numpy()* 0.5 + 0.5 - out_img = tf.image.resize(output[0], [init_shape[0],init_shape[1]], - method=tf.image.ResizeMethod.NEAREST_NEIGHBOR) - out_img = out_img.numpy()*0.5 + 0.5 - return out_img -app = gr.Interface(fn = generate_image, inputs="image", outputs="image") -app.launch(debug=False) \ No newline at end of file diff --git a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/xmlrpc.py b/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/xmlrpc.py deleted file mode 100644 index 4a7d55d0e50cb8b892caa021695522e5ddd54a17..0000000000000000000000000000000000000000 --- a/spaces/TandCAcceptMe/face-swap-docker/mynewshinyroop/Lib/site-packages/pip/_internal/network/xmlrpc.py +++ /dev/null @@ -1,60 +0,0 @@ -"""xmlrpclib.Transport implementation -""" - -import logging -import urllib.parse -import xmlrpc.client -from typing import TYPE_CHECKING, Tuple - -from pip._internal.exceptions import NetworkConnectionError -from pip._internal.network.session import PipSession -from pip._internal.network.utils import raise_for_status - -if TYPE_CHECKING: - from xmlrpc.client import _HostType, _Marshallable - -logger = logging.getLogger(__name__) - - -class PipXmlrpcTransport(xmlrpc.client.Transport): - """Provide a `xmlrpclib.Transport` implementation via a `PipSession` - object. 
- """ - - def __init__( - self, index_url: str, session: PipSession, use_datetime: bool = False - ) -> None: - super().__init__(use_datetime) - index_parts = urllib.parse.urlparse(index_url) - self._scheme = index_parts.scheme - self._session = session - - def request( - self, - host: "_HostType", - handler: str, - request_body: bytes, - verbose: bool = False, - ) -> Tuple["_Marshallable", ...]: - assert isinstance(host, str) - parts = (self._scheme, host, handler, None, None, None) - url = urllib.parse.urlunparse(parts) - try: - headers = {"Content-Type": "text/xml"} - response = self._session.post( - url, - data=request_body, - headers=headers, - stream=True, - ) - raise_for_status(response) - self.verbose = verbose - return self.parse_response(response.raw) - except NetworkConnectionError as exc: - assert exc.response - logger.critical( - "HTTP error %s while getting %s", - exc.response.status_code, - url, - ) - raise diff --git a/spaces/TencentARC/T2I-Adapter-SDXL/README.md b/spaces/TencentARC/T2I-Adapter-SDXL/README.md deleted file mode 100644 index b687a1ce40d7e5729149f24225697e16ae9e331c..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/T2I-Adapter-SDXL/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: T2I-Adapter-SDXL -emoji: 🚀 -colorFrom: purple -colorTo: yellow -sdk: docker -pinned: false -license: mit -suggested_hardware: t4-small ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/layers/test_losses.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/layers/test_losses.py deleted file mode 100644 index d74920246cbd4a188b3c81cf0c78e982af6da1ac..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/layers/test_losses.py +++ /dev/null @@ -1,82 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import numpy as np -import unittest -import torch - -from detectron2.layers import ciou_loss, diou_loss - - -class TestLosses(unittest.TestCase): - def test_diou_loss(self): - """ - loss = 1 - iou + d/c - where, - d = (distance between centers of the 2 boxes)^2 - c = (diagonal length of the smallest enclosing box covering the 2 boxes)^2 - """ - # Identical boxes should have loss of 0 - box = torch.tensor([-1, -1, 1, 1], dtype=torch.float32) - loss = diou_loss(box, box) - self.assertTrue(np.allclose(loss, [0.0])) - - # Half size box inside other box - # iou = 0.5, d = 0.25, c = 8 - box2 = torch.tensor([0, -1, 1, 1], dtype=torch.float32) - loss = diou_loss(box, box2) - self.assertTrue(np.allclose(loss, [0.53125])) - - # Two diagonally adjacent boxes - # iou = 0, d = 2, c = 8 - box3 = torch.tensor([0, 0, 1, 1], dtype=torch.float32) - box4 = torch.tensor([1, 1, 2, 2], dtype=torch.float32) - loss = diou_loss(box3, box4) - self.assertTrue(np.allclose(loss, [1.25])) - - # Test batched loss and reductions - box1s = torch.stack([box, box3], dim=0) - box2s = torch.stack([box2, box4], dim=0) - - loss = diou_loss(box1s, box2s, reduction="sum") - self.assertTrue(np.allclose(loss, [1.78125])) - - loss = diou_loss(box1s, box2s, reduction="mean") - self.assertTrue(np.allclose(loss, [0.890625])) - - def test_ciou_loss(self): - """ - loss = 1 - iou + d/c + alpha*v - where, - d = (distance between centers of the 2 boxes)^2 - c = (diagonal length of the smallest enclosing box covering the 2 boxes)^2 - v = (4/pi^2) * (arctan(box1_w/box1_h) - arctan(box2_w/box2_h))^2 - alpha = v/(1 - iou + v) - """ - # Identical boxes should have loss of 0 - box = torch.tensor([-1, -1, 1, 1], dtype=torch.float32) - loss = ciou_loss(box, box) - self.assertTrue(np.allclose(loss, [0.0])) - - # Half size box inside other box - # iou = 0.5, d = 0.25, c = 8 - # v = (4/pi^2) * (arctan(1) - arctan(0.5))^2 = 0.042 - # alpha = 0.0775 - box2 = torch.tensor([0, -1, 1, 1], dtype=torch.float32) - loss = ciou_loss(box, box2) - self.assertTrue(np.allclose(loss, [0.5345])) - - # Two diagonally adjacent boxes - # iou = 0, d = 2, c = 8, v = 0, alpha = 0 - box3 = torch.tensor([0, 0, 1, 1], dtype=torch.float32) - box4 = torch.tensor([1, 1, 2, 2], dtype=torch.float32) - loss = ciou_loss(box3, box4) - self.assertTrue(np.allclose(loss, [1.25])) - - # Test batched loss and reductions - box1s = torch.stack([box, box3], dim=0) - box2s = torch.stack([box2, box4], dim=0) - - loss = ciou_loss(box1s, box2s, reduction="sum") - self.assertTrue(np.allclose(loss, [1.7845])) - - loss = ciou_loss(box1s, box2s, reduction="mean") - self.assertTrue(np.allclose(loss, [0.89225])) diff --git a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/layers/test_roi_align_rotated.py b/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/layers/test_roi_align_rotated.py deleted file mode 100644 index 7323d7d5a86816f337571221313c428238c439f4..0000000000000000000000000000000000000000 --- a/spaces/TencentARC/VLog/models/grit_src/third_party/CenterNet2/tests/layers/test_roi_align_rotated.py +++ /dev/null @@ -1,176 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import logging -import unittest -import cv2 -import torch -from torch.autograd import Variable, gradcheck - -from detectron2.layers.roi_align import ROIAlign -from detectron2.layers.roi_align_rotated import ROIAlignRotated - -logger = logging.getLogger(__name__) - - -class ROIAlignRotatedTest(unittest.TestCase): - def _box_to_rotated_box(self, box, angle): - return [ - (box[0] + box[2]) / 2.0, - (box[1] + box[3]) / 2.0, - box[2] - box[0], - box[3] - box[1], - angle, - ] - - def _rot90(self, img, num): - num = num % 4 # note: -1 % 4 == 3 - for _ in range(num): - img = img.transpose(0, 1).flip(0) - return img - - def test_forward_output_0_90_180_270(self): - for i in range(4): - # i = 0, 1, 2, 3 corresponding to 0, 90, 180, 270 degrees - img = torch.arange(25, dtype=torch.float32).reshape(5, 5) - """ - 0 1 2 3 4 - 5 6 7 8 9 - 10 11 12 13 14 - 15 16 17 18 19 - 20 21 22 23 24 - """ - box = [1, 1, 3, 3] - rotated_box = self._box_to_rotated_box(box=box, angle=90 * i) - - result = self._simple_roi_align_rotated(img=img, box=rotated_box, resolution=(4, 4)) - - # Here's an explanation for 0 degree case: - # point 0 in the original input lies at [0.5, 0.5] - # (the center of bin [0, 1] x [0, 1]) - # point 1 in the original input lies at [1.5, 0.5], etc. - # since the resolution is (4, 4) that divides [1, 3] x [1, 3] - # into 4 x 4 equal bins, - # the top-left bin is [1, 1.5] x [1, 1.5], and its center - # (1.25, 1.25) lies at the 3/4 position - # between point 0 and point 1, point 5 and point 6, - # point 0 and point 5, point 1 and point 6, so it can be calculated as - # 0.25*(0*0.25+1*0.75)+(5*0.25+6*0.75)*0.75 = 4.5 - result_expected = torch.tensor( - [ - [4.5, 5.0, 5.5, 6.0], - [7.0, 7.5, 8.0, 8.5], - [9.5, 10.0, 10.5, 11.0], - [12.0, 12.5, 13.0, 13.5], - ] - ) - # This is also an upsampled version of [[6, 7], [11, 12]] - - # When the box is rotated by 90 degrees CCW, - # the result would be rotated by 90 degrees CW, thus it's -i here - result_expected = self._rot90(result_expected, -i) - - assert torch.allclose(result, result_expected) - - def test_resize(self): - H, W = 30, 30 - input = torch.rand(H, W) * 100 - box = [10, 10, 20, 20] - rotated_box = self._box_to_rotated_box(box, angle=0) - output = self._simple_roi_align_rotated(img=input, box=rotated_box, resolution=(5, 5)) - - input2x = cv2.resize(input.numpy(), (W // 2, H // 2), interpolation=cv2.INTER_LINEAR) - input2x = torch.from_numpy(input2x) - box2x = [x / 2 for x in box] - rotated_box2x = self._box_to_rotated_box(box2x, angle=0) - output2x = self._simple_roi_align_rotated(img=input2x, box=rotated_box2x, resolution=(5, 5)) - assert torch.allclose(output2x, output) - - def _simple_roi_align_rotated(self, img, box, resolution): - """ - RoiAlignRotated with scale 1.0 and 0 sample ratio. 
- """ - op = ROIAlignRotated(output_size=resolution, spatial_scale=1.0, sampling_ratio=0) - input = img[None, None, :, :] - - rois = [0] + list(box) - rois = torch.tensor(rois, dtype=torch.float32)[None, :] - result_cpu = op.forward(input, rois) - if torch.cuda.is_available(): - result_cuda = op.forward(input.cuda(), rois.cuda()) - assert torch.allclose(result_cpu, result_cuda.cpu()) - return result_cpu[0, 0] - - def test_empty_box(self): - img = torch.rand(5, 5) - out = self._simple_roi_align_rotated(img, [2, 3, 0, 0, 0], (7, 7)) - self.assertTrue((out == 0).all()) - - def test_roi_align_rotated_gradcheck_cpu(self): - dtype = torch.float64 - device = torch.device("cpu") - roi_align_rotated_op = ROIAlignRotated( - output_size=(5, 5), spatial_scale=0.5, sampling_ratio=1 - ).to(dtype=dtype, device=device) - x = torch.rand(1, 1, 10, 10, dtype=dtype, device=device, requires_grad=True) - # roi format is (batch index, x_center, y_center, width, height, angle) - rois = torch.tensor( - [[0, 4.5, 4.5, 9, 9, 0], [0, 2, 7, 4, 4, 0], [0, 7, 7, 4, 4, 0]], - dtype=dtype, - device=device, - ) - - def func(input): - return roi_align_rotated_op(input, rois) - - assert gradcheck(func, (x,)), "gradcheck failed for RoIAlignRotated CPU" - assert gradcheck(func, (x.transpose(2, 3),)), "gradcheck failed for RoIAlignRotated CPU" - - @unittest.skipIf(not torch.cuda.is_available(), "CUDA not available") - def test_roi_align_rotated_gradient_cuda(self): - """ - Compute gradients for ROIAlignRotated with multiple bounding boxes on the GPU, - and compare the result with ROIAlign - """ - # torch.manual_seed(123) - dtype = torch.float64 - device = torch.device("cuda") - pool_h, pool_w = (5, 5) - - roi_align = ROIAlign(output_size=(pool_h, pool_w), spatial_scale=1, sampling_ratio=2).to( - device=device - ) - - roi_align_rotated = ROIAlignRotated( - output_size=(pool_h, pool_w), spatial_scale=1, sampling_ratio=2 - ).to(device=device) - - x = torch.rand(1, 1, 10, 10, dtype=dtype, device=device, requires_grad=True) - # x_rotated = x.clone() won't work (will lead to grad_fun=CloneBackward)! 
- x_rotated = Variable(x.data.clone(), requires_grad=True) - - # roi_rotated format is (batch index, x_center, y_center, width, height, angle) - rois_rotated = torch.tensor( - [[0, 4.5, 4.5, 9, 9, 0], [0, 2, 7, 4, 4, 0], [0, 7, 7, 4, 4, 0]], - dtype=dtype, - device=device, - ) - - y_rotated = roi_align_rotated(x_rotated, rois_rotated) - s_rotated = y_rotated.sum() - s_rotated.backward() - - # roi format is (batch index, x1, y1, x2, y2) - rois = torch.tensor( - [[0, 0, 0, 9, 9], [0, 0, 5, 4, 9], [0, 5, 5, 9, 9]], dtype=dtype, device=device - ) - - y = roi_align(x, rois) - s = y.sum() - s.backward() - - assert torch.allclose( - x.grad, x_rotated.grad - ), "gradients for ROIAlign and ROIAlignRotated mismatch on CUDA" - - -if __name__ == "__main__": - unittest.main() diff --git a/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py b/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py deleted file mode 100644 index ee3171bcb7c4a5066560723108b56e055f18be45..0000000000000000000000000000000000000000 --- a/spaces/TheStinger/Ilaria_RVC/lib/infer_pack/modules/F0Predictor/DioF0Predictor.py +++ /dev/null @@ -1,90 +0,0 @@ -from lib.infer_pack.modules.F0Predictor.F0Predictor import F0Predictor -import pyworld -import numpy as np - - -class DioF0Predictor(F0Predictor): - def __init__(self, hop_length=512, f0_min=50, f0_max=1100, sampling_rate=44100): - self.hop_length = hop_length - self.f0_min = f0_min - self.f0_max = f0_max - self.sampling_rate = sampling_rate - - def interpolate_f0(self, f0): - """ - 对F0进行插值处理 - """ - - data = np.reshape(f0, (f0.size, 1)) - - vuv_vector = np.zeros((data.size, 1), dtype=np.float32) - vuv_vector[data > 0.0] = 1.0 - vuv_vector[data <= 0.0] = 0.0 - - ip_data = data - - frame_number = data.size - last_value = 0.0 - for i in range(frame_number): - if data[i] <= 0.0: - j = i + 1 - for j in range(i + 1, frame_number): - if data[j] > 0.0: - break - if j < frame_number - 1: - if last_value > 0.0: - step = (data[j] - data[i - 1]) / float(j - i) - for k in range(i, j): - ip_data[k] = data[i - 1] + step * (k - i + 1) - else: - for k in range(i, j): - ip_data[k] = data[j] - else: - for k in range(i, frame_number): - ip_data[k] = last_value - else: - ip_data[i] = data[i] # 这里可能存在一个没有必要的拷贝 - last_value = data[i] - - return ip_data[:, 0], vuv_vector[:, 0] - - def resize_f0(self, x, target_len): - source = np.array(x) - source[source < 0.001] = np.nan - target = np.interp( - np.arange(0, len(source) * target_len, len(source)) / target_len, - np.arange(0, len(source)), - source, - ) - res = np.nan_to_num(target) - return res - - def compute_f0(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 1) - return self.interpolate_f0(self.resize_f0(f0, p_len))[0] - - def compute_f0_uv(self, wav, p_len=None): - if p_len is None: - p_len = wav.shape[0] // self.hop_length - f0, t = pyworld.dio( - wav.astype(np.double), - fs=self.sampling_rate, - f0_floor=self.f0_min, - f0_ceil=self.f0_max, - frame_period=1000 * self.hop_length / self.sampling_rate, - ) - f0 = pyworld.stonemask(wav.astype(np.double), f0, t, self.sampling_rate) - for index, pitch in enumerate(f0): - f0[index] = round(pitch, 
1) - return self.interpolate_f0(self.resize_f0(f0, p_len)) diff --git a/spaces/Tuana/what-would-mother-say/custom_nodes/__init__.py b/spaces/Tuana/what-would-mother-say/custom_nodes/__init__.py deleted file mode 100644 index e69de29bb2d1d6434b8b29ae775ad8c2e48c5391..0000000000000000000000000000000000000000 diff --git a/spaces/VIPLab/Track-Anything/tracker/model/aggregate.py b/spaces/VIPLab/Track-Anything/tracker/model/aggregate.py deleted file mode 100644 index 7622391fb3ac9aa8b515df88cf3ea5297b367538..0000000000000000000000000000000000000000 --- a/spaces/VIPLab/Track-Anything/tracker/model/aggregate.py +++ /dev/null @@ -1,17 +0,0 @@ -import torch -import torch.nn.functional as F - - -# Soft aggregation from STM -def aggregate(prob, dim, return_logits=False): - new_prob = torch.cat([ - torch.prod(1-prob, dim=dim, keepdim=True), - prob - ], dim).clamp(1e-7, 1-1e-7) - logits = torch.log((new_prob /(1-new_prob))) - prob = F.softmax(logits, dim=dim) - - if return_logits: - return logits, prob - else: - return prob \ No newline at end of file diff --git a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Lockchat.py b/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Lockchat.py deleted file mode 100644 index 1bce74035403bf8615e68ccfcc9deb7e0151817a..0000000000000000000000000000000000000000 --- a/spaces/VickyKira/NASAGPT/g4f/Provider/Providers/Lockchat.py +++ /dev/null @@ -1,32 +0,0 @@ -import requests -import os -import json -from ...typing import sha256, Dict, get_type_hints -url = 'http://supertest.lockchat.app' -model = ['gpt-4', 'gpt-3.5-turbo'] -supports_stream = True -needs_auth = False - -def _create_completion(model: str, messages: list, stream: bool, temperature: float = 0.7, **kwargs): - - payload = { - "temperature": 0.7, - "messages": messages, - "model": model, - "stream": True, - } - headers = { - "user-agent": "ChatX/39 CFNetwork/1408.0.4 Darwin/22.5.0", - } - response = requests.post("http://supertest.lockchat.app/v1/chat/completions", - json=payload, headers=headers, stream=True) - for token in response.iter_lines(): - if b'The model: `gpt-4` does not exist' in token: - print('error, retrying...') - _create_completion(model=model, messages=messages, stream=stream, temperature=temperature, **kwargs) - if b"content" in token: - token = json.loads(token.decode('utf-8').split('data: ')[1])['choices'][0]['delta'].get('content') - if token: yield (token) - -params = f'g4f.Providers.{os.path.basename(__file__)[:-3]} supports: ' + \ - '(%s)' % ', '.join([f"{name}: {get_type_hints(_create_completion)[name].__name__}" for name in _create_completion.__code__.co_varnames[:_create_completion.__code__.co_argcount]]) \ No newline at end of file diff --git a/spaces/Widium/Image-Recreation/functions/compute.py b/spaces/Widium/Image-Recreation/functions/compute.py deleted file mode 100644 index 3305359a311d0666c4e8334076110bc84664f9e5..0000000000000000000000000000000000000000 --- a/spaces/Widium/Image-Recreation/functions/compute.py +++ /dev/null @@ -1,38 +0,0 @@ -# *************************************************************************** # -# # -# compute.py # -# # -# By: Widium # -# Github : https://github.com/widium # -# # -# Created: 2023/05/05 18:51:08 by Widium # -# Updated: 2023/05/05 18:51:08 by Widium # -# # -# **************************************************************************** # - -from tensorflow import Variable -from tensorflow.keras.optimizers import Optimizer - -from .image import clip_pixel - -# ======================================== # - -def optimize_gradients( - 
gradients, - optimizer : Optimizer, - generated_img : Variable, -): - """ - Optimize gradients, apply them to the generated image, and clip its pixel values. - - Args: - gradients: The gradients to be optimized. - optimizer: The optimizer used for optimizing the gradients. - generated_img: The generated image variable that will be updated. - loss: The loss value used for optimization. - - Returns: - Variable: The updated generated image variable. - """ - optimizer.apply_gradients(grads_and_vars=[(gradients, generated_img)]) - generated_img.assign(clip_pixel(generated_img)) \ No newline at end of file diff --git a/spaces/Xhaheen/Regex_by_OpenAI/app.py b/spaces/Xhaheen/Regex_by_OpenAI/app.py deleted file mode 100644 index 4c0a3e088aa993cd4c561b6b6708d8b25047a62d..0000000000000000000000000000000000000000 --- a/spaces/Xhaheen/Regex_by_OpenAI/app.py +++ /dev/null @@ -1,55 +0,0 @@ -import openai -import numpy as np -import os -import json -import gradio as gr - -openai.api_key = os.environ["api_key"] -model = os.environ["model"] - - -def happytt(temperature,max_tokens,text,stop): - try: - s = json.loads(stop) - response = openai.Completion.create( - model=model, - prompt=text, - temperature=temperature, - max_tokens=max_tokens, - top_p=1, - frequency_penalty=0, - presence_penalty=0, - stop=s - ) - except json.JSONDecodeError: - response = openai.Completion.create( - model=model, - prompt=text, - temperature=temperature, - max_tokens=max_tokens, - top_p=1, - frequency_penalty=0, - presence_penalty=0 - ) - - return response.choices[0].text - - -title = "OpenAI Codex" -description = '''OpenAI Codex is an artificial intelligence model developed by OpenAI. -It parses natural language and generates code in response. -It is used to power GitHub Copilot, a programming autocompletion -tool developed for Code generation. 
-Try following prompts and tweak temperatures in following links - -https://www.pragnakalp.com/experimenting-with-openai-codex/ -https://betterprogramming.pub/i-beta-tested-openais-codex-and-the-results-are-spooky-good-e282a1874c79 -https://beta.openai.com/examples?category=code''' - - -iface = gr.Interface( happytt,[ gr.inputs.Slider(0, 1, step=0.1),gr.inputs.Slider(150, 4000, step=1), - gr.inputs.Textbox(type='str', - label="input prompt"), - gr.inputs.Textbox(type='str', - label="list of tokens, when to finish generating", - placeholder='["", "import"]')],"text", title = title, description = description ) -iface.launch(debug=True) \ No newline at end of file diff --git a/spaces/XzJosh/JM-Bert-VITS2/README.md b/spaces/XzJosh/JM-Bert-VITS2/README.md deleted file mode 100644 index 8f4a64e7ec2c9480563fc77753d4d2fd2f095da5..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/JM-Bert-VITS2/README.md +++ /dev/null @@ -1,5 +0,0 @@ ---- -license: mit -sdk: gradio -title: AI剑魔③ ---- \ No newline at end of file diff --git a/spaces/XzJosh/JM-Bert-VITS2/resample.py b/spaces/XzJosh/JM-Bert-VITS2/resample.py deleted file mode 100644 index 2ed1685654a371c5722168e9987809b05b1cb224..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/JM-Bert-VITS2/resample.py +++ /dev/null @@ -1,42 +0,0 @@ -import os -import argparse -import librosa -import numpy as np -from multiprocessing import Pool, cpu_count - -import soundfile -from scipy.io import wavfile -from tqdm import tqdm - - -def process(item): - spkdir, wav_name, args = item - speaker = spkdir.replace("\\", "/").split("/")[-1] - wav_path = os.path.join(args.in_dir, speaker, wav_name) - if os.path.exists(wav_path) and '.wav' in wav_path: - os.makedirs(os.path.join(args.out_dir, speaker), exist_ok=True) - wav, sr = librosa.load(wav_path, sr=args.sr) - soundfile.write( - os.path.join(args.out_dir, speaker, wav_name), - wav, - sr - ) - - - -if __name__ == "__main__": - parser = argparse.ArgumentParser() - parser.add_argument("--sr", type=int, default=44100, help="sampling rate") - parser.add_argument("--in_dir", type=str, default="./raw", help="path to source dir") - parser.add_argument("--out_dir", type=str, default="./dataset", help="path to target dir") - args = parser.parse_args() - # processs = 8 - processs = cpu_count()-2 if cpu_count() >4 else 1 - pool = Pool(processes=processs) - - for speaker in os.listdir(args.in_dir): - spk_dir = os.path.join(args.in_dir, speaker) - if os.path.isdir(spk_dir): - print(spk_dir) - for _ in tqdm(pool.imap_unordered(process, [(spk_dir, i, args) for i in os.listdir(spk_dir) if i.endswith("wav")])): - pass diff --git a/spaces/XzJosh/ShanBao-Bert-VITS2/mel_processing.py b/spaces/XzJosh/ShanBao-Bert-VITS2/mel_processing.py deleted file mode 100644 index 50435ecf88ef4fb6c1d47f3e6edd04c3ea7d3e80..0000000000000000000000000000000000000000 --- a/spaces/XzJosh/ShanBao-Bert-VITS2/mel_processing.py +++ /dev/null @@ -1,112 +0,0 @@ -import math -import os -import random -import torch -from torch import nn -import torch.nn.functional as F -import torch.utils.data -import numpy as np -import librosa -import librosa.util as librosa_util -from librosa.util import normalize, pad_center, tiny -from scipy.signal import get_window -from scipy.io.wavfile import read -from librosa.filters import mel as librosa_mel_fn - -MAX_WAV_VALUE = 32768.0 - - -def dynamic_range_compression_torch(x, C=1, clip_val=1e-5): - """ - PARAMS - ------ - C: compression factor - """ - return torch.log(torch.clamp(x, min=clip_val) * C) - - -def 
dynamic_range_decompression_torch(x, C=1): - """ - PARAMS - ------ - C: compression factor used to compress - """ - return torch.exp(x) / C - - -def spectral_normalize_torch(magnitudes): - output = dynamic_range_compression_torch(magnitudes) - return output - - -def spectral_de_normalize_torch(magnitudes): - output = dynamic_range_decompression_torch(magnitudes) - return output - - -mel_basis = {} -hann_window = {} - - -def spectrogram_torch(y, n_fft, sampling_rate, hop_size, win_size, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - return spec - - -def spec_to_mel_torch(spec, n_fft, num_mels, sampling_rate, fmin, fmax): - global mel_basis - dtype_device = str(spec.dtype) + '_' + str(spec.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=spec.dtype, device=spec.device) - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - return spec - - -def mel_spectrogram_torch(y, n_fft, num_mels, sampling_rate, hop_size, win_size, fmin, fmax, center=False): - if torch.min(y) < -1.: - print('min value is ', torch.min(y)) - if torch.max(y) > 1.: - print('max value is ', torch.max(y)) - - global mel_basis, hann_window - dtype_device = str(y.dtype) + '_' + str(y.device) - fmax_dtype_device = str(fmax) + '_' + dtype_device - wnsize_dtype_device = str(win_size) + '_' + dtype_device - if fmax_dtype_device not in mel_basis: - mel = librosa_mel_fn(sampling_rate, n_fft, num_mels, fmin, fmax) - mel_basis[fmax_dtype_device] = torch.from_numpy(mel).to(dtype=y.dtype, device=y.device) - if wnsize_dtype_device not in hann_window: - hann_window[wnsize_dtype_device] = torch.hann_window(win_size).to(dtype=y.dtype, device=y.device) - - y = torch.nn.functional.pad(y.unsqueeze(1), (int((n_fft-hop_size)/2), int((n_fft-hop_size)/2)), mode='reflect') - y = y.squeeze(1) - - spec = torch.stft(y, n_fft, hop_length=hop_size, win_length=win_size, window=hann_window[wnsize_dtype_device], - center=center, pad_mode='reflect', normalized=False, onesided=True, return_complex=False) - - spec = torch.sqrt(spec.pow(2).sum(-1) + 1e-6) - - spec = torch.matmul(mel_basis[fmax_dtype_device], spec) - spec = spectral_normalize_torch(spec) - - return spec diff --git a/spaces/YE01/saya-vits/text/__init__.py b/spaces/YE01/saya-vits/text/__init__.py deleted file mode 100644 index 48ae82f3e40ecd1bf17a7de78d87790327af3362..0000000000000000000000000000000000000000 --- a/spaces/YE01/saya-vits/text/__init__.py +++ /dev/null @@ -1,56 +0,0 @@ -""" from https://github.com/keithito/tacotron """ -from text import cleaners -from text.symbols import symbols - - -# Mappings from symbol 
to numeric ID and vice versa: -_symbol_to_id = {s: i for i, s in enumerate(symbols)} -_id_to_symbol = {i: s for i, s in enumerate(symbols)} - - -def text_to_sequence(text, cleaner_names): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - cleaner_names: names of the cleaner functions to run the text through - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [] - - clean_text = _clean_text(text, cleaner_names) - for symbol in clean_text: - if symbol not in _symbol_to_id.keys(): - continue - symbol_id = _symbol_to_id[symbol] - sequence += [symbol_id] - return sequence - - -def cleaned_text_to_sequence(cleaned_text): - '''Converts a string of text to a sequence of IDs corresponding to the symbols in the text. - Args: - text: string to convert to a sequence - Returns: - List of integers corresponding to the symbols in the text - ''' - sequence = [_symbol_to_id[symbol] for symbol in cleaned_text if symbol in _symbol_to_id.keys()] - return sequence - - -def sequence_to_text(sequence): - '''Converts a sequence of IDs back to a string''' - result = '' - for symbol_id in sequence: - s = _id_to_symbol[symbol_id] - result += s - return result - - -def _clean_text(text, cleaner_names): - for name in cleaner_names: - cleaner = getattr(cleaners, name) - if not cleaner: - raise Exception('Unknown cleaner: %s' % name) - text = cleaner(text) - return text diff --git a/spaces/Yabo/ControlVideo/predict.py b/spaces/Yabo/ControlVideo/predict.py deleted file mode 100644 index 4fedccae33520ea2725dc0e5cacf35e59c113813..0000000000000000000000000000000000000000 --- a/spaces/Yabo/ControlVideo/predict.py +++ /dev/null @@ -1,164 +0,0 @@ -# Prediction interface for Cog ⚙️ -# https://github.com/replicate/cog/blob/main/docs/python.md -import os -import numpy as np -import argparse -import imageio -import torch - -from einops import rearrange -from diffusers import DDIMScheduler, AutoencoderKL -from transformers import CLIPTextModel, CLIPTokenizer -import controlnet_aux -from controlnet_aux import OpenposeDetector, CannyDetector, MidasDetector - -from models.pipeline_controlvideo import ControlVideoPipeline -from models.util import save_videos_grid, read_video, get_annotation -from models.unet import UNet3DConditionModel -from models.controlnet import ControlNetModel3D -from models.RIFE.IFNet_HDv3 import IFNet -from cog import BasePredictor, Input, Path - - -sd_path = "checkpoints/stable-diffusion-v1-5" -inter_path = "checkpoints/flownet.pkl" -controlnet_dict = { - "pose": "checkpoints/sd-controlnet-openpose", - "depth": "checkpoints/sd-controlnet-depth", - "canny": "checkpoints/sd-controlnet-canny", -} - -controlnet_parser_dict = { - "pose": OpenposeDetector, - "depth": MidasDetector, - "canny": CannyDetector, -} - -POS_PROMPT = " ,best quality, extremely detailed, HD, ultra-realistic, 8K, HQ, masterpiece, trending on artstation, art, smooth" -NEG_PROMPT = "longbody, lowres, bad anatomy, bad hands, missing fingers, extra digit, fewer difits, cropped, worst quality, low quality, deformed body, bloated, ugly, unrealistic" - - -class Predictor(BasePredictor): - def setup(self): - """Load the model into memory to make running multiple predictions efficient""" - - self.tokenizer = CLIPTokenizer.from_pretrained(sd_path, subfolder="tokenizer") - self.text_encoder = CLIPTextModel.from_pretrained( - sd_path, subfolder="text_encoder" - ).to(dtype=torch.float16) - self.vae = 
AutoencoderKL.from_pretrained(sd_path, subfolder="vae").to( - dtype=torch.float16 - ) - self.unet = UNet3DConditionModel.from_pretrained_2d( - sd_path, subfolder="unet" - ).to(dtype=torch.float16) - self.interpolater = IFNet(ckpt_path=inter_path).to(dtype=torch.float16) - self.scheduler = DDIMScheduler.from_pretrained(sd_path, subfolder="scheduler") - self.controlnet = { - k: ControlNetModel3D.from_pretrained_2d(controlnet_dict[k]).to( - dtype=torch.float16 - ) - for k in ["depth", "canny", "pose"] - } - self.annotator = {k: controlnet_parser_dict[k]() for k in ["depth", "canny"]} - self.annotator["pose"] = controlnet_parser_dict["pose"].from_pretrained( - "lllyasviel/ControlNet", cache_dir="checkpoints" - ) - - def predict( - self, - prompt: str = Input( - description="Text description of target video", - default="A striking mallard floats effortlessly on the sparkling pond.", - ), - video_path: Path = Input(description="source video"), - condition: str = Input( - default="depth", - choices=["depth", "canny", "pose"], - description="Condition of structure sequence", - ), - video_length: int = Input( - default=15, description="Length of synthesized video" - ), - smoother_steps: str = Input( - default="19, 20", - description="Timesteps at which using interleaved-frame smoother, separate with comma", - ), - is_long_video: bool = Input( - default=False, - description="Whether to use hierarchical sampler to produce long video", - ), - num_inference_steps: int = Input( - description="Number of denoising steps", default=50 - ), - guidance_scale: float = Input( - description="Scale for classifier-free guidance", ge=1, le=20, default=12.5 - ), - seed: str = Input( - default=None, description="Random seed. Leave blank to randomize the seed" - ), - ) -> Path: - """Run a single prediction on the model""" - if seed is None: - seed = int.from_bytes(os.urandom(2), "big") - else: - seed = int(seed) - print(f"Using seed: {seed}") - - generator = torch.Generator(device="cuda") - generator.manual_seed(seed) - - pipe = ControlVideoPipeline( - vae=self.vae, - text_encoder=self.text_encoder, - tokenizer=self.tokenizer, - unet=self.unet, - controlnet=self.controlnet[condition], - interpolater=self.interpolater, - scheduler=self.scheduler, - ) - - pipe.enable_vae_slicing() - pipe.enable_xformers_memory_efficient_attention() - pipe.to("cuda") - - # Step 1. Read a video - video = read_video(video_path=str(video_path), video_length=video_length) - - # Step 2. Parse a video to conditional frames - pil_annotation = get_annotation(video, self.annotator[condition]) - - # Step 3. 
inference - smoother_steps = [int(s) for s in smoother_steps.split(",")] - - if is_long_video: - window_size = int(np.sqrt(video_length)) - sample = pipe.generate_long_video( - prompt + POS_PROMPT, - video_length=video_length, - frames=pil_annotation, - num_inference_steps=num_inference_steps, - smooth_steps=smoother_steps, - window_size=window_size, - generator=generator, - guidance_scale=guidance_scale, - negative_prompt=NEG_PROMPT, - ).videos - else: - sample = pipe( - prompt + POS_PROMPT, - video_length=video_length, - frames=pil_annotation, - num_inference_steps=num_inference_steps, - smooth_steps=smoother_steps, - generator=generator, - guidance_scale=guidance_scale, - negative_prompt=NEG_PROMPT, - ).videos - - out_path = "/tmp/out.mp4" - save_videos_grid(sample, out_path) - del pipe - torch.cuda.empty_cache() - - return Path(out_path) diff --git a/spaces/Yiqin/ChatVID/app.py b/spaces/Yiqin/ChatVID/app.py deleted file mode 100644 index 3b9e5e0566db0f6c73b667031b883b5cf1b61ad9..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/app.py +++ /dev/null @@ -1,101 +0,0 @@ -import time - -import gradio as gr - -from config.config_utils import get_config -from model import Captioner, VicunaHandler - - -def mirror(x): - return x - - -def clear_chat(conv_template): - return "", [], conv_template - - -def clear_four(): - return [], "", "", "" - - -def respond(input, chat_history, conv): - bot_response, new_conv = handler.gr_chat(input, conv) - chat_history.append((input, bot_response)) - time.sleep(0.1) - return "", chat_history, new_conv - - -# global variables -config = get_config('config/infer.yaml') -captioner = Captioner(config) -handler = VicunaHandler(config['vicuna']) - -with gr.Blocks(theme=gr.themes.Soft()) as demo: - gr.Markdown("##

ChatVID

") - gr.Markdown("""🔥 [ChatVID](https://github.com/InvincibleWyq/ChatVID) is a - video chatbot. Please give us a ⭐ Star!""") - gr.Markdown("""🎥 You may use the example video by clicking it.""") - gr.Markdown("""🚀 For any questions or suggestions, feel free to drop Yiqin - an email at wyq1217@outlook.com - or open an issue.""") - - with gr.Row(): - with gr.Column(): - video_path = gr.Video(label="Video") - - with gr.Column(): - upload_button = gr.Button("""Upload & Process. - (Click and wait 3min until dialog box appears)""") - - num_frames = gr.Slider( - minimum=5, - value=12, - maximum=12, - step=1, - label="Number of frames") - - gr.Markdown("## Video Examples") - gr.Examples( - examples=[ - "examples/cook_720p.mp4", - "examples/temple_of_heaven_720p.mp4" - ], - inputs=video_path, - outputs=video_path, - fn=mirror, - cache_examples=False, - ) - - with gr.Column(): - caption_box = gr.Textbox("") - chatbot = gr.Chatbot() - conv_template = gr.State("") # determined by the video - conv = gr.State("") # updated thourghout the conversation - with gr.Row(visible=False) as input: - with gr.Column(scale=0.7): - txt = gr.Textbox( - show_label=False, - placeholder="Enter text and press enter").style( - container=False) - with gr.Column(scale=0.15, min_width=0): - run_button = gr.Button("RUN!") - with gr.Column(scale=0.15, min_width=0): - clear_button = gr.Button("CLEAR") - - # conv_template and conv are `Conversation` objects - upload_button.click(lambda: gr.update(visible=False), None, input).then( - clear_four, None, [chatbot, conv, conv_template, caption_box]).then( - captioner.caption_video, [video_path, num_frames], - [conv_template]).then(mirror, [conv_template], [caption_box]).then( - handler.gr_chatbot_init, [conv_template], - [conv_template, conv]).then(lambda: gr.update(visible=True), - None, input) - - txt.submit( - respond, inputs=[txt, chatbot, conv], outputs=[txt, chatbot, conv]) - run_button.click( - respond, inputs=[txt, chatbot, conv], outputs=[txt, chatbot, conv]) - clear_button.click( - clear_chat, inputs=[conv_template], outputs=[txt, chatbot, conv]) - -demo.queue(default_enabled=False).launch() diff --git a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/evaluation/rotated_coco_evaluation.py b/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/evaluation/rotated_coco_evaluation.py deleted file mode 100644 index ea6d1b381dcf106339a03f08577df673ad439c46..0000000000000000000000000000000000000000 --- a/spaces/Yiqin/ChatVID/model/vision/grit_src/third_party/CenterNet2/detectron2/evaluation/rotated_coco_evaluation.py +++ /dev/null @@ -1,207 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import itertools -import json -import numpy as np -import os -import torch -from pycocotools.cocoeval import COCOeval, maskUtils - -from detectron2.structures import BoxMode, RotatedBoxes, pairwise_iou_rotated -from detectron2.utils.file_io import PathManager - -from .coco_evaluation import COCOEvaluator - - -class RotatedCOCOeval(COCOeval): - @staticmethod - def is_rotated(box_list): - if type(box_list) == np.ndarray: - return box_list.shape[1] == 5 - elif type(box_list) == list: - if box_list == []: # cannot decide the box_dim - return False - return np.all( - np.array( - [ - (len(obj) == 5) and ((type(obj) == list) or (type(obj) == np.ndarray)) - for obj in box_list - ] - ) - ) - return False - - @staticmethod - def boxlist_to_tensor(boxlist, output_box_dim): - if type(boxlist) == np.ndarray: - box_tensor = torch.from_numpy(boxlist) - elif type(boxlist) == list: - if boxlist == []: - return torch.zeros((0, output_box_dim), dtype=torch.float32) - else: - box_tensor = torch.FloatTensor(boxlist) - else: - raise Exception("Unrecognized boxlist type") - - input_box_dim = box_tensor.shape[1] - if input_box_dim != output_box_dim: - if input_box_dim == 4 and output_box_dim == 5: - box_tensor = BoxMode.convert(box_tensor, BoxMode.XYWH_ABS, BoxMode.XYWHA_ABS) - else: - raise Exception( - "Unable to convert from {}-dim box to {}-dim box".format( - input_box_dim, output_box_dim - ) - ) - return box_tensor - - def compute_iou_dt_gt(self, dt, gt, is_crowd): - if self.is_rotated(dt) or self.is_rotated(gt): - # TODO: take is_crowd into consideration - assert all(c == 0 for c in is_crowd) - dt = RotatedBoxes(self.boxlist_to_tensor(dt, output_box_dim=5)) - gt = RotatedBoxes(self.boxlist_to_tensor(gt, output_box_dim=5)) - return pairwise_iou_rotated(dt, gt) - else: - # This is the same as the classical COCO evaluation - return maskUtils.iou(dt, gt, is_crowd) - - def computeIoU(self, imgId, catId): - p = self.params - if p.useCats: - gt = self._gts[imgId, catId] - dt = self._dts[imgId, catId] - else: - gt = [_ for cId in p.catIds for _ in self._gts[imgId, cId]] - dt = [_ for cId in p.catIds for _ in self._dts[imgId, cId]] - if len(gt) == 0 and len(dt) == 0: - return [] - inds = np.argsort([-d["score"] for d in dt], kind="mergesort") - dt = [dt[i] for i in inds] - if len(dt) > p.maxDets[-1]: - dt = dt[0 : p.maxDets[-1]] - - assert p.iouType == "bbox", "unsupported iouType for iou computation" - - g = [g["bbox"] for g in gt] - d = [d["bbox"] for d in dt] - - # compute iou between each dt and gt region - iscrowd = [int(o["iscrowd"]) for o in gt] - - # Note: this function is copied from cocoeval.py in cocoapi - # and the major difference is here. - ious = self.compute_iou_dt_gt(d, g, iscrowd) - return ious - - -class RotatedCOCOEvaluator(COCOEvaluator): - """ - Evaluate object proposal/instance detection outputs using COCO-like metrics and APIs, - with rotated boxes support. - Note: this uses IOU only and does not consider angle differences. - """ - - def process(self, inputs, outputs): - """ - Args: - inputs: the inputs to a COCO model (e.g., GeneralizedRCNN). - It is a list of dict. Each dict corresponds to an image and - contains keys like "height", "width", "file_name", "image_id". - outputs: the outputs of a COCO model. It is a list of dicts with key - "instances" that contains :class:`Instances`. 
- """ - for input, output in zip(inputs, outputs): - prediction = {"image_id": input["image_id"]} - - if "instances" in output: - instances = output["instances"].to(self._cpu_device) - - prediction["instances"] = self.instances_to_json(instances, input["image_id"]) - if "proposals" in output: - prediction["proposals"] = output["proposals"].to(self._cpu_device) - self._predictions.append(prediction) - - def instances_to_json(self, instances, img_id): - num_instance = len(instances) - if num_instance == 0: - return [] - - boxes = instances.pred_boxes.tensor.numpy() - if boxes.shape[1] == 4: - boxes = BoxMode.convert(boxes, BoxMode.XYXY_ABS, BoxMode.XYWH_ABS) - boxes = boxes.tolist() - scores = instances.scores.tolist() - classes = instances.pred_classes.tolist() - - results = [] - for k in range(num_instance): - result = { - "image_id": img_id, - "category_id": classes[k], - "bbox": boxes[k], - "score": scores[k], - } - - results.append(result) - return results - - def _eval_predictions(self, predictions, img_ids=None): # img_ids: unused - """ - Evaluate predictions on the given tasks. - Fill self._results with the metrics of the tasks. - """ - self._logger.info("Preparing results for COCO format ...") - coco_results = list(itertools.chain(*[x["instances"] for x in predictions])) - - # unmap the category ids for COCO - if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"): - reverse_id_mapping = { - v: k for k, v in self._metadata.thing_dataset_id_to_contiguous_id.items() - } - for result in coco_results: - result["category_id"] = reverse_id_mapping[result["category_id"]] - - if self._output_dir: - file_path = os.path.join(self._output_dir, "coco_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(coco_results)) - f.flush() - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info("Evaluating predictions ...") - - assert self._tasks is None or set(self._tasks) == { - "bbox" - }, "[RotatedCOCOEvaluator] Only bbox evaluation is supported" - coco_eval = ( - self._evaluate_predictions_on_coco(self._coco_api, coco_results) - if len(coco_results) > 0 - else None # cocoapi does not handle empty results very well - ) - - task = "bbox" - res = self._derive_coco_results( - coco_eval, task, class_names=self._metadata.get("thing_classes") - ) - self._results[task] = res - - def _evaluate_predictions_on_coco(self, coco_gt, coco_results): - """ - Evaluate the coco results using COCOEval API. 
- """ - assert len(coco_results) > 0 - - coco_dt = coco_gt.loadRes(coco_results) - - # Only bbox is supported for now - coco_eval = RotatedCOCOeval(coco_gt, coco_dt, iouType="bbox") - - coco_eval.evaluate() - coco_eval.accumulate() - coco_eval.summarize() - - return coco_eval diff --git a/spaces/Yuki1111/Yuki/Dockerfile b/spaces/Yuki1111/Yuki/Dockerfile deleted file mode 100644 index 6c01c09373883afcb4ea34ae2d316cd596e1737b..0000000000000000000000000000000000000000 --- a/spaces/Yuki1111/Yuki/Dockerfile +++ /dev/null @@ -1,21 +0,0 @@ -FROM node:18-bullseye-slim - -RUN apt-get update && \ - -apt-get install -y git - -RUN git clone https://gitgud.io/khanon/oai-reverse-proxy.git /app - -WORKDIR /app - -RUN npm install - -COPY Dockerfile greeting.md* .env* ./ - -RUN npm run build - -EXPOSE 7860 - -ENV NODE_ENV=production - -CMD [ "npm", "start" ] \ No newline at end of file diff --git a/spaces/Yuliang/ICON/lib/net/HGFilters.py b/spaces/Yuliang/ICON/lib/net/HGFilters.py deleted file mode 100644 index b8578cc42fb6c2630fea884ea86e5d53ab5f6d5d..0000000000000000000000000000000000000000 --- a/spaces/Yuliang/ICON/lib/net/HGFilters.py +++ /dev/null @@ -1,197 +0,0 @@ - -# -*- coding: utf-8 -*- - -# Max-Planck-Gesellschaft zur Förderung der Wissenschaften e.V. (MPG) is -# holder of all proprietary rights on this computer program. -# You can only use this computer program if you have closed -# a license agreement with MPG or you get the right to use the computer -# program from someone who is authorized to grant you that right. -# Any use of the computer program without a valid license is prohibited and -# liable to prosecution. -# -# Copyright©2019 Max-Planck-Gesellschaft zur Förderung -# der Wissenschaften e.V. (MPG). acting on behalf of its Max Planck Institute -# for Intelligent Systems. All rights reserved. -# -# Contact: ps-license@tuebingen.mpg.de - -from lib.net.net_util import * -import torch.nn as nn -import torch.nn.functional as F - - -class HourGlass(nn.Module): - def __init__(self, num_modules, depth, num_features, opt): - super(HourGlass, self).__init__() - self.num_modules = num_modules - self.depth = depth - self.features = num_features - self.opt = opt - - self._generate_network(self.depth) - - def _generate_network(self, level): - self.add_module('b1_' + str(level), - ConvBlock(self.features, self.features, self.opt)) - - self.add_module('b2_' + str(level), - ConvBlock(self.features, self.features, self.opt)) - - if level > 1: - self._generate_network(level - 1) - else: - self.add_module('b2_plus_' + str(level), - ConvBlock(self.features, self.features, self.opt)) - - self.add_module('b3_' + str(level), - ConvBlock(self.features, self.features, self.opt)) - - def _forward(self, level, inp): - # Upper branch - up1 = inp - up1 = self._modules['b1_' + str(level)](up1) - - # Lower branch - low1 = F.avg_pool2d(inp, 2, stride=2) - low1 = self._modules['b2_' + str(level)](low1) - - if level > 1: - low2 = self._forward(level - 1, low1) - else: - low2 = low1 - low2 = self._modules['b2_plus_' + str(level)](low2) - - low3 = low2 - low3 = self._modules['b3_' + str(level)](low3) - - # NOTE: for newer PyTorch (1.3~), it seems that training results are degraded due to implementation diff in F.grid_sample - # if the pretrained model behaves weirdly, switch with the commented line. - # NOTE: I also found that "bicubic" works better. 
- up2 = F.interpolate(low3, - scale_factor=2, - mode='bicubic', - align_corners=True) - # up2 = F.interpolate(low3, scale_factor=2, mode='nearest) - - return up1 + up2 - - def forward(self, x): - return self._forward(self.depth, x) - - -class HGFilter(nn.Module): - def __init__(self, opt, num_modules, in_dim): - super(HGFilter, self).__init__() - self.num_modules = num_modules - - self.opt = opt - [k, s, d, p] = self.opt.conv1 - - # self.conv1 = nn.Conv2d(in_dim, 64, kernel_size=7, stride=2, padding=3) - self.conv1 = nn.Conv2d(in_dim, - 64, - kernel_size=k, - stride=s, - dilation=d, - padding=p) - - if self.opt.norm == 'batch': - self.bn1 = nn.BatchNorm2d(64) - elif self.opt.norm == 'group': - self.bn1 = nn.GroupNorm(32, 64) - - if self.opt.hg_down == 'conv64': - self.conv2 = ConvBlock(64, 64, self.opt) - self.down_conv2 = nn.Conv2d(64, - 128, - kernel_size=3, - stride=2, - padding=1) - elif self.opt.hg_down == 'conv128': - self.conv2 = ConvBlock(64, 128, self.opt) - self.down_conv2 = nn.Conv2d(128, - 128, - kernel_size=3, - stride=2, - padding=1) - elif self.opt.hg_down == 'ave_pool': - self.conv2 = ConvBlock(64, 128, self.opt) - else: - raise NameError('Unknown Fan Filter setting!') - - self.conv3 = ConvBlock(128, 128, self.opt) - self.conv4 = ConvBlock(128, 256, self.opt) - - # Stacking part - for hg_module in range(self.num_modules): - self.add_module('m' + str(hg_module), - HourGlass(1, opt.num_hourglass, 256, self.opt)) - - self.add_module('top_m_' + str(hg_module), - ConvBlock(256, 256, self.opt)) - self.add_module( - 'conv_last' + str(hg_module), - nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0)) - if self.opt.norm == 'batch': - self.add_module('bn_end' + str(hg_module), nn.BatchNorm2d(256)) - elif self.opt.norm == 'group': - self.add_module('bn_end' + str(hg_module), - nn.GroupNorm(32, 256)) - - self.add_module( - 'l' + str(hg_module), - nn.Conv2d(256, - opt.hourglass_dim, - kernel_size=1, - stride=1, - padding=0)) - - if hg_module < self.num_modules - 1: - self.add_module( - 'bl' + str(hg_module), - nn.Conv2d(256, 256, kernel_size=1, stride=1, padding=0)) - self.add_module( - 'al' + str(hg_module), - nn.Conv2d(opt.hourglass_dim, - 256, - kernel_size=1, - stride=1, - padding=0)) - - def forward(self, x): - x = F.relu(self.bn1(self.conv1(x)), True) - tmpx = x - if self.opt.hg_down == 'ave_pool': - x = F.avg_pool2d(self.conv2(x), 2, stride=2) - elif self.opt.hg_down in ['conv64', 'conv128']: - x = self.conv2(x) - x = self.down_conv2(x) - else: - raise NameError('Unknown Fan Filter setting!') - - x = self.conv3(x) - x = self.conv4(x) - - previous = x - - outputs = [] - for i in range(self.num_modules): - hg = self._modules['m' + str(i)](previous) - - ll = hg - ll = self._modules['top_m_' + str(i)](ll) - - ll = F.relu( - self._modules['bn_end' + str(i)]( - self._modules['conv_last' + str(i)](ll)), True) - - # Predict heatmaps - tmp_out = self._modules['l' + str(i)](ll) - outputs.append(tmp_out) - - if i < self.num_modules - 1: - ll = self._modules['bl' + str(i)](ll) - tmp_out_ = self._modules['al' + str(i)](tmp_out) - previous = previous + ll + tmp_out_ - - return outputs diff --git a/spaces/ZJunTvT/ZJunChat/modules/shared.py b/spaces/ZJunTvT/ZJunChat/modules/shared.py deleted file mode 100644 index a9e72580aa7ae48f907e923a09099513570a9ad8..0000000000000000000000000000000000000000 --- a/spaces/ZJunTvT/ZJunChat/modules/shared.py +++ /dev/null @@ -1,55 +0,0 @@ -from modules.presets import COMPLETION_URL, BALANCE_API_URL, USAGE_API_URL, API_HOST -import os -import queue - -class 
State: - interrupted = False - multi_api_key = False - completion_url = COMPLETION_URL - balance_api_url = BALANCE_API_URL - usage_api_url = USAGE_API_URL - - def interrupt(self): - self.interrupted = True - - def recover(self): - self.interrupted = False - - def set_api_host(self, api_host): - self.completion_url = f"https://{api_host}/v1/chat/completions" - self.balance_api_url = f"https://{api_host}/dashboard/billing/credit_grants" - self.usage_api_url = f"https://{api_host}/dashboard/billing/usage" - os.environ["OPENAI_API_BASE"] = f"https://{api_host}/v1" - - def reset_api_host(self): - self.completion_url = COMPLETION_URL - self.balance_api_url = BALANCE_API_URL - self.usage_api_url = USAGE_API_URL - os.environ["OPENAI_API_BASE"] = f"https://{API_HOST}/v1" - return API_HOST - - def reset_all(self): - self.interrupted = False - self.completion_url = COMPLETION_URL - - def set_api_key_queue(self, api_key_list): - self.multi_api_key = True - self.api_key_queue = queue.Queue() - for api_key in api_key_list: - self.api_key_queue.put(api_key) - - def switching_api_key(self, func): - if not hasattr(self, "api_key_queue"): - return func - - def wrapped(*args, **kwargs): - api_key = self.api_key_queue.get() - args[0].api_key = api_key - ret = func(*args, **kwargs) - self.api_key_queue.put(api_key) - return ret - - return wrapped - - -state = State() diff --git a/spaces/ZeroGPT/GPTZero/README.md b/spaces/ZeroGPT/GPTZero/README.md deleted file mode 100644 index 7bf482036a19a665c243f209eeb306a4ee31c2b6..0000000000000000000000000000000000000000 --- a/spaces/ZeroGPT/GPTZero/README.md +++ /dev/null @@ -1,18 +0,0 @@ ---- -title: GPTZero Alternative - AI Content Detector - ZeroGPT.CC -emoji: 🐠 -colorFrom: purple -colorTo: indigo -sdk: gradio -sdk_version: 3.27.0 -app_file: app.py -pinned: false -license: mit ---- - -# GPTZero Alternative - AI Content Detector - ZeroGPT.CC -ZeroGPT.cc is a state-of-the-art tool that can help you verify whether a text was generated by an AI tool such as Open AI ChatGPT, Google Bard, and Bing AI. This free platform uses advanced language models and sophisticated algorithms to analyze and identify content accurately. - -By using ZeroGPT.cc, you can be confident that your content is original and meets your high standards of quality. Whether you're a writer, a marketer, or a business owner, ZeroGPT.cc can streamline your content creation and management process, saving you time and effort in the long run. 
- -Check out website for more details: [AI Content Detector](https://zerogpt.cc) diff --git a/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Ashaar/app.py b/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Ashaar/app.py deleted file mode 100644 index 580d3b353dfe066a53293417f4380121aaa5827b..0000000000000000000000000000000000000000 --- a/spaces/aaaaaabbbbbbbdddddddduuuuulllll/Ashaar/app.py +++ /dev/null @@ -1,151 +0,0 @@ -import os -os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' -import gradio as gr -from transformers import pipeline -from transformers import AutoTokenizer, AutoModelForCausalLM -from Ashaar.utils import get_output_df, get_highlighted_patterns_html -from Ashaar.bait_analysis import BaitAnalysis -from langs import * -import sys -import json -import argparse - -arg_parser = argparse.ArgumentParser() -arg_parser.add_argument('--lang', type = str, default = 'ar') -args = arg_parser.parse_args() -lang = args.lang - -if lang == 'ar': - TITLE = TITLE_ar - DESCRIPTION = DESCRIPTION_ar - textbox_trg_text = textbox_trg_text_ar - textbox_inp_text = textbox_inp_text_ar - btn_trg_text = btn_trg_text_ar - btn_inp_text = btn_inp_text_ar - css = """ #textbox{ direction: RTL;}""" - -else: - TITLE = TITLE_en - DESCRIPTION = DESCRIPTION_en - textbox_trg_text = textbox_trg_text_en - textbox_inp_text = textbox_inp_text_en - btn_trg_text = btn_trg_text_en - btn_inp_text = btn_inp_text_en - css = "" - -gpt_tokenizer = AutoTokenizer.from_pretrained('arbml/ashaar_tokenizer') -model = AutoModelForCausalLM.from_pretrained('arbml/Ashaar_model') - -theme_to_token = json.load(open("extra/theme_tokens.json", "r")) -token_to_theme = {t:m for m,t in theme_to_token.items()} -meter_to_token = json.load(open("extra/meter_tokens.json", "r")) -token_to_meter = {t:m for m,t in meter_to_token.items()} - -analysis = BaitAnalysis() -meter, theme, qafiyah = "", "", "" - -def analyze(poem): - global meter,theme,qafiyah, generate_btn - shatrs = poem.split("\n") - baits = [' # '.join(shatrs[2*i:2*i+2]) for i in range(len(shatrs)//2)] - output = analysis.analyze(baits,override_tashkeel=True) - meter = output['meter'] - qafiyah = output['qafiyah'][0] - theme = output['theme'][-1] - df = get_output_df(output) - return get_highlighted_patterns_html(df), gr.Button.update(interactive=True) - -def generate(inputs, top_p = 3): - baits = inputs.split('\n') - if len(baits) % 2 !=0: - baits = baits[:-1] - poem = ' '.join(['<|bsep|> '+baits[i]+' <|vsep|> '+baits[i+1]+' ' for i in range(0, len(baits), 2)]) - prompt = f""" - {meter_to_token[meter]} {qafiyah} {theme_to_token[theme]} - <|psep|> - {poem} - """.strip() - print(prompt) - encoded_input = gpt_tokenizer(prompt, return_tensors='pt') - output = model.generate(**encoded_input, max_length = 512, top_p = 3, do_sample=True) - - result = "" - prev_token = "" - line_cnts = 0 - for i, beam in enumerate(output[:, len(encoded_input.input_ids[0]):]): - if line_cnts >= 10: - break - for token in beam: - if line_cnts >= 10: - break - decoded = gpt_tokenizer.decode(token) - if 'meter' in decoded or 'theme' in decoded: - break - if decoded in ["<|vsep|>", ""]: - result += "\n" - line_cnts+=1 - elif decoded in ['<|bsep|>', '<|psep|>', '']: - pass - else: - result += decoded - prev_token = decoded - else: - break - # return theme+" "+ f"من بحر {meter} مع قافية بحر ({qafiyah})" + "\n" +result - return result, gr.Button.update(interactive=False) - -examples = [ - [ -"""القلب أعلم يا عذول بدائه -وأحق منك بجفنه وبمائه""" - ], - [ -"""رمتِ الفؤادَ مليحة عذراءُ - بسهامِ لحظٍ ما لهنَّ دواءُ""" - ], - [ -"""أذَلَّ الحِرْصُ 
والطَّمَعُ الرِّقابَا -وقَد يَعفو الكَريمُ، إذا استَرَابَا""" - ] -] - -with gr.Blocks(theme=gr.themes.Soft(), css=css) as demo: - with gr.Row(): - with gr.Column(): - gr.HTML(TITLE) - gr.HTML(DESCRIPTION) - - with gr.Row(): - with gr.Column(): - textbox_output = gr.Textbox(lines=10, label=textbox_trg_text, elem_id="textbox") - with gr.Column(): - inputs = gr.Textbox(lines=10, label=textbox_inp_text, elem_id="textbox") - - - with gr.Row(): - with gr.Column(): - if lang == 'ar': - trg_btn = gr.Button(btn_trg_text, interactive=False) - else: - trg_btn = gr.Button(btn_trg_text) - - with gr.Column(): - if lang == 'ar': - inp_btn = gr.Button(btn_inp_text) - else: - inp_btn = gr.Button(btn_inp_text, interactive = False) - - with gr.Row(): - html_output = gr.HTML() - - if lang == 'en': - gr.Examples(examples, textbox_output) - inp_btn.click(generate, inputs = textbox_output, outputs=[inputs, inp_btn]) - trg_btn.click(analyze, inputs = textbox_output, outputs=[html_output,inp_btn]) - else: - gr.Examples(examples, inputs) - trg_btn.click(generate, inputs = inputs, outputs=[textbox_output, trg_btn]) - inp_btn.click(analyze, inputs = inputs, outputs=[html_output,trg_btn] ) - -# demo.launch(server_name = '0.0.0.0', share=True) -demo.launch() \ No newline at end of file diff --git a/spaces/aaronherrera/Calorie_Counter/README.md b/spaces/aaronherrera/Calorie_Counter/README.md deleted file mode 100644 index 81e13282a6cd285e755faaa3578821e036481f99..0000000000000000000000000000000000000000 --- a/spaces/aaronherrera/Calorie_Counter/README.md +++ /dev/null @@ -1,13 +0,0 @@ ---- -title: Calorie_Counter -emoji: 🔥 -colorFrom: yellow -colorTo: gray -sdk: gradio -sdk_version: 2.8.14 -app_file: app.py -pinned: false -license: apache-2.0 ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces#reference diff --git a/spaces/abhaskumarsinha/MinimalGPT-Felis_Catus/README.md b/spaces/abhaskumarsinha/MinimalGPT-Felis_Catus/README.md deleted file mode 100644 index d90a9617d0e09bd81278b23b82de65a9a1a2a0a8..0000000000000000000000000000000000000000 --- a/spaces/abhaskumarsinha/MinimalGPT-Felis_Catus/README.md +++ /dev/null @@ -1,65 +0,0 @@ ---- -license: mit -title: 'MinimalGPT: Felis Catus' -sdk: gradio -emoji: 😻 -colorFrom: gray -colorTo: blue -pinned: true ---- - -# MinimalGPT: Felis Catus - -[[`MinimalGPT`](https://github.com/abhaskumarsinha/MinimalGPT)] [[`Project Gutenberg Dataset`](https://www.kaggle.com/datasets/shubchat/1002-short-stories-from-project-guttenberg)] - - -This HuggingFace space serves as an illustrative application of the GitHub Repository: [MinimalGPT](https://github.com/abhaskumarsinha/MinimalGPT), which embodies a departure from conventional GPT models that undergo scaling and training on high-performance computing systems and clusters. The primary objective of the MinimalGPT project was to explore the extent to which a GPT model could be minimized in size. - -Within this HF space, we introduce a diminutive GPT model named [Felis Catus](https://en.wikipedia.org/wiki/Cat) (stray Cat), which boasts a mere 15 million parameters. What distinguishes this model is its training process, which was executed on a standard home computer CPU (specifically, an AMD Ryzen 5) without any reliance on GPU acceleration. Remarkably, the training duration lasted a mere 15 minutes, utilizing a dataset comprising a meager ~150,000 tokens of text. 
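For a rough sense of what those figures imply (a back-of-the-envelope estimate added here, assuming roughly a single pass over the corpus), the quoted dataset size and wall-clock time correspond to a CPU throughput in the low hundreds of tokens per second:

```
# Rough throughput implied by the figures above (assumes ~one pass over the ~150k-token corpus).
tokens, minutes = 150_000, 15
print(tokens / (minutes * 60))  # ~167 tokens per second on the AMD Ryzen 5 CPU
```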
- -At present, the Felis Catus model exhibits the capacity to generate a concise story excerpt consisting of 70 tokens, requiring a mere 5 to 7 words as input. The model's dictionary encompasses a modest 12,000 words. Moreover, we are presently engaged in endeavors to further scale the model in our forthcoming project. - -## Model Specifications - -``` -Model: "model" -_________________________________________________________________ - Layer (type) Output Shape Param # -================================================================= - input_1 (InputLayer) [(None, 10)] 0 - - embedding (Embedding) (None, 10, 128) 1597184 - - positional_embedding (Posit (None, 10, 128) 0 - ionalEmbedding) - - decoder (Decoder) (None, 10, 128) 71208 - - flatten (Flatten) (None, 1280) 0 - - dense (Dense) (None, 12479) 15985599 - - tf.nn.softmax (TFOpLambda) (None, 12479) 0 - -================================================================= -Total params: 17,653,991 -Trainable params: 17,653,991 -Non-trainable params: 0 -_________________________________________________________________ -``` - -## Hyperparameters - -``` -gpt_input: 10 [Max input size, d_k] -d_model: 128 [Embedding size, d_model] -h: 8 [Number of multiheads, h] -decoder_stacks: 1 [Number of decoder stacks, stack] -GPT_attention: True [Attention Layer implementation type - OpenAI style] -``` - -## References -1. Vaswani, Ashish, et al. "Attention is all you need." Advances in neural information processing systems 30 (2017). -2. Radford, Alec, et al. "Language models are unsupervised multitask learners." OpenAI blog 1.8 (2019): 9. -3. Project Gutenberg. (n.d.). Retrieved FebruApril 20, 2023, from www.gutenberg.org. -4. Abadi, Martın, et al. "TensorFlow: Large-scale machine learning on heterogeneous systems, software available from tensorflow. org (2015)." URL https://www.tensorflow.org (2015). 
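As a closing sanity check on the specifications above (an added back-of-the-envelope calculation, not part of the original project), the size of the final `Dense` projection follows directly from the listed hyperparameters: the flattened context is `gpt_input × d_model = 10 × 128 = 1280`, and projecting it onto the 12,479-entry output vocabulary reported in the summary reproduces the `dense (Dense)` parameter count.

```
# Sanity check of the "dense (Dense)" row in the model summary above.
# gpt_input and d_model come from the hyperparameters; 12479 is the Dense output size in the summary.
gpt_input, d_model, vocab_out = 10, 128, 12479
flatten_dim = gpt_input * d_model                    # 1280, matches the flatten layer output
dense_params = flatten_dim * vocab_out + vocab_out   # weight matrix plus bias
print(dense_params)                                  # 15985599, matches the summary
```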
\ No newline at end of file diff --git a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/utils/__init__.py b/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/utils/__init__.py deleted file mode 100644 index f2678b321c295bcceaef945111ac3524be19d6e4..0000000000000000000000000000000000000000 --- a/spaces/abhishek/sketch-to-image/annotator/uniformer_base/mmseg/core/utils/__init__.py +++ /dev/null @@ -1,3 +0,0 @@ -from .misc import add_prefix - -__all__ = ['add_prefix'] diff --git a/spaces/abidlabs/Webcam-background-remover/app.py b/spaces/abidlabs/Webcam-background-remover/app.py deleted file mode 100644 index 6987d628c8d3875b8fcc062c1692b76cd1d0e751..0000000000000000000000000000000000000000 --- a/spaces/abidlabs/Webcam-background-remover/app.py +++ /dev/null @@ -1,9 +0,0 @@ -import gradio as gr -gr.Interface.load( - "spaces/eugenesiow/remove-bg", - theme="default", - css=".footer {display:none !important}", - inputs="webcam", - title=None, - article=None, - description="This demo (based on: https://huggingface.co/spaces/eugenesiow/remove-bg) removes the background from your webcam photo!").launch() \ No newline at end of file diff --git a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/darwin/cocoapy/cocoatypes.py b/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/darwin/cocoapy/cocoatypes.py deleted file mode 100644 index b30019e89576030042d6bd399d6f7c56afe56287..0000000000000000000000000000000000000000 --- a/spaces/abrar-lohia/text-2-character-anim/pyrender/.eggs/pyglet-2.0.5-py3.10.egg/pyglet/libs/darwin/cocoapy/cocoatypes.py +++ /dev/null @@ -1,85 +0,0 @@ -from ctypes import * - -import sys, platform, struct - -__LP64__ = (8*struct.calcsize("P") == 64) -__i386__ = (platform.machine() == 'i386') - -PyObjectEncoding = b'{PyObject=@}' - -def encoding_for_ctype(vartype): - typecodes = {c_char:b'c', c_int:b'i', c_short:b's', c_long:b'l', c_longlong:b'q', - c_ubyte:b'C', c_uint:b'I', c_ushort:b'S', c_ulong:b'L', c_ulonglong:b'Q', - c_float:b'f', c_double:b'd', c_bool:b'B', c_char_p:b'*', c_void_p:b'@', - py_object:PyObjectEncoding} - return typecodes.get(vartype, b'?') - -# Note CGBase.h located at -# /System/Library/Frameworks/ApplicationServices.framework/Frameworks/CoreGraphics.framework/Headers/CGBase.h -# defines CGFloat as double if __LP64__, otherwise it's a float. -if __LP64__: - NSInteger = c_long - NSUInteger = c_ulong - CGFloat = c_double - NSPointEncoding = b'{CGPoint=dd}' - NSSizeEncoding = b'{CGSize=dd}' - NSRectEncoding = b'{CGRect={CGPoint=dd}{CGSize=dd}}' - NSRangeEncoding = b'{_NSRange=QQ}' -else: - NSInteger = c_int - NSUInteger = c_uint - CGFloat = c_float - NSPointEncoding = b'{_NSPoint=ff}' - NSSizeEncoding = b'{_NSSize=ff}' - NSRectEncoding = b'{_NSRect={_NSPoint=ff}{_NSSize=ff}}' - NSRangeEncoding = b'{_NSRange=II}' - -NSIntegerEncoding = encoding_for_ctype(NSInteger) -NSUIntegerEncoding = encoding_for_ctype(NSUInteger) -CGFloatEncoding = encoding_for_ctype(CGFloat) - -# Special case so that NSImage.initWithCGImage_size_() will work. 
-CGImageEncoding = b'{CGImage=}' - -NSZoneEncoding = b'{_NSZone=}' - -# from /System/Library/Frameworks/Foundation.framework/Headers/NSGeometry.h -class NSPoint(Structure): - _fields_ = [ ("x", CGFloat), ("y", CGFloat) ] -CGPoint = NSPoint - -class NSSize(Structure): - _fields_ = [ ("width", CGFloat), ("height", CGFloat) ] -CGSize = NSSize - -class NSRect(Structure): - _fields_ = [ ("origin", NSPoint), ("size", NSSize) ] -CGRect = NSRect - -def NSMakeSize(w, h): - return NSSize(w, h) - -def NSMakeRect(x, y, w, h): - return NSRect(NSPoint(x, y), NSSize(w, h)) - -# NSDate.h -NSTimeInterval = c_double - -CFIndex = c_long -UniChar = c_ushort -unichar = c_wchar # (actually defined as c_ushort in NSString.h, but need ctypes to convert properly) -CGGlyph = c_ushort - -# CFRange struct defined in CFBase.h -# This replaces the CFRangeMake(LOC, LEN) macro. -class CFRange(Structure): - _fields_ = [ ("location", CFIndex), ("length", CFIndex) ] - -# NSRange.h (Note, not defined the same as CFRange) -class NSRange(Structure): - _fields_ = [ ("location", NSUInteger), ("length", NSUInteger) ] - -NSZeroPoint = NSPoint(0,0) - -CFTypeID = c_ulong -CFNumberType = c_uint32 diff --git a/spaces/ahmedgamal777722/flowise/Dockerfile b/spaces/ahmedgamal777722/flowise/Dockerfile deleted file mode 100644 index 9c0ad22929159b8c4d192856163699570fd27307..0000000000000000000000000000000000000000 --- a/spaces/ahmedgamal777722/flowise/Dockerfile +++ /dev/null @@ -1,26 +0,0 @@ -FROM node:18-alpine -USER root - -# Arguments that can be passed at build time -ARG FLOWISE_PATH=/usr/local/lib/node_modules/flowise -ARG BASE_PATH=/root/.flowise -ARG DATABASE_PATH=$BASE_PATH -ARG APIKEY_PATH=$BASE_PATH -ARG SECRETKEY_PATH=$BASE_PATH -ARG LOG_PATH=$BASE_PATH/logs - -# Install dependencies -RUN apk add --no-cache git python3 py3-pip make g++ build-base cairo-dev pango-dev chromium - -ENV PUPPETEER_SKIP_DOWNLOAD=true -ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium-browser - -# Install Flowise globally -RUN npm install -g flowise - -# Configure Flowise directories using the ARG -RUN mkdir -p $LOG_PATH $FLOWISE_PATH/uploads && chmod -R 777 $LOG_PATH $FLOWISE_PATH - -WORKDIR /data - -CMD ["npx", "flowise", "start"] \ No newline at end of file diff --git a/spaces/ai-forever/Kandinsky2.1/README.md b/spaces/ai-forever/Kandinsky2.1/README.md deleted file mode 100644 index bfa1dc422e96cef5190028c0e3dde6be6fc94c14..0000000000000000000000000000000000000000 --- a/spaces/ai-forever/Kandinsky2.1/README.md +++ /dev/null @@ -1,12 +0,0 @@ ---- -title: Kandinsky2.1 -emoji: 📉 -colorFrom: indigo -colorTo: green -sdk: gradio -sdk_version: 3.11.0 -app_file: app.py -pinned: false ---- - -Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference diff --git a/spaces/akalin/DeepDanbooru_string/app.py b/spaces/akalin/DeepDanbooru_string/app.py deleted file mode 100644 index 49019837c9207cc68cb37be0342f3bc44fd0decb..0000000000000000000000000000000000000000 --- a/spaces/akalin/DeepDanbooru_string/app.py +++ /dev/null @@ -1,185 +0,0 @@ -#!/usr/bin/env python - -from __future__ import annotations - -import argparse -import functools -import os -import html -import pathlib -import tarfile - -import deepdanbooru as dd -import gradio as gr -import huggingface_hub -import numpy as np -import PIL.Image -import tensorflow as tf -import piexif -import piexif.helper - -TITLE = 'DeepDanbooru String' - -TOKEN = os.environ['TOKEN'] -MODEL_REPO = 'CikeyQI/DeepDanbooru_string' -MODEL_FILENAME = 'model-resnet_custom_v3.h5' -LABEL_FILENAME 
= 'tags.txt' - - -def parse_args() -> argparse.Namespace: - parser = argparse.ArgumentParser() - parser.add_argument('--score-slider-step', type=float, default=0.05) - parser.add_argument('--score-threshold', type=float, default=0.5) - parser.add_argument('--theme', type=str, default='dark-grass') - parser.add_argument('--live', action='store_true') - parser.add_argument('--share', action='store_true') - parser.add_argument('--port', type=int) - parser.add_argument('--disable-queue', - dest='enable_queue', - action='store_false') - parser.add_argument('--allow-flagging', type=str, default='never') - return parser.parse_args() - - -def load_sample_image_paths() -> list[pathlib.Path]: - image_dir = pathlib.Path('images') - if not image_dir.exists(): - dataset_repo = 'hysts/sample-images-TADNE' - path = huggingface_hub.hf_hub_download(dataset_repo, - 'images.tar.gz', - repo_type='dataset', - use_auth_token=TOKEN) - with tarfile.open(path) as f: - f.extractall() - return sorted(image_dir.glob('*')) - - -def load_model() -> tf.keras.Model: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - MODEL_FILENAME, - use_auth_token=TOKEN) - model = tf.keras.models.load_model(path) - return model - - -def load_labels() -> list[str]: - path = huggingface_hub.hf_hub_download(MODEL_REPO, - LABEL_FILENAME, - use_auth_token=TOKEN) - with open(path) as f: - labels = [line.strip() for line in f.readlines()] - return labels - -def plaintext_to_html(text): - text = "
<p>" + "<br>\n".join([f"{html.escape(x)}" for x in text.split('\n')]) + "</p>
" - return text - -def predict(image: PIL.Image.Image, score_threshold: float, - model: tf.keras.Model, labels: list[str]) -> dict[str, float]: - rawimage = image - _, height, width, _ = model.input_shape - image = np.asarray(image) - image = tf.image.resize(image, - size=(height, width), - method=tf.image.ResizeMethod.AREA, - preserve_aspect_ratio=True) - image = image.numpy() - image = dd.image.transform_and_pad_image(image, width, height) - image = image / 255. - probs = model.predict(image[None, ...])[0] - probs = probs.astype(float) - res = dict() - for prob, label in zip(probs.tolist(), labels): - if prob < score_threshold: - continue - res[label] = prob - b = dict(sorted(res.items(),key=lambda item:item[1], reverse=True)) - a = ', '.join(list(b.keys())).replace('_',' ').replace('(','\(').replace(')','\)') - c = ', '.join(list(b.keys())) - - items = rawimage.info - geninfo = '' - - if "exif" in rawimage.info: - exif = piexif.load(rawimage.info["exif"]) - exif_comment = (exif or {}).get("Exif", {}).get(piexif.ExifIFD.UserComment, b'') - try: - exif_comment = piexif.helper.UserComment.load(exif_comment) - except ValueError: - exif_comment = exif_comment.decode('utf8', errors="ignore") - - items['exif comment'] = exif_comment - geninfo = exif_comment - - for field in ['jfif', 'jfif_version', 'jfif_unit', 'jfif_density', 'dpi', 'exif', - 'loop', 'background', 'timestamp', 'duration']: - items.pop(field, None) - - geninfo = items.get('parameters', geninfo) - - info = f""" -

<p><h4>PNG Info</h4></p>
-""" - for key, text in items.items(): - info += f""" -<div>
-<p><b>{plaintext_to_html(str(key))}</b></p>
-<p>{plaintext_to_html(str(text))}</p>
-</div>
-""".strip()+"\n" - - if len(info) == 0: - message = "Nothing found in the image." - info = f"<div><p>{message}</p></div>
" - - return (a,c,res,info) - - -def main(): - args = parse_args() - model = load_model() - labels = load_labels() - - func = functools.partial(predict, model=model, labels=labels) - func = functools.update_wrapper(func, predict) - - gr.Interface( - func, - [ - gr.inputs.Image(type='pil', label='Input'), - gr.inputs.Slider(0, - 1, - step=args.score_slider_step, - default=args.score_threshold, - label='Score Threshold'), - ], - [ - gr.outputs.Textbox(label='Output (string)'), - gr.outputs.Textbox(label='Output (raw string)'), - gr.outputs.Label(label='Output (label)'), - gr.outputs.HTML() - ], - examples=[ - ['miku.jpg',0.5], - ['miku2.jpg',0.5] - ], - title=TITLE, - description=''' -Demo for [KichangKim/DeepDanbooru](https://github.com/KichangKim/DeepDanbooru) with "ready to copy" prompt and a prompt analyzer. - -Modified from [hysts/DeepDanbooru](https://huggingface.co/spaces/hysts/DeepDanbooru) - -PNG Info code forked from [AUTOMATIC1111/stable-diffusion-webui](https://github.com/AUTOMATIC1111/stable-diffusion-webui) - ''', - theme=args.theme, - allow_flagging=args.allow_flagging, - live=args.live, - ).launch( - enable_queue=args.enable_queue, - server_port=args.port, - share=args.share, - ) - - -if __name__ == '__main__': - main() diff --git a/spaces/akdeniz27/turkish-qna-with-xlm-roberta/app.py b/spaces/akdeniz27/turkish-qna-with-xlm-roberta/app.py deleted file mode 100644 index ba57693a30ce7b7b8e4f2dce7e2f70e1a47c4f14..0000000000000000000000000000000000000000 --- a/spaces/akdeniz27/turkish-qna-with-xlm-roberta/app.py +++ /dev/null @@ -1,82 +0,0 @@ -# Turkish Q&A with XLM-RoBERTa - -from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline -import sentencepiece -import torch -import streamlit as st -import pandas as pd - -text_1 = """Mustafa Kemal Atatürk, 1881 yılında Selanik'te Kocakasım Mahallesi, Islahhane Caddesi'ndeki üç katlı pembe evde doğdu. \ -Babası Ali Rıza Efendi, annesi Zübeyde Hanım'dır. Baba tarafından dedesi Hafız Ahmet Efendi, 14-15. yüzyıllarda Konya ve Aydın'dan \ -Makedonya'ya yerleştirilmiş Kocacık Yörüklerindendir. Annesi Zübeyde Hanım ise Selanik yakınlarındaki Langaza kasabasına yerleşmiş \ -eski bir Türk ailesinin kızıdır. Ali Rıza Efendi, 1871 yılında Zübeyde Hanım'la evlendi. Atatürk'ün beş kardeşinden dördü küçük \ -yaşlarda öldü, sadece Makbule (Atadan) Hanım 1956 yılına değin yaşadı.""" - - -text_2 = """Dünya çapında 40 milyondan fazla insana bulaşan ve 1.1 milyondan fazla insanın ölümüne sebep olan \ -corona virüsüne karşı Pfizer ile BioNTech'in geliştirdiği aşının ilk görüntüleri ortaya çıktı. Aşının fabrikadaki \ -ilk görüntülerini değerlendiren Pfizer'ın Birleşik Krallık CEO'su, "Üretim bandında aşıyı görmek beni neşelendirdi" \ -dedi. ABD merkezli çokuluslu ilaç şirketi Pfizer ile Türk bilim insanlarının kurduğu BioNTech’in geliştirdiği corona \ -virüsü aşısında sona gelindi. Pfizer, paylaştığı video ile bütün dünyayı heyecanlandıran gelişmeyi duyurdu. Şirket, \ -Belçika’daki Puurs’ta geliştirilen Covid-19 aşılarının seri üretim bandındaki üretim aşamasını uluslararası kamuoyu \ -ile paylaştı. Almanya’nın Mainz kentinde Türk profesör Uğur Şahin ile eşi Özlem Türeci’nin kurduğu ve yönettiği \ -biyoteknoloji şirketi BioNTech ile aşı sürecini sürdüren Pfizer’ın küçük şişelerde binlerce corona virüsü aşısı \ -üretmeye başladığı belirtildi. 
Pfizer, aşının güvenli ve etkili olduğunun klinik olarak da kanıtlanması ve resmi \ -mercilerden de onay alınması durumunda üretilen aşının dağıtılacağını duyurdu.""" - -question_list_1 = ["Mustafa Kemal hangi yıl doğdu?", - "Mustafa Kemal'in dedesi kimdir?"] - -question_list_2 = ["Corona virüsü dünya çapında kaç kişiye bulaştı?", - "BioNTech nerededir?" ] - -st.set_page_config(layout="wide") - -st.title("Turkish Q&A with Multilingual \ - XLM-RoBERTa Models") - -model_list = ['alon-albalak/xlm-roberta-large-xquad', - 'deepset/xlm-roberta-large-squad2'] - -st.sidebar.header("Select Model") -model_checkpoint = st.sidebar.radio("", model_list) - -st.sidebar.write("For details of models:") -st.sidebar.write("https://huggingface.co/alon-albalak") -st.sidebar.write("https://huggingface.co/deepset") - -st.sidebar.write("For XQUAD Dataset:") -st.sidebar.write("https://huggingface.co/datasets/xquad") - -st.subheader("Select Context and Question") -context_1 = st.text_area("Context #1", text_1, height=128) -context_2 = st.text_area("Context #2", text_2, height=128) -context_3 = st.text_area("New Context", value="", height=128) - -context = st.radio("Select Context", ("Context #1", "Context #2", "New Context")) - -if context == "Context #1": - selected_context = context_1 - selected_question = st.radio("Select Question", question_list_1) -elif context == "Context #2": - selected_context = context_2 - selected_question = st.radio("Select Question", question_list_2) -elif context == "New Context": - selected_context = context_3 - selected_question = st.text_area("New Question", value="", height=64) - -@st.cache_resource -def setModel(model_checkpoint): - model = AutoModelForQuestionAnswering.from_pretrained(model_checkpoint) - tokenizer = AutoTokenizer.from_pretrained(model_checkpoint) - return pipeline("question-answering", model=model, tokenizer=tokenizer) - -Run_Button = st.button("Run", key=None) -if Run_Button == True: - - qna_pipeline = setModel(model_checkpoint) - output = qna_pipeline(question=selected_question, context=selected_context) - - st.header("Answer") - st.write(output) - \ No newline at end of file diff --git a/spaces/akhaliq/Detic/detic/modeling/meta_arch/custom_rcnn.py b/spaces/akhaliq/Detic/detic/modeling/meta_arch/custom_rcnn.py deleted file mode 100644 index 9a5ac721d42e40a8b4f28508b10a932cef827fcf..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Detic/detic/modeling/meta_arch/custom_rcnn.py +++ /dev/null @@ -1,232 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. 
-import copy -import logging -import numpy as np -from typing import Dict, List, Optional, Tuple -import torch -from torch import nn -import json -from detectron2.utils.events import get_event_storage -from detectron2.config import configurable -from detectron2.structures import ImageList, Instances, Boxes -import detectron2.utils.comm as comm - -from detectron2.modeling.meta_arch.build import META_ARCH_REGISTRY -from detectron2.modeling.meta_arch.rcnn import GeneralizedRCNN -from detectron2.modeling.postprocessing import detector_postprocess -from detectron2.utils.visualizer import Visualizer, _create_text_labels -from detectron2.data.detection_utils import convert_image_to_rgb - -from torch.cuda.amp import autocast -from ..text.text_encoder import build_text_encoder -from ..utils import load_class_freq, get_fed_loss_inds - -@META_ARCH_REGISTRY.register() -class CustomRCNN(GeneralizedRCNN): - ''' - Add image labels - ''' - @configurable - def __init__( - self, - with_image_labels = False, - dataset_loss_weight = [], - fp16 = False, - sync_caption_batch = False, - roi_head_name = '', - cap_batch_ratio = 4, - with_caption = False, - dynamic_classifier = False, - **kwargs): - """ - """ - self.with_image_labels = with_image_labels - self.dataset_loss_weight = dataset_loss_weight - self.fp16 = fp16 - self.with_caption = with_caption - self.sync_caption_batch = sync_caption_batch - self.roi_head_name = roi_head_name - self.cap_batch_ratio = cap_batch_ratio - self.dynamic_classifier = dynamic_classifier - self.return_proposal = False - if self.dynamic_classifier: - self.freq_weight = kwargs.pop('freq_weight') - self.num_classes = kwargs.pop('num_classes') - self.num_sample_cats = kwargs.pop('num_sample_cats') - super().__init__(**kwargs) - assert self.proposal_generator is not None - if self.with_caption: - assert not self.dynamic_classifier - self.text_encoder = build_text_encoder(pretrain=True) - for v in self.text_encoder.parameters(): - v.requires_grad = False - - - @classmethod - def from_config(cls, cfg): - ret = super().from_config(cfg) - ret.update({ - 'with_image_labels': cfg.WITH_IMAGE_LABELS, - 'dataset_loss_weight': cfg.MODEL.DATASET_LOSS_WEIGHT, - 'fp16': cfg.FP16, - 'with_caption': cfg.MODEL.WITH_CAPTION, - 'sync_caption_batch': cfg.MODEL.SYNC_CAPTION_BATCH, - 'dynamic_classifier': cfg.MODEL.DYNAMIC_CLASSIFIER, - 'roi_head_name': cfg.MODEL.ROI_HEADS.NAME, - 'cap_batch_ratio': cfg.MODEL.CAP_BATCH_RATIO, - }) - if ret['dynamic_classifier']: - ret['freq_weight'] = load_class_freq( - cfg.MODEL.ROI_BOX_HEAD.CAT_FREQ_PATH, - cfg.MODEL.ROI_BOX_HEAD.FED_LOSS_FREQ_WEIGHT) - ret['num_classes'] = cfg.MODEL.ROI_HEADS.NUM_CLASSES - ret['num_sample_cats'] = cfg.MODEL.NUM_SAMPLE_CATS - return ret - - - def inference( - self, - batched_inputs: Tuple[Dict[str, torch.Tensor]], - detected_instances: Optional[List[Instances]] = None, - do_postprocess: bool = True, - ): - assert not self.training - assert detected_instances is None - - images = self.preprocess_image(batched_inputs) - features = self.backbone(images.tensor) - proposals, _ = self.proposal_generator(images, features, None) - results, _ = self.roi_heads(images, features, proposals) - if do_postprocess: - assert not torch.jit.is_scripting(), \ - "Scripting is not supported for postprocess." 
- return CustomRCNN._postprocess( - results, batched_inputs, images.image_sizes) - else: - return results - - - def forward(self, batched_inputs: List[Dict[str, torch.Tensor]]): - """ - Add ann_type - Ignore proposal loss when training with image labels - """ - if not self.training: - return self.inference(batched_inputs) - - images = self.preprocess_image(batched_inputs) - - ann_type = 'box' - gt_instances = [x["instances"].to(self.device) for x in batched_inputs] - if self.with_image_labels: - for inst, x in zip(gt_instances, batched_inputs): - inst._ann_type = x['ann_type'] - inst._pos_category_ids = x['pos_category_ids'] - ann_types = [x['ann_type'] for x in batched_inputs] - assert len(set(ann_types)) == 1 - ann_type = ann_types[0] - if ann_type in ['prop', 'proptag']: - for t in gt_instances: - t.gt_classes *= 0 - - if self.fp16: # TODO (zhouxy): improve - with autocast(): - features = self.backbone(images.tensor.half()) - features = {k: v.float() for k, v in features.items()} - else: - features = self.backbone(images.tensor) - - cls_features, cls_inds, caption_features = None, None, None - - if self.with_caption and 'caption' in ann_type: - inds = [torch.randint(len(x['captions']), (1,))[0].item() \ - for x in batched_inputs] - caps = [x['captions'][ind] for ind, x in zip(inds, batched_inputs)] - caption_features = self.text_encoder(caps).float() - if self.sync_caption_batch: - caption_features = self._sync_caption_features( - caption_features, ann_type, len(batched_inputs)) - - if self.dynamic_classifier and ann_type != 'caption': - cls_inds = self._sample_cls_inds(gt_instances, ann_type) # inds, inv_inds - ind_with_bg = cls_inds[0].tolist() + [-1] - cls_features = self.roi_heads.box_predictor[ - 0].cls_score.zs_weight[:, ind_with_bg].permute(1, 0).contiguous() - - classifier_info = cls_features, cls_inds, caption_features - proposals, proposal_losses = self.proposal_generator( - images, features, gt_instances) - - if self.roi_head_name in ['StandardROIHeads', 'CascadeROIHeads']: - proposals, detector_losses = self.roi_heads( - images, features, proposals, gt_instances) - else: - proposals, detector_losses = self.roi_heads( - images, features, proposals, gt_instances, - ann_type=ann_type, classifier_info=classifier_info) - - if self.vis_period > 0: - storage = get_event_storage() - if storage.iter % self.vis_period == 0: - self.visualize_training(batched_inputs, proposals) - - losses = {} - losses.update(detector_losses) - if self.with_image_labels: - if ann_type in ['box', 'prop', 'proptag']: - losses.update(proposal_losses) - else: # ignore proposal loss for non-bbox data - losses.update({k: v * 0 for k, v in proposal_losses.items()}) - else: - losses.update(proposal_losses) - if len(self.dataset_loss_weight) > 0: - dataset_sources = [x['dataset_source'] for x in batched_inputs] - assert len(set(dataset_sources)) == 1 - dataset_source = dataset_sources[0] - for k in losses: - losses[k] *= self.dataset_loss_weight[dataset_source] - - if self.return_proposal: - return proposals, losses - else: - return losses - - - def _sync_caption_features(self, caption_features, ann_type, BS): - has_caption_feature = (caption_features is not None) - BS = (BS * self.cap_batch_ratio) if (ann_type == 'box') else BS - rank = torch.full( - (BS, 1), comm.get_rank(), dtype=torch.float32, - device=self.device) - if not has_caption_feature: - caption_features = rank.new_zeros((BS, 512)) - caption_features = torch.cat([caption_features, rank], dim=1) - global_caption_features = 
comm.all_gather(caption_features) - caption_features = torch.cat( - [x.to(self.device) for x in global_caption_features], dim=0) \ - if has_caption_feature else None # (NB) x (D + 1) - return caption_features - - - def _sample_cls_inds(self, gt_instances, ann_type='box'): - if ann_type == 'box': - gt_classes = torch.cat( - [x.gt_classes for x in gt_instances]) - C = len(self.freq_weight) - freq_weight = self.freq_weight - else: - gt_classes = torch.cat( - [torch.tensor( - x._pos_category_ids, - dtype=torch.long, device=x.gt_classes.device) \ - for x in gt_instances]) - C = self.num_classes - freq_weight = None - assert gt_classes.max() < C, '{} {}'.format(gt_classes.max(), C) - inds = get_fed_loss_inds( - gt_classes, self.num_sample_cats, C, - weight=freq_weight) - cls_id_map = gt_classes.new_full( - (self.num_classes + 1,), len(inds)) - cls_id_map[inds] = torch.arange(len(inds), device=cls_id_map.device) - return inds, cls_id_map \ No newline at end of file diff --git a/spaces/akhaliq/Mask2Former/mask2former/evaluation/instance_evaluation.py b/spaces/akhaliq/Mask2Former/mask2former/evaluation/instance_evaluation.py deleted file mode 100644 index bc2facec351e5f6ee965ee9acb4394f12c023f54..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/Mask2Former/mask2former/evaluation/instance_evaluation.py +++ /dev/null @@ -1,107 +0,0 @@ -# Copyright (c) Facebook, Inc. and its affiliates. -import contextlib -import copy -import io -import itertools -import json -import logging -import numpy as np -import os -import pickle -from collections import OrderedDict -import pycocotools.mask as mask_util -import torch -from pycocotools.coco import COCO -from pycocotools.cocoeval import COCOeval -from tabulate import tabulate - -import detectron2.utils.comm as comm -from detectron2.config import CfgNode -from detectron2.data import MetadataCatalog -from detectron2.data.datasets.coco import convert_to_coco_json -from detectron2.evaluation.coco_evaluation import COCOEvaluator, _evaluate_predictions_on_coco -from detectron2.evaluation.fast_eval_api import COCOeval_opt -from detectron2.structures import Boxes, BoxMode, pairwise_iou -from detectron2.utils.file_io import PathManager -from detectron2.utils.logger import create_small_table - - -# modified from COCOEvaluator for instance segmetnat -class InstanceSegEvaluator(COCOEvaluator): - """ - Evaluate AR for object proposals, AP for instance detection/segmentation, AP - for keypoint detection outputs using COCO's metrics. - See http://cocodataset.org/#detection-eval and - http://cocodataset.org/#keypoints-eval to understand its metrics. - The metrics range from 0 to 100 (instead of 0 to 1), where a -1 or NaN means - the metric cannot be computed (e.g. due to no predictions made). - - In addition to COCO, this evaluator is able to support any bounding box detection, - instance segmentation, or keypoint detection dataset. - """ - - def _eval_predictions(self, predictions, img_ids=None): - """ - Evaluate predictions. Fill self._results with the metrics of the tasks. 
- """ - self._logger.info("Preparing results for COCO format ...") - coco_results = list(itertools.chain(*[x["instances"] for x in predictions])) - tasks = self._tasks or self._tasks_from_predictions(coco_results) - - # unmap the category ids for COCO - if hasattr(self._metadata, "thing_dataset_id_to_contiguous_id"): - dataset_id_to_contiguous_id = self._metadata.thing_dataset_id_to_contiguous_id - # all_contiguous_ids = list(dataset_id_to_contiguous_id.values()) - # num_classes = len(all_contiguous_ids) - # assert min(all_contiguous_ids) == 0 and max(all_contiguous_ids) == num_classes - 1 - - reverse_id_mapping = {v: k for k, v in dataset_id_to_contiguous_id.items()} - for result in coco_results: - category_id = result["category_id"] - # assert category_id < num_classes, ( - # f"A prediction has class={category_id}, " - # f"but the dataset only has {num_classes} classes and " - # f"predicted class id should be in [0, {num_classes - 1}]." - # ) - assert category_id in reverse_id_mapping, ( - f"A prediction has class={category_id}, " - f"but the dataset only has class ids in {dataset_id_to_contiguous_id}." - ) - result["category_id"] = reverse_id_mapping[category_id] - - if self._output_dir: - file_path = os.path.join(self._output_dir, "coco_instances_results.json") - self._logger.info("Saving results to {}".format(file_path)) - with PathManager.open(file_path, "w") as f: - f.write(json.dumps(coco_results)) - f.flush() - - if not self._do_evaluation: - self._logger.info("Annotations are not available for evaluation.") - return - - self._logger.info( - "Evaluating predictions with {} COCO API...".format( - "unofficial" if self._use_fast_impl else "official" - ) - ) - for task in sorted(tasks): - assert task in {"bbox", "segm", "keypoints"}, f"Got unknown task: {task}!" - coco_eval = ( - _evaluate_predictions_on_coco( - self._coco_api, - coco_results, - task, - kpt_oks_sigmas=self._kpt_oks_sigmas, - use_fast_impl=self._use_fast_impl, - img_ids=img_ids, - max_dets_per_image=self._max_dets_per_image, - ) - if len(coco_results) > 0 - else None # cocoapi does not handle empty results very well - ) - - res = self._derive_coco_results( - coco_eval, task, class_names=self._metadata.get("thing_classes") - ) - self._results[task] = res diff --git a/spaces/akhaliq/encoder4editing/app.py b/spaces/akhaliq/encoder4editing/app.py deleted file mode 100644 index 0c3287f5f5057a97f26f3e8dab5b558396889963..0000000000000000000000000000000000000000 --- a/spaces/akhaliq/encoder4editing/app.py +++ /dev/null @@ -1,29 +0,0 @@ -import os -import gradio as gr - -os.system("git clone https://github.com/AK391/realworld-stylegan2-encoder.git") -os.chdir("realworld-stylegan2-encoder") -os.system("pip install gdown") -os.system("pip install dlib") -os.system("gdown https://drive.google.com/uc?id=1i873OKcKjvpxiF0UBU4NzxlMMaD9qR5z") -os.system("wget https://github.com/kim-ninh/align_face_ffhq/raw/main/shape_predictor_68_face_landmarks.dat -P .") -os.system("wget https://i.imgur.com/dJVNQSF.jpg -O ./mona.jpg") - -def inference(image): - os.system("python scripts/test.py --align --ckpt ./e4e_encode_mobile_cartoon.pt --network e4e --platform torch --size 1024 --images_path "+image.name) - return "out.png" - -title = "Encoder4editing" -description = "Gradio demo for Encoder4editing. To use it, simply upload your image, or click one of the examples to load them. Read more at the links below." -article = "

Github Repo

" - -gr.Interface( - inference, - gr.inputs.Image(type="file", label="Input"), - gr.outputs.Image(type="file", label="Output"), - title=title, - description=description, - article=article, - enable_queue=True, - examples=[['mona.jpg']] - ).launch(debug=True) \ No newline at end of file diff --git "a/spaces/alexrame/rewardedsoups/pages/03_\360\237\246\277_Locomotion.py" "b/spaces/alexrame/rewardedsoups/pages/03_\360\237\246\277_Locomotion.py" deleted file mode 100644 index f0ec94bcf4dcaddcbe3abf8acd19f49150c7b030..0000000000000000000000000000000000000000 --- "a/spaces/alexrame/rewardedsoups/pages/03_\360\237\246\277_Locomotion.py" +++ /dev/null @@ -1,216 +0,0 @@ -import streamlit as st -from PIL import Image -import codecs -import streamlit.components.v1 as components -from utils import inject_custom_css -import streamlit as st -from streamlit_plotly_events import plotly_events -import pickle -import matplotlib.pyplot as plt -import plotly.graph_objects as go -import typing as tp - -plt.style.use('default') - -shapes=[ - dict( - type="rect", - xref="paper", - yref="paper", - x0=0, - y0=0, - x1=1, - y1=1, - line=dict( - color="Black", - width=2, - ), - ) -] - -import colorsys - -def interpolate_color(color1, color2, factor): - """Interpolates between two RGB colors. Factor is between 0 and 1.""" - color1 = colorsys.rgb_to_hls(int(color1[1:3], 16)/255.0, int(color1[3:5], 16)/255.0, int(color1[5:], 16)/255.0) - color2 = colorsys.rgb_to_hls(int(color2[1:3], 16)/255.0, int(color2[3:5], 16)/255.0, int(color2[5:], 16)/255.0) - new_color = [color1[i] * (1 - factor) + color2[i] * factor for i in range(3)] - new_color = colorsys.hls_to_rgb(*new_color) - return '#{:02x}{:02x}{:02x}'.format(int(new_color[0]*255), int(new_color[1]*255), int(new_color[2]*255)) - - -color1 = "#fa7659" -color2 = "#6dafd7" - -def plot_pareto(dict_results: tp.Dict): - keys = list(dict_results["wa"][0].keys()) - lambda_key, reward2_key, reward1_key = keys - - # Series for "wa" - dict_results["wa"] = [x for i,x in enumerate(dict_results["wa"]) if i%2==0] - lambda_values_wa = [item[lambda_key] for item in dict_results["wa"]][::-1] - reward1_values_wa = [item[reward1_key] for item in dict_results["wa"]][::-1] - reward2_values_wa = [item[reward2_key] for item in dict_results["wa"]][::-1] - - # Series for "init" - reward1_values_init = [item[reward1_key] for item in dict_results["init"]] - reward2_values_init = [item[reward2_key] for item in dict_results["init"]] - - layout = go.Layout(autosize=False,width=1000,height=1000) - fig = go.Figure(layout=layout) - - for i in range(len(reward1_values_wa) - 1): - fig.add_trace(go.Scatter( - x=reward1_values_wa[i:i+2], - y=reward2_values_wa[i:i+2], - mode='lines', - hoverinfo='skip', - line=dict( - color=interpolate_color(color1, color2, i/(len(reward1_values_wa)-1)), - width=2 - ), - showlegend=False - )) - - # Plot for "wa" - fig.add_trace( - go.Scatter( - x=reward1_values_wa, - y=reward2_values_wa, - mode='markers', - name='Rewarded soups: 0≤λ≤1', - hoverinfo='text', - hovertext=[f'λ={lmbda}' for lmbda in lambda_values_wa], - marker=dict( - color=[ - interpolate_color(color1, color2, i / len(lambda_values_wa)) - for i in range(len(lambda_values_wa)) - ], - size=10 - ) - ) - ) - - # Plot for "morl" - fig.add_trace( - go.Scatter( - x=[6400.], - y=[3300.], - mode='markers', - name='MORL: μ=0.5', - hoverinfo='skip', - marker=dict(color='#A45EE9', size=15, symbol="star"), - ) - ) - # Plot for "init" - fig.add_trace( - go.Scatter( - x=reward1_values_init, - y=reward2_values_init, - 
mode='markers', - name='Pre-trained init', - hoverinfo='skip', - marker=dict(color='#9f9bc8', size=15, symbol="star"), - ) - ) - - fig.update_layout( - xaxis=dict( - range=[3000, 7000], - nticks=6, - showticklabels=True, - ticks='outside', - tickfont=dict(size=18,), - title=dict(text="Risky reward", font=dict(size=18), standoff=10), - showgrid=False, - zeroline=False, - hoverformat='.2f' - ), - yaxis=dict( - range=[-1000, 4500], - nticks=7, - showticklabels=True, - ticks='outside', - tickfont=dict(size=18,), - title=dict(text="Cautious reward", font=dict(size=18), standoff=10), - showgrid=False, - zeroline=False, - hoverformat='.2f' - ), - font=dict(family="Roboto", size=12, color="Black"), - hovermode='x unified', - autosize=False, - width=500, - height=500, - margin=dict(l=100, r=50, b=150, t=20, pad=0), - paper_bgcolor="White", - plot_bgcolor="White", - shapes=shapes, - legend=dict( - x=0.5, - y=0.03, - traceorder="normal", - font=dict(family="Roboto", size=12, color="black"), - bgcolor="White", - bordercolor="Black", - borderwidth=1 - ) - ) - - return fig - -def run(): - - st.write( - f""" - - - -

Making humanoid run more naturally with diverse engineered rewards

""",unsafe_allow_html=True) - - st.markdown( - r""" -Teaching humanoids to walk in a human-like manner serves as a benchmark to evaluate RL strategies for continuous control. One of the key challenges is shaping a suitable proxy reward, given the intricate coordination and balance involved in human locomotion. It is standard to consider the dense reward at each timestep: ${r(t)=velocity-\alpha \times \sum_t a^{2}_{t}}$, controlling the agent's velocity while penalizing wide actions. Yet, the penalty coefficient $\alpha$ is challenging to set. To tackle this, we devised two rewards in the Brax physics engine: a *risky* one with $\alpha=0$, and a *cautious* one $\alpha=1$. - -Below in the interactive animation, you will see the humanoids trained with these two rewards: the humanoid for $\alpha=0$ is the fastest but the most chaotic, while the one for $\alpha=1$ is more cautious but slower. For intermediate values of $\lambda$, the policy is obtained by linear interpolation of those extreme weights, arguably resulting in smoother motion patterns. -""", unsafe_allow_html=True - ) - st.markdown("""

Click on a rewarded soup point!

""",unsafe_allow_html=True) - - files = [] - for i in range(21): - filename = f'streamlit_app/data/locomotion/trajectories/{i}.html' - files.append(codecs.open(filename, "r", "utf-8").read()) - files = [x for i,x in enumerate(files) if i%2==0] - - row_0_1,row_0_2,row_0_3,row_0_4 = st.columns([3,1,1,1]) - with row_0_1: - with open("streamlit_app/data/locomotion/pareto/humanoid_averse_taker_with_morl.pkl","rb") as f: - dict_results = pickle.load(f) - fig = plot_pareto(dict_results) - onclick = plotly_events(fig, click_event=True) - with row_0_4: - st.markdown(f"""
λ=1.0
""",unsafe_allow_html=True) - components.html(files[-1],width=150,height=300) - with row_0_3: - if len(onclick) > 0: - idx = onclick[-1]['pointIndex'] - else: - idx = 5 - st.markdown( - f"""
λ={round(1-idx/(len(files)-1),2)}
""", - unsafe_allow_html=True - ) - components.html(files[idx], width=150, height=300) - with row_0_2: - st.markdown(f"""
λ=0.0
""",unsafe_allow_html=True) - components.html(files[0],width=150,height=300) - - -if __name__ == "__main__": - img = Image.open("streamlit_app/assets/images/icon.png") - st.set_page_config(page_title="Rewarded soups",page_icon=img,layout="wide") - inject_custom_css("streamlit_app/assets/styles.css") - st.set_option('deprecation.showPyplotGlobalUse', False) - run() diff --git a/spaces/ali-ghamdan/gfp-Gans/gfpgan/train.py b/spaces/ali-ghamdan/gfp-Gans/gfpgan/train.py deleted file mode 100644 index fe5f1f909ae15a8d830ef65dcb43436d4f4ee7ae..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/gfp-Gans/gfpgan/train.py +++ /dev/null @@ -1,11 +0,0 @@ -# flake8: noqa -import os.path as osp -from basicsr.train import train_pipeline - -import gfpgan.archs -import gfpgan.data -import gfpgan.models - -if __name__ == '__main__': - root_path = osp.abspath(osp.join(__file__, osp.pardir, osp.pardir)) - train_pipeline(root_path) diff --git a/spaces/ali-ghamdan/gfp-Gans/scripts/parse_landmark.py b/spaces/ali-ghamdan/gfp-Gans/scripts/parse_landmark.py deleted file mode 100644 index 74e2ff9e130ad4f2395c9666dca3ba78526d7a8a..0000000000000000000000000000000000000000 --- a/spaces/ali-ghamdan/gfp-Gans/scripts/parse_landmark.py +++ /dev/null @@ -1,85 +0,0 @@ -import cv2 -import json -import numpy as np -import os -import torch -from basicsr.utils import FileClient, imfrombytes -from collections import OrderedDict - -# ---------------------------- This script is used to parse facial landmarks ------------------------------------- # -# Configurations -save_img = False -scale = 0.5 # 0.5 for official FFHQ (512x512), 1 for others -enlarge_ratio = 1.4 # only for eyes -json_path = 'ffhq-dataset-v2.json' -face_path = 'datasets/ffhq/ffhq_512.lmdb' -save_path = './FFHQ_eye_mouth_landmarks_512.pth' - -print('Load JSON metadata...') -# use the official json file in FFHQ dataset -with open(json_path, 'rb') as f: - json_data = json.load(f, object_pairs_hook=OrderedDict) - -print('Open LMDB file...') -# read ffhq images -file_client = FileClient('lmdb', db_paths=face_path) -with open(os.path.join(face_path, 'meta_info.txt')) as fin: - paths = [line.split('.')[0] for line in fin] - -save_dict = {} - -for item_idx, item in enumerate(json_data.values()): - print(f'\r{item_idx} / {len(json_data)}, {item["image"]["file_path"]} ', end='', flush=True) - - # parse landmarks - lm = np.array(item['image']['face_landmarks']) - lm = lm * scale - - item_dict = {} - # get image - if save_img: - img_bytes = file_client.get(paths[item_idx]) - img = imfrombytes(img_bytes, float32=True) - - # get landmarks for each component - map_left_eye = list(range(36, 42)) - map_right_eye = list(range(42, 48)) - map_mouth = list(range(48, 68)) - - # eye_left - mean_left_eye = np.mean(lm[map_left_eye], 0) # (x, y) - half_len_left_eye = np.max((np.max(np.max(lm[map_left_eye], 0) - np.min(lm[map_left_eye], 0)) / 2, 16)) - item_dict['left_eye'] = [mean_left_eye[0], mean_left_eye[1], half_len_left_eye] - # mean_left_eye[0] = 512 - mean_left_eye[0] # for testing flip - half_len_left_eye *= enlarge_ratio - loc_left_eye = np.hstack((mean_left_eye - half_len_left_eye + 1, mean_left_eye + half_len_left_eye)).astype(int) - if save_img: - eye_left_img = img[loc_left_eye[1]:loc_left_eye[3], loc_left_eye[0]:loc_left_eye[2], :] - cv2.imwrite(f'tmp/{item_idx:08d}_eye_left.png', eye_left_img * 255) - - # eye_right - mean_right_eye = np.mean(lm[map_right_eye], 0) - half_len_right_eye = np.max((np.max(np.max(lm[map_right_eye], 0) - np.min(lm[map_right_eye], 0)) / 
2, 16)) - item_dict['right_eye'] = [mean_right_eye[0], mean_right_eye[1], half_len_right_eye] - # mean_right_eye[0] = 512 - mean_right_eye[0] # # for testing flip - half_len_right_eye *= enlarge_ratio - loc_right_eye = np.hstack( - (mean_right_eye - half_len_right_eye + 1, mean_right_eye + half_len_right_eye)).astype(int) - if save_img: - eye_right_img = img[loc_right_eye[1]:loc_right_eye[3], loc_right_eye[0]:loc_right_eye[2], :] - cv2.imwrite(f'tmp/{item_idx:08d}_eye_right.png', eye_right_img * 255) - - # mouth - mean_mouth = np.mean(lm[map_mouth], 0) - half_len_mouth = np.max((np.max(np.max(lm[map_mouth], 0) - np.min(lm[map_mouth], 0)) / 2, 16)) - item_dict['mouth'] = [mean_mouth[0], mean_mouth[1], half_len_mouth] - # mean_mouth[0] = 512 - mean_mouth[0] # for testing flip - loc_mouth = np.hstack((mean_mouth - half_len_mouth + 1, mean_mouth + half_len_mouth)).astype(int) - if save_img: - mouth_img = img[loc_mouth[1]:loc_mouth[3], loc_mouth[0]:loc_mouth[2], :] - cv2.imwrite(f'tmp/{item_idx:08d}_mouth.png', mouth_img * 255) - - save_dict[f'{item_idx:08d}'] = item_dict - -print('Save...') -torch.save(save_dict, save_path) diff --git a/spaces/alistairmcleay/cambridge-masters-project/scripts/user_model_code/interaction/utils.py b/spaces/alistairmcleay/cambridge-masters-project/scripts/user_model_code/interaction/utils.py deleted file mode 100644 index 19ced3d04372ba30c60d591b14ebab3992881b8f..0000000000000000000000000000000000000000 --- a/spaces/alistairmcleay/cambridge-masters-project/scripts/user_model_code/interaction/utils.py +++ /dev/null @@ -1,308 +0,0 @@ -import json -import re - - -def segment_gen(gen, dial_id): - def _color(_segment): - if tag == "CTX": - _segment = _segment.replace(" ", f"{bcolors.ENDC}") - _segment = _segment.replace(" ", f"{bcolors.ENDC}") - _segment = _segment.replace(" ", f"USR: {bcolors.OKCYAN}") - _segment = _segment.replace(" ", f"SYS: {bcolors.OKBLUE}") - if tag == "SYS_UTT": - _segment = f"{bcolors.OKBLUE}" + _segment + f"{bcolors.ENDC}" - if tag == "USR_UTT": - _segment = f"{bcolors.OKCYAN}" + _segment + f"{bcolors.ENDC}" - if tag in ["SYS_ACT", "USR_ACT", "GOAL"]: - _segment = _segment.replace(" ", f"{bcolors.RED}") - _segment = _segment.replace(" ", f"{bcolors.ENDC}") - _segment = _segment.replace(" ", f"{bcolors.YELLOW}") - _segment = _segment.replace(" ", f"{bcolors.ENDC}") - _segment = _segment.replace(" ", f"{bcolors.GREEN}") - _segment = _segment.replace(" ", f"{bcolors.ENDC}") - if tag == "GOAL": - _segment = _segment.replace( - "", f"{bcolors.UNDERLINE}" - ) - _segment = _segment.replace("", f"{bcolors.ENDC}") - _segment = _segment.replace("", f"{bcolors.UNDERLINE}") - _segment = _segment.replace("", f"{bcolors.ENDC}") - # if tag in ["SNT", "GC"]: - # segment = segment.replace("<{}/> ".format(tag), "<{}/> *".format(tag)) - # segment = segment.replace(" ".format(tag), "* <{}/>".format(tag)) - return _segment - - assert isinstance(gen, str) - # gen = gen.split() - # print(gen) - print("*** Dial_id: {} ***".format(dial_id)) - for tag in [ - "CTX", - "SYS_UTT", - "SYS_ACT", - "GOAL", - "SNT", - "RA", - "GC", - "USR_ACT", - "USR_UTT", - ]: - segment = find_segment(gen, tag) - if segment is not None: - print('{} -> "{}"'.format(tag, _color(segment))) - else: - print("Fail to find the segment...") - print("GEN:", gen) - print("---" * 30) - - -# input("press...") - - -def get_original_act_set(): - # full act vocab: - # https://github.com/ConvLab/ConvLab/blob/master/data/multiwoz/annotation/Multiwoz%20data%20analysis.md#dialog-act - acts = set() - 
acts.add("Inform") - acts.add("Request") - acts.add( - "NoOffer" - ) # equivalent to the concept of `no matching`, `cannot find` in database - acts.add("Recommend") - acts.add("Select") - acts.add( - "OfferBook" - ) # only for `train` domain, ask if book is needed, equivalent to `Booking-Inform` with [[none, none]] - # args in restaurant/hotel domain - acts.add( - "OfferBooked" - ) # only for `train` domain, inform booking is complete, with corresponding info (such as ref number) - acts.add("Book") # inform booking is successful, equivalent to `OfferBooked` above - acts.add( - "NoBook" - ) # inform booking fails, might because of no availability, usually come together act `request` - acts.add("bye") - acts.add("greet") - acts.add("reqmore") - acts.add("welcome") - acts.add("thank") - return acts - - -def get_act_natural_language(act): - if act in ["bye", "greet", "reqmore", "welcome", "thank"]: - return act - - assert act[0].isupper() - tokens = re.findall("[A-Z][^A-Z]*", act) # e.g., `FindEvents` -> `Find Events` - tokens = list(map(str.lower, tokens)) # lower case, -> `find events` - act_nl = " ".join(tokens) - return act_nl - - -def convert_act_into_sgd(act, SPECIAL_TOKENS): - # TODO: check inference result to see if mapping on NoOffer, OfferBook and NoBook are fine - """ - convert multiwoz acts (w/o domain info) into sgd acts ensure that acts with same concept use one name - e.g., Book (OfferBooked) -> NOTIFY_SUCCESS, NoBook -> NOTIFY_FAILURE - """ - if act == "NoOffer": - act = "NOTIFY_FAILURE" - - elif act == "Recommend": - act = "OFFER" - - # technically, `OfferBook` is equivalent to (`act=OFFER_INTENT, slot=intent, value=ReserveRestaurant`) - # on system side in sgd since (1) the conversion is not trivial (completely different representations) - # and (2) multiwoz has no slot called `intent` one cannot simply convert `OfferBook` to `OFFER_INTENT` - # we thus keep the act as is - # note that there is no slot `intent` and value conveying intents in multiwoz - elif act == "OfferBook": - act = "Offer_Book" - - elif act == "OfferBooked": - act = "NOTIFY_SUCCESS" - - elif act == "Book": # same as `OfferBooked` - act = "NOTIFY_SUCCESS" - - elif act == "NoBook": - act = "NOTIFY_FAILURE" - - elif act == "bye": - act = "GOODBYE" - - elif act == "reqmore": - act = "REQ_MORE" - - elif act == "thank": - act = "THANK_YOU" - # elif act == "greet": - # elif act == "welcome": - act = act.upper() # align with sgd acts, e.g., `Inform` -> `INFORM` - - # check if valid - assert "_{}_".format(act) in SPECIAL_TOKENS["additional_special_tokens"] - return act - - -def load_schema(schema_file): - def _update(key, value, mapping): - if key in mapping: - assert ( - value == mapping[key] - ) # ensure service meta is the same between data splits - else: - mapping[key] = value - - def _restructure_service_meta(service_meta, attribute): - """ "convert slot/intent metadata list into dict(slot/intent=metadata)""" - assert attribute in ["slots", "intents"] - mapping = {} - for value in service_meta[attribute]: - key = value["name"] - if attribute == "slots": # domain-slot in multiwoz - assert "-" in key - _, key = key.split("-") # domain, slot - key = normalise_slot(key) - else: # intent - key = normalise_intent(key) - mapping[key] = value - service_meta[attribute] = mapping - - with open(schema_file) as f: - data = json.load(f) - - SERVICE2META = {} - SLOTS, INTENTS = set(), set() - for service_meta in data: - service = service_meta["service_name"] - _restructure_service_meta(service_meta, "slots") - 
_restructure_service_meta(service_meta, "intents") - _update(service, service_meta, SERVICE2META) - - # collect domain-independent slots - # for domain_slot in service_meta["slots"]: - # assert "-" in domain_slot - # domain, slot = domain_slot.split("-") - # slot = normalise_slot(slot) - # SLOTS.add(slot) - for slot in service_meta["slots"]: - SLOTS.add(slot) - - for intent in service_meta["intents"]: - # intent = normalise_intent(intent) - INTENTS.add(intent) - - print("Load schema, intents: {}, slots: {}".format(len(INTENTS), len(SLOTS))) - return SERVICE2META, INTENTS, SLOTS - - -def normalise_intent(intent): - """convert intent into natural language, e.g., find_hotel -> find hotel""" - if intent == "police": - intent = "find_police" - if intent == "book_taxi": - intent = "find_taxi" - assert "_" in intent - return " ".join(intent.split("_")) - - -def normalise_slot(slot): - if slot == "pricerange": - return "price range" - - elif slot == "bookday": - return "book day" - - elif slot == "bookpeople": - return "book people" - - elif slot == "booktime": - return "book time" - - elif slot == "bookstay": - return "book stay" - - elif slot == "ref": - return "reference" - - elif slot == "arriveby": - return "arrive by" - - elif slot == "leaveat": - return "leave at" - - elif slot == "trainid": - return "train id" - - elif slot == "openhours": - return "open hours" - - elif slot == "entrancefee": - return "entrance fee" - - elif slot in ["none", "?"]: - # return "_Empty_" # special token mark will be added during sequence linearlisation - return "Empty" - - else: - return slot - - -def normalise_value(value): - # deal with binary and empty values - if value == "yes": - # return "_True_" - return "True" - - elif value == "no": - # return "_False_" - return "False" - - elif value in ["none", "?"]: - # return "_Empty_" - return "Empty" - - # if value == "swimmingpool": # for simplicity, dont split - # return "swimming pool" - - else: - return value - - -def wrap_element(content_type, content): - """ - wrap elements such as slot, value, e.g., slot - """ - assert "/" not in content_type - return "<{}/> {} ".format(content_type, content, content_type) - - -def add_str(str1, str2): - return str1 + " " + str2 - - -def find_segment(gen, tag): - assert isinstance(gen, str) - gen = gen.split() - try: - start = gen.index("<{}/>".format(tag)) + 1 - end = gen.index("".format(tag)) - segment = " ".join(gen[start:end]) - except Exception: - print("Missing {} tag in generated sequence".format(tag)) - segment = None - return segment - - -class bcolors: - HEADER = "\033[95m" - OKBLUE = "\033[94m" - OKCYAN = "\033[96m" - GREEN = "\033[92m" - YELLOW = "\033[93m" - RED = "\033[91m" - ENDC = "\033[0m" - BOLD = "\033[1m" - UNDERLINE = "\033[4m" diff --git a/spaces/allknowingroger/Image-Models-Test202/app.py b/spaces/allknowingroger/Image-Models-Test202/app.py deleted file mode 100644 index 3c9a92786125c6e8b12d10bd8ab33850a5ed783b..0000000000000000000000000000000000000000 --- a/spaces/allknowingroger/Image-Models-Test202/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "Bigyi/lora-trained-xl-colab", - "jwhedbee/lora-trained-xl-take-two", - "naphatmanu/sdxl-lora-index-modern-luxury-1", - "naphatmanu/sdxl-lora-index-contemporary-1", - "Yntec/DreamFulV2", - "jwhedbee/lora-trained-xl", - "naphatmanu/sdxl-lora-index-modern-1", - "LilyNgo/lora_Galaxiga-trained-xl-colab", - "jwhedbee/lora-trained-xl-telnyx-product-hero-poc", -] - - 
-model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) - tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/allknowingroger/Image-Models-Test91/app.py b/spaces/allknowingroger/Image-Models-Test91/app.py deleted file mode 100644 index 1ecf3ce1b16af205f4bf4f6ccab84b82fbca04d6..0000000000000000000000000000000000000000 --- 
a/spaces/allknowingroger/Image-Models-Test91/app.py +++ /dev/null @@ -1,144 +0,0 @@ -import gradio as gr -# import os -# import sys -# from pathlib import Path -import time - -models =[ - "stillerman/trdne-smol-simple", - "stillerman/trdne-smol", - "stephanebhiri/lora-trained-xl-colab-stpCSmith2", - "Sekharreddy/mnb", - "stephanebhiri/lora-trained-xl-colab-stpCSmith", - "camus-ng/lora-trained-xl-cory-8", - "hhhtc/yokai_v2", - "mangoxb/tangled1", - "digiplay/OldFish_v1.1_diffusers_recover", -] - - -model_functions = {} -model_idx = 1 -for model_path in models: - try: - model_functions[model_idx] = gr.Interface.load(f"models/{model_path}", live=False, preprocess=True, postprocess=False) - except Exception as error: - def the_fn(txt): - return None - model_functions[model_idx] = gr.Interface(fn=the_fn, inputs=["text"], outputs=["image"]) - model_idx+=1 - - -def send_it_idx(idx): - def send_it_fn(prompt): - output = (model_functions.get(str(idx)) or model_functions.get(str(1)))(prompt) - return output - return send_it_fn - -def get_prompts(prompt_text): - return prompt_text - -def clear_it(val): - if int(val) != 0: - val = 0 - else: - val = 0 - pass - return val - -def all_task_end(cnt,t_stamp): - to = t_stamp + 60 - et = time.time() - if et > to and t_stamp != 0: - d = gr.update(value=0) - tog = gr.update(value=1) - #print(f'to: {to} et: {et}') - else: - if cnt != 0: - d = gr.update(value=et) - else: - d = gr.update(value=0) - tog = gr.update(value=0) - #print (f'passing: to: {to} et: {et}') - pass - return d, tog - -def all_task_start(): - print("\n\n\n\n\n\n\n") - t = time.gmtime() - t_stamp = time.time() - current_time = time.strftime("%H:%M:%S", t) - return gr.update(value=t_stamp), gr.update(value=t_stamp), gr.update(value=0) - -def clear_fn(): - nn = len(models) - return tuple([None, *[None for _ in range(nn)]]) - - - -with gr.Blocks(title="SD Models") as my_interface: - with gr.Column(scale=12): - # with gr.Row(): - # gr.Markdown("""- Primary prompt: 你想画的内容(英文单词,如 a cat, 加英文逗号效果更好;点 Improve 按钮进行完善)\n- Real prompt: 完善后的提示词,出现后再点右边的 Run 按钮开始运行""") - with gr.Row(): - with gr.Row(scale=6): - primary_prompt=gr.Textbox(label="Prompt", value="") - # real_prompt=gr.Textbox(label="Real prompt") - with gr.Row(scale=6): - # improve_prompts_btn=gr.Button("Improve") - with gr.Row(): - run=gr.Button("Run",variant="primary") - clear_btn=gr.Button("Clear") - with gr.Row(): - sd_outputs = {} - model_idx = 1 - for model_path in models: - with gr.Column(scale=3, min_width=320): - with gr.Box(): - sd_outputs[model_idx] = gr.Image(label=model_path) - pass - model_idx += 1 - pass - pass - - with gr.Row(visible=False): - start_box=gr.Number(interactive=False) - end_box=gr.Number(interactive=False) - tog_box=gr.Textbox(value=0,interactive=False) - - start_box.change( - all_task_end, - [start_box, end_box], - [start_box, tog_box], - every=1, - show_progress=False) - - primary_prompt.submit(all_task_start, None, [start_box, end_box, tog_box]) - run.click(all_task_start, None, [start_box, end_box, tog_box]) - runs_dict = {} - model_idx = 1 - for model_path in models: - runs_dict[model_idx] = run.click(model_functions[model_idx], inputs=[primary_prompt], outputs=[sd_outputs[model_idx]]) - model_idx += 1 - pass - pass - - # improve_prompts_btn_clicked=improve_prompts_btn.click( - # get_prompts, - # inputs=[primary_prompt], - # outputs=[primary_prompt], - # cancels=list(runs_dict.values())) - clear_btn.click( - clear_fn, - None, - [primary_prompt, *list(sd_outputs.values())], - cancels=[*list(runs_dict.values())]) 
- tog_box.change( - clear_it, - tog_box, - tog_box, - cancels=[*list(runs_dict.values())]) - -my_interface.queue(concurrency_count=600, status_update_rate=1) -my_interface.launch(inline=True, show_api=False) - \ No newline at end of file diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/README.md b/spaces/amarchheda/ChordDuplicate/portaudio/README.md deleted file mode 100644 index bf48975f507ca3dc7f8e16b0bbcb4ebb159e30e2..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/README.md +++ /dev/null @@ -1,62 +0,0 @@ -# PortAudio - portable audio I/O library - -PortAudio is a portable audio I/O library designed for cross-platform -support of audio. It uses either a callback mechanism to request audio -processing, or blocking read/write calls to buffer data between the -native audio subsystem and the client. Audio can be processed in various -formats, including 32 bit floating point, and will be converted to the -native format internally. - -## Documentation: - -* Documentation is available at http://www.portaudio.com/docs/ -* Or at `/doc/html/index.html` after running Doxygen. -* Also see `src/common/portaudio.h` for the API spec. -* And see the `examples/` and `test/` directories for many examples of usage. (We suggest `examples/paex_saw.c` for an example.) - -For information on compiling programs with PortAudio, please see the -tutorial at: - - http://portaudio.com/docs/v19-doxydocs/tutorial_start.html - -We have an active mailing list for user and developer discussions. -Please feel free to join. See http://www.portaudio.com for details. - -## Important Files and Folders: - - include/portaudio.h = header file for PortAudio API. Specifies API. - src/common/ = platform independent code, host independent - code for all implementations. - src/os = os specific (but host api neutral) code - src/hostapi = implementations for different host apis - - -### Host API Implementations: - - src/hostapi/alsa = Advanced Linux Sound Architecture (ALSA) - src/hostapi/asihpi = AudioScience HPI - src/hostapi/asio = ASIO for Windows and Macintosh - src/hostapi/coreaudio = Macintosh Core Audio for OS X - src/hostapi/dsound = Windows Direct Sound - src/hostapi/jack = JACK Audio Connection Kit - src/hostapi/oss = Unix Open Sound System (OSS) - src/hostapi/wasapi = Windows Vista WASAPI - src/hostapi/wdmks = Windows WDM Kernel Streaming - src/hostapi/wmme = Windows MultiMedia Extensions (MME) - - -### Test Programs: - - test/pa_fuzz.c = guitar fuzz box - test/pa_devs.c = print a list of available devices - test/pa_minlat.c = determine minimum latency for your machine - test/paqa_devs.c = self test that opens all devices - test/paqa_errs.c = test error detection and reporting - test/patest_clip.c = hear a sine wave clipped and unclipped - test/patest_dither.c = hear effects of dithering (extremely subtle) - test/patest_pink.c = fun with pink noise - test/patest_record.c = record and playback some audio - test/patest_maxsines.c = how many sine waves can we play? Tests Pa_GetCPULoad(). 
- test/patest_sine.c = output a sine wave in a simple PA app - test/patest_sync.c = test synchronization of audio and video - test/patest_wire.c = pass input to output, wire simulator diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/ltmain.sh b/spaces/amarchheda/ChordDuplicate/portaudio/ltmain.sh deleted file mode 100644 index 4a1ede7111ea6d110e4dd505718a34a1bee63882..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/ltmain.sh +++ /dev/null @@ -1,9661 +0,0 @@ - -# libtool (GNU libtool) 2.4.2 -# Written by Gordon Matzigkeit , 1996 - -# Copyright (C) 1996, 1997, 1998, 1999, 2000, 2001, 2003, 2004, 2005, 2006, -# 2007, 2008, 2009, 2010, 2011 Free Software Foundation, Inc. -# This is free software; see the source for copying conditions. There is NO -# warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. - -# GNU Libtool is free software; you can redistribute it and/or modify -# it under the terms of the GNU General Public License as published by -# the Free Software Foundation; either version 2 of the License, or -# (at your option) any later version. -# -# As a special exception to the GNU General Public License, -# if you distribute this file as part of a program or library that -# is built using GNU Libtool, you may include this file under the -# same distribution terms that you use for the rest of that program. -# -# GNU Libtool is distributed in the hope that it will be useful, but -# WITHOUT ANY WARRANTY; without even the implied warranty of -# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU -# General Public License for more details. -# -# You should have received a copy of the GNU General Public License -# along with GNU Libtool; see the file COPYING. If not, a copy -# can be downloaded from http://www.gnu.org/licenses/gpl.html, -# or obtained by writing to the Free Software Foundation, Inc., -# 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. - -# Usage: $progname [OPTION]... [MODE-ARG]... -# -# Provide generalized library-building support services. -# -# --config show all configuration variables -# --debug enable verbose shell tracing -# -n, --dry-run display commands without modifying any files -# --features display basic configuration information and exit -# --mode=MODE use operation mode MODE -# --preserve-dup-deps don't remove duplicate dependency libraries -# --quiet, --silent don't print informational messages -# --no-quiet, --no-silent -# print informational messages (default) -# --no-warn don't display warning messages -# --tag=TAG use configuration variables from tag TAG -# -v, --verbose print more informational messages than default -# --no-verbose don't print the extra informational messages -# --version print version information -# -h, --help, --help-all print short, long, or detailed help message -# -# MODE must be one of the following: -# -# clean remove files from the build directory -# compile compile a source file into a libtool object -# execute automatically set library path, then run a program -# finish complete the installation of libtool libraries -# install install libraries or executables -# link create a library or an executable -# uninstall remove libraries from an installed directory -# -# MODE-ARGS vary depending on the MODE. When passed as first option, -# `--mode=MODE' may be abbreviated as `MODE' or a unique abbreviation of that. -# Try `$progname --help --mode=MODE' for a more detailed description of MODE. 
-# -# When reporting a bug, please describe a test case to reproduce it and -# include the following information: -# -# host-triplet: $host -# shell: $SHELL -# compiler: $LTCC -# compiler flags: $LTCFLAGS -# linker: $LD (gnu? $with_gnu_ld) -# $progname: (GNU libtool) 2.4.2 Debian-2.4.2-1.7ubuntu1 -# automake: $automake_version -# autoconf: $autoconf_version -# -# Report bugs to . -# GNU libtool home page: . -# General help using GNU software: . - -PROGRAM=libtool -PACKAGE=libtool -VERSION="2.4.2 Debian-2.4.2-1.7ubuntu1" -TIMESTAMP="" -package_revision=1.3337 - -# Be Bourne compatible -if test -n "${ZSH_VERSION+set}" && (emulate sh) >/dev/null 2>&1; then - emulate sh - NULLCMD=: - # Zsh 3.x and 4.x performs word splitting on ${1+"$@"}, which - # is contrary to our usage. Disable this feature. - alias -g '${1+"$@"}'='"$@"' - setopt NO_GLOB_SUBST -else - case `(set -o) 2>/dev/null` in *posix*) set -o posix;; esac -fi -BIN_SH=xpg4; export BIN_SH # for Tru64 -DUALCASE=1; export DUALCASE # for MKS sh - -# A function that is used when there is no print builtin or printf. -func_fallback_echo () -{ - eval 'cat <<_LTECHO_EOF -$1 -_LTECHO_EOF' -} - -# NLS nuisances: We save the old values to restore during execute mode. -lt_user_locale= -lt_safe_locale= -for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES -do - eval "if test \"\${$lt_var+set}\" = set; then - save_$lt_var=\$$lt_var - $lt_var=C - export $lt_var - lt_user_locale=\"$lt_var=\\\$save_\$lt_var; \$lt_user_locale\" - lt_safe_locale=\"$lt_var=C; \$lt_safe_locale\" - fi" -done -LC_ALL=C -LANGUAGE=C -export LANGUAGE LC_ALL - -$lt_unset CDPATH - - -# Work around backward compatibility issue on IRIX 6.5. On IRIX 6.4+, sh -# is ksh but when the shell is invoked as "sh" and the current value of -# the _XPG environment variable is not equal to 1 (one), the special -# positional parameter $0, within a function call, is the name of the -# function. -progpath="$0" - - - -: ${CP="cp -f"} -test "${ECHO+set}" = set || ECHO=${as_echo-'printf %s\n'} -: ${MAKE="make"} -: ${MKDIR="mkdir"} -: ${MV="mv -f"} -: ${RM="rm -f"} -: ${SHELL="${CONFIG_SHELL-/bin/sh}"} -: ${Xsed="$SED -e 1s/^X//"} - -# Global variables: -EXIT_SUCCESS=0 -EXIT_FAILURE=1 -EXIT_MISMATCH=63 # $? = 63 is used to indicate version mismatch to missing. -EXIT_SKIP=77 # $? = 77 is used to indicate a skipped test to automake. - -exit_status=$EXIT_SUCCESS - -# Make sure IFS has a sensible default -lt_nl=' -' -IFS=" $lt_nl" - -dirname="s,/[^/]*$,," -basename="s,^.*/,," - -# func_dirname file append nondir_replacement -# Compute the dirname of FILE. If nonempty, add APPEND to the result, -# otherwise set result to NONDIR_REPLACEMENT. -func_dirname () -{ - func_dirname_result=`$ECHO "${1}" | $SED "$dirname"` - if test "X$func_dirname_result" = "X${1}"; then - func_dirname_result="${3}" - else - func_dirname_result="$func_dirname_result${2}" - fi -} # func_dirname may be replaced by extended shell implementation - - -# func_basename file -func_basename () -{ - func_basename_result=`$ECHO "${1}" | $SED "$basename"` -} # func_basename may be replaced by extended shell implementation - - -# func_dirname_and_basename file append nondir_replacement -# perform func_basename and func_dirname in a single function -# call: -# dirname: Compute the dirname of FILE. If nonempty, -# add APPEND to the result, otherwise set result -# to NONDIR_REPLACEMENT. -# value returned in "$func_dirname_result" -# basename: Compute filename of FILE. 
-# value returned in "$func_basename_result" -# Implementation must be kept synchronized with func_dirname -# and func_basename. For efficiency, we do not delegate to -# those functions but instead duplicate the functionality here. -func_dirname_and_basename () -{ - # Extract subdirectory from the argument. - func_dirname_result=`$ECHO "${1}" | $SED -e "$dirname"` - if test "X$func_dirname_result" = "X${1}"; then - func_dirname_result="${3}" - else - func_dirname_result="$func_dirname_result${2}" - fi - func_basename_result=`$ECHO "${1}" | $SED -e "$basename"` -} # func_dirname_and_basename may be replaced by extended shell implementation - - -# func_stripname prefix suffix name -# strip PREFIX and SUFFIX off of NAME. -# PREFIX and SUFFIX must not contain globbing or regex special -# characters, hashes, percent signs, but SUFFIX may contain a leading -# dot (in which case that matches only a dot). -# func_strip_suffix prefix name -func_stripname () -{ - case ${2} in - .*) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%\\\\${2}\$%%"`;; - *) func_stripname_result=`$ECHO "${3}" | $SED "s%^${1}%%; s%${2}\$%%"`;; - esac -} # func_stripname may be replaced by extended shell implementation - - -# These SED scripts presuppose an absolute path with a trailing slash. -pathcar='s,^/\([^/]*\).*$,\1,' -pathcdr='s,^/[^/]*,,' -removedotparts=':dotsl - s@/\./@/@g - t dotsl - s,/\.$,/,' -collapseslashes='s@/\{1,\}@/@g' -finalslash='s,/*$,/,' - -# func_normal_abspath PATH -# Remove doubled-up and trailing slashes, "." path components, -# and cancel out any ".." path components in PATH after making -# it an absolute path. -# value returned in "$func_normal_abspath_result" -func_normal_abspath () -{ - # Start from root dir and reassemble the path. - func_normal_abspath_result= - func_normal_abspath_tpath=$1 - func_normal_abspath_altnamespace= - case $func_normal_abspath_tpath in - "") - # Empty path, that just means $cwd. - func_stripname '' '/' "`pwd`" - func_normal_abspath_result=$func_stripname_result - return - ;; - # The next three entries are used to spot a run of precisely - # two leading slashes without using negated character classes; - # we take advantage of case's first-match behaviour. - ///*) - # Unusual form of absolute path, do nothing. - ;; - //*) - # Not necessarily an ordinary path; POSIX reserves leading '//' - # and for example Cygwin uses it to access remote file shares - # over CIFS/SMB, so we conserve a leading double slash if found. - func_normal_abspath_altnamespace=/ - ;; - /*) - # Absolute path, do nothing. - ;; - *) - # Relative path, prepend $cwd. - func_normal_abspath_tpath=`pwd`/$func_normal_abspath_tpath - ;; - esac - # Cancel out all the simple stuff to save iterations. We also want - # the path to end with a slash for ease of parsing, so make sure - # there is one (and only one) here. - func_normal_abspath_tpath=`$ECHO "$func_normal_abspath_tpath" | $SED \ - -e "$removedotparts" -e "$collapseslashes" -e "$finalslash"` - while :; do - # Processed it all yet? - if test "$func_normal_abspath_tpath" = / ; then - # If we ascended to the root using ".." the result may be empty now. 
- if test -z "$func_normal_abspath_result" ; then - func_normal_abspath_result=/ - fi - break - fi - func_normal_abspath_tcomponent=`$ECHO "$func_normal_abspath_tpath" | $SED \ - -e "$pathcar"` - func_normal_abspath_tpath=`$ECHO "$func_normal_abspath_tpath" | $SED \ - -e "$pathcdr"` - # Figure out what to do with it - case $func_normal_abspath_tcomponent in - "") - # Trailing empty path component, ignore it. - ;; - ..) - # Parent dir; strip last assembled component from result. - func_dirname "$func_normal_abspath_result" - func_normal_abspath_result=$func_dirname_result - ;; - *) - # Actual path component, append it. - func_normal_abspath_result=$func_normal_abspath_result/$func_normal_abspath_tcomponent - ;; - esac - done - # Restore leading double-slash if one was found on entry. - func_normal_abspath_result=$func_normal_abspath_altnamespace$func_normal_abspath_result -} - -# func_relative_path SRCDIR DSTDIR -# generates a relative path from SRCDIR to DSTDIR, with a trailing -# slash if non-empty, suitable for immediately appending a filename -# without needing to append a separator. -# value returned in "$func_relative_path_result" -func_relative_path () -{ - func_relative_path_result= - func_normal_abspath "$1" - func_relative_path_tlibdir=$func_normal_abspath_result - func_normal_abspath "$2" - func_relative_path_tbindir=$func_normal_abspath_result - - # Ascend the tree starting from libdir - while :; do - # check if we have found a prefix of bindir - case $func_relative_path_tbindir in - $func_relative_path_tlibdir) - # found an exact match - func_relative_path_tcancelled= - break - ;; - $func_relative_path_tlibdir*) - # found a matching prefix - func_stripname "$func_relative_path_tlibdir" '' "$func_relative_path_tbindir" - func_relative_path_tcancelled=$func_stripname_result - if test -z "$func_relative_path_result"; then - func_relative_path_result=. - fi - break - ;; - *) - func_dirname $func_relative_path_tlibdir - func_relative_path_tlibdir=${func_dirname_result} - if test "x$func_relative_path_tlibdir" = x ; then - # Have to descend all the way to the root! - func_relative_path_result=../$func_relative_path_result - func_relative_path_tcancelled=$func_relative_path_tbindir - break - fi - func_relative_path_result=../$func_relative_path_result - ;; - esac - done - - # Now calculate path; take care to avoid doubling-up slashes. - func_stripname '' '/' "$func_relative_path_result" - func_relative_path_result=$func_stripname_result - func_stripname '/' '/' "$func_relative_path_tcancelled" - if test "x$func_stripname_result" != x ; then - func_relative_path_result=${func_relative_path_result}/${func_stripname_result} - fi - - # Normalisation. If bindir is libdir, return empty string, - # else relative path ending with a slash; either way, target - # file name can be directly appended. - if test ! 
-z "$func_relative_path_result"; then - func_stripname './' '' "$func_relative_path_result/" - func_relative_path_result=$func_stripname_result - fi -} - -# The name of this program: -func_dirname_and_basename "$progpath" -progname=$func_basename_result - -# Make sure we have an absolute path for reexecution: -case $progpath in - [\\/]*|[A-Za-z]:\\*) ;; - *[\\/]*) - progdir=$func_dirname_result - progdir=`cd "$progdir" && pwd` - progpath="$progdir/$progname" - ;; - *) - save_IFS="$IFS" - IFS=${PATH_SEPARATOR-:} - for progdir in $PATH; do - IFS="$save_IFS" - test -x "$progdir/$progname" && break - done - IFS="$save_IFS" - test -n "$progdir" || progdir=`pwd` - progpath="$progdir/$progname" - ;; -esac - -# Sed substitution that helps us do robust quoting. It backslashifies -# metacharacters that are still active within double-quoted strings. -Xsed="${SED}"' -e 1s/^X//' -sed_quote_subst='s/\([`"$\\]\)/\\\1/g' - -# Same as above, but do not quote variable references. -double_quote_subst='s/\(["`\\]\)/\\\1/g' - -# Sed substitution that turns a string into a regex matching for the -# string literally. -sed_make_literal_regex='s,[].[^$\\*\/],\\&,g' - -# Sed substitution that converts a w32 file name or path -# which contains forward slashes, into one that contains -# (escaped) backslashes. A very naive implementation. -lt_sed_naive_backslashify='s|\\\\*|\\|g;s|/|\\|g;s|\\|\\\\|g' - -# Re-`\' parameter expansions in output of double_quote_subst that were -# `\'-ed in input to the same. If an odd number of `\' preceded a '$' -# in input to double_quote_subst, that '$' was protected from expansion. -# Since each input `\' is now two `\'s, look for any number of runs of -# four `\'s followed by two `\'s and then a '$'. `\' that '$'. -bs='\\' -bs2='\\\\' -bs4='\\\\\\\\' -dollar='\$' -sed_double_backslash="\ - s/$bs4/&\\ -/g - s/^$bs2$dollar/$bs&/ - s/\\([^$bs]\\)$bs2$dollar/\\1$bs2$bs$dollar/g - s/\n//g" - -# Standard options: -opt_dry_run=false -opt_help=false -opt_quiet=false -opt_verbose=false -opt_warning=: - -# func_echo arg... -# Echo program name prefixed message, along with the current mode -# name if it has been set yet. -func_echo () -{ - $ECHO "$progname: ${opt_mode+$opt_mode: }$*" -} - -# func_verbose arg... -# Echo program name prefixed message in verbose mode only. -func_verbose () -{ - $opt_verbose && func_echo ${1+"$@"} - - # A bug in bash halts the script if the last line of a function - # fails when set -e is in force, so we need another command to - # work around that: - : -} - -# func_echo_all arg... -# Invoke $ECHO with all args, space-separated. -func_echo_all () -{ - $ECHO "$*" -} - -# func_error arg... -# Echo program name prefixed message to standard error. -func_error () -{ - $ECHO "$progname: ${opt_mode+$opt_mode: }"${1+"$@"} 1>&2 -} - -# func_warning arg... -# Echo program name prefixed warning message to standard error. -func_warning () -{ - $opt_warning && $ECHO "$progname: ${opt_mode+$opt_mode: }warning: "${1+"$@"} 1>&2 - - # bash bug again: - : -} - -# func_fatal_error arg... -# Echo program name prefixed message to standard error, and exit. -func_fatal_error () -{ - func_error ${1+"$@"} - exit $EXIT_FAILURE -} - -# func_fatal_help arg... -# Echo program name prefixed message to standard error, followed by -# a help hint, and exit. -func_fatal_help () -{ - func_error ${1+"$@"} - func_fatal_error "$help" -} -help="Try \`$progname --help' for more information." 
## default - - -# func_grep expression filename -# Check whether EXPRESSION matches any line of FILENAME, without output. -func_grep () -{ - $GREP "$1" "$2" >/dev/null 2>&1 -} - - -# func_mkdir_p directory-path -# Make sure the entire path to DIRECTORY-PATH is available. -func_mkdir_p () -{ - my_directory_path="$1" - my_dir_list= - - if test -n "$my_directory_path" && test "$opt_dry_run" != ":"; then - - # Protect directory names starting with `-' - case $my_directory_path in - -*) my_directory_path="./$my_directory_path" ;; - esac - - # While some portion of DIR does not yet exist... - while test ! -d "$my_directory_path"; do - # ...make a list in topmost first order. Use a colon delimited - # list incase some portion of path contains whitespace. - my_dir_list="$my_directory_path:$my_dir_list" - - # If the last portion added has no slash in it, the list is done - case $my_directory_path in */*) ;; *) break ;; esac - - # ...otherwise throw away the child directory and loop - my_directory_path=`$ECHO "$my_directory_path" | $SED -e "$dirname"` - done - my_dir_list=`$ECHO "$my_dir_list" | $SED 's,:*$,,'` - - save_mkdir_p_IFS="$IFS"; IFS=':' - for my_dir in $my_dir_list; do - IFS="$save_mkdir_p_IFS" - # mkdir can fail with a `File exist' error if two processes - # try to create one of the directories concurrently. Don't - # stop in that case! - $MKDIR "$my_dir" 2>/dev/null || : - done - IFS="$save_mkdir_p_IFS" - - # Bail out if we (or some other process) failed to create a directory. - test -d "$my_directory_path" || \ - func_fatal_error "Failed to create \`$1'" - fi -} - - -# func_mktempdir [string] -# Make a temporary directory that won't clash with other running -# libtool processes, and avoids race conditions if possible. If -# given, STRING is the basename for that directory. -func_mktempdir () -{ - my_template="${TMPDIR-/tmp}/${1-$progname}" - - if test "$opt_dry_run" = ":"; then - # Return a directory name, but don't create it in dry-run mode - my_tmpdir="${my_template}-$$" - else - - # If mktemp works, use that first and foremost - my_tmpdir=`mktemp -d "${my_template}-XXXXXXXX" 2>/dev/null` - - if test ! -d "$my_tmpdir"; then - # Failing that, at least try and use $RANDOM to avoid a race - my_tmpdir="${my_template}-${RANDOM-0}$$" - - save_mktempdir_umask=`umask` - umask 0077 - $MKDIR "$my_tmpdir" - umask $save_mktempdir_umask - fi - - # If we're not in dry-run mode, bomb out on failure - test -d "$my_tmpdir" || \ - func_fatal_error "cannot create temporary directory \`$my_tmpdir'" - fi - - $ECHO "$my_tmpdir" -} - - -# func_quote_for_eval arg -# Aesthetically quote ARG to be evaled later. -# This function returns two values: FUNC_QUOTE_FOR_EVAL_RESULT -# is double-quoted, suitable for a subsequent eval, whereas -# FUNC_QUOTE_FOR_EVAL_UNQUOTED_RESULT has merely all characters -# which are still active within double quotes backslashified. -func_quote_for_eval () -{ - case $1 in - *[\\\`\"\$]*) - func_quote_for_eval_unquoted_result=`$ECHO "$1" | $SED "$sed_quote_subst"` ;; - *) - func_quote_for_eval_unquoted_result="$1" ;; - esac - - case $func_quote_for_eval_unquoted_result in - # Double-quote args containing shell metacharacters to delay - # word splitting, command substitution and variable - # expansion for a subsequent eval. - # Many Bourne shells cannot handle close brackets correctly - # in scan sets, so we specify it separately. 
- *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"") - func_quote_for_eval_result="\"$func_quote_for_eval_unquoted_result\"" - ;; - *) - func_quote_for_eval_result="$func_quote_for_eval_unquoted_result" - esac -} - - -# func_quote_for_expand arg -# Aesthetically quote ARG to be evaled later; same as above, -# but do not quote variable references. -func_quote_for_expand () -{ - case $1 in - *[\\\`\"]*) - my_arg=`$ECHO "$1" | $SED \ - -e "$double_quote_subst" -e "$sed_double_backslash"` ;; - *) - my_arg="$1" ;; - esac - - case $my_arg in - # Double-quote args containing shell metacharacters to delay - # word splitting and command substitution for a subsequent eval. - # Many Bourne shells cannot handle close brackets correctly - # in scan sets, so we specify it separately. - *[\[\~\#\^\&\*\(\)\{\}\|\;\<\>\?\'\ \ ]*|*]*|"") - my_arg="\"$my_arg\"" - ;; - esac - - func_quote_for_expand_result="$my_arg" -} - - -# func_show_eval cmd [fail_exp] -# Unless opt_silent is true, then output CMD. Then, if opt_dryrun is -# not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP -# is given, then evaluate it. -func_show_eval () -{ - my_cmd="$1" - my_fail_exp="${2-:}" - - ${opt_silent-false} || { - func_quote_for_expand "$my_cmd" - eval "func_echo $func_quote_for_expand_result" - } - - if ${opt_dry_run-false}; then :; else - eval "$my_cmd" - my_status=$? - if test "$my_status" -eq 0; then :; else - eval "(exit $my_status); $my_fail_exp" - fi - fi -} - - -# func_show_eval_locale cmd [fail_exp] -# Unless opt_silent is true, then output CMD. Then, if opt_dryrun is -# not true, evaluate CMD. If the evaluation of CMD fails, and FAIL_EXP -# is given, then evaluate it. Use the saved locale for evaluation. -func_show_eval_locale () -{ - my_cmd="$1" - my_fail_exp="${2-:}" - - ${opt_silent-false} || { - func_quote_for_expand "$my_cmd" - eval "func_echo $func_quote_for_expand_result" - } - - if ${opt_dry_run-false}; then :; else - eval "$lt_user_locale - $my_cmd" - my_status=$? - eval "$lt_safe_locale" - if test "$my_status" -eq 0; then :; else - eval "(exit $my_status); $my_fail_exp" - fi - fi -} - -# func_tr_sh -# Turn $1 into a string suitable for a shell variable name. -# Result is stored in $func_tr_sh_result. All characters -# not in the set a-zA-Z0-9_ are replaced with '_'. Further, -# if $1 begins with a digit, a '_' is prepended as well. -func_tr_sh () -{ - case $1 in - [0-9]* | *[!a-zA-Z0-9_]*) - func_tr_sh_result=`$ECHO "$1" | $SED 's/^\([0-9]\)/_\1/; s/[^a-zA-Z0-9_]/_/g'` - ;; - * ) - func_tr_sh_result=$1 - ;; - esac -} - - -# func_version -# Echo version message to standard output and exit. -func_version () -{ - $opt_debug - - $SED -n '/(C)/!b go - :more - /\./!{ - N - s/\n# / / - b more - } - :go - /^# '$PROGRAM' (GNU /,/# warranty; / { - s/^# // - s/^# *$// - s/\((C)\)[ 0-9,-]*\( [1-9][0-9]*\)/\1\2/ - p - }' < "$progpath" - exit $? -} - -# func_usage -# Echo short help message to standard output and exit. -func_usage () -{ - $opt_debug - - $SED -n '/^# Usage:/,/^# *.*--help/ { - s/^# // - s/^# *$// - s/\$progname/'$progname'/ - p - }' < "$progpath" - echo - $ECHO "run \`$progname --help | more' for full usage" - exit $? -} - -# func_help [NOEXIT] -# Echo long help message to standard output and exit, -# unless 'noexit' is passed as argument. 
-func_help () -{ - $opt_debug - - $SED -n '/^# Usage:/,/# Report bugs to/ { - :print - s/^# // - s/^# *$// - s*\$progname*'$progname'* - s*\$host*'"$host"'* - s*\$SHELL*'"$SHELL"'* - s*\$LTCC*'"$LTCC"'* - s*\$LTCFLAGS*'"$LTCFLAGS"'* - s*\$LD*'"$LD"'* - s/\$with_gnu_ld/'"$with_gnu_ld"'/ - s/\$automake_version/'"`(${AUTOMAKE-automake} --version) 2>/dev/null |$SED 1q`"'/ - s/\$autoconf_version/'"`(${AUTOCONF-autoconf} --version) 2>/dev/null |$SED 1q`"'/ - p - d - } - /^# .* home page:/b print - /^# General help using/b print - ' < "$progpath" - ret=$? - if test -z "$1"; then - exit $ret - fi -} - -# func_missing_arg argname -# Echo program name prefixed message to standard error and set global -# exit_cmd. -func_missing_arg () -{ - $opt_debug - - func_error "missing argument for $1." - exit_cmd=exit -} - - -# func_split_short_opt shortopt -# Set func_split_short_opt_name and func_split_short_opt_arg shell -# variables after splitting SHORTOPT after the 2nd character. -func_split_short_opt () -{ - my_sed_short_opt='1s/^\(..\).*$/\1/;q' - my_sed_short_rest='1s/^..\(.*\)$/\1/;q' - - func_split_short_opt_name=`$ECHO "$1" | $SED "$my_sed_short_opt"` - func_split_short_opt_arg=`$ECHO "$1" | $SED "$my_sed_short_rest"` -} # func_split_short_opt may be replaced by extended shell implementation - - -# func_split_long_opt longopt -# Set func_split_long_opt_name and func_split_long_opt_arg shell -# variables after splitting LONGOPT at the `=' sign. -func_split_long_opt () -{ - my_sed_long_opt='1s/^\(--[^=]*\)=.*/\1/;q' - my_sed_long_arg='1s/^--[^=]*=//' - - func_split_long_opt_name=`$ECHO "$1" | $SED "$my_sed_long_opt"` - func_split_long_opt_arg=`$ECHO "$1" | $SED "$my_sed_long_arg"` -} # func_split_long_opt may be replaced by extended shell implementation - -exit_cmd=: - - - - - -magic="%%%MAGIC variable%%%" -magic_exe="%%%MAGIC EXE variable%%%" - -# Global variables. -nonopt= -preserve_args= -lo2o="s/\\.lo\$/.${objext}/" -o2lo="s/\\.${objext}\$/.lo/" -extracted_archives= -extracted_serial=0 - -# If this variable is set in any of the actions, the command in it -# will be execed at the end. This prevents here-documents from being -# left over by shells. -exec_cmd= - -# func_append var value -# Append VALUE to the end of shell variable VAR. -func_append () -{ - eval "${1}=\$${1}\${2}" -} # func_append may be replaced by extended shell implementation - -# func_append_quoted var value -# Quote VALUE and append to the end of shell variable VAR, separated -# by a space. -func_append_quoted () -{ - func_quote_for_eval "${2}" - eval "${1}=\$${1}\\ \$func_quote_for_eval_result" -} # func_append_quoted may be replaced by extended shell implementation - - -# func_arith arithmetic-term... -func_arith () -{ - func_arith_result=`expr "${@}"` -} # func_arith may be replaced by extended shell implementation - - -# func_len string -# STRING may not start with a hyphen. -func_len () -{ - func_len_result=`expr "${1}" : ".*" 2>/dev/null || echo $max_cmd_len` -} # func_len may be replaced by extended shell implementation - - -# func_lo2o object -func_lo2o () -{ - func_lo2o_result=`$ECHO "${1}" | $SED "$lo2o"` -} # func_lo2o may be replaced by extended shell implementation - - -# func_xform libobj-or-source -func_xform () -{ - func_xform_result=`$ECHO "${1}" | $SED 's/\.[^.]*$/.lo/'` -} # func_xform may be replaced by extended shell implementation - - -# func_fatal_configuration arg... -# Echo program name prefixed message to standard error, followed by -# a configuration failure hint, and exit. 
-func_fatal_configuration () -{ - func_error ${1+"$@"} - func_error "See the $PACKAGE documentation for more information." - func_fatal_error "Fatal configuration error." -} - - -# func_config -# Display the configuration for all the tags in this script. -func_config () -{ - re_begincf='^# ### BEGIN LIBTOOL' - re_endcf='^# ### END LIBTOOL' - - # Default configuration. - $SED "1,/$re_begincf CONFIG/d;/$re_endcf CONFIG/,\$d" < "$progpath" - - # Now print the configurations for the tags. - for tagname in $taglist; do - $SED -n "/$re_begincf TAG CONFIG: $tagname\$/,/$re_endcf TAG CONFIG: $tagname\$/p" < "$progpath" - done - - exit $? -} - -# func_features -# Display the features supported by this script. -func_features () -{ - echo "host: $host" - if test "$build_libtool_libs" = yes; then - echo "enable shared libraries" - else - echo "disable shared libraries" - fi - if test "$build_old_libs" = yes; then - echo "enable static libraries" - else - echo "disable static libraries" - fi - - exit $? -} - -# func_enable_tag tagname -# Verify that TAGNAME is valid, and either flag an error and exit, or -# enable the TAGNAME tag. We also add TAGNAME to the global $taglist -# variable here. -func_enable_tag () -{ - # Global variable: - tagname="$1" - - re_begincf="^# ### BEGIN LIBTOOL TAG CONFIG: $tagname\$" - re_endcf="^# ### END LIBTOOL TAG CONFIG: $tagname\$" - sed_extractcf="/$re_begincf/,/$re_endcf/p" - - # Validate tagname. - case $tagname in - *[!-_A-Za-z0-9,/]*) - func_fatal_error "invalid tag name: $tagname" - ;; - esac - - # Don't test for the "default" C tag, as we know it's - # there but not specially marked. - case $tagname in - CC) ;; - *) - if $GREP "$re_begincf" "$progpath" >/dev/null 2>&1; then - taglist="$taglist $tagname" - - # Evaluate the configuration. Be careful to quote the path - # and the sed script, to avoid splitting on whitespace, but - # also don't use non-portable quotes within backquotes within - # quotes we have to do it in 2 steps: - extractedcf=`$SED -n -e "$sed_extractcf" < "$progpath"` - eval "$extractedcf" - else - func_error "ignoring unknown tag $tagname" - fi - ;; - esac -} - -# func_check_version_match -# Ensure that we are using m4 macros, and libtool script from the same -# release of libtool. -func_check_version_match () -{ - if test "$package_revision" != "$macro_revision"; then - if test "$VERSION" != "$macro_version"; then - if test -z "$macro_version"; then - cat >&2 <<_LT_EOF -$progname: Version mismatch error. This is $PACKAGE $VERSION, but the -$progname: definition of this LT_INIT comes from an older release. -$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION -$progname: and run autoconf again. -_LT_EOF - else - cat >&2 <<_LT_EOF -$progname: Version mismatch error. This is $PACKAGE $VERSION, but the -$progname: definition of this LT_INIT comes from $PACKAGE $macro_version. -$progname: You should recreate aclocal.m4 with macros from $PACKAGE $VERSION -$progname: and run autoconf again. -_LT_EOF - fi - else - cat >&2 <<_LT_EOF -$progname: Version mismatch error. This is $PACKAGE $VERSION, revision $package_revision, -$progname: but the definition of this LT_INIT comes from revision $macro_revision. -$progname: You should recreate aclocal.m4 with macros from revision $package_revision -$progname: of $PACKAGE $VERSION and run autoconf again. 
-_LT_EOF - fi - - exit $EXIT_MISMATCH - fi -} - - -# Shorthand for --mode=foo, only valid as the first argument -case $1 in -clean|clea|cle|cl) - shift; set dummy --mode clean ${1+"$@"}; shift - ;; -compile|compil|compi|comp|com|co|c) - shift; set dummy --mode compile ${1+"$@"}; shift - ;; -execute|execut|execu|exec|exe|ex|e) - shift; set dummy --mode execute ${1+"$@"}; shift - ;; -finish|finis|fini|fin|fi|f) - shift; set dummy --mode finish ${1+"$@"}; shift - ;; -install|instal|insta|inst|ins|in|i) - shift; set dummy --mode install ${1+"$@"}; shift - ;; -link|lin|li|l) - shift; set dummy --mode link ${1+"$@"}; shift - ;; -uninstall|uninstal|uninsta|uninst|unins|unin|uni|un|u) - shift; set dummy --mode uninstall ${1+"$@"}; shift - ;; -esac - - - -# Option defaults: -opt_debug=: -opt_dry_run=false -opt_config=false -opt_preserve_dup_deps=false -opt_features=false -opt_finish=false -opt_help=false -opt_help_all=false -opt_silent=: -opt_warning=: -opt_verbose=: -opt_silent=false -opt_verbose=false - - -# Parse options once, thoroughly. This comes as soon as possible in the -# script to make things like `--version' happen as quickly as we can. -{ - # this just eases exit handling - while test $# -gt 0; do - opt="$1" - shift - case $opt in - --debug|-x) opt_debug='set -x' - func_echo "enabling shell trace mode" - $opt_debug - ;; - --dry-run|--dryrun|-n) - opt_dry_run=: - ;; - --config) - opt_config=: -func_config - ;; - --dlopen|-dlopen) - optarg="$1" - opt_dlopen="${opt_dlopen+$opt_dlopen -}$optarg" - shift - ;; - --preserve-dup-deps) - opt_preserve_dup_deps=: - ;; - --features) - opt_features=: -func_features - ;; - --finish) - opt_finish=: -set dummy --mode finish ${1+"$@"}; shift - ;; - --help) - opt_help=: - ;; - --help-all) - opt_help_all=: -opt_help=': help-all' - ;; - --mode) - test $# = 0 && func_missing_arg $opt && break - optarg="$1" - opt_mode="$optarg" -case $optarg in - # Valid mode arguments: - clean|compile|execute|finish|install|link|relink|uninstall) ;; - - # Catch anything else as an error - *) func_error "invalid argument for $opt" - exit_cmd=exit - break - ;; -esac - shift - ;; - --no-silent|--no-quiet) - opt_silent=false -func_append preserve_args " $opt" - ;; - --no-warning|--no-warn) - opt_warning=false -func_append preserve_args " $opt" - ;; - --no-verbose) - opt_verbose=false -func_append preserve_args " $opt" - ;; - --silent|--quiet) - opt_silent=: -func_append preserve_args " $opt" - opt_verbose=false - ;; - --verbose|-v) - opt_verbose=: -func_append preserve_args " $opt" -opt_silent=false - ;; - --tag) - test $# = 0 && func_missing_arg $opt && break - optarg="$1" - opt_tag="$optarg" -func_append preserve_args " $opt $optarg" -func_enable_tag "$optarg" - shift - ;; - - -\?|-h) func_usage ;; - --help) func_help ;; - --version) func_version ;; - - # Separate optargs to long options: - --*=*) - func_split_long_opt "$opt" - set dummy "$func_split_long_opt_name" "$func_split_long_opt_arg" ${1+"$@"} - shift - ;; - - # Separate non-argument short options: - -\?*|-h*|-n*|-v*) - func_split_short_opt "$opt" - set dummy "$func_split_short_opt_name" "-$func_split_short_opt_arg" ${1+"$@"} - shift - ;; - - --) break ;; - -*) func_fatal_help "unrecognized option \`$opt'" ;; - *) set dummy "$opt" ${1+"$@"}; shift; break ;; - esac - done - - # Validate options: - - # save first non-option argument - if test "$#" -gt 0; then - nonopt="$opt" - shift - fi - - # preserve --debug - test "$opt_debug" = : || func_append preserve_args " --debug" - - case $host in - *cygwin* | *mingw* | 
*pw32* | *cegcc*) - # don't eliminate duplications in $postdeps and $predeps - opt_duplicate_compiler_generated_deps=: - ;; - *) - opt_duplicate_compiler_generated_deps=$opt_preserve_dup_deps - ;; - esac - - $opt_help || { - # Sanity checks first: - func_check_version_match - - if test "$build_libtool_libs" != yes && test "$build_old_libs" != yes; then - func_fatal_configuration "not configured to build any kind of library" - fi - - # Darwin sucks - eval std_shrext=\"$shrext_cmds\" - - # Only execute mode is allowed to have -dlopen flags. - if test -n "$opt_dlopen" && test "$opt_mode" != execute; then - func_error "unrecognized option \`-dlopen'" - $ECHO "$help" 1>&2 - exit $EXIT_FAILURE - fi - - # Change the help message to a mode-specific one. - generic_help="$help" - help="Try \`$progname --help --mode=$opt_mode' for more information." - } - - - # Bail if the options were screwed - $exit_cmd $EXIT_FAILURE -} - - - - -## ----------- ## -## Main. ## -## ----------- ## - -# func_lalib_p file -# True iff FILE is a libtool `.la' library or `.lo' object file. -# This function is only a basic sanity check; it will hardly flush out -# determined imposters. -func_lalib_p () -{ - test -f "$1" && - $SED -e 4q "$1" 2>/dev/null \ - | $GREP "^# Generated by .*$PACKAGE" > /dev/null 2>&1 -} - -# func_lalib_unsafe_p file -# True iff FILE is a libtool `.la' library or `.lo' object file. -# This function implements the same check as func_lalib_p without -# resorting to external programs. To this end, it redirects stdin and -# closes it afterwards, without saving the original file descriptor. -# As a safety measure, use it only where a negative result would be -# fatal anyway. Works if `file' does not exist. -func_lalib_unsafe_p () -{ - lalib_p=no - if test -f "$1" && test -r "$1" && exec 5<&0 <"$1"; then - for lalib_p_l in 1 2 3 4 - do - read lalib_p_line - case "$lalib_p_line" in - \#\ Generated\ by\ *$PACKAGE* ) lalib_p=yes; break;; - esac - done - exec 0<&5 5<&- - fi - test "$lalib_p" = yes -} - -# func_ltwrapper_script_p file -# True iff FILE is a libtool wrapper script -# This function is only a basic sanity check; it will hardly flush out -# determined imposters. -func_ltwrapper_script_p () -{ - func_lalib_p "$1" -} - -# func_ltwrapper_executable_p file -# True iff FILE is a libtool wrapper executable -# This function is only a basic sanity check; it will hardly flush out -# determined imposters. -func_ltwrapper_executable_p () -{ - func_ltwrapper_exec_suffix= - case $1 in - *.exe) ;; - *) func_ltwrapper_exec_suffix=.exe ;; - esac - $GREP "$magic_exe" "$1$func_ltwrapper_exec_suffix" >/dev/null 2>&1 -} - -# func_ltwrapper_scriptname file -# Assumes file is an ltwrapper_executable -# uses $file to determine the appropriate filename for a -# temporary ltwrapper_script. -func_ltwrapper_scriptname () -{ - func_dirname_and_basename "$1" "" "." - func_stripname '' '.exe' "$func_basename_result" - func_ltwrapper_scriptname_result="$func_dirname_result/$objdir/${func_stripname_result}_ltshwrapper" -} - -# func_ltwrapper_p file -# True iff FILE is a libtool wrapper script or wrapper executable -# This function is only a basic sanity check; it will hardly flush out -# determined imposters. -func_ltwrapper_p () -{ - func_ltwrapper_script_p "$1" || func_ltwrapper_executable_p "$1" -} - - -# func_execute_cmds commands fail_cmd -# Execute tilde-delimited COMMANDS. -# If FAIL_CMD is given, eval that upon failure. -# FAIL_CMD may read-access the current command in variable CMD! 
-func_execute_cmds () -{ - $opt_debug - save_ifs=$IFS; IFS='~' - for cmd in $1; do - IFS=$save_ifs - eval cmd=\"$cmd\" - func_show_eval "$cmd" "${2-:}" - done - IFS=$save_ifs -} - - -# func_source file -# Source FILE, adding directory component if necessary. -# Note that it is not necessary on cygwin/mingw to append a dot to -# FILE even if both FILE and FILE.exe exist: automatic-append-.exe -# behavior happens only for exec(3), not for open(2)! Also, sourcing -# `FILE.' does not work on cygwin managed mounts. -func_source () -{ - $opt_debug - case $1 in - */* | *\\*) . "$1" ;; - *) . "./$1" ;; - esac -} - - -# func_resolve_sysroot PATH -# Replace a leading = in PATH with a sysroot. Store the result into -# func_resolve_sysroot_result -func_resolve_sysroot () -{ - func_resolve_sysroot_result=$1 - case $func_resolve_sysroot_result in - =*) - func_stripname '=' '' "$func_resolve_sysroot_result" - func_resolve_sysroot_result=$lt_sysroot$func_stripname_result - ;; - esac -} - -# func_replace_sysroot PATH -# If PATH begins with the sysroot, replace it with = and -# store the result into func_replace_sysroot_result. -func_replace_sysroot () -{ - case "$lt_sysroot:$1" in - ?*:"$lt_sysroot"*) - func_stripname "$lt_sysroot" '' "$1" - func_replace_sysroot_result="=$func_stripname_result" - ;; - *) - # Including no sysroot. - func_replace_sysroot_result=$1 - ;; - esac -} - -# func_infer_tag arg -# Infer tagged configuration to use if any are available and -# if one wasn't chosen via the "--tag" command line option. -# Only attempt this if the compiler in the base compile -# command doesn't match the default compiler. -# arg is usually of the form 'gcc ...' -func_infer_tag () -{ - $opt_debug - if test -n "$available_tags" && test -z "$tagname"; then - CC_quoted= - for arg in $CC; do - func_append_quoted CC_quoted "$arg" - done - CC_expanded=`func_echo_all $CC` - CC_quoted_expanded=`func_echo_all $CC_quoted` - case $@ in - # Blanks in the command may have been stripped by the calling shell, - # but not from the CC environment variable when configure was run. - " $CC "* | "$CC "* | " $CC_expanded "* | "$CC_expanded "* | \ - " $CC_quoted"* | "$CC_quoted "* | " $CC_quoted_expanded "* | "$CC_quoted_expanded "*) ;; - # Blanks at the start of $base_compile will cause this to fail - # if we don't check for them as well. - *) - for z in $available_tags; do - if $GREP "^# ### BEGIN LIBTOOL TAG CONFIG: $z$" < "$progpath" > /dev/null; then - # Evaluate the configuration. - eval "`${SED} -n -e '/^# ### BEGIN LIBTOOL TAG CONFIG: '$z'$/,/^# ### END LIBTOOL TAG CONFIG: '$z'$/p' < $progpath`" - CC_quoted= - for arg in $CC; do - # Double-quote args containing other shell metacharacters. - func_append_quoted CC_quoted "$arg" - done - CC_expanded=`func_echo_all $CC` - CC_quoted_expanded=`func_echo_all $CC_quoted` - case "$@ " in - " $CC "* | "$CC "* | " $CC_expanded "* | "$CC_expanded "* | \ - " $CC_quoted"* | "$CC_quoted "* | " $CC_quoted_expanded "* | "$CC_quoted_expanded "*) - # The compiler in the base compile command matches - # the one in the tagged configuration. - # Assume this is the tagged configuration we want. - tagname=$z - break - ;; - esac - fi - done - # If $tagname still isn't set, then no tagged configuration - # was found and let the user know that the "--tag" command - # line option must be used. 
- if test -z "$tagname"; then - func_echo "unable to infer tagged configuration" - func_fatal_error "specify a tag with \`--tag'" -# else -# func_verbose "using $tagname tagged configuration" - fi - ;; - esac - fi -} - - - -# func_write_libtool_object output_name pic_name nonpic_name -# Create a libtool object file (analogous to a ".la" file), -# but don't create it if we're doing a dry run. -func_write_libtool_object () -{ - write_libobj=${1} - if test "$build_libtool_libs" = yes; then - write_lobj=\'${2}\' - else - write_lobj=none - fi - - if test "$build_old_libs" = yes; then - write_oldobj=\'${3}\' - else - write_oldobj=none - fi - - $opt_dry_run || { - cat >${write_libobj}T </dev/null` - if test "$?" -eq 0 && test -n "${func_convert_core_file_wine_to_w32_tmp}"; then - func_convert_core_file_wine_to_w32_result=`$ECHO "$func_convert_core_file_wine_to_w32_tmp" | - $SED -e "$lt_sed_naive_backslashify"` - else - func_convert_core_file_wine_to_w32_result= - fi - fi -} -# end: func_convert_core_file_wine_to_w32 - - -# func_convert_core_path_wine_to_w32 ARG -# Helper function used by path conversion functions when $build is *nix, and -# $host is mingw, cygwin, or some other w32 environment. Relies on a correctly -# configured wine environment available, with the winepath program in $build's -# $PATH. Assumes ARG has no leading or trailing path separator characters. -# -# ARG is path to be converted from $build format to win32. -# Result is available in $func_convert_core_path_wine_to_w32_result. -# Unconvertible file (directory) names in ARG are skipped; if no directory names -# are convertible, then the result may be empty. -func_convert_core_path_wine_to_w32 () -{ - $opt_debug - # unfortunately, winepath doesn't convert paths, only file names - func_convert_core_path_wine_to_w32_result="" - if test -n "$1"; then - oldIFS=$IFS - IFS=: - for func_convert_core_path_wine_to_w32_f in $1; do - IFS=$oldIFS - func_convert_core_file_wine_to_w32 "$func_convert_core_path_wine_to_w32_f" - if test -n "$func_convert_core_file_wine_to_w32_result" ; then - if test -z "$func_convert_core_path_wine_to_w32_result"; then - func_convert_core_path_wine_to_w32_result="$func_convert_core_file_wine_to_w32_result" - else - func_append func_convert_core_path_wine_to_w32_result ";$func_convert_core_file_wine_to_w32_result" - fi - fi - done - IFS=$oldIFS - fi -} -# end: func_convert_core_path_wine_to_w32 - - -# func_cygpath ARGS... -# Wrapper around calling the cygpath program via LT_CYGPATH. This is used when -# when (1) $build is *nix and Cygwin is hosted via a wine environment; or (2) -# $build is MSYS and $host is Cygwin, or (3) $build is Cygwin. In case (1) or -# (2), returns the Cygwin file name or path in func_cygpath_result (input -# file name or path is assumed to be in w32 format, as previously converted -# from $build's *nix or MSYS format). In case (3), returns the w32 file name -# or path in func_cygpath_result (input file name or path is assumed to be in -# Cygwin format). Returns an empty string on error. -# -# ARGS are passed to cygpath, with the last one being the file name or path to -# be converted. -# -# Specify the absolute *nix (or w32) name to cygpath in the LT_CYGPATH -# environment variable; do not put it in $PATH. -func_cygpath () -{ - $opt_debug - if test -n "$LT_CYGPATH" && test -f "$LT_CYGPATH"; then - func_cygpath_result=`$LT_CYGPATH "$@" 2>/dev/null` - if test "$?" 
-ne 0; then - # on failure, ensure result is empty - func_cygpath_result= - fi - else - func_cygpath_result= - func_error "LT_CYGPATH is empty or specifies non-existent file: \`$LT_CYGPATH'" - fi -} -#end: func_cygpath - - -# func_convert_core_msys_to_w32 ARG -# Convert file name or path ARG from MSYS format to w32 format. Return -# result in func_convert_core_msys_to_w32_result. -func_convert_core_msys_to_w32 () -{ - $opt_debug - # awkward: cmd appends spaces to result - func_convert_core_msys_to_w32_result=`( cmd //c echo "$1" ) 2>/dev/null | - $SED -e 's/[ ]*$//' -e "$lt_sed_naive_backslashify"` -} -#end: func_convert_core_msys_to_w32 - - -# func_convert_file_check ARG1 ARG2 -# Verify that ARG1 (a file name in $build format) was converted to $host -# format in ARG2. Otherwise, emit an error message, but continue (resetting -# func_to_host_file_result to ARG1). -func_convert_file_check () -{ - $opt_debug - if test -z "$2" && test -n "$1" ; then - func_error "Could not determine host file name corresponding to" - func_error " \`$1'" - func_error "Continuing, but uninstalled executables may not work." - # Fallback: - func_to_host_file_result="$1" - fi -} -# end func_convert_file_check - - -# func_convert_path_check FROM_PATHSEP TO_PATHSEP FROM_PATH TO_PATH -# Verify that FROM_PATH (a path in $build format) was converted to $host -# format in TO_PATH. Otherwise, emit an error message, but continue, resetting -# func_to_host_file_result to a simplistic fallback value (see below). -func_convert_path_check () -{ - $opt_debug - if test -z "$4" && test -n "$3"; then - func_error "Could not determine the host path corresponding to" - func_error " \`$3'" - func_error "Continuing, but uninstalled executables may not work." - # Fallback. This is a deliberately simplistic "conversion" and - # should not be "improved". See libtool.info. - if test "x$1" != "x$2"; then - lt_replace_pathsep_chars="s|$1|$2|g" - func_to_host_path_result=`echo "$3" | - $SED -e "$lt_replace_pathsep_chars"` - else - func_to_host_path_result="$3" - fi - fi -} -# end func_convert_path_check - - -# func_convert_path_front_back_pathsep FRONTPAT BACKPAT REPL ORIG -# Modifies func_to_host_path_result by prepending REPL if ORIG matches FRONTPAT -# and appending REPL if ORIG matches BACKPAT. -func_convert_path_front_back_pathsep () -{ - $opt_debug - case $4 in - $1 ) func_to_host_path_result="$3$func_to_host_path_result" - ;; - esac - case $4 in - $2 ) func_append func_to_host_path_result "$3" - ;; - esac -} -# end func_convert_path_front_back_pathsep - - -################################################## -# $build to $host FILE NAME CONVERSION FUNCTIONS # -################################################## -# invoked via `$to_host_file_cmd ARG' -# -# In each case, ARG is the path to be converted from $build to $host format. -# Result will be available in $func_to_host_file_result. - - -# func_to_host_file ARG -# Converts the file name ARG from $build format to $host format. Return result -# in func_to_host_file_result. -func_to_host_file () -{ - $opt_debug - $to_host_file_cmd "$1" -} -# end func_to_host_file - - -# func_to_tool_file ARG LAZY -# converts the file name ARG from $build format to toolchain format. Return -# result in func_to_tool_file_result. If the conversion in use is listed -# in (the comma separated) LAZY, no conversion takes place. 
-func_to_tool_file () -{ - $opt_debug - case ,$2, in - *,"$to_tool_file_cmd",*) - func_to_tool_file_result=$1 - ;; - *) - $to_tool_file_cmd "$1" - func_to_tool_file_result=$func_to_host_file_result - ;; - esac -} -# end func_to_tool_file - - -# func_convert_file_noop ARG -# Copy ARG to func_to_host_file_result. -func_convert_file_noop () -{ - func_to_host_file_result="$1" -} -# end func_convert_file_noop - - -# func_convert_file_msys_to_w32 ARG -# Convert file name ARG from (mingw) MSYS to (mingw) w32 format; automatic -# conversion to w32 is not available inside the cwrapper. Returns result in -# func_to_host_file_result. -func_convert_file_msys_to_w32 () -{ - $opt_debug - func_to_host_file_result="$1" - if test -n "$1"; then - func_convert_core_msys_to_w32 "$1" - func_to_host_file_result="$func_convert_core_msys_to_w32_result" - fi - func_convert_file_check "$1" "$func_to_host_file_result" -} -# end func_convert_file_msys_to_w32 - - -# func_convert_file_cygwin_to_w32 ARG -# Convert file name ARG from Cygwin to w32 format. Returns result in -# func_to_host_file_result. -func_convert_file_cygwin_to_w32 () -{ - $opt_debug - func_to_host_file_result="$1" - if test -n "$1"; then - # because $build is cygwin, we call "the" cygpath in $PATH; no need to use - # LT_CYGPATH in this case. - func_to_host_file_result=`cygpath -m "$1"` - fi - func_convert_file_check "$1" "$func_to_host_file_result" -} -# end func_convert_file_cygwin_to_w32 - - -# func_convert_file_nix_to_w32 ARG -# Convert file name ARG from *nix to w32 format. Requires a wine environment -# and a working winepath. Returns result in func_to_host_file_result. -func_convert_file_nix_to_w32 () -{ - $opt_debug - func_to_host_file_result="$1" - if test -n "$1"; then - func_convert_core_file_wine_to_w32 "$1" - func_to_host_file_result="$func_convert_core_file_wine_to_w32_result" - fi - func_convert_file_check "$1" "$func_to_host_file_result" -} -# end func_convert_file_nix_to_w32 - - -# func_convert_file_msys_to_cygwin ARG -# Convert file name ARG from MSYS to Cygwin format. Requires LT_CYGPATH set. -# Returns result in func_to_host_file_result. -func_convert_file_msys_to_cygwin () -{ - $opt_debug - func_to_host_file_result="$1" - if test -n "$1"; then - func_convert_core_msys_to_w32 "$1" - func_cygpath -u "$func_convert_core_msys_to_w32_result" - func_to_host_file_result="$func_cygpath_result" - fi - func_convert_file_check "$1" "$func_to_host_file_result" -} -# end func_convert_file_msys_to_cygwin - - -# func_convert_file_nix_to_cygwin ARG -# Convert file name ARG from *nix to Cygwin format. Requires Cygwin installed -# in a wine environment, working winepath, and LT_CYGPATH set. Returns result -# in func_to_host_file_result. -func_convert_file_nix_to_cygwin () -{ - $opt_debug - func_to_host_file_result="$1" - if test -n "$1"; then - # convert from *nix to w32, then use cygpath to convert from w32 to cygwin. - func_convert_core_file_wine_to_w32 "$1" - func_cygpath -u "$func_convert_core_file_wine_to_w32_result" - func_to_host_file_result="$func_cygpath_result" - fi - func_convert_file_check "$1" "$func_to_host_file_result" -} -# end func_convert_file_nix_to_cygwin - - -############################################# -# $build to $host PATH CONVERSION FUNCTIONS # -############################################# -# invoked via `$to_host_path_cmd ARG' -# -# In each case, ARG is the path to be converted from $build to $host format. -# The result will be available in $func_to_host_path_result. 
-# -# Path separators are also converted from $build format to $host format. If -# ARG begins or ends with a path separator character, it is preserved (but -# converted to $host format) on output. -# -# All path conversion functions are named using the following convention: -# file name conversion function : func_convert_file_X_to_Y () -# path conversion function : func_convert_path_X_to_Y () -# where, for any given $build/$host combination the 'X_to_Y' value is the -# same. If conversion functions are added for new $build/$host combinations, -# the two new functions must follow this pattern, or func_init_to_host_path_cmd -# will break. - - -# func_init_to_host_path_cmd -# Ensures that function "pointer" variable $to_host_path_cmd is set to the -# appropriate value, based on the value of $to_host_file_cmd. -to_host_path_cmd= -func_init_to_host_path_cmd () -{ - $opt_debug - if test -z "$to_host_path_cmd"; then - func_stripname 'func_convert_file_' '' "$to_host_file_cmd" - to_host_path_cmd="func_convert_path_${func_stripname_result}" - fi -} - - -# func_to_host_path ARG -# Converts the path ARG from $build format to $host format. Return result -# in func_to_host_path_result. -func_to_host_path () -{ - $opt_debug - func_init_to_host_path_cmd - $to_host_path_cmd "$1" -} -# end func_to_host_path - - -# func_convert_path_noop ARG -# Copy ARG to func_to_host_path_result. -func_convert_path_noop () -{ - func_to_host_path_result="$1" -} -# end func_convert_path_noop - - -# func_convert_path_msys_to_w32 ARG -# Convert path ARG from (mingw) MSYS to (mingw) w32 format; automatic -# conversion to w32 is not available inside the cwrapper. Returns result in -# func_to_host_path_result. -func_convert_path_msys_to_w32 () -{ - $opt_debug - func_to_host_path_result="$1" - if test -n "$1"; then - # Remove leading and trailing path separator characters from ARG. MSYS - # behavior is inconsistent here; cygpath turns them into '.;' and ';.'; - # and winepath ignores them completely. - func_stripname : : "$1" - func_to_host_path_tmp1=$func_stripname_result - func_convert_core_msys_to_w32 "$func_to_host_path_tmp1" - func_to_host_path_result="$func_convert_core_msys_to_w32_result" - func_convert_path_check : ";" \ - "$func_to_host_path_tmp1" "$func_to_host_path_result" - func_convert_path_front_back_pathsep ":*" "*:" ";" "$1" - fi -} -# end func_convert_path_msys_to_w32 - - -# func_convert_path_cygwin_to_w32 ARG -# Convert path ARG from Cygwin to w32 format. Returns result in -# func_to_host_file_result. -func_convert_path_cygwin_to_w32 () -{ - $opt_debug - func_to_host_path_result="$1" - if test -n "$1"; then - # See func_convert_path_msys_to_w32: - func_stripname : : "$1" - func_to_host_path_tmp1=$func_stripname_result - func_to_host_path_result=`cygpath -m -p "$func_to_host_path_tmp1"` - func_convert_path_check : ";" \ - "$func_to_host_path_tmp1" "$func_to_host_path_result" - func_convert_path_front_back_pathsep ":*" "*:" ";" "$1" - fi -} -# end func_convert_path_cygwin_to_w32 - - -# func_convert_path_nix_to_w32 ARG -# Convert path ARG from *nix to w32 format. Requires a wine environment and -# a working winepath. Returns result in func_to_host_file_result. 
-func_convert_path_nix_to_w32 () -{ - $opt_debug - func_to_host_path_result="$1" - if test -n "$1"; then - # See func_convert_path_msys_to_w32: - func_stripname : : "$1" - func_to_host_path_tmp1=$func_stripname_result - func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1" - func_to_host_path_result="$func_convert_core_path_wine_to_w32_result" - func_convert_path_check : ";" \ - "$func_to_host_path_tmp1" "$func_to_host_path_result" - func_convert_path_front_back_pathsep ":*" "*:" ";" "$1" - fi -} -# end func_convert_path_nix_to_w32 - - -# func_convert_path_msys_to_cygwin ARG -# Convert path ARG from MSYS to Cygwin format. Requires LT_CYGPATH set. -# Returns result in func_to_host_file_result. -func_convert_path_msys_to_cygwin () -{ - $opt_debug - func_to_host_path_result="$1" - if test -n "$1"; then - # See func_convert_path_msys_to_w32: - func_stripname : : "$1" - func_to_host_path_tmp1=$func_stripname_result - func_convert_core_msys_to_w32 "$func_to_host_path_tmp1" - func_cygpath -u -p "$func_convert_core_msys_to_w32_result" - func_to_host_path_result="$func_cygpath_result" - func_convert_path_check : : \ - "$func_to_host_path_tmp1" "$func_to_host_path_result" - func_convert_path_front_back_pathsep ":*" "*:" : "$1" - fi -} -# end func_convert_path_msys_to_cygwin - - -# func_convert_path_nix_to_cygwin ARG -# Convert path ARG from *nix to Cygwin format. Requires Cygwin installed in a -# a wine environment, working winepath, and LT_CYGPATH set. Returns result in -# func_to_host_file_result. -func_convert_path_nix_to_cygwin () -{ - $opt_debug - func_to_host_path_result="$1" - if test -n "$1"; then - # Remove leading and trailing path separator characters from - # ARG. msys behavior is inconsistent here, cygpath turns them - # into '.;' and ';.', and winepath ignores them completely. - func_stripname : : "$1" - func_to_host_path_tmp1=$func_stripname_result - func_convert_core_path_wine_to_w32 "$func_to_host_path_tmp1" - func_cygpath -u -p "$func_convert_core_path_wine_to_w32_result" - func_to_host_path_result="$func_cygpath_result" - func_convert_path_check : : \ - "$func_to_host_path_tmp1" "$func_to_host_path_result" - func_convert_path_front_back_pathsep ":*" "*:" : "$1" - fi -} -# end func_convert_path_nix_to_cygwin - - -# func_mode_compile arg... -func_mode_compile () -{ - $opt_debug - # Get the compilation command and the source file. - base_compile= - srcfile="$nonopt" # always keep a non-empty value in "srcfile" - suppress_opt=yes - suppress_output= - arg_mode=normal - libobj= - later= - pie_flag= - - for arg - do - case $arg_mode in - arg ) - # do not "continue". Instead, add this to base_compile - lastarg="$arg" - arg_mode=normal - ;; - - target ) - libobj="$arg" - arg_mode=normal - continue - ;; - - normal ) - # Accept any command-line options. - case $arg in - -o) - test -n "$libobj" && \ - func_fatal_error "you cannot specify \`-o' more than once" - arg_mode=target - continue - ;; - - -pie | -fpie | -fPIE) - func_append pie_flag " $arg" - continue - ;; - - -shared | -static | -prefer-pic | -prefer-non-pic) - func_append later " $arg" - continue - ;; - - -no-suppress) - suppress_opt=no - continue - ;; - - -Xcompiler) - arg_mode=arg # the next one goes into the "base_compile" arg list - continue # The current "srcfile" will either be retained or - ;; # replaced later. I would guess that would be a bug. 
- - -Wc,*) - func_stripname '-Wc,' '' "$arg" - args=$func_stripname_result - lastarg= - save_ifs="$IFS"; IFS=',' - for arg in $args; do - IFS="$save_ifs" - func_append_quoted lastarg "$arg" - done - IFS="$save_ifs" - func_stripname ' ' '' "$lastarg" - lastarg=$func_stripname_result - - # Add the arguments to base_compile. - func_append base_compile " $lastarg" - continue - ;; - - *) - # Accept the current argument as the source file. - # The previous "srcfile" becomes the current argument. - # - lastarg="$srcfile" - srcfile="$arg" - ;; - esac # case $arg - ;; - esac # case $arg_mode - - # Aesthetically quote the previous argument. - func_append_quoted base_compile "$lastarg" - done # for arg - - case $arg_mode in - arg) - func_fatal_error "you must specify an argument for -Xcompile" - ;; - target) - func_fatal_error "you must specify a target with \`-o'" - ;; - *) - # Get the name of the library object. - test -z "$libobj" && { - func_basename "$srcfile" - libobj="$func_basename_result" - } - ;; - esac - - # Recognize several different file suffixes. - # If the user specifies -o file.o, it is replaced with file.lo - case $libobj in - *.[cCFSifmso] | \ - *.ada | *.adb | *.ads | *.asm | \ - *.c++ | *.cc | *.ii | *.class | *.cpp | *.cxx | \ - *.[fF][09]? | *.for | *.java | *.go | *.obj | *.sx | *.cu | *.cup) - func_xform "$libobj" - libobj=$func_xform_result - ;; - esac - - case $libobj in - *.lo) func_lo2o "$libobj"; obj=$func_lo2o_result ;; - *) - func_fatal_error "cannot determine name of library object from \`$libobj'" - ;; - esac - - func_infer_tag $base_compile - - for arg in $later; do - case $arg in - -shared) - test "$build_libtool_libs" != yes && \ - func_fatal_configuration "can not build a shared library" - build_old_libs=no - continue - ;; - - -static) - build_libtool_libs=no - build_old_libs=yes - continue - ;; - - -prefer-pic) - pic_mode=yes - continue - ;; - - -prefer-non-pic) - pic_mode=no - continue - ;; - esac - done - - func_quote_for_eval "$libobj" - test "X$libobj" != "X$func_quote_for_eval_result" \ - && $ECHO "X$libobj" | $GREP '[]~#^*{};<>?"'"'"' &()|`$[]' \ - && func_warning "libobj name \`$libobj' may not contain shell special characters." - func_dirname_and_basename "$obj" "/" "" - objname="$func_basename_result" - xdir="$func_dirname_result" - lobj=${xdir}$objdir/$objname - - test -z "$base_compile" && \ - func_fatal_help "you must specify a compilation command" - - # Delete any leftover library objects. 
- if test "$build_old_libs" = yes; then - removelist="$obj $lobj $libobj ${libobj}T" - else - removelist="$lobj $libobj ${libobj}T" - fi - - # On Cygwin there's no "real" PIC flag so we must build both object types - case $host_os in - cygwin* | mingw* | pw32* | os2* | cegcc*) - pic_mode=default - ;; - esac - if test "$pic_mode" = no && test "$deplibs_check_method" != pass_all; then - # non-PIC code in shared libraries is not supported - pic_mode=default - fi - - # Calculate the filename of the output object if compiler does - # not support -o with -c - if test "$compiler_c_o" = no; then - output_obj=`$ECHO "$srcfile" | $SED 's%^.*/%%; s%\.[^.]*$%%'`.${objext} - lockfile="$output_obj.lock" - else - output_obj= - need_locks=no - lockfile= - fi - - # Lock this critical section if it is needed - # We use this script file to make the link, it avoids creating a new file - if test "$need_locks" = yes; then - until $opt_dry_run || ln "$progpath" "$lockfile" 2>/dev/null; do - func_echo "Waiting for $lockfile to be removed" - sleep 2 - done - elif test "$need_locks" = warn; then - if test -f "$lockfile"; then - $ECHO "\ -*** ERROR, $lockfile exists and contains: -`cat $lockfile 2>/dev/null` - -This indicates that another process is trying to use the same -temporary object file, and libtool could not work around it because -your compiler does not support \`-c' and \`-o' together. If you -repeat this compilation, it may succeed, by chance, but you had better -avoid parallel builds (make -j) in this platform, or get a better -compiler." - - $opt_dry_run || $RM $removelist - exit $EXIT_FAILURE - fi - func_append removelist " $output_obj" - $ECHO "$srcfile" > "$lockfile" - fi - - $opt_dry_run || $RM $removelist - func_append removelist " $lockfile" - trap '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' 1 2 15 - - func_to_tool_file "$srcfile" func_convert_file_msys_to_w32 - srcfile=$func_to_tool_file_result - func_quote_for_eval "$srcfile" - qsrcfile=$func_quote_for_eval_result - - # Only build a PIC object if we are building libtool libraries. - if test "$build_libtool_libs" = yes; then - # Without this assignment, base_compile gets emptied. - fbsd_hideous_sh_bug=$base_compile - - if test "$pic_mode" != no; then - command="$base_compile $qsrcfile $pic_flag" - else - # Don't build PIC code - command="$base_compile $qsrcfile" - fi - - func_mkdir_p "$xdir$objdir" - - if test -z "$output_obj"; then - # Place PIC objects in $objdir - func_append command " -o $lobj" - fi - - func_show_eval_locale "$command" \ - 'test -n "$output_obj" && $RM $removelist; exit $EXIT_FAILURE' - - if test "$need_locks" = warn && - test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then - $ECHO "\ -*** ERROR, $lockfile contains: -`cat $lockfile 2>/dev/null` - -but it should contain: -$srcfile - -This indicates that another process is trying to use the same -temporary object file, and libtool could not work around it because -your compiler does not support \`-c' and \`-o' together. If you -repeat this compilation, it may succeed, by chance, but you had better -avoid parallel builds (make -j) in this platform, or get a better -compiler." - - $opt_dry_run || $RM $removelist - exit $EXIT_FAILURE - fi - - # Just move the object if needed, then go on to compile the next one - if test -n "$output_obj" && test "X$output_obj" != "X$lobj"; then - func_show_eval '$MV "$output_obj" "$lobj"' \ - 'error=$?; $opt_dry_run || $RM $removelist; exit $error' - fi - - # Allow error messages only from the first compilation. 
- if test "$suppress_opt" = yes; then - suppress_output=' >/dev/null 2>&1' - fi - fi - - # Only build a position-dependent object if we build old libraries. - if test "$build_old_libs" = yes; then - if test "$pic_mode" != yes; then - # Don't build PIC code - command="$base_compile $qsrcfile$pie_flag" - else - command="$base_compile $qsrcfile $pic_flag" - fi - if test "$compiler_c_o" = yes; then - func_append command " -o $obj" - fi - - # Suppress compiler output if we already did a PIC compilation. - func_append command "$suppress_output" - func_show_eval_locale "$command" \ - '$opt_dry_run || $RM $removelist; exit $EXIT_FAILURE' - - if test "$need_locks" = warn && - test "X`cat $lockfile 2>/dev/null`" != "X$srcfile"; then - $ECHO "\ -*** ERROR, $lockfile contains: -`cat $lockfile 2>/dev/null` - -but it should contain: -$srcfile - -This indicates that another process is trying to use the same -temporary object file, and libtool could not work around it because -your compiler does not support \`-c' and \`-o' together. If you -repeat this compilation, it may succeed, by chance, but you had better -avoid parallel builds (make -j) in this platform, or get a better -compiler." - - $opt_dry_run || $RM $removelist - exit $EXIT_FAILURE - fi - - # Just move the object if needed - if test -n "$output_obj" && test "X$output_obj" != "X$obj"; then - func_show_eval '$MV "$output_obj" "$obj"' \ - 'error=$?; $opt_dry_run || $RM $removelist; exit $error' - fi - fi - - $opt_dry_run || { - func_write_libtool_object "$libobj" "$objdir/$objname" "$objname" - - # Unlock the critical section if it was locked - if test "$need_locks" != no; then - removelist=$lockfile - $RM "$lockfile" - fi - } - - exit $EXIT_SUCCESS -} - -$opt_help || { - test "$opt_mode" = compile && func_mode_compile ${1+"$@"} -} - -func_mode_help () -{ - # We need to display help for each of the modes. - case $opt_mode in - "") - # Generic help is extracted from the usage comments - # at the start of this file. - func_help - ;; - - clean) - $ECHO \ -"Usage: $progname [OPTION]... --mode=clean RM [RM-OPTION]... FILE... - -Remove files from the build directory. - -RM is the name of the program to use to delete files associated with each FILE -(typically \`/bin/rm'). RM-OPTIONS are options (such as \`-f') to be passed -to RM. - -If FILE is a libtool library, object or program, all the files associated -with it are deleted. Otherwise, only FILE itself is deleted using RM." - ;; - - compile) - $ECHO \ -"Usage: $progname [OPTION]... --mode=compile COMPILE-COMMAND... SOURCEFILE - -Compile a source file into a libtool library object. - -This mode accepts the following additional options: - - -o OUTPUT-FILE set the output file name to OUTPUT-FILE - -no-suppress do not suppress compiler output for multiple passes - -prefer-pic try to build PIC objects only - -prefer-non-pic try to build non-PIC objects only - -shared do not build a \`.o' file suitable for static linking - -static only build a \`.o' file suitable for static linking - -Wc,FLAG pass FLAG directly to the compiler - -COMPILE-COMMAND is a command to be used in creating a \`standard' object file -from the given SOURCEFILE. - -The output file name is determined by removing the directory component from -SOURCEFILE, then substituting the C source code suffix \`.c' with the -library object suffix, \`.lo'." - ;; - - execute) - $ECHO \ -"Usage: $progname [OPTION]... --mode=execute COMMAND [ARGS]... - -Automatically set library path, then run a program. 
- -This mode accepts the following additional options: - - -dlopen FILE add the directory containing FILE to the library path - -This mode sets the library path environment variable according to \`-dlopen' -flags. - -If any of the ARGS are libtool executable wrappers, then they are translated -into their corresponding uninstalled binary, and any of their required library -directories are added to the library path. - -Then, COMMAND is executed, with ARGS as arguments." - ;; - - finish) - $ECHO \ -"Usage: $progname [OPTION]... --mode=finish [LIBDIR]... - -Complete the installation of libtool libraries. - -Each LIBDIR is a directory that contains libtool libraries. - -The commands that this mode executes may require superuser privileges. Use -the \`--dry-run' option if you just want to see what would be executed." - ;; - - install) - $ECHO \ -"Usage: $progname [OPTION]... --mode=install INSTALL-COMMAND... - -Install executables or libraries. - -INSTALL-COMMAND is the installation command. The first component should be -either the \`install' or \`cp' program. - -The following components of INSTALL-COMMAND are treated specially: - - -inst-prefix-dir PREFIX-DIR Use PREFIX-DIR as a staging area for installation - -The rest of the components are interpreted as arguments to that command (only -BSD-compatible install options are recognized)." - ;; - - link) - $ECHO \ -"Usage: $progname [OPTION]... --mode=link LINK-COMMAND... - -Link object files or libraries together to form another library, or to -create an executable program. - -LINK-COMMAND is a command using the C compiler that you would use to create -a program from several object files. - -The following components of LINK-COMMAND are treated specially: - - -all-static do not do any dynamic linking at all - -avoid-version do not add a version suffix if possible - -bindir BINDIR specify path to binaries directory (for systems where - libraries must be found in the PATH setting at runtime) - -dlopen FILE \`-dlpreopen' FILE if it cannot be dlopened at runtime - -dlpreopen FILE link in FILE and add its symbols to lt_preloaded_symbols - -export-dynamic allow symbols from OUTPUT-FILE to be resolved with dlsym(3) - -export-symbols SYMFILE - try to export only the symbols listed in SYMFILE - -export-symbols-regex REGEX - try to export only the symbols matching REGEX - -LLIBDIR search LIBDIR for required installed libraries - -lNAME OUTPUT-FILE requires the installed library libNAME - -module build a library that can dlopened - -no-fast-install disable the fast-install mode - -no-install link a not-installable executable - -no-undefined declare that a library does not refer to external symbols - -o OUTPUT-FILE create OUTPUT-FILE from the specified objects - -objectlist FILE Use a list of object files found in FILE to specify objects - -precious-files-regex REGEX - don't remove output files matching REGEX - -release RELEASE specify package release information - -rpath LIBDIR the created library will eventually be installed in LIBDIR - -R[ ]LIBDIR add LIBDIR to the runtime path of programs and libraries - -shared only do dynamic linking of libtool libraries - -shrext SUFFIX override the standard shared library file extension - -static do not do any dynamic linking of uninstalled libtool libraries - -static-libtool-libs - do not do any dynamic linking of libtool libraries - -version-info CURRENT[:REVISION[:AGE]] - specify library version info [each variable defaults to 0] - -weak LIBNAME declare that the target provides the LIBNAME interface - -Wc,FLAG - 
-Xcompiler FLAG pass linker-specific FLAG directly to the compiler - -Wl,FLAG - -Xlinker FLAG pass linker-specific FLAG directly to the linker - -XCClinker FLAG pass link-specific FLAG to the compiler driver (CC) - -All other options (arguments beginning with \`-') are ignored. - -Every other argument is treated as a filename. Files ending in \`.la' are -treated as uninstalled libtool libraries, other files are standard or library -object files. - -If the OUTPUT-FILE ends in \`.la', then a libtool library is created, -only library objects (\`.lo' files) may be specified, and \`-rpath' is -required, except when creating a convenience library. - -If OUTPUT-FILE ends in \`.a' or \`.lib', then a standard library is created -using \`ar' and \`ranlib', or on Windows using \`lib'. - -If OUTPUT-FILE ends in \`.lo' or \`.${objext}', then a reloadable object file -is created, otherwise an executable program is created." - ;; - - uninstall) - $ECHO \ -"Usage: $progname [OPTION]... --mode=uninstall RM [RM-OPTION]... FILE... - -Remove libraries from an installation directory. - -RM is the name of the program to use to delete files associated with each FILE -(typically \`/bin/rm'). RM-OPTIONS are options (such as \`-f') to be passed -to RM. - -If FILE is a libtool library, all the files associated with it are deleted. -Otherwise, only FILE itself is deleted using RM." - ;; - - *) - func_fatal_help "invalid operation mode \`$opt_mode'" - ;; - esac - - echo - $ECHO "Try \`$progname --help' for more information about other modes." -} - -# Now that we've collected a possible --mode arg, show help if necessary -if $opt_help; then - if test "$opt_help" = :; then - func_mode_help - else - { - func_help noexit - for opt_mode in compile link execute install finish uninstall clean; do - func_mode_help - done - } | sed -n '1p; 2,$s/^Usage:/ or: /p' - { - func_help noexit - for opt_mode in compile link execute install finish uninstall clean; do - echo - func_mode_help - done - } | - sed '1d - /^When reporting/,/^Report/{ - H - d - } - $x - /information about other modes/d - /more detailed .*MODE/d - s/^Usage:.*--mode=\([^ ]*\) .*/Description of \1 mode:/' - fi - exit $? -fi - - -# func_mode_execute arg... -func_mode_execute () -{ - $opt_debug - # The first argument is the command name. - cmd="$nonopt" - test -z "$cmd" && \ - func_fatal_help "you must specify a COMMAND" - - # Handle -dlopen flags immediately. - for file in $opt_dlopen; do - test -f "$file" \ - || func_fatal_help "\`$file' is not a file" - - dir= - case $file in - *.la) - func_resolve_sysroot "$file" - file=$func_resolve_sysroot_result - - # Check to see that this really is a libtool archive. - func_lalib_unsafe_p "$file" \ - || func_fatal_help "\`$lib' is not a valid libtool archive" - - # Read the libtool library. - dlname= - library_names= - func_source "$file" - - # Skip this library if it cannot be dlopened. - if test -z "$dlname"; then - # Warn if it was a shared library. - test -n "$library_names" && \ - func_warning "\`$file' was not linked with \`-export-dynamic'" - continue - fi - - func_dirname "$file" "" "." - dir="$func_dirname_result" - - if test -f "$dir/$objdir/$dlname"; then - func_append dir "/$objdir" - else - if test ! -f "$dir/$dlname"; then - func_fatal_error "cannot find \`$dlname' in \`$dir' or \`$dir/$objdir'" - fi - fi - ;; - - *.lo) - # Just add the directory containing the .lo file. - func_dirname "$file" "" "." 
- dir="$func_dirname_result" - ;; - - *) - func_warning "\`-dlopen' is ignored for non-libtool libraries and objects" - continue - ;; - esac - - # Get the absolute pathname. - absdir=`cd "$dir" && pwd` - test -n "$absdir" && dir="$absdir" - - # Now add the directory to shlibpath_var. - if eval "test -z \"\$$shlibpath_var\""; then - eval "$shlibpath_var=\"\$dir\"" - else - eval "$shlibpath_var=\"\$dir:\$$shlibpath_var\"" - fi - done - - # This variable tells wrapper scripts just to set shlibpath_var - # rather than running their programs. - libtool_execute_magic="$magic" - - # Check if any of the arguments is a wrapper script. - args= - for file - do - case $file in - -* | *.la | *.lo ) ;; - *) - # Do a test to see if this is really a libtool program. - if func_ltwrapper_script_p "$file"; then - func_source "$file" - # Transform arg to wrapped name. - file="$progdir/$program" - elif func_ltwrapper_executable_p "$file"; then - func_ltwrapper_scriptname "$file" - func_source "$func_ltwrapper_scriptname_result" - # Transform arg to wrapped name. - file="$progdir/$program" - fi - ;; - esac - # Quote arguments (to preserve shell metacharacters). - func_append_quoted args "$file" - done - - if test "X$opt_dry_run" = Xfalse; then - if test -n "$shlibpath_var"; then - # Export the shlibpath_var. - eval "export $shlibpath_var" - fi - - # Restore saved environment variables - for lt_var in LANG LANGUAGE LC_ALL LC_CTYPE LC_COLLATE LC_MESSAGES - do - eval "if test \"\${save_$lt_var+set}\" = set; then - $lt_var=\$save_$lt_var; export $lt_var - else - $lt_unset $lt_var - fi" - done - - # Now prepare to actually exec the command. - exec_cmd="\$cmd$args" - else - # Display what would be done. - if test -n "$shlibpath_var"; then - eval "\$ECHO \"\$shlibpath_var=\$$shlibpath_var\"" - echo "export $shlibpath_var" - fi - $ECHO "$cmd$args" - exit $EXIT_SUCCESS - fi -} - -test "$opt_mode" = execute && func_mode_execute ${1+"$@"} - - -# func_mode_finish arg... -func_mode_finish () -{ - $opt_debug - libs= - libdirs= - admincmds= - - for opt in "$nonopt" ${1+"$@"} - do - if test -d "$opt"; then - func_append libdirs " $opt" - - elif test -f "$opt"; then - if func_lalib_unsafe_p "$opt"; then - func_append libs " $opt" - else - func_warning "\`$opt' is not a valid libtool archive" - fi - - else - func_fatal_error "invalid argument \`$opt'" - fi - done - - if test -n "$libs"; then - if test -n "$lt_sysroot"; then - sysroot_regex=`$ECHO "$lt_sysroot" | $SED "$sed_make_literal_regex"` - sysroot_cmd="s/\([ ']\)$sysroot_regex/\1/g;" - else - sysroot_cmd= - fi - - # Remove sysroot references - if $opt_dry_run; then - for lib in $libs; do - echo "removing references to $lt_sysroot and \`=' prefixes from $lib" - done - else - tmpdir=`func_mktempdir` - for lib in $libs; do - sed -e "${sysroot_cmd} s/\([ ']-[LR]\)=/\1/g; s/\([ ']\)=/\1/g" $lib \ - > $tmpdir/tmp-la - mv -f $tmpdir/tmp-la $lib - done - ${RM}r "$tmpdir" - fi - fi - - if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then - for libdir in $libdirs; do - if test -n "$finish_cmds"; then - # Do each command in the finish commands. - func_execute_cmds "$finish_cmds" 'admincmds="$admincmds -'"$cmd"'"' - fi - if test -n "$finish_eval"; then - # Do the single finish_eval. - eval cmds=\"$finish_eval\" - $opt_dry_run || eval "$cmds" || func_append admincmds " - $cmds" - fi - done - fi - - # Exit here if they wanted silent mode. 
- $opt_silent && exit $EXIT_SUCCESS - - if test -n "$finish_cmds$finish_eval" && test -n "$libdirs"; then - echo "----------------------------------------------------------------------" - echo "Libraries have been installed in:" - for libdir in $libdirs; do - $ECHO " $libdir" - done - echo - echo "If you ever happen to want to link against installed libraries" - echo "in a given directory, LIBDIR, you must either use libtool, and" - echo "specify the full pathname of the library, or use the \`-LLIBDIR'" - echo "flag during linking and do at least one of the following:" - if test -n "$shlibpath_var"; then - echo " - add LIBDIR to the \`$shlibpath_var' environment variable" - echo " during execution" - fi - if test -n "$runpath_var"; then - echo " - add LIBDIR to the \`$runpath_var' environment variable" - echo " during linking" - fi - if test -n "$hardcode_libdir_flag_spec"; then - libdir=LIBDIR - eval flag=\"$hardcode_libdir_flag_spec\" - - $ECHO " - use the \`$flag' linker flag" - fi - if test -n "$admincmds"; then - $ECHO " - have your system administrator run these commands:$admincmds" - fi - if test -f /etc/ld.so.conf; then - echo " - have your system administrator add LIBDIR to \`/etc/ld.so.conf'" - fi - echo - - echo "See any operating system documentation about shared libraries for" - case $host in - solaris2.[6789]|solaris2.1[0-9]) - echo "more information, such as the ld(1), crle(1) and ld.so(8) manual" - echo "pages." - ;; - *) - echo "more information, such as the ld(1) and ld.so(8) manual pages." - ;; - esac - echo "----------------------------------------------------------------------" - fi - exit $EXIT_SUCCESS -} - -test "$opt_mode" = finish && func_mode_finish ${1+"$@"} - - -# func_mode_install arg... -func_mode_install () -{ - $opt_debug - # There may be an optional sh(1) argument at the beginning of - # install_prog (especially on Windows NT). - if test "$nonopt" = "$SHELL" || test "$nonopt" = /bin/sh || - # Allow the use of GNU shtool's install command. - case $nonopt in *shtool*) :;; *) false;; esac; then - # Aesthetically quote it. - func_quote_for_eval "$nonopt" - install_prog="$func_quote_for_eval_result " - arg=$1 - shift - else - install_prog= - arg=$nonopt - fi - - # The real first argument should be the name of the installation program. - # Aesthetically quote it. - func_quote_for_eval "$arg" - func_append install_prog "$func_quote_for_eval_result" - install_shared_prog=$install_prog - case " $install_prog " in - *[\\\ /]cp\ *) install_cp=: ;; - *) install_cp=false ;; - esac - - # We need to accept at least all the BSD install flags. - dest= - files= - opts= - prev= - install_type= - isdir=no - stripme= - no_mode=: - for arg - do - arg2= - if test -n "$dest"; then - func_append files " $dest" - dest=$arg - continue - fi - - case $arg in - -d) isdir=yes ;; - -f) - if $install_cp; then :; else - prev=$arg - fi - ;; - -g | -m | -o) - prev=$arg - ;; - -s) - stripme=" -s" - continue - ;; - -*) - ;; - *) - # If the previous option needed an argument, then skip it. - if test -n "$prev"; then - if test "x$prev" = x-m && test -n "$install_override_mode"; then - arg2=$install_override_mode - no_mode=false - fi - prev= - else - dest=$arg - continue - fi - ;; - esac - - # Aesthetically quote the argument. 
- func_quote_for_eval "$arg" - func_append install_prog " $func_quote_for_eval_result" - if test -n "$arg2"; then - func_quote_for_eval "$arg2" - fi - func_append install_shared_prog " $func_quote_for_eval_result" - done - - test -z "$install_prog" && \ - func_fatal_help "you must specify an install program" - - test -n "$prev" && \ - func_fatal_help "the \`$prev' option requires an argument" - - if test -n "$install_override_mode" && $no_mode; then - if $install_cp; then :; else - func_quote_for_eval "$install_override_mode" - func_append install_shared_prog " -m $func_quote_for_eval_result" - fi - fi - - if test -z "$files"; then - if test -z "$dest"; then - func_fatal_help "no file or destination specified" - else - func_fatal_help "you must specify a destination" - fi - fi - - # Strip any trailing slash from the destination. - func_stripname '' '/' "$dest" - dest=$func_stripname_result - - # Check to see that the destination is a directory. - test -d "$dest" && isdir=yes - if test "$isdir" = yes; then - destdir="$dest" - destname= - else - func_dirname_and_basename "$dest" "" "." - destdir="$func_dirname_result" - destname="$func_basename_result" - - # Not a directory, so check to see that there is only one file specified. - set dummy $files; shift - test "$#" -gt 1 && \ - func_fatal_help "\`$dest' is not a directory" - fi - case $destdir in - [\\/]* | [A-Za-z]:[\\/]*) ;; - *) - for file in $files; do - case $file in - *.lo) ;; - *) - func_fatal_help "\`$destdir' must be an absolute directory name" - ;; - esac - done - ;; - esac - - # This variable tells wrapper scripts just to set variables rather - # than running their programs. - libtool_install_magic="$magic" - - staticlibs= - future_libdirs= - current_libdirs= - for file in $files; do - - # Do each installation. - case $file in - *.$libext) - # Do the static libraries later. - func_append staticlibs " $file" - ;; - - *.la) - func_resolve_sysroot "$file" - file=$func_resolve_sysroot_result - - # Check to see that this really is a libtool archive. - func_lalib_unsafe_p "$file" \ - || func_fatal_help "\`$file' is not a valid libtool archive" - - library_names= - old_library= - relink_command= - func_source "$file" - - # Add the libdir to current_libdirs if it is the destination. - if test "X$destdir" = "X$libdir"; then - case "$current_libdirs " in - *" $libdir "*) ;; - *) func_append current_libdirs " $libdir" ;; - esac - else - # Note the libdir as a future libdir. - case "$future_libdirs " in - *" $libdir "*) ;; - *) func_append future_libdirs " $libdir" ;; - esac - fi - - func_dirname "$file" "/" "" - dir="$func_dirname_result" - func_append dir "$objdir" - - if test -n "$relink_command"; then - # Determine the prefix the user has applied to our future dir. - inst_prefix_dir=`$ECHO "$destdir" | $SED -e "s%$libdir\$%%"` - - # Don't allow the user to place us outside of our expected - # location b/c this prevents finding dependent libraries that - # are installed to the same prefix. - # At present, this check doesn't affect windows .dll's that - # are installed into $libdir/../bin (currently, that works fine) - # but it's something to keep an eye on. - test "$inst_prefix_dir" = "$destdir" && \ - func_fatal_error "error: cannot install \`$file' to a directory not ending in $libdir" - - if test -n "$inst_prefix_dir"; then - # Stick the inst_prefix_dir data into the link command. 
- relink_command=`$ECHO "$relink_command" | $SED "s%@inst_prefix_dir@%-inst-prefix-dir $inst_prefix_dir%"` - else - relink_command=`$ECHO "$relink_command" | $SED "s%@inst_prefix_dir@%%"` - fi - - func_warning "relinking \`$file'" - func_show_eval "$relink_command" \ - 'func_fatal_error "error: relink \`$file'\'' with the above command before installing it"' - fi - - # See the names of the shared library. - set dummy $library_names; shift - if test -n "$1"; then - realname="$1" - shift - - srcname="$realname" - test -n "$relink_command" && srcname="$realname"T - - # Install the shared library and build the symlinks. - func_show_eval "$install_shared_prog $dir/$srcname $destdir/$realname" \ - 'exit $?' - tstripme="$stripme" - case $host_os in - cygwin* | mingw* | pw32* | cegcc*) - case $realname in - *.dll.a) - tstripme="" - ;; - esac - ;; - esac - if test -n "$tstripme" && test -n "$striplib"; then - func_show_eval "$striplib $destdir/$realname" 'exit $?' - fi - - if test "$#" -gt 0; then - # Delete the old symlinks, and create new ones. - # Try `ln -sf' first, because the `ln' binary might depend on - # the symlink we replace! Solaris /bin/ln does not understand -f, - # so we also need to try rm && ln -s. - for linkname - do - test "$linkname" != "$realname" \ - && func_show_eval "(cd $destdir && { $LN_S -f $realname $linkname || { $RM $linkname && $LN_S $realname $linkname; }; })" - done - fi - - # Do each command in the postinstall commands. - lib="$destdir/$realname" - func_execute_cmds "$postinstall_cmds" 'exit $?' - fi - - # Install the pseudo-library for information purposes. - func_basename "$file" - name="$func_basename_result" - instname="$dir/$name"i - func_show_eval "$install_prog $instname $destdir/$name" 'exit $?' - - # Maybe install the static library, too. - test -n "$old_library" && func_append staticlibs " $dir/$old_library" - ;; - - *.lo) - # Install (i.e. copy) a libtool object. - - # Figure out destination file name, if it wasn't already specified. - if test -n "$destname"; then - destfile="$destdir/$destname" - else - func_basename "$file" - destfile="$func_basename_result" - destfile="$destdir/$destfile" - fi - - # Deduce the name of the destination old-style object file. - case $destfile in - *.lo) - func_lo2o "$destfile" - staticdest=$func_lo2o_result - ;; - *.$objext) - staticdest="$destfile" - destfile= - ;; - *) - func_fatal_help "cannot copy a libtool object to \`$destfile'" - ;; - esac - - # Install the libtool object if requested. - test -n "$destfile" && \ - func_show_eval "$install_prog $file $destfile" 'exit $?' - - # Install the old object if enabled. - if test "$build_old_libs" = yes; then - # Deduce the name of the old-style object file. - func_lo2o "$file" - staticobj=$func_lo2o_result - func_show_eval "$install_prog \$staticobj \$staticdest" 'exit $?' - fi - exit $EXIT_SUCCESS - ;; - - *) - # Figure out destination file name, if it wasn't already specified. - if test -n "$destname"; then - destfile="$destdir/$destname" - else - func_basename "$file" - destfile="$func_basename_result" - destfile="$destdir/$destfile" - fi - - # If the file is missing, and there is a .exe on the end, strip it - # because it is most likely a libtool script we actually want to - # install - stripped_ext="" - case $file in - *.exe) - if test ! -f "$file"; then - func_stripname '' '.exe' "$file" - file=$func_stripname_result - stripped_ext=".exe" - fi - ;; - esac - - # Do a test to see if this is really a libtool program. 
- case $host in - *cygwin* | *mingw*) - if func_ltwrapper_executable_p "$file"; then - func_ltwrapper_scriptname "$file" - wrapper=$func_ltwrapper_scriptname_result - else - func_stripname '' '.exe' "$file" - wrapper=$func_stripname_result - fi - ;; - *) - wrapper=$file - ;; - esac - if func_ltwrapper_script_p "$wrapper"; then - notinst_deplibs= - relink_command= - - func_source "$wrapper" - - # Check the variables that should have been set. - test -z "$generated_by_libtool_version" && \ - func_fatal_error "invalid libtool wrapper script \`$wrapper'" - - finalize=yes - for lib in $notinst_deplibs; do - # Check to see that each library is installed. - libdir= - if test -f "$lib"; then - func_source "$lib" - fi - libfile="$libdir/"`$ECHO "$lib" | $SED 's%^.*/%%g'` ### testsuite: skip nested quoting test - if test -n "$libdir" && test ! -f "$libfile"; then - func_warning "\`$lib' has not been installed in \`$libdir'" - finalize=no - fi - done - - relink_command= - func_source "$wrapper" - - outputname= - if test "$fast_install" = no && test -n "$relink_command"; then - $opt_dry_run || { - if test "$finalize" = yes; then - tmpdir=`func_mktempdir` - func_basename "$file$stripped_ext" - file="$func_basename_result" - outputname="$tmpdir/$file" - # Replace the output file specification. - relink_command=`$ECHO "$relink_command" | $SED 's%@OUTPUT@%'"$outputname"'%g'` - - $opt_silent || { - func_quote_for_expand "$relink_command" - eval "func_echo $func_quote_for_expand_result" - } - if eval "$relink_command"; then : - else - func_error "error: relink \`$file' with the above command before installing it" - $opt_dry_run || ${RM}r "$tmpdir" - continue - fi - file="$outputname" - else - func_warning "cannot relink \`$file'" - fi - } - else - # Install the binary that we compiled earlier. - file=`$ECHO "$file$stripped_ext" | $SED "s%\([^/]*\)$%$objdir/\1%"` - fi - fi - - # remove .exe since cygwin /usr/bin/install will append another - # one anyway - case $install_prog,$host in - */usr/bin/install*,*cygwin*) - case $file:$destfile in - *.exe:*.exe) - # this is ok - ;; - *.exe:*) - destfile=$destfile.exe - ;; - *:*.exe) - func_stripname '' '.exe' "$destfile" - destfile=$func_stripname_result - ;; - esac - ;; - esac - func_show_eval "$install_prog\$stripme \$file \$destfile" 'exit $?' - $opt_dry_run || if test -n "$outputname"; then - ${RM}r "$tmpdir" - fi - ;; - esac - done - - for file in $staticlibs; do - func_basename "$file" - name="$func_basename_result" - - # Set up the ranlib parameters. - oldlib="$destdir/$name" - func_to_tool_file "$oldlib" func_convert_file_msys_to_w32 - tool_oldlib=$func_to_tool_file_result - - func_show_eval "$install_prog \$file \$oldlib" 'exit $?' - - if test -n "$stripme" && test -n "$old_striplib"; then - func_show_eval "$old_striplib $tool_oldlib" 'exit $?' - fi - - # Do each command in the postinstall commands. - func_execute_cmds "$old_postinstall_cmds" 'exit $?' - done - - test -n "$future_libdirs" && \ - func_warning "remember to run \`$progname --finish$future_libdirs'" - - if test -n "$current_libdirs"; then - # Maybe just do a dry run. - $opt_dry_run && current_libdirs=" -n$current_libdirs" - exec_cmd='$SHELL $progpath $preserve_args --finish$current_libdirs' - else - exit $EXIT_SUCCESS - fi -} - -test "$opt_mode" = install && func_mode_install ${1+"$@"} - - -# func_generate_dlsyms outputname originator pic_p -# Extract symbols from dlprefiles and create ${outputname}S.o with -# a dlpreopen symbol table. 
-func_generate_dlsyms () -{ - $opt_debug - my_outputname="$1" - my_originator="$2" - my_pic_p="${3-no}" - my_prefix=`$ECHO "$my_originator" | sed 's%[^a-zA-Z0-9]%_%g'` - my_dlsyms= - - if test -n "$dlfiles$dlprefiles" || test "$dlself" != no; then - if test -n "$NM" && test -n "$global_symbol_pipe"; then - my_dlsyms="${my_outputname}S.c" - else - func_error "not configured to extract global symbols from dlpreopened files" - fi - fi - - if test -n "$my_dlsyms"; then - case $my_dlsyms in - "") ;; - *.c) - # Discover the nlist of each of the dlfiles. - nlist="$output_objdir/${my_outputname}.nm" - - func_show_eval "$RM $nlist ${nlist}S ${nlist}T" - - # Parse the name list into a source file. - func_verbose "creating $output_objdir/$my_dlsyms" - - $opt_dry_run || $ECHO > "$output_objdir/$my_dlsyms" "\ -/* $my_dlsyms - symbol resolution table for \`$my_outputname' dlsym emulation. */ -/* Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION */ - -#ifdef __cplusplus -extern \"C\" { -#endif - -#if defined(__GNUC__) && (((__GNUC__ == 4) && (__GNUC_MINOR__ >= 4)) || (__GNUC__ > 4)) -#pragma GCC diagnostic ignored \"-Wstrict-prototypes\" -#endif - -/* Keep this code in sync between libtool.m4, ltmain, lt_system.h, and tests. */ -#if defined(_WIN32) || defined(__CYGWIN__) || defined(_WIN32_WCE) -/* DATA imports from DLLs on WIN32 con't be const, because runtime - relocations are performed -- see ld's documentation on pseudo-relocs. */ -# define LT_DLSYM_CONST -#elif defined(__osf__) -/* This system does not cope well with relocations in const data. */ -# define LT_DLSYM_CONST -#else -# define LT_DLSYM_CONST const -#endif - -/* External symbol declarations for the compiler. */\ -" - - if test "$dlself" = yes; then - func_verbose "generating symbol list for \`$output'" - - $opt_dry_run || echo ': @PROGRAM@ ' > "$nlist" - - # Add our own program objects to the symbol list. 
- progfiles=`$ECHO "$objs$old_deplibs" | $SP2NL | $SED "$lo2o" | $NL2SP` - for progfile in $progfiles; do - func_to_tool_file "$progfile" func_convert_file_msys_to_w32 - func_verbose "extracting global C symbols from \`$func_to_tool_file_result'" - $opt_dry_run || eval "$NM $func_to_tool_file_result | $global_symbol_pipe >> '$nlist'" - done - - if test -n "$exclude_expsyms"; then - $opt_dry_run || { - eval '$EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T' - eval '$MV "$nlist"T "$nlist"' - } - fi - - if test -n "$export_symbols_regex"; then - $opt_dry_run || { - eval '$EGREP -e "$export_symbols_regex" "$nlist" > "$nlist"T' - eval '$MV "$nlist"T "$nlist"' - } - fi - - # Prepare the list of exported symbols - if test -z "$export_symbols"; then - export_symbols="$output_objdir/$outputname.exp" - $opt_dry_run || { - $RM $export_symbols - eval "${SED} -n -e '/^: @PROGRAM@ $/d' -e 's/^.* \(.*\)$/\1/p' "'< "$nlist" > "$export_symbols"' - case $host in - *cygwin* | *mingw* | *cegcc* ) - eval "echo EXPORTS "'> "$output_objdir/$outputname.def"' - eval 'cat "$export_symbols" >> "$output_objdir/$outputname.def"' - ;; - esac - } - else - $opt_dry_run || { - eval "${SED} -e 's/\([].[*^$]\)/\\\\\1/g' -e 's/^/ /' -e 's/$/$/'"' < "$export_symbols" > "$output_objdir/$outputname.exp"' - eval '$GREP -f "$output_objdir/$outputname.exp" < "$nlist" > "$nlist"T' - eval '$MV "$nlist"T "$nlist"' - case $host in - *cygwin* | *mingw* | *cegcc* ) - eval "echo EXPORTS "'> "$output_objdir/$outputname.def"' - eval 'cat "$nlist" >> "$output_objdir/$outputname.def"' - ;; - esac - } - fi - fi - - for dlprefile in $dlprefiles; do - func_verbose "extracting global C symbols from \`$dlprefile'" - func_basename "$dlprefile" - name="$func_basename_result" - case $host in - *cygwin* | *mingw* | *cegcc* ) - # if an import library, we need to obtain dlname - if func_win32_import_lib_p "$dlprefile"; then - func_tr_sh "$dlprefile" - eval "curr_lafile=\$libfile_$func_tr_sh_result" - dlprefile_dlbasename="" - if test -n "$curr_lafile" && func_lalib_p "$curr_lafile"; then - # Use subshell, to avoid clobbering current variable values - dlprefile_dlname=`source "$curr_lafile" && echo "$dlname"` - if test -n "$dlprefile_dlname" ; then - func_basename "$dlprefile_dlname" - dlprefile_dlbasename="$func_basename_result" - else - # no lafile. user explicitly requested -dlpreopen . - $sharedlib_from_linklib_cmd "$dlprefile" - dlprefile_dlbasename=$sharedlib_from_linklib_result - fi - fi - $opt_dry_run || { - if test -n "$dlprefile_dlbasename" ; then - eval '$ECHO ": $dlprefile_dlbasename" >> "$nlist"' - else - func_warning "Could not compute DLL name from $name" - eval '$ECHO ": $name " >> "$nlist"' - fi - func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 - eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe | - $SED -e '/I __imp/d' -e 's/I __nm_/D /;s/_nm__//' >> '$nlist'" - } - else # not an import lib - $opt_dry_run || { - eval '$ECHO ": $name " >> "$nlist"' - func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 - eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'" - } - fi - ;; - *) - $opt_dry_run || { - eval '$ECHO ": $name " >> "$nlist"' - func_to_tool_file "$dlprefile" func_convert_file_msys_to_w32 - eval "$NM \"$func_to_tool_file_result\" 2>/dev/null | $global_symbol_pipe >> '$nlist'" - } - ;; - esac - done - - $opt_dry_run || { - # Make sure we have at least an empty file. 
- test -f "$nlist" || : > "$nlist" - - if test -n "$exclude_expsyms"; then - $EGREP -v " ($exclude_expsyms)$" "$nlist" > "$nlist"T - $MV "$nlist"T "$nlist" - fi - - # Try sorting and uniquifying the output. - if $GREP -v "^: " < "$nlist" | - if sort -k 3 /dev/null 2>&1; then - sort -k 3 - else - sort +2 - fi | - uniq > "$nlist"S; then - : - else - $GREP -v "^: " < "$nlist" > "$nlist"S - fi - - if test -f "$nlist"S; then - eval "$global_symbol_to_cdecl"' < "$nlist"S >> "$output_objdir/$my_dlsyms"' - else - echo '/* NONE */' >> "$output_objdir/$my_dlsyms" - fi - - echo >> "$output_objdir/$my_dlsyms" "\ - -/* The mapping between symbol names and symbols. */ -typedef struct { - const char *name; - void *address; -} lt_dlsymlist; -extern LT_DLSYM_CONST lt_dlsymlist -lt_${my_prefix}_LTX_preloaded_symbols[]; -LT_DLSYM_CONST lt_dlsymlist -lt_${my_prefix}_LTX_preloaded_symbols[] = -{\ - { \"$my_originator\", (void *) 0 }," - - case $need_lib_prefix in - no) - eval "$global_symbol_to_c_name_address" < "$nlist" >> "$output_objdir/$my_dlsyms" - ;; - *) - eval "$global_symbol_to_c_name_address_lib_prefix" < "$nlist" >> "$output_objdir/$my_dlsyms" - ;; - esac - echo >> "$output_objdir/$my_dlsyms" "\ - {0, (void *) 0} -}; - -/* This works around a problem in FreeBSD linker */ -#ifdef FREEBSD_WORKAROUND -static const void *lt_preloaded_setup() { - return lt_${my_prefix}_LTX_preloaded_symbols; -} -#endif - -#ifdef __cplusplus -} -#endif\ -" - } # !$opt_dry_run - - pic_flag_for_symtable= - case "$compile_command " in - *" -static "*) ;; - *) - case $host in - # compiling the symbol table file with pic_flag works around - # a FreeBSD bug that causes programs to crash when -lm is - # linked before any other PIC object. But we must not use - # pic_flag when linking with -static. The problem exists in - # FreeBSD 2.2.6 and is fixed in FreeBSD 3.1. - *-*-freebsd2.*|*-*-freebsd3.0*|*-*-freebsdelf3.0*) - pic_flag_for_symtable=" $pic_flag -DFREEBSD_WORKAROUND" ;; - *-*-hpux*) - pic_flag_for_symtable=" $pic_flag" ;; - *) - if test "X$my_pic_p" != Xno; then - pic_flag_for_symtable=" $pic_flag" - fi - ;; - esac - ;; - esac - symtab_cflags= - for arg in $LTCFLAGS; do - case $arg in - -pie | -fpie | -fPIE) ;; - *) func_append symtab_cflags " $arg" ;; - esac - done - - # Now compile the dynamic symbol file. - func_show_eval '(cd $output_objdir && $LTCC$symtab_cflags -c$no_builtin_flag$pic_flag_for_symtable "$my_dlsyms")' 'exit $?' - - # Clean up the generated files. - func_show_eval '$RM "$output_objdir/$my_dlsyms" "$nlist" "${nlist}S" "${nlist}T"' - - # Transform the symbol file into the correct name. - symfileobj="$output_objdir/${my_outputname}S.$objext" - case $host in - *cygwin* | *mingw* | *cegcc* ) - if test -f "$output_objdir/$my_outputname.def"; then - compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"` - finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$output_objdir/$my_outputname.def $symfileobj%"` - else - compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$symfileobj%"` - finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$symfileobj%"` - fi - ;; - *) - compile_command=`$ECHO "$compile_command" | $SED "s%@SYMFILE@%$symfileobj%"` - finalize_command=`$ECHO "$finalize_command" | $SED "s%@SYMFILE@%$symfileobj%"` - ;; - esac - ;; - *) - func_fatal_error "unknown suffix for \`$my_dlsyms'" - ;; - esac - else - # We keep going just in case the user didn't refer to - # lt_preloaded_symbols. 
The linker will fail if global_symbol_pipe - # really was required. - - # Nullify the symbol file. - compile_command=`$ECHO "$compile_command" | $SED "s% @SYMFILE@%%"` - finalize_command=`$ECHO "$finalize_command" | $SED "s% @SYMFILE@%%"` - fi -} - -# func_win32_libid arg -# return the library type of file 'arg' -# -# Need a lot of goo to handle *both* DLLs and import libs -# Has to be a shell function in order to 'eat' the argument -# that is supplied when $file_magic_command is called. -# Despite the name, also deal with 64 bit binaries. -func_win32_libid () -{ - $opt_debug - win32_libid_type="unknown" - win32_fileres=`file -L $1 2>/dev/null` - case $win32_fileres in - *ar\ archive\ import\ library*) # definitely import - win32_libid_type="x86 archive import" - ;; - *ar\ archive*) # could be an import, or static - # Keep the egrep pattern in sync with the one in _LT_CHECK_MAGIC_METHOD. - if eval $OBJDUMP -f $1 | $SED -e '10q' 2>/dev/null | - $EGREP 'file format (pei*-i386(.*architecture: i386)?|pe-arm-wince|pe-x86-64)' >/dev/null; then - func_to_tool_file "$1" func_convert_file_msys_to_w32 - win32_nmres=`eval $NM -f posix -A \"$func_to_tool_file_result\" | - $SED -n -e ' - 1,100{ - / I /{ - s,.*,import, - p - q - } - }'` - case $win32_nmres in - import*) win32_libid_type="x86 archive import";; - *) win32_libid_type="x86 archive static";; - esac - fi - ;; - *DLL*) - win32_libid_type="x86 DLL" - ;; - *executable*) # but shell scripts are "executable" too... - case $win32_fileres in - *MS\ Windows\ PE\ Intel*) - win32_libid_type="x86 DLL" - ;; - esac - ;; - esac - $ECHO "$win32_libid_type" -} - -# func_cygming_dll_for_implib ARG -# -# Platform-specific function to extract the -# name of the DLL associated with the specified -# import library ARG. -# Invoked by eval'ing the libtool variable -# $sharedlib_from_linklib_cmd -# Result is available in the variable -# $sharedlib_from_linklib_result -func_cygming_dll_for_implib () -{ - $opt_debug - sharedlib_from_linklib_result=`$DLLTOOL --identify-strict --identify "$1"` -} - -# func_cygming_dll_for_implib_fallback_core SECTION_NAME LIBNAMEs -# -# The is the core of a fallback implementation of a -# platform-specific function to extract the name of the -# DLL associated with the specified import library LIBNAME. -# -# SECTION_NAME is either .idata$6 or .idata$7, depending -# on the platform and compiler that created the implib. -# -# Echos the name of the DLL associated with the -# specified import library. -func_cygming_dll_for_implib_fallback_core () -{ - $opt_debug - match_literal=`$ECHO "$1" | $SED "$sed_make_literal_regex"` - $OBJDUMP -s --section "$1" "$2" 2>/dev/null | - $SED '/^Contents of section '"$match_literal"':/{ - # Place marker at beginning of archive member dllname section - s/.*/====MARK====/ - p - d - } - # These lines can sometimes be longer than 43 characters, but - # are always uninteresting - /:[ ]*file format pe[i]\{,1\}-/d - /^In archive [^:]*:/d - # Ensure marker is printed - /^====MARK====/p - # Remove all lines with less than 43 characters - /^.\{43\}/!d - # From remaining lines, remove first 43 characters - s/^.\{43\}//' | - $SED -n ' - # Join marker and all lines until next marker into a single line - /^====MARK====/ b para - H - $ b para - b - :para - x - s/\n//g - # Remove the marker - s/^====MARK====// - # Remove trailing dots and whitespace - s/[\. 
\t]*$// - # Print - /./p' | - # we now have a list, one entry per line, of the stringified - # contents of the appropriate section of all members of the - # archive which possess that section. Heuristic: eliminate - # all those which have a first or second character that is - # a '.' (that is, objdump's representation of an unprintable - # character.) This should work for all archives with less than - # 0x302f exports -- but will fail for DLLs whose name actually - # begins with a literal '.' or a single character followed by - # a '.'. - # - # Of those that remain, print the first one. - $SED -e '/^\./d;/^.\./d;q' -} - -# func_cygming_gnu_implib_p ARG -# This predicate returns with zero status (TRUE) if -# ARG is a GNU/binutils-style import library. Returns -# with nonzero status (FALSE) otherwise. -func_cygming_gnu_implib_p () -{ - $opt_debug - func_to_tool_file "$1" func_convert_file_msys_to_w32 - func_cygming_gnu_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $EGREP ' (_head_[A-Za-z0-9_]+_[ad]l*|[A-Za-z0-9_]+_[ad]l*_iname)$'` - test -n "$func_cygming_gnu_implib_tmp" -} - -# func_cygming_ms_implib_p ARG -# This predicate returns with zero status (TRUE) if -# ARG is an MS-style import library. Returns -# with nonzero status (FALSE) otherwise. -func_cygming_ms_implib_p () -{ - $opt_debug - func_to_tool_file "$1" func_convert_file_msys_to_w32 - func_cygming_ms_implib_tmp=`$NM "$func_to_tool_file_result" | eval "$global_symbol_pipe" | $GREP '_NULL_IMPORT_DESCRIPTOR'` - test -n "$func_cygming_ms_implib_tmp" -} - -# func_cygming_dll_for_implib_fallback ARG -# Platform-specific function to extract the -# name of the DLL associated with the specified -# import library ARG. -# -# This fallback implementation is for use when $DLLTOOL -# does not support the --identify-strict option. -# Invoked by eval'ing the libtool variable -# $sharedlib_from_linklib_cmd -# Result is available in the variable -# $sharedlib_from_linklib_result -func_cygming_dll_for_implib_fallback () -{ - $opt_debug - if func_cygming_gnu_implib_p "$1" ; then - # binutils import library - sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$7' "$1"` - elif func_cygming_ms_implib_p "$1" ; then - # ms-generated import library - sharedlib_from_linklib_result=`func_cygming_dll_for_implib_fallback_core '.idata$6' "$1"` - else - # unknown - sharedlib_from_linklib_result="" - fi -} - - -# func_extract_an_archive dir oldlib -func_extract_an_archive () -{ - $opt_debug - f_ex_an_ar_dir="$1"; shift - f_ex_an_ar_oldlib="$1" - if test "$lock_old_archive_extraction" = yes; then - lockfile=$f_ex_an_ar_oldlib.lock - until $opt_dry_run || ln "$progpath" "$lockfile" 2>/dev/null; do - func_echo "Waiting for $lockfile to be removed" - sleep 2 - done - fi - func_show_eval "(cd \$f_ex_an_ar_dir && $AR x \"\$f_ex_an_ar_oldlib\")" \ - 'stat=$?; rm -f "$lockfile"; exit $stat' - if test "$lock_old_archive_extraction" = yes; then - $opt_dry_run || rm -f "$lockfile" - fi - if ($AR t "$f_ex_an_ar_oldlib" | sort | sort -uc >/dev/null 2>&1); then - : - else - func_fatal_error "object name conflicts in archive: $f_ex_an_ar_dir/$f_ex_an_ar_oldlib" - fi -} - - -# func_extract_archives gentop oldlib ... -func_extract_archives () -{ - $opt_debug - my_gentop="$1"; shift - my_oldlibs=${1+"$@"} - my_oldobjs="" - my_xlib="" - my_xabs="" - my_xdir="" - - for my_xlib in $my_oldlibs; do - # Extract the objects. 
- case $my_xlib in - [\\/]* | [A-Za-z]:[\\/]*) my_xabs="$my_xlib" ;; - *) my_xabs=`pwd`"/$my_xlib" ;; - esac - func_basename "$my_xlib" - my_xlib="$func_basename_result" - my_xlib_u=$my_xlib - while :; do - case " $extracted_archives " in - *" $my_xlib_u "*) - func_arith $extracted_serial + 1 - extracted_serial=$func_arith_result - my_xlib_u=lt$extracted_serial-$my_xlib ;; - *) break ;; - esac - done - extracted_archives="$extracted_archives $my_xlib_u" - my_xdir="$my_gentop/$my_xlib_u" - - func_mkdir_p "$my_xdir" - - case $host in - *-darwin*) - func_verbose "Extracting $my_xabs" - # Do not bother doing anything if just a dry run - $opt_dry_run || { - darwin_orig_dir=`pwd` - cd $my_xdir || exit $? - darwin_archive=$my_xabs - darwin_curdir=`pwd` - darwin_base_archive=`basename "$darwin_archive"` - darwin_arches=`$LIPO -info "$darwin_archive" 2>/dev/null | $GREP Architectures 2>/dev/null || true` - if test -n "$darwin_arches"; then - darwin_arches=`$ECHO "$darwin_arches" | $SED -e 's/.*are://'` - darwin_arch= - func_verbose "$darwin_base_archive has multiple architectures $darwin_arches" - for darwin_arch in $darwin_arches ; do - func_mkdir_p "unfat-$$/${darwin_base_archive}-${darwin_arch}" - $LIPO -thin $darwin_arch -output "unfat-$$/${darwin_base_archive}-${darwin_arch}/${darwin_base_archive}" "${darwin_archive}" - cd "unfat-$$/${darwin_base_archive}-${darwin_arch}" - func_extract_an_archive "`pwd`" "${darwin_base_archive}" - cd "$darwin_curdir" - $RM "unfat-$$/${darwin_base_archive}-${darwin_arch}/${darwin_base_archive}" - done # $darwin_arches - ## Okay now we've a bunch of thin objects, gotta fatten them up :) - darwin_filelist=`find unfat-$$ -type f -name \*.o -print -o -name \*.lo -print | $SED -e "$basename" | sort -u` - darwin_file= - darwin_files= - for darwin_file in $darwin_filelist; do - darwin_files=`find unfat-$$ -name $darwin_file -print | sort | $NL2SP` - $LIPO -create -output "$darwin_file" $darwin_files - done # $darwin_filelist - $RM -rf unfat-$$ - cd "$darwin_orig_dir" - else - cd $darwin_orig_dir - func_extract_an_archive "$my_xdir" "$my_xabs" - fi # $darwin_arches - } # !$opt_dry_run - ;; - *) - func_extract_an_archive "$my_xdir" "$my_xabs" - ;; - esac - my_oldobjs="$my_oldobjs "`find $my_xdir -name \*.$objext -print -o -name \*.lo -print | sort | $NL2SP` - done - - func_extract_archives_result="$my_oldobjs" -} - - -# func_emit_wrapper [arg=no] -# -# Emit a libtool wrapper script on stdout. -# Don't directly open a file because we may want to -# incorporate the script contents within a cygwin/mingw -# wrapper executable. Must ONLY be called from within -# func_mode_link because it depends on a number of variables -# set therein. -# -# ARG is the value that the WRAPPER_SCRIPT_BELONGS_IN_OBJDIR -# variable will take. If 'yes', then the emitted script -# will assume that the directory in which it is stored is -# the $objdir directory. This is a cygwin/mingw-specific -# behavior. -func_emit_wrapper () -{ - func_emit_wrapper_arg1=${1-no} - - $ECHO "\ -#! $SHELL - -# $output - temporary wrapper script for $objdir/$outputname -# Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION -# -# The $output program cannot be directly executed until all the libtool -# libraries that it depends on are installed. -# -# This wrapper script should never be moved out of the build directory. -# If it is, it will not operate correctly. - -# Sed substitution that helps us do robust quoting. It backslashifies -# metacharacters that are still active within double-quoted strings. 
-sed_quote_subst='$sed_quote_subst' - -# Be Bourne compatible -if test -n \"\${ZSH_VERSION+set}\" && (emulate sh) >/dev/null 2>&1; then - emulate sh - NULLCMD=: - # Zsh 3.x and 4.x performs word splitting on \${1+\"\$@\"}, which - # is contrary to our usage. Disable this feature. - alias -g '\${1+\"\$@\"}'='\"\$@\"' - setopt NO_GLOB_SUBST -else - case \`(set -o) 2>/dev/null\` in *posix*) set -o posix;; esac -fi -BIN_SH=xpg4; export BIN_SH # for Tru64 -DUALCASE=1; export DUALCASE # for MKS sh - -# The HP-UX ksh and POSIX shell print the target directory to stdout -# if CDPATH is set. -(unset CDPATH) >/dev/null 2>&1 && unset CDPATH - -relink_command=\"$relink_command\" - -# This environment variable determines our operation mode. -if test \"\$libtool_install_magic\" = \"$magic\"; then - # install mode needs the following variables: - generated_by_libtool_version='$macro_version' - notinst_deplibs='$notinst_deplibs' -else - # When we are sourced in execute mode, \$file and \$ECHO are already set. - if test \"\$libtool_execute_magic\" != \"$magic\"; then - file=\"\$0\"" - - qECHO=`$ECHO "$ECHO" | $SED "$sed_quote_subst"` - $ECHO "\ - -# A function that is used when there is no print builtin or printf. -func_fallback_echo () -{ - eval 'cat <<_LTECHO_EOF -\$1 -_LTECHO_EOF' -} - ECHO=\"$qECHO\" - fi - -# Very basic option parsing. These options are (a) specific to -# the libtool wrapper, (b) are identical between the wrapper -# /script/ and the wrapper /executable/ which is used only on -# windows platforms, and (c) all begin with the string "--lt-" -# (application programs are unlikely to have options which match -# this pattern). -# -# There are only two supported options: --lt-debug and -# --lt-dump-script. There is, deliberately, no --lt-help. -# -# The first argument to this parsing function should be the -# script's $0 value, followed by "$@". -lt_option_debug= -func_parse_lt_options () -{ - lt_script_arg0=\$0 - shift - for lt_opt - do - case \"\$lt_opt\" in - --lt-debug) lt_option_debug=1 ;; - --lt-dump-script) - lt_dump_D=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%/[^/]*$%%'\` - test \"X\$lt_dump_D\" = \"X\$lt_script_arg0\" && lt_dump_D=. - lt_dump_F=\`\$ECHO \"X\$lt_script_arg0\" | $SED -e 's/^X//' -e 's%^.*/%%'\` - cat \"\$lt_dump_D/\$lt_dump_F\" - exit 0 - ;; - --lt-*) - \$ECHO \"Unrecognized --lt- option: '\$lt_opt'\" 1>&2 - exit 1 - ;; - esac - done - - # Print the debug banner immediately: - if test -n \"\$lt_option_debug\"; then - echo \"${outputname}:${output}:\${LINENO}: libtool wrapper (GNU $PACKAGE$TIMESTAMP) $VERSION\" 1>&2 - fi -} - -# Used when --lt-debug. 
Prints its arguments to stdout -# (redirection is the responsibility of the caller) -func_lt_dump_args () -{ - lt_dump_args_N=1; - for lt_arg - do - \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[\$lt_dump_args_N]: \$lt_arg\" - lt_dump_args_N=\`expr \$lt_dump_args_N + 1\` - done -} - -# Core function for launching the target application -func_exec_program_core () -{ -" - case $host in - # Backslashes separate directories on plain windows - *-*-mingw | *-*-os2* | *-cegcc*) - $ECHO "\ - if test -n \"\$lt_option_debug\"; then - \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir\\\\\$program\" 1>&2 - func_lt_dump_args \${1+\"\$@\"} 1>&2 - fi - exec \"\$progdir\\\\\$program\" \${1+\"\$@\"} -" - ;; - - *) - $ECHO "\ - if test -n \"\$lt_option_debug\"; then - \$ECHO \"${outputname}:${output}:\${LINENO}: newargv[0]: \$progdir/\$program\" 1>&2 - func_lt_dump_args \${1+\"\$@\"} 1>&2 - fi - exec \"\$progdir/\$program\" \${1+\"\$@\"} -" - ;; - esac - $ECHO "\ - \$ECHO \"\$0: cannot exec \$program \$*\" 1>&2 - exit 1 -} - -# A function to encapsulate launching the target application -# Strips options in the --lt-* namespace from \$@ and -# launches target application with the remaining arguments. -func_exec_program () -{ - case \" \$* \" in - *\\ --lt-*) - for lt_wr_arg - do - case \$lt_wr_arg in - --lt-*) ;; - *) set x \"\$@\" \"\$lt_wr_arg\"; shift;; - esac - shift - done ;; - esac - func_exec_program_core \${1+\"\$@\"} -} - - # Parse options - func_parse_lt_options \"\$0\" \${1+\"\$@\"} - - # Find the directory that this script lives in. - thisdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*$%%'\` - test \"x\$thisdir\" = \"x\$file\" && thisdir=. - - # Follow symbolic links until we get to the real thisdir. - file=\`ls -ld \"\$file\" | $SED -n 's/.*-> //p'\` - while test -n \"\$file\"; do - destdir=\`\$ECHO \"\$file\" | $SED 's%/[^/]*\$%%'\` - - # If there was a directory component, then change thisdir. - if test \"x\$destdir\" != \"x\$file\"; then - case \"\$destdir\" in - [\\\\/]* | [A-Za-z]:[\\\\/]*) thisdir=\"\$destdir\" ;; - *) thisdir=\"\$thisdir/\$destdir\" ;; - esac - fi - - file=\`\$ECHO \"\$file\" | $SED 's%^.*/%%'\` - file=\`ls -ld \"\$thisdir/\$file\" | $SED -n 's/.*-> //p'\` - done - - # Usually 'no', except on cygwin/mingw when embedded into - # the cwrapper. - WRAPPER_SCRIPT_BELONGS_IN_OBJDIR=$func_emit_wrapper_arg1 - if test \"\$WRAPPER_SCRIPT_BELONGS_IN_OBJDIR\" = \"yes\"; then - # special case for '.' - if test \"\$thisdir\" = \".\"; then - thisdir=\`pwd\` - fi - # remove .libs from thisdir - case \"\$thisdir\" in - *[\\\\/]$objdir ) thisdir=\`\$ECHO \"\$thisdir\" | $SED 's%[\\\\/][^\\\\/]*$%%'\` ;; - $objdir ) thisdir=. ;; - esac - fi - - # Try to get the absolute directory name. - absdir=\`cd \"\$thisdir\" && pwd\` - test -n \"\$absdir\" && thisdir=\"\$absdir\" -" - - if test "$fast_install" = yes; then - $ECHO "\ - program=lt-'$outputname'$exeext - progdir=\"\$thisdir/$objdir\" - - if test ! -f \"\$progdir/\$program\" || - { file=\`ls -1dt \"\$progdir/\$program\" \"\$progdir/../\$program\" 2>/dev/null | ${SED} 1q\`; \\ - test \"X\$file\" != \"X\$progdir/\$program\"; }; then - - file=\"\$\$-\$program\" - - if test ! 
-d \"\$progdir\"; then - $MKDIR \"\$progdir\" - else - $RM \"\$progdir/\$file\" - fi" - - $ECHO "\ - - # relink executable if necessary - if test -n \"\$relink_command\"; then - if relink_command_output=\`eval \$relink_command 2>&1\`; then : - else - $ECHO \"\$relink_command_output\" >&2 - $RM \"\$progdir/\$file\" - exit 1 - fi - fi - - $MV \"\$progdir/\$file\" \"\$progdir/\$program\" 2>/dev/null || - { $RM \"\$progdir/\$program\"; - $MV \"\$progdir/\$file\" \"\$progdir/\$program\"; } - $RM \"\$progdir/\$file\" - fi" - else - $ECHO "\ - program='$outputname' - progdir=\"\$thisdir/$objdir\" -" - fi - - $ECHO "\ - - if test -f \"\$progdir/\$program\"; then" - - # fixup the dll searchpath if we need to. - # - # Fix the DLL searchpath if we need to. Do this before prepending - # to shlibpath, because on Windows, both are PATH and uninstalled - # libraries must come first. - if test -n "$dllsearchpath"; then - $ECHO "\ - # Add the dll search path components to the executable PATH - PATH=$dllsearchpath:\$PATH -" - fi - - # Export our shlibpath_var if we have one. - if test "$shlibpath_overrides_runpath" = yes && test -n "$shlibpath_var" && test -n "$temp_rpath"; then - $ECHO "\ - # Add our own library path to $shlibpath_var - $shlibpath_var=\"$temp_rpath\$$shlibpath_var\" - - # Some systems cannot cope with colon-terminated $shlibpath_var - # The second colon is a workaround for a bug in BeOS R4 sed - $shlibpath_var=\`\$ECHO \"\$$shlibpath_var\" | $SED 's/::*\$//'\` - - export $shlibpath_var -" - fi - - $ECHO "\ - if test \"\$libtool_execute_magic\" != \"$magic\"; then - # Run the actual program with our arguments. - func_exec_program \${1+\"\$@\"} - fi - else - # The program doesn't exist. - \$ECHO \"\$0: error: \\\`\$progdir/\$program' does not exist\" 1>&2 - \$ECHO \"This script is just a wrapper for \$program.\" 1>&2 - \$ECHO \"See the $PACKAGE documentation for more information.\" 1>&2 - exit 1 - fi -fi\ -" -} - - -# func_emit_cwrapperexe_src -# emit the source code for a wrapper executable on stdout -# Must ONLY be called from within func_mode_link because -# it depends on a number of variable set therein. -func_emit_cwrapperexe_src () -{ - cat < -#include -#ifdef _MSC_VER -# include -# include -# include -#else -# include -# include -# ifdef __CYGWIN__ -# include -# endif -#endif -#include -#include -#include -#include -#include -#include -#include -#include - -/* declarations of non-ANSI functions */ -#if defined(__MINGW32__) -# ifdef __STRICT_ANSI__ -int _putenv (const char *); -# endif -#elif defined(__CYGWIN__) -# ifdef __STRICT_ANSI__ -char *realpath (const char *, char *); -int putenv (char *); -int setenv (const char *, const char *, int); -# endif -/* #elif defined (other platforms) ... */ -#endif - -/* portability defines, excluding path handling macros */ -#if defined(_MSC_VER) -# define setmode _setmode -# define stat _stat -# define chmod _chmod -# define getcwd _getcwd -# define putenv _putenv -# define S_IXUSR _S_IEXEC -# ifndef _INTPTR_T_DEFINED -# define _INTPTR_T_DEFINED -# define intptr_t int -# endif -#elif defined(__MINGW32__) -# define setmode _setmode -# define stat _stat -# define chmod _chmod -# define getcwd _getcwd -# define putenv _putenv -#elif defined(__CYGWIN__) -# define HAVE_SETENV -# define FOPEN_WB "wb" -/* #elif defined (other platforms) ... 
*/ -#endif - -#if defined(PATH_MAX) -# define LT_PATHMAX PATH_MAX -#elif defined(MAXPATHLEN) -# define LT_PATHMAX MAXPATHLEN -#else -# define LT_PATHMAX 1024 -#endif - -#ifndef S_IXOTH -# define S_IXOTH 0 -#endif -#ifndef S_IXGRP -# define S_IXGRP 0 -#endif - -/* path handling portability macros */ -#ifndef DIR_SEPARATOR -# define DIR_SEPARATOR '/' -# define PATH_SEPARATOR ':' -#endif - -#if defined (_WIN32) || defined (__MSDOS__) || defined (__DJGPP__) || \ - defined (__OS2__) -# define HAVE_DOS_BASED_FILE_SYSTEM -# define FOPEN_WB "wb" -# ifndef DIR_SEPARATOR_2 -# define DIR_SEPARATOR_2 '\\' -# endif -# ifndef PATH_SEPARATOR_2 -# define PATH_SEPARATOR_2 ';' -# endif -#endif - -#ifndef DIR_SEPARATOR_2 -# define IS_DIR_SEPARATOR(ch) ((ch) == DIR_SEPARATOR) -#else /* DIR_SEPARATOR_2 */ -# define IS_DIR_SEPARATOR(ch) \ - (((ch) == DIR_SEPARATOR) || ((ch) == DIR_SEPARATOR_2)) -#endif /* DIR_SEPARATOR_2 */ - -#ifndef PATH_SEPARATOR_2 -# define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR) -#else /* PATH_SEPARATOR_2 */ -# define IS_PATH_SEPARATOR(ch) ((ch) == PATH_SEPARATOR_2) -#endif /* PATH_SEPARATOR_2 */ - -#ifndef FOPEN_WB -# define FOPEN_WB "w" -#endif -#ifndef _O_BINARY -# define _O_BINARY 0 -#endif - -#define XMALLOC(type, num) ((type *) xmalloc ((num) * sizeof(type))) -#define XFREE(stale) do { \ - if (stale) { free ((void *) stale); stale = 0; } \ -} while (0) - -#if defined(LT_DEBUGWRAPPER) -static int lt_debug = 1; -#else -static int lt_debug = 0; -#endif - -const char *program_name = "libtool-wrapper"; /* in case xstrdup fails */ - -void *xmalloc (size_t num); -char *xstrdup (const char *string); -const char *base_name (const char *name); -char *find_executable (const char *wrapper); -char *chase_symlinks (const char *pathspec); -int make_executable (const char *path); -int check_executable (const char *path); -char *strendzap (char *str, const char *pat); -void lt_debugprintf (const char *file, int line, const char *fmt, ...); -void lt_fatal (const char *file, int line, const char *message, ...); -static const char *nonnull (const char *s); -static const char *nonempty (const char *s); -void lt_setenv (const char *name, const char *value); -char *lt_extend_str (const char *orig_value, const char *add, int to_end); -void lt_update_exe_path (const char *name, const char *value); -void lt_update_lib_path (const char *name, const char *value); -char **prepare_spawn (char **argv); -void lt_dump_script (FILE *f); -EOF - - cat <= 0) - && (st.st_mode & (S_IXUSR | S_IXGRP | S_IXOTH))) - return 1; - else - return 0; -} - -int -make_executable (const char *path) -{ - int rval = 0; - struct stat st; - - lt_debugprintf (__FILE__, __LINE__, "(make_executable): %s\n", - nonempty (path)); - if ((!path) || (!*path)) - return 0; - - if (stat (path, &st) >= 0) - { - rval = chmod (path, st.st_mode | S_IXOTH | S_IXGRP | S_IXUSR); - } - return rval; -} - -/* Searches for the full path of the wrapper. Returns - newly allocated full path name if found, NULL otherwise - Does not chase symlinks, even on platforms that support them. -*/ -char * -find_executable (const char *wrapper) -{ - int has_slash = 0; - const char *p; - const char *p_next; - /* static buffer for getcwd */ - char tmp[LT_PATHMAX + 1]; - int tmp_len; - char *concat_name; - - lt_debugprintf (__FILE__, __LINE__, "(find_executable): %s\n", - nonempty (wrapper)); - - if ((wrapper == NULL) || (*wrapper == '\0')) - return NULL; - - /* Absolute path? 
*/ -#if defined (HAVE_DOS_BASED_FILE_SYSTEM) - if (isalpha ((unsigned char) wrapper[0]) && wrapper[1] == ':') - { - concat_name = xstrdup (wrapper); - if (check_executable (concat_name)) - return concat_name; - XFREE (concat_name); - } - else - { -#endif - if (IS_DIR_SEPARATOR (wrapper[0])) - { - concat_name = xstrdup (wrapper); - if (check_executable (concat_name)) - return concat_name; - XFREE (concat_name); - } -#if defined (HAVE_DOS_BASED_FILE_SYSTEM) - } -#endif - - for (p = wrapper; *p; p++) - if (*p == '/') - { - has_slash = 1; - break; - } - if (!has_slash) - { - /* no slashes; search PATH */ - const char *path = getenv ("PATH"); - if (path != NULL) - { - for (p = path; *p; p = p_next) - { - const char *q; - size_t p_len; - for (q = p; *q; q++) - if (IS_PATH_SEPARATOR (*q)) - break; - p_len = q - p; - p_next = (*q == '\0' ? q : q + 1); - if (p_len == 0) - { - /* empty path: current directory */ - if (getcwd (tmp, LT_PATHMAX) == NULL) - lt_fatal (__FILE__, __LINE__, "getcwd failed: %s", - nonnull (strerror (errno))); - tmp_len = strlen (tmp); - concat_name = - XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1); - memcpy (concat_name, tmp, tmp_len); - concat_name[tmp_len] = '/'; - strcpy (concat_name + tmp_len + 1, wrapper); - } - else - { - concat_name = - XMALLOC (char, p_len + 1 + strlen (wrapper) + 1); - memcpy (concat_name, p, p_len); - concat_name[p_len] = '/'; - strcpy (concat_name + p_len + 1, wrapper); - } - if (check_executable (concat_name)) - return concat_name; - XFREE (concat_name); - } - } - /* not found in PATH; assume curdir */ - } - /* Relative path | not found in path: prepend cwd */ - if (getcwd (tmp, LT_PATHMAX) == NULL) - lt_fatal (__FILE__, __LINE__, "getcwd failed: %s", - nonnull (strerror (errno))); - tmp_len = strlen (tmp); - concat_name = XMALLOC (char, tmp_len + 1 + strlen (wrapper) + 1); - memcpy (concat_name, tmp, tmp_len); - concat_name[tmp_len] = '/'; - strcpy (concat_name + tmp_len + 1, wrapper); - - if (check_executable (concat_name)) - return concat_name; - XFREE (concat_name); - return NULL; -} - -char * -chase_symlinks (const char *pathspec) -{ -#ifndef S_ISLNK - return xstrdup (pathspec); -#else - char buf[LT_PATHMAX]; - struct stat s; - char *tmp_pathspec = xstrdup (pathspec); - char *p; - int has_symlinks = 0; - while (strlen (tmp_pathspec) && !has_symlinks) - { - lt_debugprintf (__FILE__, __LINE__, - "checking path component for symlinks: %s\n", - tmp_pathspec); - if (lstat (tmp_pathspec, &s) == 0) - { - if (S_ISLNK (s.st_mode) != 0) - { - has_symlinks = 1; - break; - } - - /* search backwards for last DIR_SEPARATOR */ - p = tmp_pathspec + strlen (tmp_pathspec) - 1; - while ((p > tmp_pathspec) && (!IS_DIR_SEPARATOR (*p))) - p--; - if ((p == tmp_pathspec) && (!IS_DIR_SEPARATOR (*p))) - { - /* no more DIR_SEPARATORS left */ - break; - } - *p = '\0'; - } - else - { - lt_fatal (__FILE__, __LINE__, - "error accessing file \"%s\": %s", - tmp_pathspec, nonnull (strerror (errno))); - } - } - XFREE (tmp_pathspec); - - if (!has_symlinks) - { - return xstrdup (pathspec); - } - - tmp_pathspec = realpath (pathspec, buf); - if (tmp_pathspec == 0) - { - lt_fatal (__FILE__, __LINE__, - "could not follow symlinks for %s", pathspec); - } - return xstrdup (tmp_pathspec); -#endif -} - -char * -strendzap (char *str, const char *pat) -{ - size_t len, patlen; - - assert (str != NULL); - assert (pat != NULL); - - len = strlen (str); - patlen = strlen (pat); - - if (patlen <= len) - { - str += len - patlen; - if (strcmp (str, pat) == 0) - *str = '\0'; - } - return 
str; -} - -void -lt_debugprintf (const char *file, int line, const char *fmt, ...) -{ - va_list args; - if (lt_debug) - { - (void) fprintf (stderr, "%s:%s:%d: ", program_name, file, line); - va_start (args, fmt); - (void) vfprintf (stderr, fmt, args); - va_end (args); - } -} - -static void -lt_error_core (int exit_status, const char *file, - int line, const char *mode, - const char *message, va_list ap) -{ - fprintf (stderr, "%s:%s:%d: %s: ", program_name, file, line, mode); - vfprintf (stderr, message, ap); - fprintf (stderr, ".\n"); - - if (exit_status >= 0) - exit (exit_status); -} - -void -lt_fatal (const char *file, int line, const char *message, ...) -{ - va_list ap; - va_start (ap, message); - lt_error_core (EXIT_FAILURE, file, line, "FATAL", message, ap); - va_end (ap); -} - -static const char * -nonnull (const char *s) -{ - return s ? s : "(null)"; -} - -static const char * -nonempty (const char *s) -{ - return (s && !*s) ? "(empty)" : nonnull (s); -} - -void -lt_setenv (const char *name, const char *value) -{ - lt_debugprintf (__FILE__, __LINE__, - "(lt_setenv) setting '%s' to '%s'\n", - nonnull (name), nonnull (value)); - { -#ifdef HAVE_SETENV - /* always make a copy, for consistency with !HAVE_SETENV */ - char *str = xstrdup (value); - setenv (name, str, 1); -#else - int len = strlen (name) + 1 + strlen (value) + 1; - char *str = XMALLOC (char, len); - sprintf (str, "%s=%s", name, value); - if (putenv (str) != EXIT_SUCCESS) - { - XFREE (str); - } -#endif - } -} - -char * -lt_extend_str (const char *orig_value, const char *add, int to_end) -{ - char *new_value; - if (orig_value && *orig_value) - { - int orig_value_len = strlen (orig_value); - int add_len = strlen (add); - new_value = XMALLOC (char, add_len + orig_value_len + 1); - if (to_end) - { - strcpy (new_value, orig_value); - strcpy (new_value + orig_value_len, add); - } - else - { - strcpy (new_value, add); - strcpy (new_value + add_len, orig_value); - } - } - else - { - new_value = xstrdup (add); - } - return new_value; -} - -void -lt_update_exe_path (const char *name, const char *value) -{ - lt_debugprintf (__FILE__, __LINE__, - "(lt_update_exe_path) modifying '%s' by prepending '%s'\n", - nonnull (name), nonnull (value)); - - if (name && *name && value && *value) - { - char *new_value = lt_extend_str (getenv (name), value, 0); - /* some systems can't cope with a ':'-terminated path #' */ - int len = strlen (new_value); - while (((len = strlen (new_value)) > 0) && IS_PATH_SEPARATOR (new_value[len-1])) - { - new_value[len-1] = '\0'; - } - lt_setenv (name, new_value); - XFREE (new_value); - } -} - -void -lt_update_lib_path (const char *name, const char *value) -{ - lt_debugprintf (__FILE__, __LINE__, - "(lt_update_lib_path) modifying '%s' by prepending '%s'\n", - nonnull (name), nonnull (value)); - - if (name && *name && value && *value) - { - char *new_value = lt_extend_str (getenv (name), value, 0); - lt_setenv (name, new_value); - XFREE (new_value); - } -} - -EOF - case $host_os in - mingw*) - cat <<"EOF" - -/* Prepares an argument vector before calling spawn(). - Note that spawn() does not by itself call the command interpreter - (getenv ("COMSPEC") != NULL ? getenv ("COMSPEC") : - ({ OSVERSIONINFO v; v.dwOSVersionInfoSize = sizeof(OSVERSIONINFO); - GetVersionEx(&v); - v.dwPlatformId == VER_PLATFORM_WIN32_NT; - }) ? "cmd.exe" : "command.com"). - Instead it simply concatenates the arguments, separated by ' ', and calls - CreateProcess(). 
We must quote the arguments since Win32 CreateProcess() - interprets characters like ' ', '\t', '\\', '"' (but not '<' and '>') in a - special way: - - Space and tab are interpreted as delimiters. They are not treated as - delimiters if they are surrounded by double quotes: "...". - - Unescaped double quotes are removed from the input. Their only effect is - that within double quotes, space and tab are treated like normal - characters. - - Backslashes not followed by double quotes are not special. - - But 2*n+1 backslashes followed by a double quote become - n backslashes followed by a double quote (n >= 0): - \" -> " - \\\" -> \" - \\\\\" -> \\" - */ -#define SHELL_SPECIAL_CHARS "\"\\ \001\002\003\004\005\006\007\010\011\012\013\014\015\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037" -#define SHELL_SPACE_CHARS " \001\002\003\004\005\006\007\010\011\012\013\014\015\016\017\020\021\022\023\024\025\026\027\030\031\032\033\034\035\036\037" -char ** -prepare_spawn (char **argv) -{ - size_t argc; - char **new_argv; - size_t i; - - /* Count number of arguments. */ - for (argc = 0; argv[argc] != NULL; argc++) - ; - - /* Allocate new argument vector. */ - new_argv = XMALLOC (char *, argc + 1); - - /* Put quoted arguments into the new argument vector. */ - for (i = 0; i < argc; i++) - { - const char *string = argv[i]; - - if (string[0] == '\0') - new_argv[i] = xstrdup ("\"\""); - else if (strpbrk (string, SHELL_SPECIAL_CHARS) != NULL) - { - int quote_around = (strpbrk (string, SHELL_SPACE_CHARS) != NULL); - size_t length; - unsigned int backslashes; - const char *s; - char *quoted_string; - char *p; - - length = 0; - backslashes = 0; - if (quote_around) - length++; - for (s = string; *s != '\0'; s++) - { - char c = *s; - if (c == '"') - length += backslashes + 1; - length++; - if (c == '\\') - backslashes++; - else - backslashes = 0; - } - if (quote_around) - length += backslashes + 1; - - quoted_string = XMALLOC (char, length + 1); - - p = quoted_string; - backslashes = 0; - if (quote_around) - *p++ = '"'; - for (s = string; *s != '\0'; s++) - { - char c = *s; - if (c == '"') - { - unsigned int j; - for (j = backslashes + 1; j > 0; j--) - *p++ = '\\'; - } - *p++ = c; - if (c == '\\') - backslashes++; - else - backslashes = 0; - } - if (quote_around) - { - unsigned int j; - for (j = backslashes; j > 0; j--) - *p++ = '\\'; - *p++ = '"'; - } - *p = '\0'; - - new_argv[i] = quoted_string; - } - else - new_argv[i] = (char *) string; - } - new_argv[argc] = NULL; - - return new_argv; -} -EOF - ;; - esac - - cat <<"EOF" -void lt_dump_script (FILE* f) -{ -EOF - func_emit_wrapper yes | - $SED -n -e ' -s/^\(.\{79\}\)\(..*\)/\1\ -\2/ -h -s/\([\\"]\)/\\\1/g -s/$/\\n/ -s/\([^\n]*\).*/ fputs ("\1", f);/p -g -D' - cat <<"EOF" -} -EOF -} -# end: func_emit_cwrapperexe_src - -# func_win32_import_lib_p ARG -# True if ARG is an import lib, as indicated by $file_magic_cmd -func_win32_import_lib_p () -{ - $opt_debug - case `eval $file_magic_cmd \"\$1\" 2>/dev/null | $SED -e 10q` in - *import*) : ;; - *) false ;; - esac -} - -# func_mode_link arg... -func_mode_link () -{ - $opt_debug - case $host in - *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) - # It is impossible to link a dll without this setting, and - # we shouldn't force the makefile maintainer to figure out - # which system we are compiling for in order to pass an extra - # flag for every libtool invocation. 
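A few worked examples of the CreateProcess quoting rules documented above; the argument strings are hypothetical, and printf only prints the before/after pairs so the transformation can be seen literally:

printf '%s\n' 'a b          ->  "a b"'          # whitespace forces surrounding quotes
printf '%s\n' 'say "hi"     ->  "say \"hi\""'   # each " gains one preceding backslash
printf '%s\n' 'c:\my dir\   ->  "c:\my dir\\"'  # trailing backslashes double before the closing "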
- # allow_undefined=no - - # FIXME: Unfortunately, there are problems with the above when trying - # to make a dll which has undefined symbols, in which case not - # even a static library is built. For now, we need to specify - # -no-undefined on the libtool link line when we can be certain - # that all symbols are satisfied, otherwise we get a static library. - allow_undefined=yes - ;; - *) - allow_undefined=yes - ;; - esac - libtool_args=$nonopt - base_compile="$nonopt $@" - compile_command=$nonopt - finalize_command=$nonopt - - compile_rpath= - finalize_rpath= - compile_shlibpath= - finalize_shlibpath= - convenience= - old_convenience= - deplibs= - old_deplibs= - compiler_flags= - linker_flags= - dllsearchpath= - lib_search_path=`pwd` - inst_prefix_dir= - new_inherited_linker_flags= - - avoid_version=no - bindir= - dlfiles= - dlprefiles= - dlself=no - export_dynamic=no - export_symbols= - export_symbols_regex= - generated= - libobjs= - ltlibs= - module=no - no_install=no - objs= - non_pic_objects= - precious_files_regex= - prefer_static_libs=no - preload=no - prev= - prevarg= - release= - rpath= - xrpath= - perm_rpath= - temp_rpath= - thread_safe=no - vinfo= - vinfo_number=no - weak_libs= - single_module="${wl}-single_module" - func_infer_tag $base_compile - - # We need to know -static, to get the right output filenames. - for arg - do - case $arg in - -shared) - test "$build_libtool_libs" != yes && \ - func_fatal_configuration "can not build a shared library" - build_old_libs=no - break - ;; - -all-static | -static | -static-libtool-libs) - case $arg in - -all-static) - if test "$build_libtool_libs" = yes && test -z "$link_static_flag"; then - func_warning "complete static linking is impossible in this configuration" - fi - if test -n "$link_static_flag"; then - dlopen_self=$dlopen_self_static - fi - prefer_static_libs=yes - ;; - -static) - if test -z "$pic_flag" && test -n "$link_static_flag"; then - dlopen_self=$dlopen_self_static - fi - prefer_static_libs=built - ;; - -static-libtool-libs) - if test -z "$pic_flag" && test -n "$link_static_flag"; then - dlopen_self=$dlopen_self_static - fi - prefer_static_libs=yes - ;; - esac - build_libtool_libs=no - build_old_libs=yes - break - ;; - esac - done - - # See if our shared archives depend on static archives. - test -n "$old_archive_from_new_cmds" && build_old_libs=yes - - # Go through the arguments, transforming them on the way. - while test "$#" -gt 0; do - arg="$1" - shift - func_quote_for_eval "$arg" - qarg=$func_quote_for_eval_unquoted_result - func_append libtool_args " $func_quote_for_eval_result" - - # If the previous option needs an argument, assign it. - if test -n "$prev"; then - case $prev in - output) - func_append compile_command " @OUTPUT@" - func_append finalize_command " @OUTPUT@" - ;; - esac - - case $prev in - bindir) - bindir="$arg" - prev= - continue - ;; - dlfiles|dlprefiles) - if test "$preload" = no; then - # Add the symbol object into the linking commands. - func_append compile_command " @SYMFILE@" - func_append finalize_command " @SYMFILE@" - preload=yes - fi - case $arg in - *.la | *.lo) ;; # We handle these cases below. 
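Hedged examples of the three static-linking flags distinguished in the first pass above (file names hypothetical); -static affects only uninstalled libtool libraries, -static-libtool-libs affects all libtool libraries, and -all-static disables dynamic linking entirely:

libtool --mode=link gcc -o prog main.o libfoo.la                       # default: shared where possible
libtool --mode=link gcc -static -o prog main.o libfoo.la               # static copies of uninstalled libtool libs
libtool --mode=link gcc -static-libtool-libs -o prog main.o libfoo.la  # static copies of all libtool libs
libtool --mode=link gcc -all-static -o prog main.o libfoo.la           # fully static, if the toolchain allows it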
- force) - if test "$dlself" = no; then - dlself=needless - export_dynamic=yes - fi - prev= - continue - ;; - self) - if test "$prev" = dlprefiles; then - dlself=yes - elif test "$prev" = dlfiles && test "$dlopen_self" != yes; then - dlself=yes - else - dlself=needless - export_dynamic=yes - fi - prev= - continue - ;; - *) - if test "$prev" = dlfiles; then - func_append dlfiles " $arg" - else - func_append dlprefiles " $arg" - fi - prev= - continue - ;; - esac - ;; - expsyms) - export_symbols="$arg" - test -f "$arg" \ - || func_fatal_error "symbol file \`$arg' does not exist" - prev= - continue - ;; - expsyms_regex) - export_symbols_regex="$arg" - prev= - continue - ;; - framework) - case $host in - *-*-darwin*) - case "$deplibs " in - *" $qarg.ltframework "*) ;; - *) func_append deplibs " $qarg.ltframework" # this is fixed later - ;; - esac - ;; - esac - prev= - continue - ;; - inst_prefix) - inst_prefix_dir="$arg" - prev= - continue - ;; - objectlist) - if test -f "$arg"; then - save_arg=$arg - moreargs= - for fil in `cat "$save_arg"` - do -# func_append moreargs " $fil" - arg=$fil - # A libtool-controlled object. - - # Check to see that this really is a libtool object. - if func_lalib_unsafe_p "$arg"; then - pic_object= - non_pic_object= - - # Read the .lo file - func_source "$arg" - - if test -z "$pic_object" || - test -z "$non_pic_object" || - test "$pic_object" = none && - test "$non_pic_object" = none; then - func_fatal_error "cannot find name of object for \`$arg'" - fi - - # Extract subdirectory from the argument. - func_dirname "$arg" "/" "" - xdir="$func_dirname_result" - - if test "$pic_object" != none; then - # Prepend the subdirectory the object is found in. - pic_object="$xdir$pic_object" - - if test "$prev" = dlfiles; then - if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then - func_append dlfiles " $pic_object" - prev= - continue - else - # If libtool objects are unsupported, then we need to preload. - prev=dlprefiles - fi - fi - - # CHECK ME: I think I busted this. -Ossama - if test "$prev" = dlprefiles; then - # Preload the old-style object. - func_append dlprefiles " $pic_object" - prev= - fi - - # A PIC object. - func_append libobjs " $pic_object" - arg="$pic_object" - fi - - # Non-PIC object. - if test "$non_pic_object" != none; then - # Prepend the subdirectory the object is found in. - non_pic_object="$xdir$non_pic_object" - - # A standard non-PIC object - func_append non_pic_objects " $non_pic_object" - if test -z "$pic_object" || test "$pic_object" = none ; then - arg="$non_pic_object" - fi - else - # If the PIC object exists, use it instead. - # $xdir was prepended to $pic_object above. - non_pic_object="$pic_object" - func_append non_pic_objects " $non_pic_object" - fi - else - # Only an error if not doing a dry-run. - if $opt_dry_run; then - # Extract subdirectory from the argument. - func_dirname "$arg" "/" "" - xdir="$func_dirname_result" - - func_lo2o "$arg" - pic_object=$xdir$objdir/$func_lo2o_result - non_pic_object=$xdir$func_lo2o_result - func_append libobjs " $pic_object" - func_append non_pic_objects " $non_pic_object" - else - func_fatal_error "\`$arg' is not a valid libtool object" - fi - fi - done - else - func_fatal_error "link input file \`$arg' does not exist" - fi - arg=$save_arg - prev= - continue - ;; - precious_regex) - precious_files_regex="$arg" - prev= - continue - ;; - release) - release="-$arg" - prev= - continue - ;; - rpath | xrpath) - # We need an absolute path. 
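A small, hypothetical example of the -objectlist handling above: the named file simply lists libtool objects, one whitespace-separated entry at a time, and each entry must be a real .lo file or the link aborts:

cat > objects.txt <<'EOF'
src/a.lo
src/b.lo
EOF
libtool --mode=link gcc -o libdemo.la -objectlist objects.txt -rpath /usr/local/lib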
- case $arg in - [\\/]* | [A-Za-z]:[\\/]*) ;; - *) - func_fatal_error "only absolute run-paths are allowed" - ;; - esac - if test "$prev" = rpath; then - case "$rpath " in - *" $arg "*) ;; - *) func_append rpath " $arg" ;; - esac - else - case "$xrpath " in - *" $arg "*) ;; - *) func_append xrpath " $arg" ;; - esac - fi - prev= - continue - ;; - shrext) - shrext_cmds="$arg" - prev= - continue - ;; - weak) - func_append weak_libs " $arg" - prev= - continue - ;; - xcclinker) - func_append linker_flags " $qarg" - func_append compiler_flags " $qarg" - prev= - func_append compile_command " $qarg" - func_append finalize_command " $qarg" - continue - ;; - xcompiler) - func_append compiler_flags " $qarg" - prev= - func_append compile_command " $qarg" - func_append finalize_command " $qarg" - continue - ;; - xlinker) - func_append linker_flags " $qarg" - func_append compiler_flags " $wl$qarg" - prev= - func_append compile_command " $wl$qarg" - func_append finalize_command " $wl$qarg" - continue - ;; - *) - eval "$prev=\"\$arg\"" - prev= - continue - ;; - esac - fi # test -n "$prev" - - prevarg="$arg" - - case $arg in - -all-static) - if test -n "$link_static_flag"; then - # See comment for -static flag below, for more details. - func_append compile_command " $link_static_flag" - func_append finalize_command " $link_static_flag" - fi - continue - ;; - - -allow-undefined) - # FIXME: remove this flag sometime in the future. - func_fatal_error "\`-allow-undefined' must not be used because it is the default" - ;; - - -avoid-version) - avoid_version=yes - continue - ;; - - -bindir) - prev=bindir - continue - ;; - - -dlopen) - prev=dlfiles - continue - ;; - - -dlpreopen) - prev=dlprefiles - continue - ;; - - -export-dynamic) - export_dynamic=yes - continue - ;; - - -export-symbols | -export-symbols-regex) - if test -n "$export_symbols" || test -n "$export_symbols_regex"; then - func_fatal_error "more than one -exported-symbols argument is not allowed" - fi - if test "X$arg" = "X-export-symbols"; then - prev=expsyms - else - prev=expsyms_regex - fi - continue - ;; - - -framework) - prev=framework - continue - ;; - - -inst-prefix-dir) - prev=inst_prefix - continue - ;; - - # The native IRIX linker understands -LANG:*, -LIST:* and -LNO:* - # so, if we see these flags be careful not to treat them like -L - -L[A-Z][A-Z]*:*) - case $with_gcc/$host in - no/*-*-irix* | /*-*-irix*) - func_append compile_command " $arg" - func_append finalize_command " $arg" - ;; - esac - continue - ;; - - -L*) - func_stripname "-L" '' "$arg" - if test -z "$func_stripname_result"; then - if test "$#" -gt 0; then - func_fatal_error "require no space between \`-L' and \`$1'" - else - func_fatal_error "need path for \`-L' option" - fi - fi - func_resolve_sysroot "$func_stripname_result" - dir=$func_resolve_sysroot_result - # We need an absolute path. 
- case $dir in - [\\/]* | [A-Za-z]:[\\/]*) ;; - *) - absdir=`cd "$dir" && pwd` - test -z "$absdir" && \ - func_fatal_error "cannot determine absolute directory name of \`$dir'" - dir="$absdir" - ;; - esac - case "$deplibs " in - *" -L$dir "* | *" $arg "*) - # Will only happen for absolute or sysroot arguments - ;; - *) - # Preserve sysroot, but never include relative directories - case $dir in - [\\/]* | [A-Za-z]:[\\/]* | =*) func_append deplibs " $arg" ;; - *) func_append deplibs " -L$dir" ;; - esac - func_append lib_search_path " $dir" - ;; - esac - case $host in - *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) - testbindir=`$ECHO "$dir" | $SED 's*/lib$*/bin*'` - case :$dllsearchpath: in - *":$dir:"*) ;; - ::) dllsearchpath=$dir;; - *) func_append dllsearchpath ":$dir";; - esac - case :$dllsearchpath: in - *":$testbindir:"*) ;; - ::) dllsearchpath=$testbindir;; - *) func_append dllsearchpath ":$testbindir";; - esac - ;; - esac - continue - ;; - - -l*) - if test "X$arg" = "X-lc" || test "X$arg" = "X-lm"; then - case $host in - *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-beos* | *-cegcc* | *-*-haiku*) - # These systems don't actually have a C or math library (as such) - continue - ;; - *-*-os2*) - # These systems don't actually have a C library (as such) - test "X$arg" = "X-lc" && continue - ;; - *-*-openbsd* | *-*-freebsd* | *-*-dragonfly*) - # Do not include libc due to us having libc/libc_r. - test "X$arg" = "X-lc" && continue - ;; - *-*-rhapsody* | *-*-darwin1.[012]) - # Rhapsody C and math libraries are in the System framework - func_append deplibs " System.ltframework" - continue - ;; - *-*-sco3.2v5* | *-*-sco5v6*) - # Causes problems with __ctype - test "X$arg" = "X-lc" && continue - ;; - *-*-sysv4.2uw2* | *-*-sysv5* | *-*-unixware* | *-*-OpenUNIX*) - # Compiler inserts libc in the correct place for threads to work - test "X$arg" = "X-lc" && continue - ;; - esac - elif test "X$arg" = "X-lc_r"; then - case $host in - *-*-openbsd* | *-*-freebsd* | *-*-dragonfly*) - # Do not include libc_r directly, use -pthread flag. - continue - ;; - esac - fi - func_append deplibs " $arg" - continue - ;; - - -module) - module=yes - continue - ;; - - # Tru64 UNIX uses -model [arg] to determine the layout of C++ - # classes, name mangling, and exception handling. - # Darwin uses the -arch flag to determine output architecture. - -model|-arch|-isysroot|--sysroot) - func_append compiler_flags " $arg" - func_append compile_command " $arg" - func_append finalize_command " $arg" - prev=xcompiler - continue - ;; - - -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe \ - |-threads|-fopenmp|-openmp|-mp|-xopenmp|-omp|-qsmp=*) - func_append compiler_flags " $arg" - func_append compile_command " $arg" - func_append finalize_command " $arg" - case "$new_inherited_linker_flags " in - *" $arg "*) ;; - * ) func_append new_inherited_linker_flags " $arg" ;; - esac - continue - ;; - - -multi_module) - single_module="${wl}-multi_module" - continue - ;; - - -no-fast-install) - fast_install=no - continue - ;; - - -no-install) - case $host in - *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-*-darwin* | *-cegcc*) - # The PATH hackery in wrapper scripts is required on Windows - # and Darwin in order for the loader to find any dlls it needs. 
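A minimal sketch of the Windows DLL-path derivation above: every -L directory, plus its sibling bin directory, is appended (deduplicated) to the wrapper's search path; the directories here are hypothetical:

dllsearchpath=
for dir in /opt/foo/lib /opt/bar/lib; do
  testbindir=`echo "$dir" | sed 's*/lib$*/bin*'`   # /x/lib -> /x/bin
  for d in "$dir" "$testbindir"; do
    case :$dllsearchpath: in
      *":$d:"*) ;;                                 # already present
      ::) dllsearchpath=$d ;;                      # first entry
      *)  dllsearchpath="$dllsearchpath:$d" ;;
    esac
  done
done
echo "$dllsearchpath"   # /opt/foo/lib:/opt/foo/bin:/opt/bar/lib:/opt/bar/bin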
- func_warning "\`-no-install' is ignored for $host" - func_warning "assuming \`-no-fast-install' instead" - fast_install=no - ;; - *) no_install=yes ;; - esac - continue - ;; - - -no-undefined) - allow_undefined=no - continue - ;; - - -objectlist) - prev=objectlist - continue - ;; - - -o) prev=output ;; - - -precious-files-regex) - prev=precious_regex - continue - ;; - - -release) - prev=release - continue - ;; - - -rpath) - prev=rpath - continue - ;; - - -R) - prev=xrpath - continue - ;; - - -R*) - func_stripname '-R' '' "$arg" - dir=$func_stripname_result - # We need an absolute path. - case $dir in - [\\/]* | [A-Za-z]:[\\/]*) ;; - =*) - func_stripname '=' '' "$dir" - dir=$lt_sysroot$func_stripname_result - ;; - *) - func_fatal_error "only absolute run-paths are allowed" - ;; - esac - case "$xrpath " in - *" $dir "*) ;; - *) func_append xrpath " $dir" ;; - esac - continue - ;; - - -shared) - # The effects of -shared are defined in a previous loop. - continue - ;; - - -shrext) - prev=shrext - continue - ;; - - -static | -static-libtool-libs) - # The effects of -static are defined in a previous loop. - # We used to do the same as -all-static on platforms that - # didn't have a PIC flag, but the assumption that the effects - # would be equivalent was wrong. It would break on at least - # Digital Unix and AIX. - continue - ;; - - -thread-safe) - thread_safe=yes - continue - ;; - - -version-info) - prev=vinfo - continue - ;; - - -version-number) - prev=vinfo - vinfo_number=yes - continue - ;; - - -weak) - prev=weak - continue - ;; - - -Wc,*) - func_stripname '-Wc,' '' "$arg" - args=$func_stripname_result - arg= - save_ifs="$IFS"; IFS=',' - for flag in $args; do - IFS="$save_ifs" - func_quote_for_eval "$flag" - func_append arg " $func_quote_for_eval_result" - func_append compiler_flags " $func_quote_for_eval_result" - done - IFS="$save_ifs" - func_stripname ' ' '' "$arg" - arg=$func_stripname_result - ;; - - -Wl,*) - func_stripname '-Wl,' '' "$arg" - args=$func_stripname_result - arg= - save_ifs="$IFS"; IFS=',' - for flag in $args; do - IFS="$save_ifs" - func_quote_for_eval "$flag" - func_append arg " $wl$func_quote_for_eval_result" - func_append compiler_flags " $wl$func_quote_for_eval_result" - func_append linker_flags " $func_quote_for_eval_result" - done - IFS="$save_ifs" - func_stripname ' ' '' "$arg" - arg=$func_stripname_result - ;; - - -Xcompiler) - prev=xcompiler - continue - ;; - - -Xlinker) - prev=xlinker - continue - ;; - - -XCClinker) - prev=xcclinker - continue - ;; - - # -msg_* for osf cc - -msg_*) - func_quote_for_eval "$arg" - arg="$func_quote_for_eval_result" - ;; - - # Flags to be passed through unchanged, with rationale: - # -64, -mips[0-9] enable 64-bit mode for the SGI compiler - # -r[0-9][0-9]* specify processor for the SGI compiler - # -xarch=*, -xtarget=* enable 64-bit mode for the Sun compiler - # +DA*, +DD* enable 64-bit mode for the HP compiler - # -q* compiler args for the IBM compiler - # -m*, -t[45]*, -txscale* architecture-specific flags for GCC - # -F/path path to uninstalled frameworks, gcc on darwin - # -p, -pg, --coverage, -fprofile-* profiling flags for GCC - # @file GCC response files - # -tp=* Portland pgcc target processor selection - # --sysroot=* for sysroot support - # -O*, -flto*, -fwhopr*, -fuse-linker-plugin GCC link-time optimization - -64|-mips[0-9]|-r[0-9][0-9]*|-xarch=*|-xtarget=*|+DA*|+DD*|-q*|-m*| \ - -t[45]*|-txscale*|-p|-pg|--coverage|-fprofile-*|-F*|@*|-tp=*|--sysroot=*| \ - -O*|-flto*|-fwhopr*|-fuse-linker-plugin) - func_quote_for_eval 
"$arg" - arg="$func_quote_for_eval_result" - func_append compile_command " $arg" - func_append finalize_command " $arg" - func_append compiler_flags " $arg" - continue - ;; - - # Some other compiler flag. - -* | +*) - func_quote_for_eval "$arg" - arg="$func_quote_for_eval_result" - ;; - - *.$objext) - # A standard object. - func_append objs " $arg" - ;; - - *.lo) - # A libtool-controlled object. - - # Check to see that this really is a libtool object. - if func_lalib_unsafe_p "$arg"; then - pic_object= - non_pic_object= - - # Read the .lo file - func_source "$arg" - - if test -z "$pic_object" || - test -z "$non_pic_object" || - test "$pic_object" = none && - test "$non_pic_object" = none; then - func_fatal_error "cannot find name of object for \`$arg'" - fi - - # Extract subdirectory from the argument. - func_dirname "$arg" "/" "" - xdir="$func_dirname_result" - - if test "$pic_object" != none; then - # Prepend the subdirectory the object is found in. - pic_object="$xdir$pic_object" - - if test "$prev" = dlfiles; then - if test "$build_libtool_libs" = yes && test "$dlopen_support" = yes; then - func_append dlfiles " $pic_object" - prev= - continue - else - # If libtool objects are unsupported, then we need to preload. - prev=dlprefiles - fi - fi - - # CHECK ME: I think I busted this. -Ossama - if test "$prev" = dlprefiles; then - # Preload the old-style object. - func_append dlprefiles " $pic_object" - prev= - fi - - # A PIC object. - func_append libobjs " $pic_object" - arg="$pic_object" - fi - - # Non-PIC object. - if test "$non_pic_object" != none; then - # Prepend the subdirectory the object is found in. - non_pic_object="$xdir$non_pic_object" - - # A standard non-PIC object - func_append non_pic_objects " $non_pic_object" - if test -z "$pic_object" || test "$pic_object" = none ; then - arg="$non_pic_object" - fi - else - # If the PIC object exists, use it instead. - # $xdir was prepended to $pic_object above. - non_pic_object="$pic_object" - func_append non_pic_objects " $non_pic_object" - fi - else - # Only an error if not doing a dry-run. - if $opt_dry_run; then - # Extract subdirectory from the argument. - func_dirname "$arg" "/" "" - xdir="$func_dirname_result" - - func_lo2o "$arg" - pic_object=$xdir$objdir/$func_lo2o_result - non_pic_object=$xdir$func_lo2o_result - func_append libobjs " $pic_object" - func_append non_pic_objects " $non_pic_object" - else - func_fatal_error "\`$arg' is not a valid libtool object" - fi - fi - ;; - - *.$libext) - # An archive. - func_append deplibs " $arg" - func_append old_deplibs " $arg" - continue - ;; - - *.la) - # A libtool-controlled library. - - func_resolve_sysroot "$arg" - if test "$prev" = dlfiles; then - # This library was specified with -dlopen. - func_append dlfiles " $func_resolve_sysroot_result" - prev= - elif test "$prev" = dlprefiles; then - # The library was specified with -dlpreopen. - func_append dlprefiles " $func_resolve_sysroot_result" - prev= - else - func_append deplibs " $func_resolve_sysroot_result" - fi - continue - ;; - - # Some other compiler argument. - *) - # Unknown arguments in both finalize_command and compile_command need - # to be aesthetically quoted because they are evaled later. - func_quote_for_eval "$arg" - arg="$func_quote_for_eval_result" - ;; - esac # arg - - # Now actually substitute the argument into the commands. 
- if test -n "$arg"; then - func_append compile_command " $arg" - func_append finalize_command " $arg" - fi - done # argument parsing loop - - test -n "$prev" && \ - func_fatal_help "the \`$prevarg' option requires an argument" - - if test "$export_dynamic" = yes && test -n "$export_dynamic_flag_spec"; then - eval arg=\"$export_dynamic_flag_spec\" - func_append compile_command " $arg" - func_append finalize_command " $arg" - fi - - oldlibs= - # calculate the name of the file, without its directory - func_basename "$output" - outputname="$func_basename_result" - libobjs_save="$libobjs" - - if test -n "$shlibpath_var"; then - # get the directories listed in $shlibpath_var - eval shlib_search_path=\`\$ECHO \"\${$shlibpath_var}\" \| \$SED \'s/:/ /g\'\` - else - shlib_search_path= - fi - eval sys_lib_search_path=\"$sys_lib_search_path_spec\" - eval sys_lib_dlsearch_path=\"$sys_lib_dlsearch_path_spec\" - - func_dirname "$output" "/" "" - output_objdir="$func_dirname_result$objdir" - func_to_tool_file "$output_objdir/" - tool_output_objdir=$func_to_tool_file_result - # Create the object directory. - func_mkdir_p "$output_objdir" - - # Determine the type of output - case $output in - "") - func_fatal_help "you must specify an output file" - ;; - *.$libext) linkmode=oldlib ;; - *.lo | *.$objext) linkmode=obj ;; - *.la) linkmode=lib ;; - *) linkmode=prog ;; # Anything else should be a program. - esac - - specialdeplibs= - - libs= - # Find all interdependent deplibs by searching for libraries - # that are linked more than once (e.g. -la -lb -la) - for deplib in $deplibs; do - if $opt_preserve_dup_deps ; then - case "$libs " in - *" $deplib "*) func_append specialdeplibs " $deplib" ;; - esac - fi - func_append libs " $deplib" - done - - if test "$linkmode" = lib; then - libs="$predeps $libs $compiler_lib_search_path $postdeps" - - # Compute libraries that are listed more than once in $predeps - # $postdeps and mark them as special (i.e., whose duplicates are - # not to be eliminated). - pre_post_deps= - if $opt_duplicate_compiler_generated_deps; then - for pre_post_dep in $predeps $postdeps; do - case "$pre_post_deps " in - *" $pre_post_dep "*) func_append specialdeplibs " $pre_post_deps" ;; - esac - func_append pre_post_deps " $pre_post_dep" - done - fi - pre_post_deps= - fi - - deplibs= - newdependency_libs= - newlib_search_path= - need_relink=no # whether we're linking any uninstalled libtool libraries - notinst_deplibs= # not-installed libtool libraries - notinst_path= # paths that contain not-installed libtool libraries - - case $linkmode in - lib) - passes="conv dlpreopen link" - for file in $dlfiles $dlprefiles; do - case $file in - *.la) ;; - *) - func_fatal_help "libraries can \`-dlopen' only libtool libraries: $file" - ;; - esac - done - ;; - prog) - compile_deplibs= - finalize_deplibs= - alldeplibs=no - newdlfiles= - newdlprefiles= - passes="conv scan dlopen dlpreopen link" - ;; - *) passes="conv" - ;; - esac - - for pass in $passes; do - # The preopen pass in lib mode reverses $deplibs; put it back here - # so that -L comes before libs that need it for instance... 
- if test "$linkmode,$pass" = "lib,link"; then - ## FIXME: Find the place where the list is rebuilt in the wrong - ## order, and fix it there properly - tmp_deplibs= - for deplib in $deplibs; do - tmp_deplibs="$deplib $tmp_deplibs" - done - deplibs="$tmp_deplibs" - fi - - if test "$linkmode,$pass" = "lib,link" || - test "$linkmode,$pass" = "prog,scan"; then - libs="$deplibs" - deplibs= - fi - if test "$linkmode" = prog; then - case $pass in - dlopen) libs="$dlfiles" ;; - dlpreopen) libs="$dlprefiles" ;; - link) - libs="$deplibs %DEPLIBS%" - test "X$link_all_deplibs" != Xno && libs="$libs $dependency_libs" - ;; - esac - fi - if test "$linkmode,$pass" = "lib,dlpreopen"; then - # Collect and forward deplibs of preopened libtool libs - for lib in $dlprefiles; do - # Ignore non-libtool-libs - dependency_libs= - func_resolve_sysroot "$lib" - case $lib in - *.la) func_source "$func_resolve_sysroot_result" ;; - esac - - # Collect preopened libtool deplibs, except any this library - # has declared as weak libs - for deplib in $dependency_libs; do - func_basename "$deplib" - deplib_base=$func_basename_result - case " $weak_libs " in - *" $deplib_base "*) ;; - *) func_append deplibs " $deplib" ;; - esac - done - done - libs="$dlprefiles" - fi - if test "$pass" = dlopen; then - # Collect dlpreopened libraries - save_deplibs="$deplibs" - deplibs= - fi - - for deplib in $libs; do - lib= - found=no - case $deplib in - -mt|-mthreads|-kthread|-Kthread|-pthread|-pthreads|--thread-safe \ - |-threads|-fopenmp|-openmp|-mp|-xopenmp|-omp|-qsmp=*) - if test "$linkmode,$pass" = "prog,link"; then - compile_deplibs="$deplib $compile_deplibs" - finalize_deplibs="$deplib $finalize_deplibs" - else - func_append compiler_flags " $deplib" - if test "$linkmode" = lib ; then - case "$new_inherited_linker_flags " in - *" $deplib "*) ;; - * ) func_append new_inherited_linker_flags " $deplib" ;; - esac - fi - fi - continue - ;; - -l*) - if test "$linkmode" != lib && test "$linkmode" != prog; then - func_warning "\`-l' is ignored for archives/objects" - continue - fi - func_stripname '-l' '' "$deplib" - name=$func_stripname_result - if test "$linkmode" = lib; then - searchdirs="$newlib_search_path $lib_search_path $compiler_lib_search_dirs $sys_lib_search_path $shlib_search_path" - else - searchdirs="$newlib_search_path $lib_search_path $sys_lib_search_path $shlib_search_path" - fi - for searchdir in $searchdirs; do - for search_ext in .la $std_shrext .so .a; do - # Search the libtool library - lib="$searchdir/lib${name}${search_ext}" - if test -f "$lib"; then - if test "$search_ext" = ".la"; then - found=yes - else - found=no - fi - break 2 - fi - done - done - if test "$found" != yes; then - # deplib doesn't seem to be a libtool library - if test "$linkmode,$pass" = "prog,link"; then - compile_deplibs="$deplib $compile_deplibs" - finalize_deplibs="$deplib $finalize_deplibs" - else - deplibs="$deplib $deplibs" - test "$linkmode" = lib && newdependency_libs="$deplib $newdependency_libs" - fi - continue - else # deplib is a libtool library - # If $allow_libtool_libs_with_static_runtimes && $deplib is a stdlib, - # We need to do some special things here, and not later. 
- if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then - case " $predeps $postdeps " in - *" $deplib "*) - if func_lalib_p "$lib"; then - library_names= - old_library= - func_source "$lib" - for l in $old_library $library_names; do - ll="$l" - done - if test "X$ll" = "X$old_library" ; then # only static version available - found=no - func_dirname "$lib" "" "." - ladir="$func_dirname_result" - lib=$ladir/$old_library - if test "$linkmode,$pass" = "prog,link"; then - compile_deplibs="$deplib $compile_deplibs" - finalize_deplibs="$deplib $finalize_deplibs" - else - deplibs="$deplib $deplibs" - test "$linkmode" = lib && newdependency_libs="$deplib $newdependency_libs" - fi - continue - fi - fi - ;; - *) ;; - esac - fi - fi - ;; # -l - *.ltframework) - if test "$linkmode,$pass" = "prog,link"; then - compile_deplibs="$deplib $compile_deplibs" - finalize_deplibs="$deplib $finalize_deplibs" - else - deplibs="$deplib $deplibs" - if test "$linkmode" = lib ; then - case "$new_inherited_linker_flags " in - *" $deplib "*) ;; - * ) func_append new_inherited_linker_flags " $deplib" ;; - esac - fi - fi - continue - ;; - -L*) - case $linkmode in - lib) - deplibs="$deplib $deplibs" - test "$pass" = conv && continue - newdependency_libs="$deplib $newdependency_libs" - func_stripname '-L' '' "$deplib" - func_resolve_sysroot "$func_stripname_result" - func_append newlib_search_path " $func_resolve_sysroot_result" - ;; - prog) - if test "$pass" = conv; then - deplibs="$deplib $deplibs" - continue - fi - if test "$pass" = scan; then - deplibs="$deplib $deplibs" - else - compile_deplibs="$deplib $compile_deplibs" - finalize_deplibs="$deplib $finalize_deplibs" - fi - func_stripname '-L' '' "$deplib" - func_resolve_sysroot "$func_stripname_result" - func_append newlib_search_path " $func_resolve_sysroot_result" - ;; - *) - func_warning "\`-L' is ignored for archives/objects" - ;; - esac # linkmode - continue - ;; # -L - -R*) - if test "$pass" = link; then - func_stripname '-R' '' "$deplib" - func_resolve_sysroot "$func_stripname_result" - dir=$func_resolve_sysroot_result - # Make sure the xrpath contains only unique directories. - case "$xrpath " in - *" $dir "*) ;; - *) func_append xrpath " $dir" ;; - esac - fi - deplibs="$deplib $deplibs" - continue - ;; - *.la) - func_resolve_sysroot "$deplib" - lib=$func_resolve_sysroot_result - ;; - *.$libext) - if test "$pass" = conv; then - deplibs="$deplib $deplibs" - continue - fi - case $linkmode in - lib) - # Linking convenience modules into shared libraries is allowed, - # but linking other static libraries is non-portable. - case " $dlpreconveniencelibs " in - *" $deplib "*) ;; - *) - valid_a_lib=no - case $deplibs_check_method in - match_pattern*) - set dummy $deplibs_check_method; shift - match_pattern_regex=`expr "$deplibs_check_method" : "$1 \(.*\)"` - if eval "\$ECHO \"$deplib\"" 2>/dev/null | $SED 10q \ - | $EGREP "$match_pattern_regex" > /dev/null; then - valid_a_lib=yes - fi - ;; - pass_all) - valid_a_lib=yes - ;; - esac - if test "$valid_a_lib" != yes; then - echo - $ECHO "*** Warning: Trying to link with static lib archive $deplib." - echo "*** I have the capability to make that library automatically link in when" - echo "*** you link to this library. But I can only do this if you have a" - echo "*** shared version of the library, which you do not appear to have" - echo "*** because the file extensions .$libext of this argument makes me believe" - echo "*** that it is just a static archive that I should not use here." 
- else - echo - $ECHO "*** Warning: Linking the shared library $output against the" - $ECHO "*** static library $deplib is not portable!" - deplibs="$deplib $deplibs" - fi - ;; - esac - continue - ;; - prog) - if test "$pass" != link; then - deplibs="$deplib $deplibs" - else - compile_deplibs="$deplib $compile_deplibs" - finalize_deplibs="$deplib $finalize_deplibs" - fi - continue - ;; - esac # linkmode - ;; # *.$libext - *.lo | *.$objext) - if test "$pass" = conv; then - deplibs="$deplib $deplibs" - elif test "$linkmode" = prog; then - if test "$pass" = dlpreopen || test "$dlopen_support" != yes || test "$build_libtool_libs" = no; then - # If there is no dlopen support or we're linking statically, - # we need to preload. - func_append newdlprefiles " $deplib" - compile_deplibs="$deplib $compile_deplibs" - finalize_deplibs="$deplib $finalize_deplibs" - else - func_append newdlfiles " $deplib" - fi - fi - continue - ;; - %DEPLIBS%) - alldeplibs=yes - continue - ;; - esac # case $deplib - - if test "$found" = yes || test -f "$lib"; then : - else - func_fatal_error "cannot find the library \`$lib' or unhandled argument \`$deplib'" - fi - - # Check to see that this really is a libtool archive. - func_lalib_unsafe_p "$lib" \ - || func_fatal_error "\`$lib' is not a valid libtool archive" - - func_dirname "$lib" "" "." - ladir="$func_dirname_result" - - dlname= - dlopen= - dlpreopen= - libdir= - library_names= - old_library= - inherited_linker_flags= - # If the library was installed with an old release of libtool, - # it will not redefine variables installed, or shouldnotlink - installed=yes - shouldnotlink=no - avoidtemprpath= - - - # Read the .la file - func_source "$lib" - - # Convert "-framework foo" to "foo.ltframework" - if test -n "$inherited_linker_flags"; then - tmp_inherited_linker_flags=`$ECHO "$inherited_linker_flags" | $SED 's/-framework \([^ $]*\)/\1.ltframework/g'` - for tmp_inherited_linker_flag in $tmp_inherited_linker_flags; do - case " $new_inherited_linker_flags " in - *" $tmp_inherited_linker_flag "*) ;; - *) func_append new_inherited_linker_flags " $tmp_inherited_linker_flag";; - esac - done - fi - dependency_libs=`$ECHO " $dependency_libs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` - if test "$linkmode,$pass" = "lib,link" || - test "$linkmode,$pass" = "prog,scan" || - { test "$linkmode" != prog && test "$linkmode" != lib; }; then - test -n "$dlopen" && func_append dlfiles " $dlopen" - test -n "$dlpreopen" && func_append dlprefiles " $dlpreopen" - fi - - if test "$pass" = conv; then - # Only check for convenience libraries - deplibs="$lib $deplibs" - if test -z "$libdir"; then - if test -z "$old_library"; then - func_fatal_error "cannot find name of link library for \`$lib'" - fi - # It is a libtool convenience library, so add in its objects. - func_append convenience " $ladir/$objdir/$old_library" - func_append old_convenience " $ladir/$objdir/$old_library" - tmp_libs= - for deplib in $dependency_libs; do - deplibs="$deplib $deplibs" - if $opt_preserve_dup_deps ; then - case "$tmp_libs " in - *" $deplib "*) func_append specialdeplibs " $deplib" ;; - esac - fi - func_append tmp_libs " $deplib" - done - elif test "$linkmode" != prog && test "$linkmode" != lib; then - func_fatal_error "\`$lib' is not a convenience library" - fi - continue - fi # $pass = conv - - - # Get the name of the library we link against. 
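For orientation, a .la file is nothing but shell variable assignments, which is why func_source above can simply source it; the following trimmed libfoo.la is hypothetical:

# dlname='libfoo.so.1'
# library_names='libfoo.so.1.2.0 libfoo.so.1 libfoo.so'
# old_library='libfoo.a'
# dependency_libs=' -L/usr/local/lib -lbar'
# installed=yes
# libdir='/usr/local/lib'
. ./libfoo.la && echo "shared: $library_names  static: $old_library  installed in: $libdir"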
- linklib= - if test -n "$old_library" && - { test "$prefer_static_libs" = yes || - test "$prefer_static_libs,$installed" = "built,no"; }; then - linklib=$old_library - else - for l in $old_library $library_names; do - linklib="$l" - done - fi - if test -z "$linklib"; then - func_fatal_error "cannot find name of link library for \`$lib'" - fi - - # This library was specified with -dlopen. - if test "$pass" = dlopen; then - if test -z "$libdir"; then - func_fatal_error "cannot -dlopen a convenience library: \`$lib'" - fi - if test -z "$dlname" || - test "$dlopen_support" != yes || - test "$build_libtool_libs" = no; then - # If there is no dlname, no dlopen support or we're linking - # statically, we need to preload. We also need to preload any - # dependent libraries so libltdl's deplib preloader doesn't - # bomb out in the load deplibs phase. - func_append dlprefiles " $lib $dependency_libs" - else - func_append newdlfiles " $lib" - fi - continue - fi # $pass = dlopen - - # We need an absolute path. - case $ladir in - [\\/]* | [A-Za-z]:[\\/]*) abs_ladir="$ladir" ;; - *) - abs_ladir=`cd "$ladir" && pwd` - if test -z "$abs_ladir"; then - func_warning "cannot determine absolute directory name of \`$ladir'" - func_warning "passing it literally to the linker, although it might fail" - abs_ladir="$ladir" - fi - ;; - esac - func_basename "$lib" - laname="$func_basename_result" - - # Find the relevant object directory and library name. - if test "X$installed" = Xyes; then - if test ! -f "$lt_sysroot$libdir/$linklib" && test -f "$abs_ladir/$linklib"; then - func_warning "library \`$lib' was moved." - dir="$ladir" - absdir="$abs_ladir" - libdir="$abs_ladir" - else - dir="$lt_sysroot$libdir" - absdir="$lt_sysroot$libdir" - fi - test "X$hardcode_automatic" = Xyes && avoidtemprpath=yes - else - if test ! -f "$ladir/$objdir/$linklib" && test -f "$abs_ladir/$linklib"; then - dir="$ladir" - absdir="$abs_ladir" - # Remove this search path later - func_append notinst_path " $abs_ladir" - else - dir="$ladir/$objdir" - absdir="$abs_ladir/$objdir" - # Remove this search path later - func_append notinst_path " $abs_ladir" - fi - fi # $installed = yes - func_stripname 'lib' '.la' "$laname" - name=$func_stripname_result - - # This library was specified with -dlpreopen. - if test "$pass" = dlpreopen; then - if test -z "$libdir" && test "$linkmode" = prog; then - func_fatal_error "only libraries may -dlpreopen a convenience library: \`$lib'" - fi - case "$host" in - # special handling for platforms with PE-DLLs. - *cygwin* | *mingw* | *cegcc* ) - # Linker will automatically link against shared library if both - # static and shared are present. Therefore, ensure we extract - # symbols from the import library if a shared library is present - # (otherwise, the dlopen module name will be incorrect). We do - # this by putting the import library name into $newdlprefiles. - # We recover the dlopen module name by 'saving' the la file - # name in a special purpose variable, and (later) extracting the - # dlname from the la file. - if test -n "$dlname"; then - func_tr_sh "$dir/$linklib" - eval "libfile_$func_tr_sh_result=\$abs_ladir/\$laname" - func_append newdlprefiles " $dir/$linklib" - else - func_append newdlprefiles " $dir/$old_library" - # Keep a list of preopened convenience libraries to check - # that they are being used correctly in the link pass. 
- test -z "$libdir" && \ - func_append dlpreconveniencelibs " $dir/$old_library" - fi - ;; - * ) - # Prefer using a static library (so that no silly _DYNAMIC symbols - # are required to link). - if test -n "$old_library"; then - func_append newdlprefiles " $dir/$old_library" - # Keep a list of preopened convenience libraries to check - # that they are being used correctly in the link pass. - test -z "$libdir" && \ - func_append dlpreconveniencelibs " $dir/$old_library" - # Otherwise, use the dlname, so that lt_dlopen finds it. - elif test -n "$dlname"; then - func_append newdlprefiles " $dir/$dlname" - else - func_append newdlprefiles " $dir/$linklib" - fi - ;; - esac - fi # $pass = dlpreopen - - if test -z "$libdir"; then - # Link the convenience library - if test "$linkmode" = lib; then - deplibs="$dir/$old_library $deplibs" - elif test "$linkmode,$pass" = "prog,link"; then - compile_deplibs="$dir/$old_library $compile_deplibs" - finalize_deplibs="$dir/$old_library $finalize_deplibs" - else - deplibs="$lib $deplibs" # used for prog,scan pass - fi - continue - fi - - - if test "$linkmode" = prog && test "$pass" != link; then - func_append newlib_search_path " $ladir" - deplibs="$lib $deplibs" - - linkalldeplibs=no - if test "$link_all_deplibs" != no || test -z "$library_names" || - test "$build_libtool_libs" = no; then - linkalldeplibs=yes - fi - - tmp_libs= - for deplib in $dependency_libs; do - case $deplib in - -L*) func_stripname '-L' '' "$deplib" - func_resolve_sysroot "$func_stripname_result" - func_append newlib_search_path " $func_resolve_sysroot_result" - ;; - esac - # Need to link against all dependency_libs? - if test "$linkalldeplibs" = yes; then - deplibs="$deplib $deplibs" - else - # Need to hardcode shared library paths - # or/and link against static libraries - newdependency_libs="$deplib $newdependency_libs" - fi - if $opt_preserve_dup_deps ; then - case "$tmp_libs " in - *" $deplib "*) func_append specialdeplibs " $deplib" ;; - esac - fi - func_append tmp_libs " $deplib" - done # for deplib - continue - fi # $linkmode = prog... - - if test "$linkmode,$pass" = "prog,link"; then - if test -n "$library_names" && - { { test "$prefer_static_libs" = no || - test "$prefer_static_libs,$installed" = "built,yes"; } || - test -z "$old_library"; }; then - # We need to hardcode the library path - if test -n "$shlibpath_var" && test -z "$avoidtemprpath" ; then - # Make sure the rpath contains only unique directories. - case "$temp_rpath:" in - *"$absdir:"*) ;; - *) func_append temp_rpath "$absdir:" ;; - esac - fi - - # Hardcode the library path. - # Skip directories that are in the system default run-time - # search path. - case " $sys_lib_dlsearch_path " in - *" $absdir "*) ;; - *) - case "$compile_rpath " in - *" $absdir "*) ;; - *) func_append compile_rpath " $absdir" ;; - esac - ;; - esac - case " $sys_lib_dlsearch_path " in - *" $libdir "*) ;; - *) - case "$finalize_rpath " in - *" $libdir "*) ;; - *) func_append finalize_rpath " $libdir" ;; - esac - ;; - esac - fi # $linkmode,$pass = prog,link... 
- - if test "$alldeplibs" = yes && - { test "$deplibs_check_method" = pass_all || - { test "$build_libtool_libs" = yes && - test -n "$library_names"; }; }; then - # We only need to search for static libraries - continue - fi - fi - - link_static=no # Whether the deplib will be linked statically - use_static_libs=$prefer_static_libs - if test "$use_static_libs" = built && test "$installed" = yes; then - use_static_libs=no - fi - if test -n "$library_names" && - { test "$use_static_libs" = no || test -z "$old_library"; }; then - case $host in - *cygwin* | *mingw* | *cegcc*) - # No point in relinking DLLs because paths are not encoded - func_append notinst_deplibs " $lib" - need_relink=no - ;; - *) - if test "$installed" = no; then - func_append notinst_deplibs " $lib" - need_relink=yes - fi - ;; - esac - # This is a shared library - - # Warn about portability, can't link against -module's on some - # systems (darwin). Don't bleat about dlopened modules though! - dlopenmodule="" - for dlpremoduletest in $dlprefiles; do - if test "X$dlpremoduletest" = "X$lib"; then - dlopenmodule="$dlpremoduletest" - break - fi - done - if test -z "$dlopenmodule" && test "$shouldnotlink" = yes && test "$pass" = link; then - echo - if test "$linkmode" = prog; then - $ECHO "*** Warning: Linking the executable $output against the loadable module" - else - $ECHO "*** Warning: Linking the shared library $output against the loadable module" - fi - $ECHO "*** $linklib is not portable!" - fi - if test "$linkmode" = lib && - test "$hardcode_into_libs" = yes; then - # Hardcode the library path. - # Skip directories that are in the system default run-time - # search path. - case " $sys_lib_dlsearch_path " in - *" $absdir "*) ;; - *) - case "$compile_rpath " in - *" $absdir "*) ;; - *) func_append compile_rpath " $absdir" ;; - esac - ;; - esac - case " $sys_lib_dlsearch_path " in - *" $libdir "*) ;; - *) - case "$finalize_rpath " in - *" $libdir "*) ;; - *) func_append finalize_rpath " $libdir" ;; - esac - ;; - esac - fi - - if test -n "$old_archive_from_expsyms_cmds"; then - # figure out the soname - set dummy $library_names - shift - realname="$1" - shift - libname=`eval "\\$ECHO \"$libname_spec\""` - # use dlname if we got it. it's perfectly good, no? - if test -n "$dlname"; then - soname="$dlname" - elif test -n "$soname_spec"; then - # bleh windows - case $host in - *cygwin* | mingw* | *cegcc*) - func_arith $current - $age - major=$func_arith_result - versuffix="-$major" - ;; - esac - eval soname=\"$soname_spec\" - else - soname="$realname" - fi - - # Make a new name for the extract_expsyms_cmds to use - soroot="$soname" - func_basename "$soroot" - soname="$func_basename_result" - func_stripname 'lib' '.dll' "$soname" - newlib=libimp-$func_stripname_result.a - - # If the library has no export list, then create one now - if test -f "$output_objdir/$soname-def"; then : - else - func_verbose "extracting exported symbol list from \`$soname'" - func_execute_cmds "$extract_expsyms_cmds" 'exit $?' - fi - - # Create $newlib - if test -f "$output_objdir/$newlib"; then :; else - func_verbose "generating import library for \`$soname'" - func_execute_cmds "$old_archive_from_expsyms_cmds" 'exit $?' 
- fi - # make sure the library variables are pointing to the new library - dir=$output_objdir - linklib=$newlib - fi # test -n "$old_archive_from_expsyms_cmds" - - if test "$linkmode" = prog || test "$opt_mode" != relink; then - add_shlibpath= - add_dir= - add= - lib_linked=yes - case $hardcode_action in - immediate | unsupported) - if test "$hardcode_direct" = no; then - add="$dir/$linklib" - case $host in - *-*-sco3.2v5.0.[024]*) add_dir="-L$dir" ;; - *-*-sysv4*uw2*) add_dir="-L$dir" ;; - *-*-sysv5OpenUNIX* | *-*-sysv5UnixWare7.[01].[10]* | \ - *-*-unixware7*) add_dir="-L$dir" ;; - *-*-darwin* ) - # if the lib is a (non-dlopened) module then we can not - # link against it, someone is ignoring the earlier warnings - if /usr/bin/file -L $add 2> /dev/null | - $GREP ": [^:]* bundle" >/dev/null ; then - if test "X$dlopenmodule" != "X$lib"; then - $ECHO "*** Warning: lib $linklib is a module, not a shared library" - if test -z "$old_library" ; then - echo - echo "*** And there doesn't seem to be a static archive available" - echo "*** The link will probably fail, sorry" - else - add="$dir/$old_library" - fi - elif test -n "$old_library"; then - add="$dir/$old_library" - fi - fi - esac - elif test "$hardcode_minus_L" = no; then - case $host in - *-*-sunos*) add_shlibpath="$dir" ;; - esac - add_dir="-L$dir" - add="-l$name" - elif test "$hardcode_shlibpath_var" = no; then - add_shlibpath="$dir" - add="-l$name" - else - lib_linked=no - fi - ;; - relink) - if test "$hardcode_direct" = yes && - test "$hardcode_direct_absolute" = no; then - add="$dir/$linklib" - elif test "$hardcode_minus_L" = yes; then - add_dir="-L$absdir" - # Try looking first in the location we're being installed to. - if test -n "$inst_prefix_dir"; then - case $libdir in - [\\/]*) - func_append add_dir " -L$inst_prefix_dir$libdir" - ;; - esac - fi - add="-l$name" - elif test "$hardcode_shlibpath_var" = yes; then - add_shlibpath="$dir" - add="-l$name" - else - lib_linked=no - fi - ;; - *) lib_linked=no ;; - esac - - if test "$lib_linked" != yes; then - func_fatal_configuration "unsupported hardcode properties" - fi - - if test -n "$add_shlibpath"; then - case :$compile_shlibpath: in - *":$add_shlibpath:"*) ;; - *) func_append compile_shlibpath "$add_shlibpath:" ;; - esac - fi - if test "$linkmode" = prog; then - test -n "$add_dir" && compile_deplibs="$add_dir $compile_deplibs" - test -n "$add" && compile_deplibs="$add $compile_deplibs" - else - test -n "$add_dir" && deplibs="$add_dir $deplibs" - test -n "$add" && deplibs="$add $deplibs" - if test "$hardcode_direct" != yes && - test "$hardcode_minus_L" != yes && - test "$hardcode_shlibpath_var" = yes; then - case :$finalize_shlibpath: in - *":$libdir:"*) ;; - *) func_append finalize_shlibpath "$libdir:" ;; - esac - fi - fi - fi - - if test "$linkmode" = prog || test "$opt_mode" = relink; then - add_shlibpath= - add_dir= - add= - # Finalize command for both is simple: just hardcode it. 
- if test "$hardcode_direct" = yes && - test "$hardcode_direct_absolute" = no; then - add="$libdir/$linklib" - elif test "$hardcode_minus_L" = yes; then - add_dir="-L$libdir" - add="-l$name" - elif test "$hardcode_shlibpath_var" = yes; then - case :$finalize_shlibpath: in - *":$libdir:"*) ;; - *) func_append finalize_shlibpath "$libdir:" ;; - esac - add="-l$name" - elif test "$hardcode_automatic" = yes; then - if test -n "$inst_prefix_dir" && - test -f "$inst_prefix_dir$libdir/$linklib" ; then - add="$inst_prefix_dir$libdir/$linklib" - else - add="$libdir/$linklib" - fi - else - # We cannot seem to hardcode it, guess we'll fake it. - add_dir="-L$libdir" - # Try looking first in the location we're being installed to. - if test -n "$inst_prefix_dir"; then - case $libdir in - [\\/]*) - func_append add_dir " -L$inst_prefix_dir$libdir" - ;; - esac - fi - add="-l$name" - fi - - if test "$linkmode" = prog; then - test -n "$add_dir" && finalize_deplibs="$add_dir $finalize_deplibs" - test -n "$add" && finalize_deplibs="$add $finalize_deplibs" - else - test -n "$add_dir" && deplibs="$add_dir $deplibs" - test -n "$add" && deplibs="$add $deplibs" - fi - fi - elif test "$linkmode" = prog; then - # Here we assume that one of hardcode_direct or hardcode_minus_L - # is not unsupported. This is valid on all known static and - # shared platforms. - if test "$hardcode_direct" != unsupported; then - test -n "$old_library" && linklib="$old_library" - compile_deplibs="$dir/$linklib $compile_deplibs" - finalize_deplibs="$dir/$linklib $finalize_deplibs" - else - compile_deplibs="-l$name -L$dir $compile_deplibs" - finalize_deplibs="-l$name -L$dir $finalize_deplibs" - fi - elif test "$build_libtool_libs" = yes; then - # Not a shared library - if test "$deplibs_check_method" != pass_all; then - # We're trying link a shared library against a static one - # but the system doesn't support it. - - # Just print a warning and add the library to dependency_libs so - # that the program can be linked against the static library. - echo - $ECHO "*** Warning: This system can not link to static lib archive $lib." - echo "*** I have the capability to make that library automatically link in when" - echo "*** you link to this library. But I can only do this if you have a" - echo "*** shared version of the library, which you do not appear to have." - if test "$module" = yes; then - echo "*** But as you try to build a module library, libtool will still create " - echo "*** a static module, that should work as long as the dlopening application" - echo "*** is linked with the -dlopen flag to resolve symbols at runtime." - if test -z "$global_symbol_pipe"; then - echo - echo "*** However, this would only work if libtool was able to extract symbol" - echo "*** lists from a program, using \`nm' or equivalent, but libtool could" - echo "*** not find such a program. So, this module is probably useless." - echo "*** \`nm' from GNU binutils and a full rebuild may help." - fi - if test "$build_old_libs" = no; then - build_libtool_libs=module - build_old_libs=yes - else - build_libtool_libs=no - fi - fi - else - deplibs="$dir/$old_library $deplibs" - link_static=yes - fi - fi # link shared/static library? 
- - if test "$linkmode" = lib; then - if test -n "$dependency_libs" && - { test "$hardcode_into_libs" != yes || - test "$build_old_libs" = yes || - test "$link_static" = yes; }; then - # Extract -R from dependency_libs - temp_deplibs= - for libdir in $dependency_libs; do - case $libdir in - -R*) func_stripname '-R' '' "$libdir" - temp_xrpath=$func_stripname_result - case " $xrpath " in - *" $temp_xrpath "*) ;; - *) func_append xrpath " $temp_xrpath";; - esac;; - *) func_append temp_deplibs " $libdir";; - esac - done - dependency_libs="$temp_deplibs" - fi - - func_append newlib_search_path " $absdir" - # Link against this library - test "$link_static" = no && newdependency_libs="$abs_ladir/$laname $newdependency_libs" - # ... and its dependency_libs - tmp_libs= - for deplib in $dependency_libs; do - newdependency_libs="$deplib $newdependency_libs" - case $deplib in - -L*) func_stripname '-L' '' "$deplib" - func_resolve_sysroot "$func_stripname_result";; - *) func_resolve_sysroot "$deplib" ;; - esac - if $opt_preserve_dup_deps ; then - case "$tmp_libs " in - *" $func_resolve_sysroot_result "*) - func_append specialdeplibs " $func_resolve_sysroot_result" ;; - esac - fi - func_append tmp_libs " $func_resolve_sysroot_result" - done - - if test "$link_all_deplibs" != no; then - # Add the search paths of all dependency libraries - for deplib in $dependency_libs; do - path= - case $deplib in - -L*) path="$deplib" ;; - *.la) - func_resolve_sysroot "$deplib" - deplib=$func_resolve_sysroot_result - func_dirname "$deplib" "" "." - dir=$func_dirname_result - # We need an absolute path. - case $dir in - [\\/]* | [A-Za-z]:[\\/]*) absdir="$dir" ;; - *) - absdir=`cd "$dir" && pwd` - if test -z "$absdir"; then - func_warning "cannot determine absolute directory name of \`$dir'" - absdir="$dir" - fi - ;; - esac - if $GREP "^installed=no" $deplib > /dev/null; then - case $host in - *-*-darwin*) - depdepl= - eval deplibrary_names=`${SED} -n -e 's/^library_names=\(.*\)$/\1/p' $deplib` - if test -n "$deplibrary_names" ; then - for tmp in $deplibrary_names ; do - depdepl=$tmp - done - if test -f "$absdir/$objdir/$depdepl" ; then - depdepl="$absdir/$objdir/$depdepl" - darwin_install_name=`${OTOOL} -L $depdepl | awk '{if (NR == 2) {print $1;exit}}'` - if test -z "$darwin_install_name"; then - darwin_install_name=`${OTOOL64} -L $depdepl | awk '{if (NR == 2) {print $1;exit}}'` - fi - func_append compiler_flags " ${wl}-dylib_file ${wl}${darwin_install_name}:${depdepl}" - func_append linker_flags " -dylib_file ${darwin_install_name}:${depdepl}" - path= - fi - fi - ;; - *) - path="-L$absdir/$objdir" - ;; - esac - else - eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $deplib` - test -z "$libdir" && \ - func_fatal_error "\`$deplib' is not a valid libtool archive" - test "$absdir" != "$libdir" && \ - func_warning "\`$deplib' seems to be moved" - - path="-L$absdir" - fi - ;; - esac - case " $deplibs " in - *" $path "*) ;; - *) deplibs="$path $deplibs" ;; - esac - done - fi # link_all_deplibs != no - fi # linkmode = lib - done # for deplib in $libs - if test "$pass" = link; then - if test "$linkmode" = "prog"; then - compile_deplibs="$new_inherited_linker_flags $compile_deplibs" - finalize_deplibs="$new_inherited_linker_flags $finalize_deplibs" - else - compiler_flags="$compiler_flags "`$ECHO " $new_inherited_linker_flags" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` - fi - fi - dependency_libs="$newdependency_libs" - if test "$pass" = dlpreopen; then - # Link the dlpreopened libraries before other libraries - 
for deplib in $save_deplibs; do - deplibs="$deplib $deplibs" - done - fi - if test "$pass" != dlopen; then - if test "$pass" != conv; then - # Make sure lib_search_path contains only unique directories. - lib_search_path= - for dir in $newlib_search_path; do - case "$lib_search_path " in - *" $dir "*) ;; - *) func_append lib_search_path " $dir" ;; - esac - done - newlib_search_path= - fi - - if test "$linkmode,$pass" != "prog,link"; then - vars="deplibs" - else - vars="compile_deplibs finalize_deplibs" - fi - for var in $vars dependency_libs; do - # Add libraries to $var in reverse order - eval tmp_libs=\"\$$var\" - new_libs= - for deplib in $tmp_libs; do - # FIXME: Pedantically, this is the right thing to do, so - # that some nasty dependency loop isn't accidentally - # broken: - #new_libs="$deplib $new_libs" - # Pragmatically, this seems to cause very few problems in - # practice: - case $deplib in - -L*) new_libs="$deplib $new_libs" ;; - -R*) ;; - *) - # And here is the reason: when a library appears more - # than once as an explicit dependence of a library, or - # is implicitly linked in more than once by the - # compiler, it is considered special, and multiple - # occurrences thereof are not removed. Compare this - # with having the same library being listed as a - # dependency of multiple other libraries: in this case, - # we know (pedantically, we assume) the library does not - # need to be listed more than once, so we keep only the - # last copy. This is not always right, but it is rare - # enough that we require users that really mean to play - # such unportable linking tricks to link the library - # using -Wl,-lname, so that libtool does not consider it - # for duplicate removal. - case " $specialdeplibs " in - *" $deplib "*) new_libs="$deplib $new_libs" ;; - *) - case " $new_libs " in - *" $deplib "*) ;; - *) new_libs="$deplib $new_libs" ;; - esac - ;; - esac - ;; - esac - done - tmp_libs= - for deplib in $new_libs; do - case $deplib in - -L*) - case " $tmp_libs " in - *" $deplib "*) ;; - *) func_append tmp_libs " $deplib" ;; - esac - ;; - *) func_append tmp_libs " $deplib" ;; - esac - done - eval $var=\"$tmp_libs\" - done # for var - fi - # Last step: remove runtime libs from dependency_libs - # (they stay in deplibs) - tmp_libs= - for i in $dependency_libs ; do - case " $predeps $postdeps $compiler_lib_search_path " in - *" $i "*) - i="" - ;; - esac - if test -n "$i" ; then - func_append tmp_libs " $i" - fi - done - dependency_libs=$tmp_libs - done # for pass - if test "$linkmode" = prog; then - dlfiles="$newdlfiles" - fi - if test "$linkmode" = prog || test "$linkmode" = lib; then - dlprefiles="$newdlprefiles" - fi - - case $linkmode in - oldlib) - if test -n "$dlfiles$dlprefiles" || test "$dlself" != no; then - func_warning "\`-dlopen' is ignored for archives" - fi - - case " $deplibs" in - *\ -l* | *\ -L*) - func_warning "\`-l' and \`-L' are ignored for archives" ;; - esac - - test -n "$rpath" && \ - func_warning "\`-rpath' is ignored for archives" - - test -n "$xrpath" && \ - func_warning "\`-R' is ignored for archives" - - test -n "$vinfo" && \ - func_warning "\`-version-info/-version-number' is ignored for archives" - - test -n "$release" && \ - func_warning "\`-release' is ignored for archives" - - test -n "$export_symbols$export_symbols_regex" && \ - func_warning "\`-export-symbols' is ignored for archives" - - # Now set the variables for building old libraries. 
- build_libtool_libs=no - oldlibs="$output" - func_append objs "$old_deplibs" - ;; - - lib) - # Make sure we only generate libraries of the form `libNAME.la'. - case $outputname in - lib*) - func_stripname 'lib' '.la' "$outputname" - name=$func_stripname_result - eval shared_ext=\"$shrext_cmds\" - eval libname=\"$libname_spec\" - ;; - *) - test "$module" = no && \ - func_fatal_help "libtool library \`$output' must begin with \`lib'" - - if test "$need_lib_prefix" != no; then - # Add the "lib" prefix for modules if required - func_stripname '' '.la' "$outputname" - name=$func_stripname_result - eval shared_ext=\"$shrext_cmds\" - eval libname=\"$libname_spec\" - else - func_stripname '' '.la' "$outputname" - libname=$func_stripname_result - fi - ;; - esac - - if test -n "$objs"; then - if test "$deplibs_check_method" != pass_all; then - func_fatal_error "cannot build libtool library \`$output' from non-libtool objects on this host:$objs" - else - echo - $ECHO "*** Warning: Linking the shared library $output against the non-libtool" - $ECHO "*** objects $objs is not portable!" - func_append libobjs " $objs" - fi - fi - - test "$dlself" != no && \ - func_warning "\`-dlopen self' is ignored for libtool libraries" - - set dummy $rpath - shift - test "$#" -gt 1 && \ - func_warning "ignoring multiple \`-rpath's for a libtool library" - - install_libdir="$1" - - oldlibs= - if test -z "$rpath"; then - if test "$build_libtool_libs" = yes; then - # Building a libtool convenience library. - # Some compilers have problems with a `.al' extension so - # convenience libraries should have the same extension an - # archive normally would. - oldlibs="$output_objdir/$libname.$libext $oldlibs" - build_libtool_libs=convenience - build_old_libs=yes - fi - - test -n "$vinfo" && \ - func_warning "\`-version-info/-version-number' is ignored for convenience libraries" - - test -n "$release" && \ - func_warning "\`-release' is ignored for convenience libraries" - else - - # Parse the version information argument. - save_ifs="$IFS"; IFS=':' - set dummy $vinfo 0 0 0 - shift - IFS="$save_ifs" - - test -n "$7" && \ - func_fatal_help "too many parameters to \`-version-info'" - - # convert absolute version numbers to libtool ages - # this retains compatibility with .la files and attempts - # to make the code below a bit more comprehensible - - case $vinfo_number in - yes) - number_major="$1" - number_minor="$2" - number_revision="$3" - # - # There are really only two kinds -- those that - # use the current revision as the major version - # and those that subtract age and use age as - # a minor version. But, then there is irix - # which has an extra 1 added just for fun - # - case $version_type in - # correct linux to gnu/linux during the next big refactor - darwin|linux|osf|windows|none) - func_arith $number_major + $number_minor - current=$func_arith_result - age="$number_minor" - revision="$number_revision" - ;; - freebsd-aout|freebsd-elf|qnx|sunos) - current="$number_major" - revision="$number_minor" - age="0" - ;; - irix|nonstopux) - func_arith $number_major + $number_minor - current=$func_arith_result - age="$number_minor" - revision="$number_minor" - lt_irix_increment=no - ;; - *) - func_fatal_configuration "$modename: unknown library version type \`$version_type'" - ;; - esac - ;; - no) - current="$1" - revision="$2" - age="$3" - ;; - esac - - # Check that each of the things are valid numbers. 
- case $current in - 0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;; - *) - func_error "CURRENT \`$current' must be a nonnegative integer" - func_fatal_error "\`$vinfo' is not valid version information" - ;; - esac - - case $revision in - 0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;; - *) - func_error "REVISION \`$revision' must be a nonnegative integer" - func_fatal_error "\`$vinfo' is not valid version information" - ;; - esac - - case $age in - 0|[1-9]|[1-9][0-9]|[1-9][0-9][0-9]|[1-9][0-9][0-9][0-9]|[1-9][0-9][0-9][0-9][0-9]) ;; - *) - func_error "AGE \`$age' must be a nonnegative integer" - func_fatal_error "\`$vinfo' is not valid version information" - ;; - esac - - if test "$age" -gt "$current"; then - func_error "AGE \`$age' is greater than the current interface number \`$current'" - func_fatal_error "\`$vinfo' is not valid version information" - fi - - # Calculate the version variables. - major= - versuffix= - verstring= - case $version_type in - none) ;; - - darwin) - # Like Linux, but with the current version available in - # verstring for coding it into the library header - func_arith $current - $age - major=.$func_arith_result - versuffix="$major.$age.$revision" - # Darwin ld doesn't like 0 for these options... - func_arith $current + 1 - minor_current=$func_arith_result - xlcverstring="${wl}-compatibility_version ${wl}$minor_current ${wl}-current_version ${wl}$minor_current.$revision" - verstring="-compatibility_version $minor_current -current_version $minor_current.$revision" - ;; - - freebsd-aout) - major=".$current" - versuffix=".$current.$revision"; - ;; - - freebsd-elf) - major=".$current" - versuffix=".$current" - ;; - - irix | nonstopux) - if test "X$lt_irix_increment" = "Xno"; then - func_arith $current - $age - else - func_arith $current - $age + 1 - fi - major=$func_arith_result - - case $version_type in - nonstopux) verstring_prefix=nonstopux ;; - *) verstring_prefix=sgi ;; - esac - verstring="$verstring_prefix$major.$revision" - - # Add in all the interfaces that we are compatible with. - loop=$revision - while test "$loop" -ne 0; do - func_arith $revision - $loop - iface=$func_arith_result - func_arith $loop - 1 - loop=$func_arith_result - verstring="$verstring_prefix$major.$iface:$verstring" - done - - # Before this point, $major must not contain `.'. - major=.$major - versuffix="$major.$revision" - ;; - - linux) # correct to gnu/linux during the next big refactor - func_arith $current - $age - major=.$func_arith_result - versuffix="$major.$age.$revision" - ;; - - osf) - func_arith $current - $age - major=.$func_arith_result - versuffix=".$current.$age.$revision" - verstring="$current.$age.$revision" - - # Add in all the interfaces that we are compatible with. - loop=$age - while test "$loop" -ne 0; do - func_arith $current - $loop - iface=$func_arith_result - func_arith $loop - 1 - loop=$func_arith_result - verstring="$verstring:${iface}.0" - done - - # Make executables depend on our current version. - func_append verstring ":${current}.0" - ;; - - qnx) - major=".$current" - versuffix=".$current" - ;; - - sunos) - major=".$current" - versuffix=".$current.$revision" - ;; - - windows) - # Use '-' rather than '.', since we only want one - # extension on DOS 8.3 filesystems. 
- func_arith $current - $age - major=$func_arith_result - versuffix="-$major" - ;; - - *) - func_fatal_configuration "unknown library version type \`$version_type'" - ;; - esac - - # Clear the version info if we defaulted, and they specified a release. - if test -z "$vinfo" && test -n "$release"; then - major= - case $version_type in - darwin) - # we can't check for "0.0" in archive_cmds due to quoting - # problems, so we reset it completely - verstring= - ;; - *) - verstring="0.0" - ;; - esac - if test "$need_version" = no; then - versuffix= - else - versuffix=".0.0" - fi - fi - - # Remove version info from name if versioning should be avoided - if test "$avoid_version" = yes && test "$need_version" = no; then - major= - versuffix= - verstring="" - fi - - # Check to see if the archive will have undefined symbols. - if test "$allow_undefined" = yes; then - if test "$allow_undefined_flag" = unsupported; then - func_warning "undefined symbols not allowed in $host shared libraries" - build_libtool_libs=no - build_old_libs=yes - fi - else - # Don't allow undefined symbols. - allow_undefined_flag="$no_undefined_flag" - fi - - fi - - func_generate_dlsyms "$libname" "$libname" "yes" - func_append libobjs " $symfileobj" - test "X$libobjs" = "X " && libobjs= - - if test "$opt_mode" != relink; then - # Remove our outputs, but don't remove object files since they - # may have been created when compiling PIC objects. - removelist= - tempremovelist=`$ECHO "$output_objdir/*"` - for p in $tempremovelist; do - case $p in - *.$objext | *.gcno) - ;; - $output_objdir/$outputname | $output_objdir/$libname.* | $output_objdir/${libname}${release}.*) - if test "X$precious_files_regex" != "X"; then - if $ECHO "$p" | $EGREP -e "$precious_files_regex" >/dev/null 2>&1 - then - continue - fi - fi - func_append removelist " $p" - ;; - *) ;; - esac - done - test -n "$removelist" && \ - func_show_eval "${RM}r \$removelist" - fi - - # Now set the variables for building old libraries. - if test "$build_old_libs" = yes && test "$build_libtool_libs" != convenience ; then - func_append oldlibs " $output_objdir/$libname.$libext" - - # Transform .lo files to .o files. - oldobjs="$objs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; $lo2o" | $NL2SP` - fi - - # Eliminate all temporary directories. - #for path in $notinst_path; do - # lib_search_path=`$ECHO "$lib_search_path " | $SED "s% $path % %g"` - # deplibs=`$ECHO "$deplibs " | $SED "s% -L$path % %g"` - # dependency_libs=`$ECHO "$dependency_libs " | $SED "s% -L$path % %g"` - #done - - if test -n "$xrpath"; then - # If the user specified any rpath flags, then add them. 
- temp_xrpath= - for libdir in $xrpath; do - func_replace_sysroot "$libdir" - func_append temp_xrpath " -R$func_replace_sysroot_result" - case "$finalize_rpath " in - *" $libdir "*) ;; - *) func_append finalize_rpath " $libdir" ;; - esac - done - if test "$hardcode_into_libs" != yes || test "$build_old_libs" = yes; then - dependency_libs="$temp_xrpath $dependency_libs" - fi - fi - - # Make sure dlfiles contains only unique files that won't be dlpreopened - old_dlfiles="$dlfiles" - dlfiles= - for lib in $old_dlfiles; do - case " $dlprefiles $dlfiles " in - *" $lib "*) ;; - *) func_append dlfiles " $lib" ;; - esac - done - - # Make sure dlprefiles contains only unique files - old_dlprefiles="$dlprefiles" - dlprefiles= - for lib in $old_dlprefiles; do - case "$dlprefiles " in - *" $lib "*) ;; - *) func_append dlprefiles " $lib" ;; - esac - done - - if test "$build_libtool_libs" = yes; then - if test -n "$rpath"; then - case $host in - *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-*-beos* | *-cegcc* | *-*-haiku*) - # these systems don't actually have a c library (as such)! - ;; - *-*-rhapsody* | *-*-darwin1.[012]) - # Rhapsody C library is in the System framework - func_append deplibs " System.ltframework" - ;; - *-*-netbsd*) - # Don't link with libc until the a.out ld.so is fixed. - ;; - *-*-openbsd* | *-*-freebsd* | *-*-dragonfly*) - # Do not include libc due to us having libc/libc_r. - ;; - *-*-sco3.2v5* | *-*-sco5v6*) - # Causes problems with __ctype - ;; - *-*-sysv4.2uw2* | *-*-sysv5* | *-*-unixware* | *-*-OpenUNIX*) - # Compiler inserts libc in the correct place for threads to work - ;; - *) - # Add libc to deplibs on all other systems if necessary. - if test "$build_libtool_need_lc" = "yes"; then - func_append deplibs " -lc" - fi - ;; - esac - fi - - # Transform deplibs into only deplibs that can be linked in shared. - name_save=$name - libname_save=$libname - release_save=$release - versuffix_save=$versuffix - major_save=$major - # I'm not sure if I'm treating the release correctly. I think - # release should show up in the -l (ie -lgmp5) so we don't want to - # add it in twice. Is that correct? - release="" - versuffix="" - major="" - newdeplibs= - droppeddeps=no - case $deplibs_check_method in - pass_all) - # Don't check for shared/static. Everything works. - # This might be a little naive. We might want to check - # whether the library exists or not. But this is on - # osf3 & osf4 and I'm not really sure... Just - # implementing what was already the behavior. - newdeplibs=$deplibs - ;; - test_compile) - # This code stresses the "libraries are programs" paradigm to its - # limits. Maybe even breaks it. We compile a program, linking it - # against the deplibs as a proxy for the library. Then we can check - # whether they linked in statically or dynamically with ldd. - $opt_dry_run || $RM conftest.c - cat > conftest.c </dev/null` - $nocaseglob - else - potential_libs=`ls $i/$libnameglob[.-]* 2>/dev/null` - fi - for potent_lib in $potential_libs; do - # Follow soft links. - if ls -lLd "$potent_lib" 2>/dev/null | - $GREP " -> " >/dev/null; then - continue - fi - # The statement above tries to avoid entering an - # endless loop below, in case of cyclic links. - # We might still enter an endless loop, since a link - # loop can be closed while we follow links, - # but so what? 
- potlib="$potent_lib" - while test -h "$potlib" 2>/dev/null; do - potliblink=`ls -ld $potlib | ${SED} 's/.* -> //'` - case $potliblink in - [\\/]* | [A-Za-z]:[\\/]*) potlib="$potliblink";; - *) potlib=`$ECHO "$potlib" | $SED 's,[^/]*$,,'`"$potliblink";; - esac - done - if eval $file_magic_cmd \"\$potlib\" 2>/dev/null | - $SED -e 10q | - $EGREP "$file_magic_regex" > /dev/null; then - func_append newdeplibs " $a_deplib" - a_deplib="" - break 2 - fi - done - done - fi - if test -n "$a_deplib" ; then - droppeddeps=yes - echo - $ECHO "*** Warning: linker path does not have real file for library $a_deplib." - echo "*** I have the capability to make that library automatically link in when" - echo "*** you link to this library. But I can only do this if you have a" - echo "*** shared version of the library, which you do not appear to have" - echo "*** because I did check the linker path looking for a file starting" - if test -z "$potlib" ; then - $ECHO "*** with $libname but no candidates were found. (...for file magic test)" - else - $ECHO "*** with $libname and none of the candidates passed a file format test" - $ECHO "*** using a file magic. Last file checked: $potlib" - fi - fi - ;; - *) - # Add a -L argument. - func_append newdeplibs " $a_deplib" - ;; - esac - done # Gone through all deplibs. - ;; - match_pattern*) - set dummy $deplibs_check_method; shift - match_pattern_regex=`expr "$deplibs_check_method" : "$1 \(.*\)"` - for a_deplib in $deplibs; do - case $a_deplib in - -l*) - func_stripname -l '' "$a_deplib" - name=$func_stripname_result - if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then - case " $predeps $postdeps " in - *" $a_deplib "*) - func_append newdeplibs " $a_deplib" - a_deplib="" - ;; - esac - fi - if test -n "$a_deplib" ; then - libname=`eval "\\$ECHO \"$libname_spec\""` - for i in $lib_search_path $sys_lib_search_path $shlib_search_path; do - potential_libs=`ls $i/$libname[.-]* 2>/dev/null` - for potent_lib in $potential_libs; do - potlib="$potent_lib" # see symlink-check above in file_magic test - if eval "\$ECHO \"$potent_lib\"" 2>/dev/null | $SED 10q | \ - $EGREP "$match_pattern_regex" > /dev/null; then - func_append newdeplibs " $a_deplib" - a_deplib="" - break 2 - fi - done - done - fi - if test -n "$a_deplib" ; then - droppeddeps=yes - echo - $ECHO "*** Warning: linker path does not have real file for library $a_deplib." - echo "*** I have the capability to make that library automatically link in when" - echo "*** you link to this library. But I can only do this if you have a" - echo "*** shared version of the library, which you do not appear to have" - echo "*** because I did check the linker path looking for a file starting" - if test -z "$potlib" ; then - $ECHO "*** with $libname but no candidates were found. (...for regex pattern test)" - else - $ECHO "*** with $libname and none of the candidates passed a file format test" - $ECHO "*** using a regex pattern. Last file checked: $potlib" - fi - fi - ;; - *) - # Add a -L argument. - func_append newdeplibs " $a_deplib" - ;; - esac - done # Gone through all deplibs. 
- ;; - none | unknown | *) - newdeplibs="" - tmp_deplibs=`$ECHO " $deplibs" | $SED 's/ -lc$//; s/ -[LR][^ ]*//g'` - if test "X$allow_libtool_libs_with_static_runtimes" = "Xyes" ; then - for i in $predeps $postdeps ; do - # can't use Xsed below, because $i might contain '/' - tmp_deplibs=`$ECHO " $tmp_deplibs" | $SED "s,$i,,"` - done - fi - case $tmp_deplibs in - *[!\ \ ]*) - echo - if test "X$deplibs_check_method" = "Xnone"; then - echo "*** Warning: inter-library dependencies are not supported in this platform." - else - echo "*** Warning: inter-library dependencies are not known to be supported." - fi - echo "*** All declared inter-library dependencies are being dropped." - droppeddeps=yes - ;; - esac - ;; - esac - versuffix=$versuffix_save - major=$major_save - release=$release_save - libname=$libname_save - name=$name_save - - case $host in - *-*-rhapsody* | *-*-darwin1.[012]) - # On Rhapsody replace the C library with the System framework - newdeplibs=`$ECHO " $newdeplibs" | $SED 's/ -lc / System.ltframework /'` - ;; - esac - - if test "$droppeddeps" = yes; then - if test "$module" = yes; then - echo - echo "*** Warning: libtool could not satisfy all declared inter-library" - $ECHO "*** dependencies of module $libname. Therefore, libtool will create" - echo "*** a static module, that should work as long as the dlopening" - echo "*** application is linked with the -dlopen flag." - if test -z "$global_symbol_pipe"; then - echo - echo "*** However, this would only work if libtool was able to extract symbol" - echo "*** lists from a program, using \`nm' or equivalent, but libtool could" - echo "*** not find such a program. So, this module is probably useless." - echo "*** \`nm' from GNU binutils and a full rebuild may help." - fi - if test "$build_old_libs" = no; then - oldlibs="$output_objdir/$libname.$libext" - build_libtool_libs=module - build_old_libs=yes - else - build_libtool_libs=no - fi - else - echo "*** The inter-library dependencies that have been dropped here will be" - echo "*** automatically added whenever a program is linked with this library" - echo "*** or is declared to -dlopen it." - - if test "$allow_undefined" = no; then - echo - echo "*** Since this library must not contain undefined symbols," - echo "*** because either the platform does not support them or" - echo "*** it was explicitly requested with -no-undefined," - echo "*** libtool will only create a static version of it." - if test "$build_old_libs" = no; then - oldlibs="$output_objdir/$libname.$libext" - build_libtool_libs=module - build_old_libs=yes - else - build_libtool_libs=no - fi - fi - fi - fi - # Done checking deplibs! 
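# --- Editor's illustrative sketch (not part of the deleted ltmain.sh above) ---
# The file_magic/match_pattern checks above resolve symlinked library
# candidates with plain `ls -ld` plus sed rather than readlink, for
# portability to old shells. A minimal standalone sketch of that idiom
# (resolve_link, target, and the example path are hypothetical; it stops at
# the first non-symlink and does not guard against symlink cycles):
resolve_link () {
  target=$1
  while test -h "$target" 2>/dev/null; do
    link=`ls -ld "$target" | sed 's/.* -> //'`
    case $link in
      /*) target=$link ;;                                    # absolute target
      *)  target=`echo "$target" | sed 's,[^/]*$,,'`$link ;; # relative: keep dir
    esac
  done
  echo "$target"
}
resolve_link /usr/lib/libexample.so   # prints the final target (or the path itself)
# --- end sketch ---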
- deplibs=$newdeplibs - fi - # Time to change all our "foo.ltframework" stuff back to "-framework foo" - case $host in - *-*-darwin*) - newdeplibs=`$ECHO " $newdeplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` - new_inherited_linker_flags=`$ECHO " $new_inherited_linker_flags" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` - deplibs=`$ECHO " $deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` - ;; - esac - - # move library search paths that coincide with paths to not yet - # installed libraries to the beginning of the library search list - new_libs= - for path in $notinst_path; do - case " $new_libs " in - *" -L$path/$objdir "*) ;; - *) - case " $deplibs " in - *" -L$path/$objdir "*) - func_append new_libs " -L$path/$objdir" ;; - esac - ;; - esac - done - for deplib in $deplibs; do - case $deplib in - -L*) - case " $new_libs " in - *" $deplib "*) ;; - *) func_append new_libs " $deplib" ;; - esac - ;; - *) func_append new_libs " $deplib" ;; - esac - done - deplibs="$new_libs" - - # All the library-specific variables (install_libdir is set above). - library_names= - old_library= - dlname= - - # Test again, we may have decided not to build it any more - if test "$build_libtool_libs" = yes; then - # Remove ${wl} instances when linking with ld. - # FIXME: should test the right _cmds variable. - case $archive_cmds in - *\$LD\ *) wl= ;; - esac - if test "$hardcode_into_libs" = yes; then - # Hardcode the library paths - hardcode_libdirs= - dep_rpath= - rpath="$finalize_rpath" - test "$opt_mode" != relink && rpath="$compile_rpath$rpath" - for libdir in $rpath; do - if test -n "$hardcode_libdir_flag_spec"; then - if test -n "$hardcode_libdir_separator"; then - func_replace_sysroot "$libdir" - libdir=$func_replace_sysroot_result - if test -z "$hardcode_libdirs"; then - hardcode_libdirs="$libdir" - else - # Just accumulate the unique libdirs. - case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in - *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*) - ;; - *) - func_append hardcode_libdirs "$hardcode_libdir_separator$libdir" - ;; - esac - fi - else - eval flag=\"$hardcode_libdir_flag_spec\" - func_append dep_rpath " $flag" - fi - elif test -n "$runpath_var"; then - case "$perm_rpath " in - *" $libdir "*) ;; - *) func_append perm_rpath " $libdir" ;; - esac - fi - done - # Substitute the hardcoded libdirs into the rpath. - if test -n "$hardcode_libdir_separator" && - test -n "$hardcode_libdirs"; then - libdir="$hardcode_libdirs" - eval "dep_rpath=\"$hardcode_libdir_flag_spec\"" - fi - if test -n "$runpath_var" && test -n "$perm_rpath"; then - # We should set the runpath_var. - rpath= - for dir in $perm_rpath; do - func_append rpath "$dir:" - done - eval "$runpath_var='$rpath\$$runpath_var'; export $runpath_var" - fi - test -n "$dep_rpath" && deplibs="$dep_rpath $deplibs" - fi - - shlibpath="$finalize_shlibpath" - test "$opt_mode" != relink && shlibpath="$compile_shlibpath$shlibpath" - if test -n "$shlibpath"; then - eval "$shlibpath_var='$shlibpath\$$shlibpath_var'; export $shlibpath_var" - fi - - # Get the real and link names of the library. 
- eval shared_ext=\"$shrext_cmds\" - eval library_names=\"$library_names_spec\" - set dummy $library_names - shift - realname="$1" - shift - - if test -n "$soname_spec"; then - eval soname=\"$soname_spec\" - else - soname="$realname" - fi - if test -z "$dlname"; then - dlname=$soname - fi - - lib="$output_objdir/$realname" - linknames= - for link - do - func_append linknames " $link" - done - - # Use standard objects if they are pic - test -z "$pic_flag" && libobjs=`$ECHO "$libobjs" | $SP2NL | $SED "$lo2o" | $NL2SP` - test "X$libobjs" = "X " && libobjs= - - delfiles= - if test -n "$export_symbols" && test -n "$include_expsyms"; then - $opt_dry_run || cp "$export_symbols" "$output_objdir/$libname.uexp" - export_symbols="$output_objdir/$libname.uexp" - func_append delfiles " $export_symbols" - fi - - orig_export_symbols= - case $host_os in - cygwin* | mingw* | cegcc*) - if test -n "$export_symbols" && test -z "$export_symbols_regex"; then - # exporting using user supplied symfile - if test "x`$SED 1q $export_symbols`" != xEXPORTS; then - # and it's NOT already a .def file. Must figure out - # which of the given symbols are data symbols and tag - # them as such. So, trigger use of export_symbols_cmds. - # export_symbols gets reassigned inside the "prepare - # the list of exported symbols" if statement, so the - # include_expsyms logic still works. - orig_export_symbols="$export_symbols" - export_symbols= - always_export_symbols=yes - fi - fi - ;; - esac - - # Prepare the list of exported symbols - if test -z "$export_symbols"; then - if test "$always_export_symbols" = yes || test -n "$export_symbols_regex"; then - func_verbose "generating symbol list for \`$libname.la'" - export_symbols="$output_objdir/$libname.exp" - $opt_dry_run || $RM $export_symbols - cmds=$export_symbols_cmds - save_ifs="$IFS"; IFS='~' - for cmd1 in $cmds; do - IFS="$save_ifs" - # Take the normal branch if the nm_file_list_spec branch - # doesn't work or if tool conversion is not needed. - case $nm_file_list_spec~$to_tool_file_cmd in - *~func_convert_file_noop | *~func_convert_file_msys_to_w32 | ~*) - try_normal_branch=yes - eval cmd=\"$cmd1\" - func_len " $cmd" - len=$func_len_result - ;; - *) - try_normal_branch=no - ;; - esac - if test "$try_normal_branch" = yes \ - && { test "$len" -lt "$max_cmd_len" \ - || test "$max_cmd_len" -le -1; } - then - func_show_eval "$cmd" 'exit $?' - skipped_export=false - elif test -n "$nm_file_list_spec"; then - func_basename "$output" - output_la=$func_basename_result - save_libobjs=$libobjs - save_output=$output - output=${output_objdir}/${output_la}.nm - func_to_tool_file "$output" - libobjs=$nm_file_list_spec$func_to_tool_file_result - func_append delfiles " $output" - func_verbose "creating $NM input file list: $output" - for obj in $save_libobjs; do - func_to_tool_file "$obj" - $ECHO "$func_to_tool_file_result" - done > "$output" - eval cmd=\"$cmd1\" - func_show_eval "$cmd" 'exit $?' - output=$save_output - libobjs=$save_libobjs - skipped_export=false - else - # The command line is too long to execute in one step. - func_verbose "using reloadable object file for export list..." - skipped_export=: - # Break out early, otherwise skipped_export may be - # set to false by a later but shorter cmd. 
- break - fi - done - IFS="$save_ifs" - if test -n "$export_symbols_regex" && test "X$skipped_export" != "X:"; then - func_show_eval '$EGREP -e "$export_symbols_regex" "$export_symbols" > "${export_symbols}T"' - func_show_eval '$MV "${export_symbols}T" "$export_symbols"' - fi - fi - fi - - if test -n "$export_symbols" && test -n "$include_expsyms"; then - tmp_export_symbols="$export_symbols" - test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols" - $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"' - fi - - if test "X$skipped_export" != "X:" && test -n "$orig_export_symbols"; then - # The given exports_symbols file has to be filtered, so filter it. - func_verbose "filter symbol list for \`$libname.la' to tag DATA exports" - # FIXME: $output_objdir/$libname.filter potentially contains lots of - # 's' commands which not all seds can handle. GNU sed should be fine - # though. Also, the filter scales superlinearly with the number of - # global variables. join(1) would be nice here, but unfortunately - # isn't a blessed tool. - $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter - func_append delfiles " $export_symbols $output_objdir/$libname.filter" - export_symbols=$output_objdir/$libname.def - $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols - fi - - tmp_deplibs= - for test_deplib in $deplibs; do - case " $convenience " in - *" $test_deplib "*) ;; - *) - func_append tmp_deplibs " $test_deplib" - ;; - esac - done - deplibs="$tmp_deplibs" - - if test -n "$convenience"; then - if test -n "$whole_archive_flag_spec" && - test "$compiler_needs_object" = yes && - test -z "$libobjs"; then - # extract the archives, so we have objects to list. - # TODO: could optimize this to just extract one archive. - whole_archive_flag_spec= - fi - if test -n "$whole_archive_flag_spec"; then - save_libobjs=$libobjs - eval libobjs=\"\$libobjs $whole_archive_flag_spec\" - test "X$libobjs" = "X " && libobjs= - else - gentop="$output_objdir/${outputname}x" - func_append generated " $gentop" - - func_extract_archives $gentop $convenience - func_append libobjs " $func_extract_archives_result" - test "X$libobjs" = "X " && libobjs= - fi - fi - - if test "$thread_safe" = yes && test -n "$thread_safe_flag_spec"; then - eval flag=\"$thread_safe_flag_spec\" - func_append linker_flags " $flag" - fi - - # Make a backup of the uninstalled library when relinking - if test "$opt_mode" = relink; then - $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}U && $MV $realname ${realname}U)' || exit $? - fi - - # Do each of the archive commands. - if test "$module" = yes && test -n "$module_cmds" ; then - if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then - eval test_cmds=\"$module_expsym_cmds\" - cmds=$module_expsym_cmds - else - eval test_cmds=\"$module_cmds\" - cmds=$module_cmds - fi - else - if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then - eval test_cmds=\"$archive_expsym_cmds\" - cmds=$archive_expsym_cmds - else - eval test_cmds=\"$archive_cmds\" - cmds=$archive_cmds - fi - fi - - if test "X$skipped_export" != "X:" && - func_len " $test_cmds" && - len=$func_len_result && - test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then - : - else - # The command line is too long to link in one step, link piecewise - # or, if using GNU ld and skipped_export is not :, use a linker - # script. 
- - # Save the value of $output and $libobjs because we want to - # use them later. If we have whole_archive_flag_spec, we - # want to use save_libobjs as it was before - # whole_archive_flag_spec was expanded, because we can't - # assume the linker understands whole_archive_flag_spec. - # This may have to be revisited, in case too many - # convenience libraries get linked in and end up exceeding - # the spec. - if test -z "$convenience" || test -z "$whole_archive_flag_spec"; then - save_libobjs=$libobjs - fi - save_output=$output - func_basename "$output" - output_la=$func_basename_result - - # Clear the reloadable object creation command queue and - # initialize k to one. - test_cmds= - concat_cmds= - objlist= - last_robj= - k=1 - - if test -n "$save_libobjs" && test "X$skipped_export" != "X:" && test "$with_gnu_ld" = yes; then - output=${output_objdir}/${output_la}.lnkscript - func_verbose "creating GNU ld script: $output" - echo 'INPUT (' > $output - for obj in $save_libobjs - do - func_to_tool_file "$obj" - $ECHO "$func_to_tool_file_result" >> $output - done - echo ')' >> $output - func_append delfiles " $output" - func_to_tool_file "$output" - output=$func_to_tool_file_result - elif test -n "$save_libobjs" && test "X$skipped_export" != "X:" && test "X$file_list_spec" != X; then - output=${output_objdir}/${output_la}.lnk - func_verbose "creating linker input file list: $output" - : > $output - set x $save_libobjs - shift - firstobj= - if test "$compiler_needs_object" = yes; then - firstobj="$1 " - shift - fi - for obj - do - func_to_tool_file "$obj" - $ECHO "$func_to_tool_file_result" >> $output - done - func_append delfiles " $output" - func_to_tool_file "$output" - output=$firstobj\"$file_list_spec$func_to_tool_file_result\" - else - if test -n "$save_libobjs"; then - func_verbose "creating reloadable object files..." - output=$output_objdir/$output_la-${k}.$objext - eval test_cmds=\"$reload_cmds\" - func_len " $test_cmds" - len0=$func_len_result - len=$len0 - - # Loop over the list of objects to be linked. - for obj in $save_libobjs - do - func_len " $obj" - func_arith $len + $func_len_result - len=$func_arith_result - if test "X$objlist" = X || - test "$len" -lt "$max_cmd_len"; then - func_append objlist " $obj" - else - # The command $test_cmds is almost too long, add a - # command to the queue. - if test "$k" -eq 1 ; then - # The first file doesn't have a previous command to add. - reload_objs=$objlist - eval concat_cmds=\"$reload_cmds\" - else - # All subsequent reloadable object files will link in - # the last one created. - reload_objs="$objlist $last_robj" - eval concat_cmds=\"\$concat_cmds~$reload_cmds~\$RM $last_robj\" - fi - last_robj=$output_objdir/$output_la-${k}.$objext - func_arith $k + 1 - k=$func_arith_result - output=$output_objdir/$output_la-${k}.$objext - objlist=" $obj" - func_len " $last_robj" - func_arith $len0 + $func_len_result - len=$func_arith_result - fi - done - # Handle the remaining objects by creating one last - # reloadable object file. All subsequent reloadable object - # files will link in the last one created. 
- test -z "$concat_cmds" || concat_cmds=$concat_cmds~ - reload_objs="$objlist $last_robj" - eval concat_cmds=\"\${concat_cmds}$reload_cmds\" - if test -n "$last_robj"; then - eval concat_cmds=\"\${concat_cmds}~\$RM $last_robj\" - fi - func_append delfiles " $output" - - else - output= - fi - - if ${skipped_export-false}; then - func_verbose "generating symbol list for \`$libname.la'" - export_symbols="$output_objdir/$libname.exp" - $opt_dry_run || $RM $export_symbols - libobjs=$output - # Append the command to create the export file. - test -z "$concat_cmds" || concat_cmds=$concat_cmds~ - eval concat_cmds=\"\$concat_cmds$export_symbols_cmds\" - if test -n "$last_robj"; then - eval concat_cmds=\"\$concat_cmds~\$RM $last_robj\" - fi - fi - - test -n "$save_libobjs" && - func_verbose "creating a temporary reloadable object file: $output" - - # Loop through the commands generated above and execute them. - save_ifs="$IFS"; IFS='~' - for cmd in $concat_cmds; do - IFS="$save_ifs" - $opt_silent || { - func_quote_for_expand "$cmd" - eval "func_echo $func_quote_for_expand_result" - } - $opt_dry_run || eval "$cmd" || { - lt_exit=$? - - # Restore the uninstalled library and exit - if test "$opt_mode" = relink; then - ( cd "$output_objdir" && \ - $RM "${realname}T" && \ - $MV "${realname}U" "$realname" ) - fi - - exit $lt_exit - } - done - IFS="$save_ifs" - - if test -n "$export_symbols_regex" && ${skipped_export-false}; then - func_show_eval '$EGREP -e "$export_symbols_regex" "$export_symbols" > "${export_symbols}T"' - func_show_eval '$MV "${export_symbols}T" "$export_symbols"' - fi - fi - - if ${skipped_export-false}; then - if test -n "$export_symbols" && test -n "$include_expsyms"; then - tmp_export_symbols="$export_symbols" - test -n "$orig_export_symbols" && tmp_export_symbols="$orig_export_symbols" - $opt_dry_run || eval '$ECHO "$include_expsyms" | $SP2NL >> "$tmp_export_symbols"' - fi - - if test -n "$orig_export_symbols"; then - # The given exports_symbols file has to be filtered, so filter it. - func_verbose "filter symbol list for \`$libname.la' to tag DATA exports" - # FIXME: $output_objdir/$libname.filter potentially contains lots of - # 's' commands which not all seds can handle. GNU sed should be fine - # though. Also, the filter scales superlinearly with the number of - # global variables. join(1) would be nice here, but unfortunately - # isn't a blessed tool. - $opt_dry_run || $SED -e '/[ ,]DATA/!d;s,\(.*\)\([ \,].*\),s|^\1$|\1\2|,' < $export_symbols > $output_objdir/$libname.filter - func_append delfiles " $export_symbols $output_objdir/$libname.filter" - export_symbols=$output_objdir/$libname.def - $opt_dry_run || $SED -f $output_objdir/$libname.filter < $orig_export_symbols > $export_symbols - fi - fi - - libobjs=$output - # Restore the value of output. - output=$save_output - - if test -n "$convenience" && test -n "$whole_archive_flag_spec"; then - eval libobjs=\"\$libobjs $whole_archive_flag_spec\" - test "X$libobjs" = "X " && libobjs= - fi - # Expand the library linking commands again to reset the - # value of $libobjs for piecewise linking. - - # Do each of the archive commands. 
- if test "$module" = yes && test -n "$module_cmds" ; then - if test -n "$export_symbols" && test -n "$module_expsym_cmds"; then - cmds=$module_expsym_cmds - else - cmds=$module_cmds - fi - else - if test -n "$export_symbols" && test -n "$archive_expsym_cmds"; then - cmds=$archive_expsym_cmds - else - cmds=$archive_cmds - fi - fi - fi - - if test -n "$delfiles"; then - # Append the command to remove temporary files to $cmds. - eval cmds=\"\$cmds~\$RM $delfiles\" - fi - - # Add any objects from preloaded convenience libraries - if test -n "$dlprefiles"; then - gentop="$output_objdir/${outputname}x" - func_append generated " $gentop" - - func_extract_archives $gentop $dlprefiles - func_append libobjs " $func_extract_archives_result" - test "X$libobjs" = "X " && libobjs= - fi - - save_ifs="$IFS"; IFS='~' - for cmd in $cmds; do - IFS="$save_ifs" - eval cmd=\"$cmd\" - $opt_silent || { - func_quote_for_expand "$cmd" - eval "func_echo $func_quote_for_expand_result" - } - $opt_dry_run || eval "$cmd" || { - lt_exit=$? - - # Restore the uninstalled library and exit - if test "$opt_mode" = relink; then - ( cd "$output_objdir" && \ - $RM "${realname}T" && \ - $MV "${realname}U" "$realname" ) - fi - - exit $lt_exit - } - done - IFS="$save_ifs" - - # Restore the uninstalled library and exit - if test "$opt_mode" = relink; then - $opt_dry_run || eval '(cd $output_objdir && $RM ${realname}T && $MV $realname ${realname}T && $MV ${realname}U $realname)' || exit $? - - if test -n "$convenience"; then - if test -z "$whole_archive_flag_spec"; then - func_show_eval '${RM}r "$gentop"' - fi - fi - - exit $EXIT_SUCCESS - fi - - # Create links to the real library. - for linkname in $linknames; do - if test "$realname" != "$linkname"; then - func_show_eval '(cd "$output_objdir" && $RM "$linkname" && $LN_S "$realname" "$linkname")' 'exit $?' - fi - done - - # If -module or -export-dynamic was specified, set the dlname. - if test "$module" = yes || test "$export_dynamic" = yes; then - # On all known operating systems, these are identical. - dlname="$soname" - fi - fi - ;; - - obj) - if test -n "$dlfiles$dlprefiles" || test "$dlself" != no; then - func_warning "\`-dlopen' is ignored for objects" - fi - - case " $deplibs" in - *\ -l* | *\ -L*) - func_warning "\`-l' and \`-L' are ignored for objects" ;; - esac - - test -n "$rpath" && \ - func_warning "\`-rpath' is ignored for objects" - - test -n "$xrpath" && \ - func_warning "\`-R' is ignored for objects" - - test -n "$vinfo" && \ - func_warning "\`-version-info' is ignored for objects" - - test -n "$release" && \ - func_warning "\`-release' is ignored for objects" - - case $output in - *.lo) - test -n "$objs$old_deplibs" && \ - func_fatal_error "cannot build library object \`$output' from non-libtool objects" - - libobj=$output - func_lo2o "$libobj" - obj=$func_lo2o_result - ;; - *) - libobj= - obj="$output" - ;; - esac - - # Delete the old objects. - $opt_dry_run || $RM $obj $libobj - - # Objects from convenience libraries. This assumes - # single-version convenience libraries. Whenever we create - # different ones for PIC/non-PIC, this we'll have to duplicate - # the extraction. - reload_conv_objs= - gentop= - # reload_cmds runs $LD directly, so let us get rid of - # -Wl from whole_archive_flag_spec and hope we can get by with - # turning comma into space.. 
- wl= - - if test -n "$convenience"; then - if test -n "$whole_archive_flag_spec"; then - eval tmp_whole_archive_flags=\"$whole_archive_flag_spec\" - reload_conv_objs=$reload_objs\ `$ECHO "$tmp_whole_archive_flags" | $SED 's|,| |g'` - else - gentop="$output_objdir/${obj}x" - func_append generated " $gentop" - - func_extract_archives $gentop $convenience - reload_conv_objs="$reload_objs $func_extract_archives_result" - fi - fi - - # If we're not building shared, we need to use non_pic_objs - test "$build_libtool_libs" != yes && libobjs="$non_pic_objects" - - # Create the old-style object. - reload_objs="$objs$old_deplibs "`$ECHO "$libobjs" | $SP2NL | $SED "/\.${libext}$/d; /\.lib$/d; $lo2o" | $NL2SP`" $reload_conv_objs" ### testsuite: skip nested quoting test - - output="$obj" - func_execute_cmds "$reload_cmds" 'exit $?' - - # Exit if we aren't doing a library object file. - if test -z "$libobj"; then - if test -n "$gentop"; then - func_show_eval '${RM}r "$gentop"' - fi - - exit $EXIT_SUCCESS - fi - - if test "$build_libtool_libs" != yes; then - if test -n "$gentop"; then - func_show_eval '${RM}r "$gentop"' - fi - - # Create an invalid libtool object if no PIC, so that we don't - # accidentally link it into a program. - # $show "echo timestamp > $libobj" - # $opt_dry_run || eval "echo timestamp > $libobj" || exit $? - exit $EXIT_SUCCESS - fi - - if test -n "$pic_flag" || test "$pic_mode" != default; then - # Only do commands if we really have different PIC objects. - reload_objs="$libobjs $reload_conv_objs" - output="$libobj" - func_execute_cmds "$reload_cmds" 'exit $?' - fi - - if test -n "$gentop"; then - func_show_eval '${RM}r "$gentop"' - fi - - exit $EXIT_SUCCESS - ;; - - prog) - case $host in - *cygwin*) func_stripname '' '.exe' "$output" - output=$func_stripname_result.exe;; - esac - test -n "$vinfo" && \ - func_warning "\`-version-info' is ignored for programs" - - test -n "$release" && \ - func_warning "\`-release' is ignored for programs" - - test "$preload" = yes \ - && test "$dlopen_support" = unknown \ - && test "$dlopen_self" = unknown \ - && test "$dlopen_self_static" = unknown && \ - func_warning "\`LT_INIT([dlopen])' not used. Assuming no dlopen support." - - case $host in - *-*-rhapsody* | *-*-darwin1.[012]) - # On Rhapsody replace the C library is the System framework - compile_deplibs=`$ECHO " $compile_deplibs" | $SED 's/ -lc / System.ltframework /'` - finalize_deplibs=`$ECHO " $finalize_deplibs" | $SED 's/ -lc / System.ltframework /'` - ;; - esac - - case $host in - *-*-darwin*) - # Don't allow lazy linking, it breaks C++ global constructors - # But is supposedly fixed on 10.4 or later (yay!). 
- if test "$tagname" = CXX ; then - case ${MACOSX_DEPLOYMENT_TARGET-10.0} in - 10.[0123]) - func_append compile_command " ${wl}-bind_at_load" - func_append finalize_command " ${wl}-bind_at_load" - ;; - esac - fi - # Time to change all our "foo.ltframework" stuff back to "-framework foo" - compile_deplibs=`$ECHO " $compile_deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` - finalize_deplibs=`$ECHO " $finalize_deplibs" | $SED 's% \([^ $]*\).ltframework% -framework \1%g'` - ;; - esac - - - # move library search paths that coincide with paths to not yet - # installed libraries to the beginning of the library search list - new_libs= - for path in $notinst_path; do - case " $new_libs " in - *" -L$path/$objdir "*) ;; - *) - case " $compile_deplibs " in - *" -L$path/$objdir "*) - func_append new_libs " -L$path/$objdir" ;; - esac - ;; - esac - done - for deplib in $compile_deplibs; do - case $deplib in - -L*) - case " $new_libs " in - *" $deplib "*) ;; - *) func_append new_libs " $deplib" ;; - esac - ;; - *) func_append new_libs " $deplib" ;; - esac - done - compile_deplibs="$new_libs" - - - func_append compile_command " $compile_deplibs" - func_append finalize_command " $finalize_deplibs" - - if test -n "$rpath$xrpath"; then - # If the user specified any rpath flags, then add them. - for libdir in $rpath $xrpath; do - # This is the magic to use -rpath. - case "$finalize_rpath " in - *" $libdir "*) ;; - *) func_append finalize_rpath " $libdir" ;; - esac - done - fi - - # Now hardcode the library paths - rpath= - hardcode_libdirs= - for libdir in $compile_rpath $finalize_rpath; do - if test -n "$hardcode_libdir_flag_spec"; then - if test -n "$hardcode_libdir_separator"; then - if test -z "$hardcode_libdirs"; then - hardcode_libdirs="$libdir" - else - # Just accumulate the unique libdirs. - case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in - *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*) - ;; - *) - func_append hardcode_libdirs "$hardcode_libdir_separator$libdir" - ;; - esac - fi - else - eval flag=\"$hardcode_libdir_flag_spec\" - func_append rpath " $flag" - fi - elif test -n "$runpath_var"; then - case "$perm_rpath " in - *" $libdir "*) ;; - *) func_append perm_rpath " $libdir" ;; - esac - fi - case $host in - *-*-cygwin* | *-*-mingw* | *-*-pw32* | *-*-os2* | *-cegcc*) - testbindir=`${ECHO} "$libdir" | ${SED} -e 's*/lib$*/bin*'` - case :$dllsearchpath: in - *":$libdir:"*) ;; - ::) dllsearchpath=$libdir;; - *) func_append dllsearchpath ":$libdir";; - esac - case :$dllsearchpath: in - *":$testbindir:"*) ;; - ::) dllsearchpath=$testbindir;; - *) func_append dllsearchpath ":$testbindir";; - esac - ;; - esac - done - # Substitute the hardcoded libdirs into the rpath. - if test -n "$hardcode_libdir_separator" && - test -n "$hardcode_libdirs"; then - libdir="$hardcode_libdirs" - eval rpath=\" $hardcode_libdir_flag_spec\" - fi - compile_rpath="$rpath" - - rpath= - hardcode_libdirs= - for libdir in $finalize_rpath; do - if test -n "$hardcode_libdir_flag_spec"; then - if test -n "$hardcode_libdir_separator"; then - if test -z "$hardcode_libdirs"; then - hardcode_libdirs="$libdir" - else - # Just accumulate the unique libdirs. 
- case $hardcode_libdir_separator$hardcode_libdirs$hardcode_libdir_separator in - *"$hardcode_libdir_separator$libdir$hardcode_libdir_separator"*) - ;; - *) - func_append hardcode_libdirs "$hardcode_libdir_separator$libdir" - ;; - esac - fi - else - eval flag=\"$hardcode_libdir_flag_spec\" - func_append rpath " $flag" - fi - elif test -n "$runpath_var"; then - case "$finalize_perm_rpath " in - *" $libdir "*) ;; - *) func_append finalize_perm_rpath " $libdir" ;; - esac - fi - done - # Substitute the hardcoded libdirs into the rpath. - if test -n "$hardcode_libdir_separator" && - test -n "$hardcode_libdirs"; then - libdir="$hardcode_libdirs" - eval rpath=\" $hardcode_libdir_flag_spec\" - fi - finalize_rpath="$rpath" - - if test -n "$libobjs" && test "$build_old_libs" = yes; then - # Transform all the library objects into standard objects. - compile_command=`$ECHO "$compile_command" | $SP2NL | $SED "$lo2o" | $NL2SP` - finalize_command=`$ECHO "$finalize_command" | $SP2NL | $SED "$lo2o" | $NL2SP` - fi - - func_generate_dlsyms "$outputname" "@PROGRAM@" "no" - - # template prelinking step - if test -n "$prelink_cmds"; then - func_execute_cmds "$prelink_cmds" 'exit $?' - fi - - wrappers_required=yes - case $host in - *cegcc* | *mingw32ce*) - # Disable wrappers for cegcc and mingw32ce hosts, we are cross compiling anyway. - wrappers_required=no - ;; - *cygwin* | *mingw* ) - if test "$build_libtool_libs" != yes; then - wrappers_required=no - fi - ;; - *) - if test "$need_relink" = no || test "$build_libtool_libs" != yes; then - wrappers_required=no - fi - ;; - esac - if test "$wrappers_required" = no; then - # Replace the output file specification. - compile_command=`$ECHO "$compile_command" | $SED 's%@OUTPUT@%'"$output"'%g'` - link_command="$compile_command$compile_rpath" - - # We have no uninstalled library dependencies, so finalize right now. - exit_status=0 - func_show_eval "$link_command" 'exit_status=$?' - - if test -n "$postlink_cmds"; then - func_to_tool_file "$output" - postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'` - func_execute_cmds "$postlink_cmds" 'exit $?' - fi - - # Delete the generated files. - if test -f "$output_objdir/${outputname}S.${objext}"; then - func_show_eval '$RM "$output_objdir/${outputname}S.${objext}"' - fi - - exit $exit_status - fi - - if test -n "$compile_shlibpath$finalize_shlibpath"; then - compile_command="$shlibpath_var=\"$compile_shlibpath$finalize_shlibpath\$$shlibpath_var\" $compile_command" - fi - if test -n "$finalize_shlibpath"; then - finalize_command="$shlibpath_var=\"$finalize_shlibpath\$$shlibpath_var\" $finalize_command" - fi - - compile_var= - finalize_var= - if test -n "$runpath_var"; then - if test -n "$perm_rpath"; then - # We should set the runpath_var. - rpath= - for dir in $perm_rpath; do - func_append rpath "$dir:" - done - compile_var="$runpath_var=\"$rpath\$$runpath_var\" " - fi - if test -n "$finalize_perm_rpath"; then - # We should set the runpath_var. - rpath= - for dir in $finalize_perm_rpath; do - func_append rpath "$dir:" - done - finalize_var="$runpath_var=\"$rpath\$$runpath_var\" " - fi - fi - - if test "$no_install" = yes; then - # We don't need to create a wrapper script. - link_command="$compile_var$compile_command$compile_rpath" - # Replace the output file specification. - link_command=`$ECHO "$link_command" | $SED 's%@OUTPUT@%'"$output"'%g'` - # Delete the old output file. 
- $opt_dry_run || $RM $output - # Link the executable and exit - func_show_eval "$link_command" 'exit $?' - - if test -n "$postlink_cmds"; then - func_to_tool_file "$output" - postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'` - func_execute_cmds "$postlink_cmds" 'exit $?' - fi - - exit $EXIT_SUCCESS - fi - - if test "$hardcode_action" = relink; then - # Fast installation is not supported - link_command="$compile_var$compile_command$compile_rpath" - relink_command="$finalize_var$finalize_command$finalize_rpath" - - func_warning "this platform does not like uninstalled shared libraries" - func_warning "\`$output' will be relinked during installation" - else - if test "$fast_install" != no; then - link_command="$finalize_var$compile_command$finalize_rpath" - if test "$fast_install" = yes; then - relink_command=`$ECHO "$compile_var$compile_command$compile_rpath" | $SED 's%@OUTPUT@%\$progdir/\$file%g'` - else - # fast_install is set to needless - relink_command= - fi - else - link_command="$compile_var$compile_command$compile_rpath" - relink_command="$finalize_var$finalize_command$finalize_rpath" - fi - fi - - # Replace the output file specification. - link_command=`$ECHO "$link_command" | $SED 's%@OUTPUT@%'"$output_objdir/$outputname"'%g'` - - # Delete the old output files. - $opt_dry_run || $RM $output $output_objdir/$outputname $output_objdir/lt-$outputname - - func_show_eval "$link_command" 'exit $?' - - if test -n "$postlink_cmds"; then - func_to_tool_file "$output_objdir/$outputname" - postlink_cmds=`func_echo_all "$postlink_cmds" | $SED -e 's%@OUTPUT@%'"$output_objdir/$outputname"'%g' -e 's%@TOOL_OUTPUT@%'"$func_to_tool_file_result"'%g'` - func_execute_cmds "$postlink_cmds" 'exit $?' - fi - - # Now create the wrapper script. - func_verbose "creating $output" - - # Quote the relink command for shipping. - if test -n "$relink_command"; then - # Preserve any variables that may affect compiler behavior - for var in $variables_saved_for_relink; do - if eval test -z \"\${$var+set}\"; then - relink_command="{ test -z \"\${$var+set}\" || $lt_unset $var || { $var=; export $var; }; }; $relink_command" - elif eval var_value=\$$var; test -z "$var_value"; then - relink_command="$var=; export $var; $relink_command" - else - func_quote_for_eval "$var_value" - relink_command="$var=$func_quote_for_eval_result; export $var; $relink_command" - fi - done - relink_command="(cd `pwd`; $relink_command)" - relink_command=`$ECHO "$relink_command" | $SED "$sed_quote_subst"` - fi - - # Only actually do things if not in dry run mode. - $opt_dry_run || { - # win32 will think the script is a binary if it has - # a .exe suffix, so we strip it off here. - case $output in - *.exe) func_stripname '' '.exe' "$output" - output=$func_stripname_result ;; - esac - # test for cygwin because mv fails w/o .exe extensions - case $host in - *cygwin*) - exeext=.exe - func_stripname '' '.exe' "$outputname" - outputname=$func_stripname_result ;; - *) exeext= ;; - esac - case $host in - *cygwin* | *mingw* ) - func_dirname_and_basename "$output" "" "." 
- output_name=$func_basename_result - output_path=$func_dirname_result - cwrappersource="$output_path/$objdir/lt-$output_name.c" - cwrapper="$output_path/$output_name.exe" - $RM $cwrappersource $cwrapper - trap "$RM $cwrappersource $cwrapper; exit $EXIT_FAILURE" 1 2 15 - - func_emit_cwrapperexe_src > $cwrappersource - - # The wrapper executable is built using the $host compiler, - # because it contains $host paths and files. If cross- - # compiling, it, like the target executable, must be - # executed on the $host or under an emulation environment. - $opt_dry_run || { - $LTCC $LTCFLAGS -o $cwrapper $cwrappersource - $STRIP $cwrapper - } - - # Now, create the wrapper script for func_source use: - func_ltwrapper_scriptname $cwrapper - $RM $func_ltwrapper_scriptname_result - trap "$RM $func_ltwrapper_scriptname_result; exit $EXIT_FAILURE" 1 2 15 - $opt_dry_run || { - # note: this script will not be executed, so do not chmod. - if test "x$build" = "x$host" ; then - $cwrapper --lt-dump-script > $func_ltwrapper_scriptname_result - else - func_emit_wrapper no > $func_ltwrapper_scriptname_result - fi - } - ;; - * ) - $RM $output - trap "$RM $output; exit $EXIT_FAILURE" 1 2 15 - - func_emit_wrapper no > $output - chmod +x $output - ;; - esac - } - exit $EXIT_SUCCESS - ;; - esac - - # See if we need to build an old-fashioned archive. - for oldlib in $oldlibs; do - - if test "$build_libtool_libs" = convenience; then - oldobjs="$libobjs_save $symfileobj" - addlibs="$convenience" - build_libtool_libs=no - else - if test "$build_libtool_libs" = module; then - oldobjs="$libobjs_save" - build_libtool_libs=no - else - oldobjs="$old_deplibs $non_pic_objects" - if test "$preload" = yes && test -f "$symfileobj"; then - func_append oldobjs " $symfileobj" - fi - fi - addlibs="$old_convenience" - fi - - if test -n "$addlibs"; then - gentop="$output_objdir/${outputname}x" - func_append generated " $gentop" - - func_extract_archives $gentop $addlibs - func_append oldobjs " $func_extract_archives_result" - fi - - # Do each command in the archive commands. - if test -n "$old_archive_from_new_cmds" && test "$build_libtool_libs" = yes; then - cmds=$old_archive_from_new_cmds - else - - # Add any objects from preloaded convenience libraries - if test -n "$dlprefiles"; then - gentop="$output_objdir/${outputname}x" - func_append generated " $gentop" - - func_extract_archives $gentop $dlprefiles - func_append oldobjs " $func_extract_archives_result" - fi - - # POSIX demands no paths to be encoded in archives. We have - # to avoid creating archives with duplicate basenames if we - # might have to extract them afterwards, e.g., when creating a - # static archive out of a convenience library, or when linking - # the entirety of a libtool archive into another (currently - # not supported by libtool). - if (for obj in $oldobjs - do - func_basename "$obj" - $ECHO "$func_basename_result" - done | sort | sort -uc >/dev/null 2>&1); then - : - else - echo "copying selected object files to avoid basename conflicts..." - gentop="$output_objdir/${outputname}x" - func_append generated " $gentop" - func_mkdir_p "$gentop" - save_oldobjs=$oldobjs - oldobjs= - counter=1 - for obj in $save_oldobjs - do - func_basename "$obj" - objbase="$func_basename_result" - case " $oldobjs " in - " ") oldobjs=$obj ;; - *[\ /]"$objbase "*) - while :; do - # Make sure we don't pick an alternate name that also - # overlaps. 
- newobj=lt$counter-$objbase - func_arith $counter + 1 - counter=$func_arith_result - case " $oldobjs " in - *[\ /]"$newobj "*) ;; - *) if test ! -f "$gentop/$newobj"; then break; fi ;; - esac - done - func_show_eval "ln $obj $gentop/$newobj || cp $obj $gentop/$newobj" - func_append oldobjs " $gentop/$newobj" - ;; - *) func_append oldobjs " $obj" ;; - esac - done - fi - func_to_tool_file "$oldlib" func_convert_file_msys_to_w32 - tool_oldlib=$func_to_tool_file_result - eval cmds=\"$old_archive_cmds\" - - func_len " $cmds" - len=$func_len_result - if test "$len" -lt "$max_cmd_len" || test "$max_cmd_len" -le -1; then - cmds=$old_archive_cmds - elif test -n "$archiver_list_spec"; then - func_verbose "using command file archive linking..." - for obj in $oldobjs - do - func_to_tool_file "$obj" - $ECHO "$func_to_tool_file_result" - done > $output_objdir/$libname.libcmd - func_to_tool_file "$output_objdir/$libname.libcmd" - oldobjs=" $archiver_list_spec$func_to_tool_file_result" - cmds=$old_archive_cmds - else - # the command line is too long to link in one step, link in parts - func_verbose "using piecewise archive linking..." - save_RANLIB=$RANLIB - RANLIB=: - objlist= - concat_cmds= - save_oldobjs=$oldobjs - oldobjs= - # Is there a better way of finding the last object in the list? - for obj in $save_oldobjs - do - last_oldobj=$obj - done - eval test_cmds=\"$old_archive_cmds\" - func_len " $test_cmds" - len0=$func_len_result - len=$len0 - for obj in $save_oldobjs - do - func_len " $obj" - func_arith $len + $func_len_result - len=$func_arith_result - func_append objlist " $obj" - if test "$len" -lt "$max_cmd_len"; then - : - else - # the above command should be used before it gets too long - oldobjs=$objlist - if test "$obj" = "$last_oldobj" ; then - RANLIB=$save_RANLIB - fi - test -z "$concat_cmds" || concat_cmds=$concat_cmds~ - eval concat_cmds=\"\${concat_cmds}$old_archive_cmds\" - objlist= - len=$len0 - fi - done - RANLIB=$save_RANLIB - oldobjs=$objlist - if test "X$oldobjs" = "X" ; then - eval cmds=\"\$concat_cmds\" - else - eval cmds=\"\$concat_cmds~\$old_archive_cmds\" - fi - fi - fi - func_execute_cmds "$cmds" 'exit $?' - done - - test -n "$generated" && \ - func_show_eval "${RM}r$generated" - - # Now create the libtool archive. - case $output in - *.la) - old_library= - test "$build_old_libs" = yes && old_library="$libname.$libext" - func_verbose "creating $output" - - # Preserve any variables that may affect compiler behavior - for var in $variables_saved_for_relink; do - if eval test -z \"\${$var+set}\"; then - relink_command="{ test -z \"\${$var+set}\" || $lt_unset $var || { $var=; export $var; }; }; $relink_command" - elif eval var_value=\$$var; test -z "$var_value"; then - relink_command="$var=; export $var; $relink_command" - else - func_quote_for_eval "$var_value" - relink_command="$var=$func_quote_for_eval_result; export $var; $relink_command" - fi - done - # Quote the link command for shipping. - relink_command="(cd `pwd`; $SHELL $progpath $preserve_args --mode=relink $libtool_args @inst_prefix_dir@)" - relink_command=`$ECHO "$relink_command" | $SED "$sed_quote_subst"` - if test "$hardcode_automatic" = yes ; then - relink_command= - fi - - # Only create the output if not a dry run. 
- $opt_dry_run || { - for installed in no yes; do - if test "$installed" = yes; then - if test -z "$install_libdir"; then - break - fi - output="$output_objdir/$outputname"i - # Replace all uninstalled libtool libraries with the installed ones - newdependency_libs= - for deplib in $dependency_libs; do - case $deplib in - *.la) - func_basename "$deplib" - name="$func_basename_result" - func_resolve_sysroot "$deplib" - eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $func_resolve_sysroot_result` - test -z "$libdir" && \ - func_fatal_error "\`$deplib' is not a valid libtool archive" - func_append newdependency_libs " ${lt_sysroot:+=}$libdir/$name" - ;; - -L*) - func_stripname -L '' "$deplib" - func_replace_sysroot "$func_stripname_result" - func_append newdependency_libs " -L$func_replace_sysroot_result" - ;; - -R*) - func_stripname -R '' "$deplib" - func_replace_sysroot "$func_stripname_result" - func_append newdependency_libs " -R$func_replace_sysroot_result" - ;; - *) func_append newdependency_libs " $deplib" ;; - esac - done - dependency_libs="$newdependency_libs" - newdlfiles= - - for lib in $dlfiles; do - case $lib in - *.la) - func_basename "$lib" - name="$func_basename_result" - eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib` - test -z "$libdir" && \ - func_fatal_error "\`$lib' is not a valid libtool archive" - func_append newdlfiles " ${lt_sysroot:+=}$libdir/$name" - ;; - *) func_append newdlfiles " $lib" ;; - esac - done - dlfiles="$newdlfiles" - newdlprefiles= - for lib in $dlprefiles; do - case $lib in - *.la) - # Only pass preopened files to the pseudo-archive (for - # eventual linking with the app. that links it) if we - # didn't already link the preopened objects directly into - # the library: - func_basename "$lib" - name="$func_basename_result" - eval libdir=`${SED} -n -e 's/^libdir=\(.*\)$/\1/p' $lib` - test -z "$libdir" && \ - func_fatal_error "\`$lib' is not a valid libtool archive" - func_append newdlprefiles " ${lt_sysroot:+=}$libdir/$name" - ;; - esac - done - dlprefiles="$newdlprefiles" - else - newdlfiles= - for lib in $dlfiles; do - case $lib in - [\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;; - *) abs=`pwd`"/$lib" ;; - esac - func_append newdlfiles " $abs" - done - dlfiles="$newdlfiles" - newdlprefiles= - for lib in $dlprefiles; do - case $lib in - [\\/]* | [A-Za-z]:[\\/]*) abs="$lib" ;; - *) abs=`pwd`"/$lib" ;; - esac - func_append newdlprefiles " $abs" - done - dlprefiles="$newdlprefiles" - fi - $RM $output - # place dlname in correct position for cygwin - # In fact, it would be nice if we could use this code for all target - # systems that can't hard-code library paths into their executables - # and that have no shared library path variable independent of PATH, - # but it turns out we can't easily determine that from inspecting - # libtool variables, so we have to hard-code the OSs to which it - # applies here; at the moment, that means platforms that use the PE - # object format with DLL files. See the long comment at the top of - # tests/bindir.at for full details. - tdlname=$dlname - case $host,$output,$installed,$module,$dlname in - *cygwin*,*lai,yes,no,*.dll | *mingw*,*lai,yes,no,*.dll | *cegcc*,*lai,yes,no,*.dll) - # If a -bindir argument was supplied, place the dll there. - if test "x$bindir" != x ; - then - func_relative_path "$install_libdir" "$bindir" - tdlname=$func_relative_path_result$dlname - else - # Otherwise fall back on heuristic. 
- tdlname=../bin/$dlname - fi - ;; - esac - $ECHO > $output "\ -# $outputname - a libtool library file -# Generated by $PROGRAM (GNU $PACKAGE$TIMESTAMP) $VERSION -# -# Please DO NOT delete this file! -# It is necessary for linking the library. - -# The name that we can dlopen(3). -dlname='$tdlname' - -# Names of this library. -library_names='$library_names' - -# The name of the static archive. -old_library='$old_library' - -# Linker flags that can not go in dependency_libs. -inherited_linker_flags='$new_inherited_linker_flags' - -# Libraries that this one depends upon. -dependency_libs='$dependency_libs' - -# Names of additional weak libraries provided by this library -weak_library_names='$weak_libs' - -# Version information for $libname. -current=$current -age=$age -revision=$revision - -# Is this an already installed library? -installed=$installed - -# Should we warn about portability when linking against -modules? -shouldnotlink=$module - -# Files to dlopen/dlpreopen -dlopen='$dlfiles' -dlpreopen='$dlprefiles' - -# Directory that this library needs to be installed in: -libdir='$install_libdir'" - if test "$installed" = no && test "$need_relink" = yes; then - $ECHO >> $output "\ -relink_command=\"$relink_command\"" - fi - done - } - - # Do a symbolic link so that the libtool archive can be found in - # LD_LIBRARY_PATH before the program is installed. - func_show_eval '( cd "$output_objdir" && $RM "$outputname" && $LN_S "../$outputname" "$outputname" )' 'exit $?' - ;; - esac - exit $EXIT_SUCCESS -} - -{ test "$opt_mode" = link || test "$opt_mode" = relink; } && - func_mode_link ${1+"$@"} - - -# func_mode_uninstall arg... -func_mode_uninstall () -{ - $opt_debug - RM="$nonopt" - files= - rmforce= - exit_status=0 - - # This variable tells wrapper scripts just to set variables rather - # than running their programs. - libtool_install_magic="$magic" - - for arg - do - case $arg in - -f) func_append RM " $arg"; rmforce=yes ;; - -*) func_append RM " $arg" ;; - *) func_append files " $arg" ;; - esac - done - - test -z "$RM" && \ - func_fatal_help "you must specify an RM program" - - rmdirs= - - for file in $files; do - func_dirname "$file" "" "." - dir="$func_dirname_result" - if test "X$dir" = X.; then - odir="$objdir" - else - odir="$dir/$objdir" - fi - func_basename "$file" - name="$func_basename_result" - test "$opt_mode" = uninstall && odir="$dir" - - # Remember odir for removal later, being careful to avoid duplicates - if test "$opt_mode" = clean; then - case " $rmdirs " in - *" $odir "*) ;; - *) func_append rmdirs " $odir" ;; - esac - fi - - # Don't error if the file doesn't exist and rm -f was used. - if { test -L "$file"; } >/dev/null 2>&1 || - { test -h "$file"; } >/dev/null 2>&1 || - test -f "$file"; then - : - elif test -d "$file"; then - exit_status=1 - continue - elif test "$rmforce" = yes; then - continue - fi - - rmfiles="$file" - - case $name in - *.la) - # Possibly a libtool archive, so verify it. - if func_lalib_p "$file"; then - func_source $dir/$name - - # Delete the libtool libraries and symlinks. - for n in $library_names; do - func_append rmfiles " $odir/$n" - done - test -n "$old_library" && func_append rmfiles " $odir/$old_library" - - case "$opt_mode" in - clean) - case " $library_names " in - *" $dlname "*) ;; - *) test -n "$dlname" && func_append rmfiles " $odir/$dlname" ;; - esac - test -n "$libdir" && func_append rmfiles " $odir/$name $odir/${name}i" - ;; - uninstall) - if test -n "$library_names"; then - # Do each command in the postuninstall commands. 
- func_execute_cmds "$postuninstall_cmds" 'test "$rmforce" = yes || exit_status=1' - fi - - if test -n "$old_library"; then - # Do each command in the old_postuninstall commands. - func_execute_cmds "$old_postuninstall_cmds" 'test "$rmforce" = yes || exit_status=1' - fi - # FIXME: should reinstall the best remaining shared library. - ;; - esac - fi - ;; - - *.lo) - # Possibly a libtool object, so verify it. - if func_lalib_p "$file"; then - - # Read the .lo file - func_source $dir/$name - - # Add PIC object to the list of files to remove. - if test -n "$pic_object" && - test "$pic_object" != none; then - func_append rmfiles " $dir/$pic_object" - fi - - # Add non-PIC object to the list of files to remove. - if test -n "$non_pic_object" && - test "$non_pic_object" != none; then - func_append rmfiles " $dir/$non_pic_object" - fi - fi - ;; - - *) - if test "$opt_mode" = clean ; then - noexename=$name - case $file in - *.exe) - func_stripname '' '.exe' "$file" - file=$func_stripname_result - func_stripname '' '.exe' "$name" - noexename=$func_stripname_result - # $file with .exe has already been added to rmfiles, - # add $file without .exe - func_append rmfiles " $file" - ;; - esac - # Do a test to see if this is a libtool program. - if func_ltwrapper_p "$file"; then - if func_ltwrapper_executable_p "$file"; then - func_ltwrapper_scriptname "$file" - relink_command= - func_source $func_ltwrapper_scriptname_result - func_append rmfiles " $func_ltwrapper_scriptname_result" - else - relink_command= - func_source $dir/$noexename - fi - - # note $name still contains .exe if it was in $file originally - # as does the version of $file that was added into $rmfiles - func_append rmfiles " $odir/$name $odir/${name}S.${objext}" - if test "$fast_install" = yes && test -n "$relink_command"; then - func_append rmfiles " $odir/lt-$name" - fi - if test "X$noexename" != "X$name" ; then - func_append rmfiles " $odir/lt-${noexename}.c" - fi - fi - fi - ;; - esac - func_show_eval "$RM $rmfiles" 'exit_status=1' - done - - # Try to remove the ${objdir}s in the directories where we deleted files - for dir in $rmdirs; do - if test -d "$dir"; then - func_show_eval "rmdir $dir >/dev/null 2>&1" - fi - done - - exit $exit_status -} - -{ test "$opt_mode" = uninstall || test "$opt_mode" = clean; } && - func_mode_uninstall ${1+"$@"} - -test -z "$opt_mode" && { - help="$generic_help" - func_fatal_help "you must specify a MODE" -} - -test -z "$exec_cmd" && \ - func_fatal_help "invalid operation mode \`$opt_mode'" - -if test -n "$exec_cmd"; then - eval exec "$exec_cmd" - exit $EXIT_FAILURE -fi - -exit $exit_status - - -# The TAGs below are defined such that we never get into a situation -# in which we disable both kinds of libraries. Given conflicting -# choices, we go for a static library, that is the most portable, -# since we can't tell whether shared libraries were disabled because -# the user asked for that or because the platform doesn't support -# them. This is particularly important on AIX, because we don't -# support having both static and shared libraries enabled at the same -# time on that platform, so we default to a shared-only configuration. -# If a disable-shared tag is given, we'll fallback to a static-only -# configuration. But we'll never go from static-only to shared-only. 
- -# ### BEGIN LIBTOOL TAG CONFIG: disable-shared -build_libtool_libs=no -build_old_libs=yes -# ### END LIBTOOL TAG CONFIG: disable-shared - -# ### BEGIN LIBTOOL TAG CONFIG: disable-static -build_old_libs=`case $build_libtool_libs in yes) echo no;; *) echo yes;; esac` -# ### END LIBTOOL TAG CONFIG: disable-static - -# Local Variables: -# mode:shell-script -# sh-indentation:2 -# End: -# vi:sw=2 - diff --git a/spaces/amarchheda/ChordDuplicate/portaudio/qa/loopback/src/test_audio_analyzer.h b/spaces/amarchheda/ChordDuplicate/portaudio/qa/loopback/src/test_audio_analyzer.h deleted file mode 100644 index bfe073fc697b112d6ccaba6151cd89f7de2ca598..0000000000000000000000000000000000000000 --- a/spaces/amarchheda/ChordDuplicate/portaudio/qa/loopback/src/test_audio_analyzer.h +++ /dev/null @@ -1,46 +0,0 @@ - -/* - * PortAudio Portable Real-Time Audio Library - * Latest Version at: http://www.portaudio.com - * - * Copyright (c) 1999-2010 Phil Burk and Ross Bencina - * - * Permission is hereby granted, free of charge, to any person obtaining - * a copy of this software and associated documentation files - * (the "Software"), to deal in the Software without restriction, - * including without limitation the rights to use, copy, modify, merge, - * publish, distribute, sublicense, and/or sell copies of the Software, - * and to permit persons to whom the Software is furnished to do so, - * subject to the following conditions: - * - * The above copyright notice and this permission notice shall be - * included in all copies or substantial portions of the Software. - * - * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, - * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF - * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. - * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR - * ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF - * CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION - * WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - */ - -/* - * The text above constitutes the entire PortAudio license; however, - * the PortAudio community also makes the following non-binding requests: - * - * Any person wishing to distribute modifications to the Software is - * requested to send the modifications to the original developer so that - * they can be incorporated into the canonical version. It is also - * requested that these non-binding requests be included along with the - * license above. - */ - -#ifndef _TEST_AUDIO_ANALYZER_H -#define _TEST_AUDIO_ANALYZER_H - -/** Test the audio analyzer by itself without any PortAudio calls. 
*/ -int PaQa_TestAnalyzer( void ); - - -#endif /* _TEST_AUDIO_ANALYZER_H */ diff --git a/spaces/andikalfauzi/Churn-Prediction/ChurnModel.py b/spaces/andikalfauzi/Churn-Prediction/ChurnModel.py deleted file mode 100644 index f3e4d0c2123a5485efbc54fa62b724abe31be53d..0000000000000000000000000000000000000000 --- a/spaces/andikalfauzi/Churn-Prediction/ChurnModel.py +++ /dev/null @@ -1,96 +0,0 @@ -import numpy as np -import pandas as pd -import streamlit as st -import pickle -from datetime import datetime -from tensorflow.keras.models import load_model - -# load the model -baseFunctional = load_model('churnModel.h5') - -with open('FinalPipeline.pkl', 'rb') as file1: - finalPipeline = pickle.load(file1) - -def run(): - - def age_bin(age): - if age > 7 and age < 20: - return 'under20' - elif age >= 20 and age < 30: - return 'under30' - elif age >= 30 and age < 40: - return 'under40' - elif age >= 40 and age < 50: - return 'under50' - elif age >= 50 and age < 60: - return 'under60' - else: - return 'above60' - - # membuat input data baru - with st.form(key='churn_form'): - age = st.number_input('Age', min_value=5, max_value=120, value=10, step=1, help='Usia Customer') - gender = st.selectbox('Gender', ('Female', 'Male')) - region_category = st.selectbox('Region Category', ('Village', 'Town', 'City')) - membership_category = st.selectbox('Membership Category', ('No Membership', 'Basic Membership', 'Silver Membership', - 'Premium Membership', 'Gold Membership', 'Platinum Membership')) - joining_date = st.date_input('Joining Date') - medium_of_operation = st.selectbox('Medium of Operation', ('Smartphone', 'Desktop', 'Both')) - preferred_offer_types = st.selectbox('Preferred Offer Types', ('Without Offers', 'Credit/Debit Card Offers', 'Gift Vouchers/Coupons')) - days_since_last_login = st.number_input('Days Since Last Login', value=1, min_value=1, max_value=150, step=1) - avg_time_spent = st.number_input('Average Time Spent', min_value=0.00, step=0.10, value=0.00, format='%.2g') - avg_transaction_value = st.number_input('Average Transaction Value', min_value=0.00, step=0.10, value=0.00, format='%.2g') - avg_frequency_login_days = st.number_input('Average Frequency Login Days', min_value=0, step=1, value=0, max_value=120) - points_in_wallet = st.number_input('Points in Wallet', min_value=0.00, step=0.10, value=0.00, format='%.2g') - used_special_discount = st.selectbox('Used Special Discount', ('No', 'Yes')) - offer_application_preference = st.selectbox('Offer Application Preference', ('Yes', 'No')) - past_complaint = st.selectbox('Past Complaint', ('No', 'Yes')) - complaint_status = st.selectbox('Complaint Status', ('No Information Available', 'Not Applicable', 'Unsolved', 'Solved', 'Solved in Follow-up')) - feedback = st.selectbox('Feedback', ('Reasonable Price', 'Poor Website', 'Poor Customer Service', 'Too many ads', 'Poor Product Quality', 'No reason specified', - 'Products always in Stock', 'Quality Customer Care', 'User Friendly Website')) - st.markdown('---') - - binAge = age_bin(age) - - submitted = st.form_submit_button('Predict') - - if joining_date: - year = joining_date.year - ('Tahun', year) - - infData = { - 'region_category' : region_category, - 'membership_category' : membership_category, - 'medium_of_operation' : medium_of_operation, - 'preferred_offer_types' : preferred_offer_types, - 'days_since_last_login' : days_since_last_login, - 'avg_time_spent' : avg_time_spent, - 'avg_transaction_value' : avg_transaction_value, - 'avg_frequency_login_days' : avg_frequency_login_days, - 
'points_in_wallet' : points_in_wallet, - 'used_special_discount' : used_special_discount, - 'offer_application_preference' : offer_application_preference, - 'past_complaint' : past_complaint, - 'complaint_status' : complaint_status, - 'feedback' : feedback, - 'age_bin' : binAge, - 'joining_year' : year - } - - infData = pd.DataFrame([infData]) - st.dataframe(infData, height=0) - - infPipe = finalPipeline.transform(infData) - - # Buat function di column submitted - if submitted: - - # Predict using Base Functional API - y_predInfData = baseFunctional.predict(infPipe) - if y_predInfData >= 0.5: - st.write('## Customer is Churn : Yes') - else : - st.write('## Customer is Churn : No') - -if __name__ == '__main__': - run() \ No newline at end of file diff --git a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/callbacks.py b/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/callbacks.py deleted file mode 100644 index fb87ad56470e3222a0ea7c6609c2000e5f23ca69..0000000000000000000000000000000000000000 --- a/spaces/antonovmaxim/text-generation-webui-space/text-generation-webui-main/modules/callbacks.py +++ /dev/null @@ -1,112 +0,0 @@ -import gc -import traceback -from queue import Queue -from threading import Thread - -import torch -import transformers - -import modules.shared as shared - - -class _SentinelTokenStoppingCriteria(transformers.StoppingCriteria): - - def __init__(self, sentinel_token_ids: list, starting_idx: int): - transformers.StoppingCriteria.__init__(self) - self.sentinel_token_ids = sentinel_token_ids - self.starting_idx = starting_idx - self.shortest = min([x.shape[-1] for x in sentinel_token_ids]) - - def __call__(self, input_ids: torch.LongTensor, _scores: torch.FloatTensor) -> bool: - for sample in input_ids: - trimmed_sample = sample[self.starting_idx:] - trimmed_len = trimmed_sample.shape[-1] - if trimmed_len < self.shortest: - continue - - for sentinel in self.sentinel_token_ids: - sentinel_len = sentinel.shape[-1] - if trimmed_len < sentinel_len: - continue - - window = trimmed_sample[-sentinel_len:] - if torch.all(torch.eq(sentinel, window)): - return True - - return False - - -class Stream(transformers.StoppingCriteria): - def __init__(self, callback_func=None): - self.callback_func = callback_func - - def __call__(self, input_ids, scores) -> bool: - if self.callback_func is not None: - self.callback_func(input_ids[0]) - return False - - -class Iteratorize: - - """ - Transforms a function that takes a callback - into a lazy iterator (generator). 
- - Adapted from: https://stackoverflow.com/a/9969000 - """ - - def __init__(self, func, kwargs={}, callback=None): - self.mfunc = func - self.c_callback = callback - self.q = Queue() - self.sentinel = object() - self.kwargs = kwargs - self.stop_now = False - - def _callback(val): - if self.stop_now or shared.stop_everything: - raise ValueError - self.q.put(val) - - def gentask(): - try: - ret = self.mfunc(callback=_callback, **self.kwargs) - except ValueError: - pass - except: - traceback.print_exc() - pass - - clear_torch_cache() - self.q.put(self.sentinel) - if self.c_callback: - self.c_callback(ret) - - self.thread = Thread(target=gentask) - self.thread.start() - - def __iter__(self): - return self - - def __next__(self): - obj = self.q.get(True, None) - if obj is self.sentinel: - raise StopIteration - else: - return obj - - def __del__(self): - clear_torch_cache() - - def __enter__(self): - return self - - def __exit__(self, exc_type, exc_val, exc_tb): - self.stop_now = True - clear_torch_cache() - - -def clear_torch_cache(): - gc.collect() - if not shared.args.cpu: - torch.cuda.empty_cache() diff --git a/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/prompt-bracket-checker/javascript/prompt-bracket-checker.js b/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/prompt-bracket-checker/javascript/prompt-bracket-checker.js deleted file mode 100644 index 4a85c8ebf25110e911a6a1021fae6a014aa11000..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/extensions-builtin/prompt-bracket-checker/javascript/prompt-bracket-checker.js +++ /dev/null @@ -1,110 +0,0 @@ -// Stable Diffusion WebUI - Bracket checker -// Version 1.0 -// By Hingashi no Florin/Bwin4L -// Counts open and closed brackets (round, square, curly) in the prompt and negative prompt text boxes in the txt2img and img2img tabs. -// If there's a mismatch, the keyword counter turns red and if you hover on it, a tooltip tells you what's wrong. - -function checkBrackets(evt, textArea, counterElt) { - errorStringParen = '(...) - Different number of opening and closing parentheses detected.\n'; - errorStringSquare = '[...] 
- Different number of opening and closing square brackets detected.\n'; - errorStringCurly = '{...} - Different number of opening and closing curly brackets detected.\n'; - - openBracketRegExp = /\(/g; - closeBracketRegExp = /\)/g; - - openSquareBracketRegExp = /\[/g; - closeSquareBracketRegExp = /\]/g; - - openCurlyBracketRegExp = /\{/g; - closeCurlyBracketRegExp = /\}/g; - - totalOpenBracketMatches = 0; - totalCloseBracketMatches = 0; - totalOpenSquareBracketMatches = 0; - totalCloseSquareBracketMatches = 0; - totalOpenCurlyBracketMatches = 0; - totalCloseCurlyBracketMatches = 0; - - openBracketMatches = textArea.value.match(openBracketRegExp); - if(openBracketMatches) { - totalOpenBracketMatches = openBracketMatches.length; - } - - closeBracketMatches = textArea.value.match(closeBracketRegExp); - if(closeBracketMatches) { - totalCloseBracketMatches = closeBracketMatches.length; - } - - openSquareBracketMatches = textArea.value.match(openSquareBracketRegExp); - if(openSquareBracketMatches) { - totalOpenSquareBracketMatches = openSquareBracketMatches.length; - } - - closeSquareBracketMatches = textArea.value.match(closeSquareBracketRegExp); - if(closeSquareBracketMatches) { - totalCloseSquareBracketMatches = closeSquareBracketMatches.length; - } - - openCurlyBracketMatches = textArea.value.match(openCurlyBracketRegExp); - if(openCurlyBracketMatches) { - totalOpenCurlyBracketMatches = openCurlyBracketMatches.length; - } - - closeCurlyBracketMatches = textArea.value.match(closeCurlyBracketRegExp); - if(closeCurlyBracketMatches) { - totalCloseCurlyBracketMatches = closeCurlyBracketMatches.length; - } - - if(totalOpenBracketMatches != totalCloseBracketMatches) { - if(!counterElt.title.includes(errorStringParen)) { - counterElt.title += errorStringParen; - } - } else { - counterElt.title = counterElt.title.replace(errorStringParen, ''); - } - - if(totalOpenSquareBracketMatches != totalCloseSquareBracketMatches) { - if(!counterElt.title.includes(errorStringSquare)) { - counterElt.title += errorStringSquare; - } - } else { - counterElt.title = counterElt.title.replace(errorStringSquare, ''); - } - - if(totalOpenCurlyBracketMatches != totalCloseCurlyBracketMatches) { - if(!counterElt.title.includes(errorStringCurly)) { - counterElt.title += errorStringCurly; - } - } else { - counterElt.title = counterElt.title.replace(errorStringCurly, ''); - } - - if(counterElt.title != '') { - counterElt.classList.add('error'); - } else { - counterElt.classList.remove('error'); - } -} - -function setupBracketChecking(id_prompt, id_counter){ - var textarea = gradioApp().querySelector("#" + id_prompt + " > label > textarea"); - var counter = gradioApp().getElementById(id_counter) - textarea.addEventListener("input", function(evt){ - checkBrackets(evt, textarea, counter) - }); -} - -var shadowRootLoaded = setInterval(function() { - var shadowRoot = document.querySelector('gradio-app').shadowRoot; - if(! 
shadowRoot) return false; - - var shadowTextArea = shadowRoot.querySelectorAll('#txt2img_prompt > label > textarea'); - if(shadowTextArea.length < 1) return false; - - clearInterval(shadowRootLoaded); - - setupBracketChecking('txt2img_prompt', 'txt2img_token_counter') - setupBracketChecking('txt2img_neg_prompt', 'txt2img_negative_token_counter') - setupBracketChecking('img2img_prompt', 'imgimg_token_counter') - setupBracketChecking('img2img_neg_prompt', 'img2img_negative_token_counter') -}, 1000); diff --git a/spaces/aodianyun/stable-diffusion-webui/modules/images.py b/spaces/aodianyun/stable-diffusion-webui/modules/images.py deleted file mode 100644 index a58573264ee61a83873b8901336be030cf826e3f..0000000000000000000000000000000000000000 --- a/spaces/aodianyun/stable-diffusion-webui/modules/images.py +++ /dev/null @@ -1,669 +0,0 @@ -import datetime -import sys -import traceback - -import pytz -import io -import math -import os -from collections import namedtuple -import re - -import numpy as np -import piexif -import piexif.helper -from PIL import Image, ImageFont, ImageDraw, PngImagePlugin -from fonts.ttf import Roboto -import string -import json -import hashlib - -from modules import sd_samplers, shared, script_callbacks, errors -from modules.shared import opts, cmd_opts - -LANCZOS = (Image.Resampling.LANCZOS if hasattr(Image, 'Resampling') else Image.LANCZOS) - - -def image_grid(imgs, batch_size=1, rows=None): - if rows is None: - if opts.n_rows > 0: - rows = opts.n_rows - elif opts.n_rows == 0: - rows = batch_size - elif opts.grid_prevent_empty_spots: - rows = math.floor(math.sqrt(len(imgs))) - while len(imgs) % rows != 0: - rows -= 1 - else: - rows = math.sqrt(len(imgs)) - rows = round(rows) - if rows > len(imgs): - rows = len(imgs) - - cols = math.ceil(len(imgs) / rows) - - params = script_callbacks.ImageGridLoopParams(imgs, cols, rows) - script_callbacks.image_grid_callback(params) - - w, h = imgs[0].size - grid = Image.new('RGB', size=(params.cols * w, params.rows * h), color='black') - - for i, img in enumerate(params.imgs): - grid.paste(img, box=(i % params.cols * w, i // params.cols * h)) - - return grid - - -Grid = namedtuple("Grid", ["tiles", "tile_w", "tile_h", "image_w", "image_h", "overlap"]) - - -def split_grid(image, tile_w=512, tile_h=512, overlap=64): - w = image.width - h = image.height - - non_overlap_width = tile_w - overlap - non_overlap_height = tile_h - overlap - - cols = math.ceil((w - overlap) / non_overlap_width) - rows = math.ceil((h - overlap) / non_overlap_height) - - dx = (w - tile_w) / (cols - 1) if cols > 1 else 0 - dy = (h - tile_h) / (rows - 1) if rows > 1 else 0 - - grid = Grid([], tile_w, tile_h, w, h, overlap) - for row in range(rows): - row_images = [] - - y = int(row * dy) - - if y + tile_h >= h: - y = h - tile_h - - for col in range(cols): - x = int(col * dx) - - if x + tile_w >= w: - x = w - tile_w - - tile = image.crop((x, y, x + tile_w, y + tile_h)) - - row_images.append([x, tile_w, tile]) - - grid.tiles.append([y, tile_h, row_images]) - - return grid - - -def combine_grid(grid): - def make_mask_image(r): - r = r * 255 / grid.overlap - r = r.astype(np.uint8) - return Image.fromarray(r, 'L') - - mask_w = make_mask_image(np.arange(grid.overlap, dtype=np.float32).reshape((1, grid.overlap)).repeat(grid.tile_h, axis=0)) - mask_h = make_mask_image(np.arange(grid.overlap, dtype=np.float32).reshape((grid.overlap, 1)).repeat(grid.image_w, axis=1)) - - combined_image = Image.new("RGB", (grid.image_w, grid.image_h)) - for y, h, row in grid.tiles: - 
combined_row = Image.new("RGB", (grid.image_w, h)) - for x, w, tile in row: - if x == 0: - combined_row.paste(tile, (0, 0)) - continue - - combined_row.paste(tile.crop((0, 0, grid.overlap, h)), (x, 0), mask=mask_w) - combined_row.paste(tile.crop((grid.overlap, 0, w, h)), (x + grid.overlap, 0)) - - if y == 0: - combined_image.paste(combined_row, (0, 0)) - continue - - combined_image.paste(combined_row.crop((0, 0, combined_row.width, grid.overlap)), (0, y), mask=mask_h) - combined_image.paste(combined_row.crop((0, grid.overlap, combined_row.width, h)), (0, y + grid.overlap)) - - return combined_image - - -class GridAnnotation: - def __init__(self, text='', is_active=True): - self.text = text - self.is_active = is_active - self.size = None - - -def draw_grid_annotations(im, width, height, hor_texts, ver_texts, margin=0): - def wrap(drawing, text, font, line_length): - lines = [''] - for word in text.split(): - line = f'{lines[-1]} {word}'.strip() - if drawing.textlength(line, font=font) <= line_length: - lines[-1] = line - else: - lines.append(word) - return lines - - def get_font(fontsize): - try: - return ImageFont.truetype(opts.font or Roboto, fontsize) - except Exception: - return ImageFont.truetype(Roboto, fontsize) - - def draw_texts(drawing, draw_x, draw_y, lines, initial_fnt, initial_fontsize): - for i, line in enumerate(lines): - fnt = initial_fnt - fontsize = initial_fontsize - while drawing.multiline_textsize(line.text, font=fnt)[0] > line.allowed_width and fontsize > 0: - fontsize -= 1 - fnt = get_font(fontsize) - drawing.multiline_text((draw_x, draw_y + line.size[1] / 2), line.text, font=fnt, fill=color_active if line.is_active else color_inactive, anchor="mm", align="center") - - if not line.is_active: - drawing.line((draw_x - line.size[0] // 2, draw_y + line.size[1] // 2, draw_x + line.size[0] // 2, draw_y + line.size[1] // 2), fill=color_inactive, width=4) - - draw_y += line.size[1] + line_spacing - - fontsize = (width + height) // 25 - line_spacing = fontsize // 2 - - fnt = get_font(fontsize) - - color_active = (0, 0, 0) - color_inactive = (153, 153, 153) - - pad_left = 0 if sum([sum([len(line.text) for line in lines]) for lines in ver_texts]) == 0 else width * 3 // 4 - - cols = im.width // width - rows = im.height // height - - assert cols == len(hor_texts), f'bad number of horizontal texts: {len(hor_texts)}; must be {cols}' - assert rows == len(ver_texts), f'bad number of vertical texts: {len(ver_texts)}; must be {rows}' - - calc_img = Image.new("RGB", (1, 1), "white") - calc_d = ImageDraw.Draw(calc_img) - - for texts, allowed_width in zip(hor_texts + ver_texts, [width] * len(hor_texts) + [pad_left] * len(ver_texts)): - items = [] + texts - texts.clear() - - for line in items: - wrapped = wrap(calc_d, line.text, fnt, allowed_width) - texts += [GridAnnotation(x, line.is_active) for x in wrapped] - - for line in texts: - bbox = calc_d.multiline_textbbox((0, 0), line.text, font=fnt) - line.size = (bbox[2] - bbox[0], bbox[3] - bbox[1]) - line.allowed_width = allowed_width - - hor_text_heights = [sum([line.size[1] + line_spacing for line in lines]) - line_spacing for lines in hor_texts] - ver_text_heights = [sum([line.size[1] + line_spacing for line in lines]) - line_spacing * len(lines) for lines in ver_texts] - - pad_top = 0 if sum(hor_text_heights) == 0 else max(hor_text_heights) + line_spacing * 2 - - result = Image.new("RGB", (im.width + pad_left + margin * (cols-1), im.height + pad_top + margin * (rows-1)), "white") - - for row in range(rows): - for col in range(cols): - 
cell = im.crop((width * col, height * row, width * (col+1), height * (row+1))) - result.paste(cell, (pad_left + (width + margin) * col, pad_top + (height + margin) * row)) - - d = ImageDraw.Draw(result) - - for col in range(cols): - x = pad_left + (width + margin) * col + width / 2 - y = pad_top / 2 - hor_text_heights[col] / 2 - - draw_texts(d, x, y, hor_texts[col], fnt, fontsize) - - for row in range(rows): - x = pad_left / 2 - y = pad_top + (height + margin) * row + height / 2 - ver_text_heights[row] / 2 - - draw_texts(d, x, y, ver_texts[row], fnt, fontsize) - - return result - - -def draw_prompt_matrix(im, width, height, all_prompts, margin=0): - prompts = all_prompts[1:] - boundary = math.ceil(len(prompts) / 2) - - prompts_horiz = prompts[:boundary] - prompts_vert = prompts[boundary:] - - hor_texts = [[GridAnnotation(x, is_active=pos & (1 << i) != 0) for i, x in enumerate(prompts_horiz)] for pos in range(1 << len(prompts_horiz))] - ver_texts = [[GridAnnotation(x, is_active=pos & (1 << i) != 0) for i, x in enumerate(prompts_vert)] for pos in range(1 << len(prompts_vert))] - - return draw_grid_annotations(im, width, height, hor_texts, ver_texts, margin) - - -def resize_image(resize_mode, im, width, height, upscaler_name=None): - """ - Resizes an image with the specified resize_mode, width, and height. - - Args: - resize_mode: The mode to use when resizing the image. - 0: Resize the image to the specified width and height. - 1: Resize the image to fill the specified width and height, maintaining the aspect ratio, and then center the image within the dimensions, cropping the excess. - 2: Resize the image to fit within the specified width and height, maintaining the aspect ratio, and then center the image within the dimensions, filling empty with data from image. - im: The image to resize. - width: The width to resize the image to. - height: The height to resize the image to. - upscaler_name: The name of the upscaler to use. If not provided, defaults to opts.upscaler_for_img2img. 
- """ - - upscaler_name = upscaler_name or opts.upscaler_for_img2img - - def resize(im, w, h): - if upscaler_name is None or upscaler_name == "None" or im.mode == 'L': - return im.resize((w, h), resample=LANCZOS) - - scale = max(w / im.width, h / im.height) - - if scale > 1.0: - upscalers = [x for x in shared.sd_upscalers if x.name == upscaler_name] - assert len(upscalers) > 0, f"could not find upscaler named {upscaler_name}" - - upscaler = upscalers[0] - im = upscaler.scaler.upscale(im, scale, upscaler.data_path) - - if im.width != w or im.height != h: - im = im.resize((w, h), resample=LANCZOS) - - return im - - if resize_mode == 0: - res = resize(im, width, height) - - elif resize_mode == 1: - ratio = width / height - src_ratio = im.width / im.height - - src_w = width if ratio > src_ratio else im.width * height // im.height - src_h = height if ratio <= src_ratio else im.height * width // im.width - - resized = resize(im, src_w, src_h) - res = Image.new("RGB", (width, height)) - res.paste(resized, box=(width // 2 - src_w // 2, height // 2 - src_h // 2)) - - else: - ratio = width / height - src_ratio = im.width / im.height - - src_w = width if ratio < src_ratio else im.width * height // im.height - src_h = height if ratio >= src_ratio else im.height * width // im.width - - resized = resize(im, src_w, src_h) - res = Image.new("RGB", (width, height)) - res.paste(resized, box=(width // 2 - src_w // 2, height // 2 - src_h // 2)) - - if ratio < src_ratio: - fill_height = height // 2 - src_h // 2 - res.paste(resized.resize((width, fill_height), box=(0, 0, width, 0)), box=(0, 0)) - res.paste(resized.resize((width, fill_height), box=(0, resized.height, width, resized.height)), box=(0, fill_height + src_h)) - elif ratio > src_ratio: - fill_width = width // 2 - src_w // 2 - res.paste(resized.resize((fill_width, height), box=(0, 0, 0, height)), box=(0, 0)) - res.paste(resized.resize((fill_width, height), box=(resized.width, 0, resized.width, height)), box=(fill_width + src_w, 0)) - - return res - - -invalid_filename_chars = '<>:"/\\|?*\n' -invalid_filename_prefix = ' ' -invalid_filename_postfix = ' .' 
-re_nonletters = re.compile(r'[\s' + string.punctuation + ']+') -re_pattern = re.compile(r"(.*?)(?:\[([^\[\]]+)\]|$)") -re_pattern_arg = re.compile(r"(.*)<([^>]*)>$") -max_filename_part_length = 128 - - -def sanitize_filename_part(text, replace_spaces=True): - if text is None: - return None - - if replace_spaces: - text = text.replace(' ', '_') - - text = text.translate({ord(x): '_' for x in invalid_filename_chars}) - text = text.lstrip(invalid_filename_prefix)[:max_filename_part_length] - text = text.rstrip(invalid_filename_postfix) - return text - - -class FilenameGenerator: - replacements = { - 'seed': lambda self: self.seed if self.seed is not None else '', - 'steps': lambda self: self.p and self.p.steps, - 'cfg': lambda self: self.p and self.p.cfg_scale, - 'width': lambda self: self.image.width, - 'height': lambda self: self.image.height, - 'styles': lambda self: self.p and sanitize_filename_part(", ".join([style for style in self.p.styles if not style == "None"]) or "None", replace_spaces=False), - 'sampler': lambda self: self.p and sanitize_filename_part(self.p.sampler_name, replace_spaces=False), - 'model_hash': lambda self: getattr(self.p, "sd_model_hash", shared.sd_model.sd_model_hash), - 'model_name': lambda self: sanitize_filename_part(shared.sd_model.sd_checkpoint_info.model_name, replace_spaces=False), - 'date': lambda self: datetime.datetime.now().strftime('%Y-%m-%d'), - 'datetime': lambda self, *args: self.datetime(*args), # accepts formats: [datetime], [datetime], [datetime
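
The file is cut off above inside `FilenameGenerator.replacements`, the table that maps `[token]` placeholders (seed, steps, cfg, width, height, datetime, and so on) to callables evaluated for the current image. Paired with `re_pattern`, which splits a filename pattern into literal text and bracketed tokens, this is what drives patterns such as `[seed]-[width]x[height]`. A simplified, self-contained sketch of that substitution idea follows; it is not the module's actual expansion method, the `expand` function name is invented, and the concrete values in the stand-in table are illustrative only. The regex itself is copied from `re_pattern` above:

```python
import re

# Same placeholder syntax as re_pattern above: literal text interleaved with [token] groups.
re_pattern = re.compile(r"(.*?)(?:\[([^\[\]]+)\]|$)")

# Stand-in for the replacements table above; the real entries are lambdas over the
# processing object and image, reduced here to plain values for illustration.
replacements = {"seed": 12345, "width": 512, "height": 768, "cfg": 7.5}

def expand(pattern: str) -> str:
    parts = []
    for literal, token in re_pattern.findall(pattern):
        parts.append(literal)
        if token:
            # Unknown tokens are kept verbatim so a typo in a pattern stays visible.
            parts.append(str(replacements.get(token.lower(), f"[{token}]")))
    return "".join(parts)

print(expand("[seed]-[width]x[height]-cfg[cfg]"))  # -> 12345-512x768-cfg7.5
```

The real class resolves each token lazily through the lambdas shown above, so values such as the model name or a timestamp are only computed when a pattern actually references them.
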